# Deep Learning and Computer Vision for Glaucoma Detection: A Review

Mona Ashtari-Majlan, Mohammad Mahdi Dehshibi, David Masip

arXiv:2307.16528v1 (published 2023-07-31). Available at http://arxiv.org/abs/2307.16528v1
###### Abstract
Glaucoma is the leading cause of irreversible blindness worldwide and poses significant diagnostic challenges due to its reliance on subjective evaluation. However, recent advances in computer vision and deep learning have demonstrated the potential for automated assessment. In this paper, we survey recent studies on AI-based glaucoma diagnosis using fundus, optical coherence tomography, and visual field images, with a particular emphasis on deep learning-based methods. We provide an updated taxonomy that organizes methods into architectural paradigms and includes links to available source code to enhance the reproducibility of the methods. Through rigorous benchmarking on widely-used public datasets, we reveal performance gaps in generalizability, uncertainty estimation, and multimodal integration. Additionally, our survey curates key datasets while highlighting limitations such as scale, labeling inconsistencies, and bias. We outline open research challenges and detail promising directions for future studies. This survey is expected to be useful for both AI researchers seeking to translate advances into practice and ophthalmologists aiming to improve clinical workflows and diagnosis using the latest AI outcomes.
Glaucoma, Deep Learning, Computer Vision, Machine Learning
## 1 Introduction
Glaucoma is the second leading cause of blindness worldwide and the leading cause of irreversible blindness, affecting over 70 million people as of 2020 [1]. If left untreated, glaucoma leads to permanent vision loss due to damage to the optic nerve head and retinal nerve fiber layer [2]. However, despite improved understanding and management of glaucoma, it still accounts for approximately 10% of global blindness [3, 4]. This high disease burden motivates the development of enhanced diagnostic techniques to enable early diagnosis and timely intervention to prevent or slow down the further deterioration of vision.
Accurately diagnosing glaucoma remains challenging for several reasons [5]. Firstly, glaucoma is often asymptomatic in its early stages, impeding detection without comprehensive eye examinations. Secondly, current diagnostic modalities like imaging tests and functional assessments have limitations in sensitivity and specificity. For example, while optical coherence tomography effectively captures structural changes in retinal layers, it cannot detect early functional loss. In contrast, visual field testing assesses functional impact but has low sensitivity for structural changes. Finally, the wide variability in glaucoma presentation, from subtle early symptoms to severe late-stage damage, makes definitive diagnoses difficult [2]. The complexity and subjectivity of evaluating diverse examination findings further complicate diagnosis.
This survey aims to provide a comprehensive overview of applying deep learning and computer vision algorithms to enhance glaucoma diagnosis\({}^{1}\) by overcoming these challenges. Such automated techniques offer the potential for earlier detection, more consistent quantification of progression, and ultimately the preservation of vision that would otherwise be lost to glaucoma [6, 7]. Synthesizing recent techniques, results, and open problems can deliver value to both ophthalmology practitioners and AI researchers. For ophthalmology practitioners, it highlights cutting-edge research to improve diagnostic accuracy and integrate intelligent systems into clinical workflows. For AI researchers, it provides a landscape analysis of the state of the art, remaining gaps, and future opportunities to advance glaucoma diagnosis algorithms.
Footnote 1: When we use the term “diagnosis” in the context of papers that focus on deep learning and computer vision, we are referring to the use of AI technology to support medical diagnosis.
The remainder of the paper is organized as follows: Section 2 outlines the employed search protocol to select relevant papers for review, ensuring comprehensive topic coverage. Section 3 discusses clinical terminologies and definitions used in ophthalmology, enabling AI researchers to grasp the essential concepts necessary for interpreting the paper. Section 4 delves into the datasets used for training and testing deep learning and computer vision models and the performance metrics employed to evaluate their effectiveness. Section 5 explores the various types of features, including structural, statistical, and hybrid, extracted for glaucoma diagnosis. Section 6 reviews the latest research on developing end-to-end deep learning models for glaucoma diagnosis, categorizing them based on the type and architecture of the models. This section highlights the ability of deep learning models to integrate multiple modalities, analyze complex data, and provide accurate diagnosis and monitoring. Section 7 focuses on the methods proposed for early glaucoma prediction, shedding light on the advancements in prognostic techniques. In Section 8, challenges and potential future directions are discussed, addressing the limitations and paving the way for further research in the field. Finally, Section 9 serves as the conclusion, summarizing the key findings and contributions. It emphasizes the transformative potential of deep learning and computer vision in revolutionizing glaucoma diagnosis and management.
## 2 Search protocol
A systematic search was conducted to identify relevant studies on deep learning and computer vision techniques for glaucoma diagnosis published between 2017 and 2023. This date range was selected to capture the state-of-the-art advancements in this rapidly progressing field.
The following scholarly databases were searched: Web of Science, PubMed, IEEE Xplore, and Google Scholar. Targeted search terms included "glaucoma," "deep learning," "computer vision," "machine learning," "artificial intelligence," and "medical diagnosis." These were combined using Boolean operators and customized search strings tailored to each database. When available, searches were restricted to titles, abstracts, and author keywords to filter potentially relevant papers efficiently. We also gave priority to leading conferences and journals in deep learning, computer vision, medical imaging, and ophthalmology.
After deduplication, the records retrieved underwent two-phase screening. Title/abstract screening evaluated relevance to glaucoma diagnosis using deep learning and computer vision approaches. The full-text review then confirmed papers met the inclusion criteria: (1) written in English; (2) primary focus on automated glaucoma diagnosis/screening using deep learning or computer vision; (3) rigorous machine learning experiments and substantial technical depth. Studies were excluded if they: (1) focused on other ocular diseases, even with glaucoma sub-analysis; (2) primarily contributed clinically focused insights. We also excluded review papers, case reports, and conference abstracts given limited methodological detail.
Additionally, the reference lists of included papers were manually searched to identify any additional relevant studies that might have been missed in the initial search. This way, we could include seminal papers even if published outside the target venues. Data extraction from the selected papers involved capturing key information such as study design, dataset characteristics, deep learning architectures, evaluation metrics, and major findings. This information serves as the foundation for synthesizing the current state of research in glaucoma diagnosis using deep learning and computer vision.
While this search protocol aimed to be comprehensive, some relevant studies may have been inadvertently omitted, given the rapid pace of research in this field. Nonetheless, the systematic selection aimed to identify a representative sample for assessing the state-of-the-art deep learning and computer vision techniques for glaucoma diagnosis.
## 3 Glaucoma: Definition and Diagnosis
Familiarity with key ophthalmic concepts and terminologies [8] is necessary, specifically for AI researchers, to effectively interpret glaucoma diagnosis research. This section defines relevant terms and tests used in clinical glaucoma assessment, with a summary provided in Table I.
Glaucoma is a condition characterized by the degeneration of Retinal Ganglion Cells (RGCs), leading to structural changes in the retina [9]. These changes manifest as (1) the thinning of the Retinal Nerve Fiber Layer (RNFL), Ganglion Cell with the Inner Plexiform Layer (GCIPL), and Ganglion Cell Complex (GCC) profiles, (2) narrowing Neuroretinal Rim (NRR), and (3) cupping of the Optic Nerve Head (ONH) or enlargement of the Cup-to-Disc Ratio (CDR) [2] (see Fig. 1). In addition to these structural changes, glaucoma also causes functional damage, resulting in defects in visual field sensitivity [10].
The process of detecting glaucoma is both complex and time-consuming [12]. To gain valuable insights into the structural and functional changes associated with the disease, medical examinations and clinical expertise are utilized, where imaging techniques play a vital role. The National Institute for Health and Care Excellence in the UK recommends the use of fundus imaging to examine the ONH [13].
Fundus imaging captures detailed photographs of the retina and optic disc (OD), helping clinicians evaluate the appearance of the optic nerve, detect vascular changes, and identify abnormalities. This imaging modality is a useful diagnostic tool for a variety of ocular conditions, including glaucoma. Optical Coherence Tomography (OCT) is another non-invasive imaging technique that produces highly detailed cross-sectional images of the retina, optic nerve, and other eye structures. It enables clinicians to visualize the retinal layers, measure the thickness of the RNFL, and assess the integrity of the NRR.
While fundus and OCT imaging techniques are commonly used to capture structural changes associated with glaucoma, Humphrey Visual Field (VF) analysis measures the sensitivity of the visual field. This test, based on standard automated perimetry, asks patients to respond to visual stimuli presented at different locations within their visual field. By mapping the patient's responses, clinicians can identify visual field defects associated with glaucoma and assess the extent of retinal sensitivity loss, which provides complementary ground truth for diagnosis [14, 15, 16].

Fig. 1: Anatomical structures of the human eye and optic nerve relevant to glaucoma detection. [Left] Schematic views, [Right] Fundus and OCT views [11]. This figure was created using images licensed under Creative Commons.
The integration of fundus, OCT, and VF modalities [17, 18], in addition to expert-level features [19] and biomarkers such as Intraocular Pressure (IOP) [20, 21, 22], can leverage a broader range of inputs to improve the accuracy and enhance the performance of deep learning models, enabling more precise detection, prediction, and management of glaucoma.
## 4 Datasets and Evaluation Metrics
In this section, we present a concise overview of the datasets used in the studies reviewed within this paper, as well as the evaluation metrics employed to assess the performance of trained models in glaucoma diagnosis. To facilitate ease of reference and enhance clarity, we have summarized this information in a table format (refer to Table II\({}^{2}\)).
Footnote 2: The links to the databases were active at the time of submitting this paper. In case of any inactive links, please contact the authors of the respective papers for updated information.
Various evaluation metrics have been used to assess the performance of glaucoma diagnostic models. To account for the lack of standardization in metrics across datasets, researchers have used both subject-level and finer-grained spatial metrics in their studies. Equation 1 illustrates the commonly used quantitative measures in the reviewed studies. These include accuracy (ACC), sensitivity (SEN), specificity (SPE), precision (PRC), and the F1-score. Here, \(TP\), \(TN\), \(FP\), and \(FN\) represent true positive, true negative, false positive, and false negative, respectively. Additionally, the area under the receiver operating characteristic curve (AUC) metric has been employed to evaluate the performance of glaucoma diagnosis.
\[\begin{aligned} \text{Accuracy (ACC)} &= \frac{TP+TN}{TP+TN+FP+FN},\\ \text{Sensitivity (SEN)} &= \frac{TP}{TP+FN},\\ \text{Specificity (SPE)} &= \frac{TN}{TN+FP},\\ \text{Precision (PRC)} &= \frac{TP}{TP+FP},\\ \text{F1-score} &= \frac{2\times\text{PRC}\times\text{SEN}}{\text{PRC}+\text{SEN}}. \end{aligned} \tag{1}\]
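For illustration, the following minimal Python sketch computes these subject-level measures from confusion-matrix counts; the counts used in the example are arbitrary. AUC, in contrast, is computed from continuous prediction scores rather than hard labels (e.g., with scikit-learn's roc_auc_score).

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the subject-level metrics of Equation 1 from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)   # sensitivity (recall)
    spe = tn / (tn + fp)   # specificity
    prc = tp / (tp + fp)   # precision
    f1 = 2 * prc * sen / (prc + sen)
    return {"ACC": acc, "SEN": sen, "SPE": spe, "PRC": prc, "F1": f1}

# Example with arbitrary counts: 90 TP, 80 TN, 10 FP, 5 FN
print(classification_metrics(tp=90, tn=80, fp=10, fn=5))
```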
## 5 Feature Extraction
The feature extraction process involves deriving a set of representative features from the input data. These features can be extracted using various methods, such as conventional statistical or structural techniques (also known as hand-crafted features), deep learning architectures that automatically learn relevant features from the data, or biomarkers identified by domain experts based on their knowledge. The resulting informative features are then used as inputs to train the learning model.
### _Structural Features_
Structural image measurements based on physical characteristics of the optic nerve head help clinicians in quantifying retinal structures relevant to glaucoma. These measurements provide objective information that can be used to determine the severity of glaucoma and monitor its progression. Clinical knowledge can help improve the accuracy and interpretability of machine learning algorithms [39]. One of the most important structural measurements that can be extracted from fundus images and is used by many clinicians is the CDR. A higher CDR indicates a larger Cup and a higher risk of glaucoma [40]. Therefore, many glaucoma screening works focus on accurate OC and OD segmentation tasks [41, 42, 43, 44, 45, 46, 47, 48, 49].
TABLE I: Ophthalmic terminologies.

| Terminology | Abbreviation | Definition |
|---|---|---|
| Retinal Ganglion Cells | RGCs | Neurons in the innermost layer of the retina that receive and transmit visual information to the brain. |
| Retinal Nerve Fiber Layer | RNFL | Layer of RGC nerve fibers (i.e., axons) that comprises the optic nerve and extends into the retina. |
| Ganglion Cell Layer | GCL | Layer of RGC cell bodies. |
| Inner Plexiform Layer | IPL | Layer of RGC dendrites. |
| Ganglion Cell Complex | GCC | Combination of the RNFL, GCL, and IPL layers. |
| Ganglion Cell with the Inner Plexiform Layer | GCIPL | Layer consisting of the GCL and IPL. |
| Optic Nerve Head | ONH | Structure in the posterior section of the eye that enables the exit of the axons of RGCs and the entry/exit of blood vessels. |
| Neuroretinal Rim | NRR | Area of the optic nerve head composed of retinal ganglion cell axons. |
| Optic Disc | OD | The optic disc and optic nerve head are interchangeable terms. |
| Optic Cup | OC | Central depression in the optic nerve head. |
| Cup-to-Disc Ratio | CDR | Quantitative measure comparing the size of the optic cup to the optic disc. |
| Intraocular Pressure | IOP | Fluid pressure inside the eye, influenced by the balance of aqueous humor production and drainage. |
Soorya et al. [50], for instance, proposed an adaptive threshold framework for segmenting the OC and OD based on geometrical features. Following a clinical approach used by ophthalmologists, the proposed algorithm tracks blood vessels inside the Disc region, identifies the points at which different vessels are bent for the first time, and connects them to obtain the contours of the OC. They further calculated the vertical CDR (the ratio between the vertical extent of the OC, measured between its topmost and bottommost points, and that of the OD) and proposed a threshold based on which they classified an image as normal, suspect glaucoma, or glaucoma.
TABLE II: A review of the most commonly used datasets for glaucoma diagnosis. GT: Ground Truth.

| Dataset | Modality | Glaucoma | Healthy | Total | Resolution | Ground Truth | Note |
|---|---|---|---|---|---|---|---|
| REFUGE [23] | Fundus | 121 | 1,079 | 1,200 | 2,124×2,056, 1,634×1,634 | Subject-level label, segmentation (OC/OD) GT | - |
| LAG [24] | Fundus | 4,878 | 6,882 | 11,760 | 500×500 | Subject-level label, attention GT maps | - |
| RIM-ONE [26] | Fundus | - | - | - | Various | Subject-level label | Combination of RIM-ONE r1, r2, and r3; images are cropped at the ONH |
| ORIGA [27] | Fundus | 168 | 482 | 650 | Various | Subject-level label, segmentation (OC/OD) GT | - |
| DRISHTI-GS1 [28] | Fundus | 70 | 31 | 101 | 2,896×1,944 | Subject-level label, segmentation (OC/OD) GT | Extension of DRISHTI-GS [29] |
| ACRIMA [30] | Fundus | 396 | 309 | 705 | 2,048×1,536 | Subject-level label | - |
| SIGF [31] | Fundus | - | - | 3,671 | Various | Subject-level label | 405 sequential fundus image series for glaucoma forecasting, averaging 9 images per eye; available upon request |
| HRF [32] | Fundus | 15 | 15 | - | 3,504×2,336 | Subject-level label | - |
| DRIONS-DB [33] | Fundus | - | - | 110 | 600×400 | Contour of the ONH | Images are cropped at the ONH |
| ODIR-5K [34] | Fundus | 307 | 1,620 | - | Various | Subject-level label | Contains 5,000 images divided into eight categories |
| JSIEC [35] | Fundus | 13 | 54 | - | Various | Subject-level label | Contains 1,087 images divided into 37 categories |
| RIGA [36] | Fundus | - | - | 750 | Various | Segmentation (OC/OD) GT | No subject-level label |
| AGE [37] | OCT | - | - | 300 | Various | Subject-level label (angle-closure vs. open-angle glaucoma)†, scleral spur localization | 1:4 ratio of angle-closure to open-angle samples |
| AFIO [11] | OCT & Fundus | 32 | 18 | 50 | OCT: 951×456, Fundus: 2,032×1,934 | Subject-level label, CDR values | Data from 26 subjects |

† Glaucoma can be classified into two broad categories, angle-closure and open-angle; the former is considered a more aggressive form of the disease than the latter [37].
Mvoulana et al. [51] proposed using the K-means clustering algorithm with an intensity-based nearness criterion for pixel classification, followed by a model-based boundary fitting approach employing the circular Hough transform for OC and OD segmentation. They later calculated the CDR value and used a threshold to classify healthy and glaucoma patients. The method achieved 98% accuracy on the DRISHTI-GS1 dataset for the final glaucoma diagnosis.
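As a concrete illustration of this CDR-based screening step, the following sketch computes a vertical CDR from binary cup and disc segmentation masks and applies a screening threshold; the 0.6 cut-off is a common clinical rule of thumb used here only as a placeholder, not the threshold proposed in [50] or [51].

```python
import numpy as np

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks (H x W, values 0/1)."""
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_height = cup_rows.max() - cup_rows.min() + 1
    disc_height = disc_rows.max() - disc_rows.min() + 1
    return cup_height / disc_height

def screen(cup_mask: np.ndarray, disc_mask: np.ndarray, threshold: float = 0.6) -> str:
    """Flag an image as a glaucoma suspect when the vertical CDR exceeds the threshold."""
    return "suspect" if vertical_cdr(cup_mask, disc_mask) > threshold else "normal"
```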
Deep learning models have shown promising results in various image analysis tasks, including image segmentation [52]. For example, Jiang et al. [53] proposed to use two Faster R-CNNs [54] to segment the OC and OD separately and find the minimal bounding boxes for the two regions. The authors extended their work in [40], where they focused on the joint OC and OD segmentation problem by proposing JointRCNN, an end-to-end region-based convolutional neural network. The JointRCNN consists of four major parts: feature extraction module, Disc proposal network, Disc attention module and Cup proposal network. The feature extraction module is shared by OC and OD segmentation tasks, and atrous convolution [55] is used to improve the feature extraction performance in this module. The Disc and Cup proposal networks generate bounding box proposals, and the Disc attention module is proposed to connect the two networks. To improve the segmentation performance, Fu et al. [42] proposed using polar transformation along with a multi-label deep network (M-Net) for joint segmentation of the OC and OD in retinal images. The proposed M-Net consists of a multi-scale input layer to construct an image pyramid, a U-shape convolutional network to learn the rich hierarchical representation, a side-output layer as an early classifier that produces a companion local prediction map for different scale layers, and a multi-label loss function to generate the final segmentation map. They calculated the CDR value for glaucoma screening and evaluated the performance of the proposed method on the ORIGA and Singapore Chinese Eye Study (SCES) datasets achieving an AUC of 85.08% and 89.98%, respectively. Liu et al. [45] proposed a joint OC and OD segmentation method based on a semi-supervised model to take advantage of both labeled and unlabeled data to improve the segmentation performance. The proposed conditional Generative Adversarial Nets (cGAN)-based architecture consists of a segmentation net, a generator and a discriminator to learn a mapping between the fundus images and the corresponding segmentation maps. Both the segmentation net and the generator in the proposed framework focus on learning the conditional distributions between fundus images and their corresponding segmentation maps. At the same time, the discriminator determines whether the image-label pairs come from the empirical joint distribution. Furthermore, the proposed method performed better than its fully-supervised version on both ORIGA and REFUGE datasets, with AUC values of 86.22% and 90.11% and accuracy rates of 76.57% and 82.78%, respectively.
While segmentation-based methods have shown effective performance, accurate CDR measurement remains challenging due to factors such as overlapping regions, low contrast between regions, shape variability and inhomogeneity of the OD, insufficient labels, and errors in intermediate steps. Zhao et al. [56] proposed a direct CDR estimation method based on a semi-supervised learning scheme that bypasses the intermediate segmentation step. The proposed two-stage cascaded approach consisted of two phases: unsupervised feature representation of fundus image with a Convolutional Neural Network (CNN) and CDR value regression using a Random Forest (RF) regressor. The proposed model achieved an AUC of 90.50% for glaucoma diagnosis on a dataset of 421 fundus images. Zhou et al. [57] proposed an adaptive weighted locality-constrained sparse coding approach for glaucoma diagnosis, which combines locality constraint with sparse constraint and employs a weighted locality constraint constructed by adaptively combining multiple distance measurements. The proposed approach achieved an accuracy of 88.63% and 85.56% in diagnosing CDR for DRISHTI-GS and RIM-ONE-r2, respectively.
To predict the average RNFL thickness previously extracted from Spectral Domain OCT (SD-OCT\({}^{3}\)) scans and allow for quantification of neural damage, Medeiros et al. [58] proposed a deep learning model based on fundus OD images. They used a private dataset containing 32,820 pairs of OD fundus images and SD-OCT scans to predict average RNFL thickness. They also assessed the ability of predicted and actual RNFL thickness values to discriminate between glaucomatous and healthy eyes, achieving AUCs of 94.00% and 94.40%, respectively. Raja et al. [59] proposed a novel approach to objectively grade glaucoma as an early suspect or advanced stage based on the degeneration of RGCs. They first segmented the RNFL, GCIPL, and GCC regions and extracted their thickness information. Subsequently, they employed a Support Vector Machine (SVM) to evaluate the severity of glaucoma, achieving a high accuracy of 91.17% on the AFIO dataset. Lee et al. [60] also proposed a deep learning framework based on NASNet [61] for diagnosing glaucoma using SD-OCT images. They extracted features from the GCIPL thickness map, GCIPL deviation map, RNFL thickness map, and RNFL deviation map and fed the extracted features into the deep learning classifier for glaucoma diagnosis. The proposed model achieved an AUC of 99.00% with a sensitivity of 94.7% and a specificity of 100.0% on a private dataset with 350 glaucomatous and 307 healthy SD-OCT image sets.
Footnote 3: SD-OCT is a type of OCT that uses a faster scanning speed and higher resolution to produce more detailed images of the eye’s internal structures compared to traditional time-domain OCT.
### _Statistical Features_
Statistical features such as intensity-based, texture-based, morphological, and wavelet-based features are used to extract quantitative measurements from the medical data [62]. In glaucoma diagnosis, these features can help differentiate between normal and abnormal eyes. Claro et al. [63] conducted an extensive study to determine the best set of features for fundus image representation, which included Local Binary Pattern, Gray Level Co-occurrence Matrix (GLCM) [64], Histogram of Oriented Gradients (HOG) [65], Tamura [66], Gray Level Run Length Matrix (GLRM) [67], morphology, and seven CNN architectures, yielding a
30,682-D feature vector. They then used the gain ratio algorithm for feature selection and concluded that a combination of GLCM features and pre-trained CNNs yields the best glaucoma diagnosis accuracy of 92.78% on 1,675 images from the DRISHTI-GS1, RIM-ONE, HRF, JSIEC, and ACRIMA datasets.
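As an illustration of the texture descriptors mentioned above, the sketch below extracts a few GLCM statistics from a grayscale image with scikit-image; the offsets and properties are example settings, not the exact configuration used in [63].

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image: np.ndarray) -> np.ndarray:
    """GLCM contrast, homogeneity, energy, and correlation at two offsets (example settings)."""
    glcm = graycomatrix(gray_image, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Example with a dummy 8-bit image; in practice this would be a cropped fundus region.
features = glcm_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
```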
Juneja et al. [68] extracted GLRM and GLCM features from the wavelet-filtered OCT images, along with 18 other statistical features. Thereafter, discriminative features were selected using the gain ratio, information gain and correlation statistical methods. In addition, they used a 3D-CNN architecture to extract features and perform classification. They finally used majority voting and weighted decision fusion strategies to provide the final classification results taking into account K-nearest neighbour (k-NN), RF, SVM and the probability given by the 3D-CNN model. Based on the experimental results, the proposed framework achieved a precision of 95.00%, sensitivity of 97.00% and F1-score of 96% on a private dataset of 1,110 OCT scans (847 glaucoma cases and 263 normal cases). Maheshwari et al. [69] proposed using empirical wavelet transform (EWT) to decompose fundus images and obtaining correntropy features from decomposed EWT components for glaucoma diagnosis. These extracted features are then ranked based on the t-value feature selection algorithm and fed to a least-squares SVM for normal and glaucoma image classification, achieving an accuracy of 98.33% and a specificity of 96.67% on a private and a public database. Nayak et al. [70] proposed an automatic feature extraction method based on a meta-heuristic optimization algorithm called a re-coded genetic algorithm. To extract high-level features from the fundus images directly, the proposed method adopts a strategy based on maximizing the inter-class distance and minimizing intra-class variability. The final feature vectors are then used in conjunction with an SVM classifier for glaucoma diagnosis. The experimental results on a dataset of 1,426 fundus images (589 normal and 837 glaucoma) yielded an accuracy of 97.20%.
Extracting the Region of Interest (ROI) speeds up subsequent processing by excluding irrelevant image regions. As previously stated, the OD is an important ROI region in glaucoma diagnosis. For example, Vinicius dos Santos Ferreira et al. [71] proposed a framework for OD semantic segmentation based on the U-Net model. Texture features were then extracted from both the RGB channels and gray levels of the segmented region using phylogenetic diversity indexes. The proposed approach resulted in an accuracy of 98.50%, with a sensitivity of 98.00%, specificity of 100%, F1-score of 96.00%, and AUC of 98.10% on the RIM-ONE, DRIONS-DB, and DRISHTI-GS datasets for glaucoma diagnosis. Bisneto et al. [72] also proposed a cGAN with a U-Net generator and a PatchGAN discriminator for OD segmentation. They used cGAN in conjunction with taxonomic indexes to extract textural attributes from the segmented OD region for glaucoma classification. Three different classifiers, namely Multilayer Perceptron (MLP), Sequential Minimal Optimization, and RF, were utilized in diagnosing glaucoma on the RIM-ONE and DRISHTI-GS datasets. All of the classifiers achieved 100% accuracy and AUC.
### _Hybrid Features_
Combining structural and statistical features can improve the performance of glaucoma diagnosis models by incorporating different perspectives. While structural features provide information about clinical measurements and anatomical changes in the retina, statistical features provide image-based information. As a result, more comprehensive information for effective glaucoma screening can be obtained by utilizing hybrid features. To this end, Balasubramanian and N.P. [73] extracted structural and statistical features from fundus images and presented correlation-based feature selection algorithms as well as a Kernel-Extreme Learning Machine classifier for glaucoma diagnosis. To extract structural features, they first segmented OD and OC using a Fuzzy C-Means Clustering algorithm and calculated CDR and Cup shape features. They further extracted features like GLCM, Anisotropic Dual-Tree Complex Wavelet Transform (ADTCWT) [74], Fractal Texture Analysis [75], SURF [76], Pyramid HOG [77], Mean Gray-Level, Color Intensity Features, and Super-pixels. The experimental results demonstrated a maximum overall accuracy of 99.61% with 99.89% sensitivity and 100% specificity on the public and private retinal fundus datasets containing 7,280 images. Guo et al. [78] proposed the increasing field of view (IFOV) feature model to fully extract textural, statistical, and other hidden image-based features. In the IFOV model, there are four different image scales, ranging from small to large: OC region, OD region, ROI (cropped image around OD), and global fundus image. They then extracted CDR and statistical features from images in different scales using the Gabor transform and GLCM, followed by feature selection using the adaptive synthetic sampling approach [79]. Finally, the extracted features are used to train a gradient-boosting decision tree classifier for glaucoma screening resulting in 84.30% and 83.70% accuracy on ORIGA and DRISHTI-GS1 datasets, respectively.
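A minimal sketch of this hybrid idea, assuming structural measurements (e.g., CDR) and texture descriptors have already been computed for each image, concatenates them and trains a gradient-boosting classifier; the feature layout, random placeholder data, and model settings are illustrative, not those of [78].

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-image features: structural (e.g., CDR, rim-to-disc ratio)
# plus statistical descriptors (e.g., GLCM/Gabor statistics). Random placeholders here.
rng = np.random.default_rng(0)
structural = rng.random((650, 2))
statistical = rng.random((650, 8))
X = np.concatenate([structural, statistical], axis=1)
y = rng.integers(0, 2, size=650)  # 0 = healthy, 1 = glaucoma (placeholder labels)

clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, random_state=0)
clf.fit(X, y)
probabilities = clf.predict_proba(X[:5])[:, 1]
```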
Thakur and Juneja [80] presented a set of reduced hybrid features derived from structural and non-structural features to classify retinal fundus images. The structural features included the CDR and the Disc damage likelihood scale, whereas the non-structural features included GLRM, GLCM, first-order statistical features, higher-order spectra, higher-order cumulants, and wavelets. They performed feature selection using the wrapper approach and used different classifiers, including k-NN, Neural Network (NN), RF, SVM, and Naive Bayes (NB), for glaucoma diagnosis. Among all the classifiers, SVM exhibited the highest performance with an accuracy of 97.20%, a specificity of 96.00%, a precision of 97.00%, and a sensitivity of 97.00% on the DRISHTI-GS and RIM-ONE datasets. Kausu et al. [81] proposed a method for glaucoma identification based on the time-invariant CDR feature and ADTCWT features. They first segmented the OD using the Fuzzy C-Means clustering method and the OC using Otsu's thresholding. An MLP model is finally used for glaucoma classification, achieving an accuracy of 97.67% with 98% sensitivity. The dataset used in this paper was collected from the Venu Eye Institute & Research Centre in New Delhi, India, and contained a total of 86 images, 51 of which were healthy and 35 from glaucoma patients.
The Inferior Superior Nasal Temporal (ISNT) rule is one
of the widely used techniques for assessing structural damage to the optic nerve head in clinical practice. According to the ISNT rule, the thickness of the NRR in normal eyes decreases in the following order: inferior region \(>\) superior region \(>\) nasal region \(>\) temporal region, whereas the NRR in glaucomatous optic discs violates this rule [82]. Pathan et al. [83] extracted clinical features from the segmented OD and OC regions, including CDR estimation and ISNT rule verification in the NRR area. They further extracted color and texture features from the NRR area to analyze glaucoma-related changes in fundus images. Three color spaces were used to extract color features, including RGB, CIEL*a*b, and HSV, while textural features were extracted using the GLCM approach. Finally, to diagnose glaucoma from normal samples, they used an SVM, a three-layered NN, and an AdaBoost classifier with dynamic ensemble selection. The SVM algorithm demonstrated the highest accuracy of 95.00% and 90.00% on the DRISHTI-GS1 and a private dataset, respectively. Singh et al. [84] also extracted ISNT regions and CDR, along with 20 statistical features such as Homogeneity, Contrast, and Correlation. They used the combination of four machine learning algorithms (i.e., SVM, K-NN, NB, and a 3-layered MLP) and achieved 95.82% sensitivity, 98.59% specificity with an accuracy of 98.60% on the DRIONS-DB dataset. Martins et al. [85] segmented the OC and OD and calculated morphological features. The extracted features included CDR, vertical length CDR, Rim-to-disc area ratio, which also provides an interpretation of the ONH shape, and ISNT values and rule compliance. The proposed pipeline also included a glaucoma confidence level assessment using a classification network with a MobileNetV2 [86] feature extractor as a backbone. The final decision combines the glaucoma confidence level with the calculated morphological features, yielding an accuracy of 87.00%, sensitivity of 85.00%, specificity of 88.00%, and AUC of 93.00% on a merged dataset of several publicly available datasets.
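A minimal check of the ISNT ordering on four rim-thickness measurements might look like the sketch below; the input values are hypothetical measurements in arbitrary units.

```python
def satisfies_isnt(inferior: float, superior: float, nasal: float, temporal: float) -> bool:
    """ISNT rule: neuroretinal rim thickness should decrease from inferior to temporal."""
    return inferior > superior > nasal > temporal

# Hypothetical rim-thickness measurements; a violation suggests a glaucomatous disc.
print(satisfies_isnt(inferior=0.42, superior=0.38, nasal=0.30, temporal=0.25))  # True
print(satisfies_isnt(inferior=0.30, superior=0.35, nasal=0.28, temporal=0.26))  # False
```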
Different types of ophthalmic images provide information on retinal pathology from various angles, and combining them can aid in glaucoma diagnosis. Chen et al. [17] proposed an automatic method for early glaucoma screening using Enhanced Depth Imaging OCT (EDI-OCT) and fundus images. The method includes segmenting the anterior lamina cribrosa surface in EDI-OCT images with a region-aware strategy and residual U-Net and extracting structural features such as lamina cribrosa depth and deformation. Similarly, in fundus images, scanning lines and brightness compensation are used to segment the OC and OD regions, and the CDR and textural features are extracted. Hybrid features that combine structural parameters from EDI-OCT and textural features from fundus images are then used for training and classification to screen glaucoma in the early stage using gcForest. The proposed method achieved an accuracy of 96.88% with 91.67% sensitivity on a private dataset.
## 6 End-to-end Glaucoma Classification
End-to-end deep learning models have shown promising results in glaucoma classification, outperforming feature-based methods [87, 88, 89, 90, 91, 92, 93, 22, 39]. Because of their ability to incorporate holistic contextual information in the training process, end-to-end deep learning models can potentially reduce the risk of information loss and improve generalizability. For example, Hemelings et al. [94] showed that end-to-end deep learning models take advantage of contextual information outside the optic nerve head region in fundus images to detect glaucoma and estimate the CDR. In this section, we review studies that mainly focus on these models for diagnosing glaucoma. We classify these models mainly based on their architectures into Convolutional Neural Networks, Autoencoder-based Networks, Attention-based Networks, Generative Adversarial Networks, Geometric Deep Learning Networks, and Hybrid Networks.
### _Convolutional Neural Networks_
Convolutional neural networks (CNNs) have been widely used for glaucoma diagnosis [95, 96, 97, 98, 99, 38]. In general, these models are used to extract higher-level features from raw image data, with earlier layers focusing on simple features such as colors and edges and later layers identifying more complex shapes and structures. The CNN architecture consists of different types of layers, including convolutional, pooling, and fully connected layers. Convolutional layers apply learned filters to the input image, generating activation feature maps that represent detected features of different complexity and level of detail. The pooling layer is used to reduce the dimensionality of the feature maps to decrease computational complexity. Fully connected layers finally map the extracted high-level features to the desired output classes. Many studies reviewed in this paper employed state-of-the-art CNN architectures, including Inception-v3 [100], ResNet [101], EfficientNet [102], and DenseNet [103] pre-trained on ImageNet [104]. These studies either used the pre-trained architectures without further modifications or slightly modified them to fit the research's objectives. Additionally, some studies developed new CNN architectures entirely from scratch, tailoring the model design specifically for the glaucoma classification task. Table III summarizes the papers that used CNNs for glaucoma classification.
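To make the layer vocabulary above concrete, the following minimal PyTorch sketch defines a small CNN classifier for fundus images; the layer sizes and depth are illustrative and do not correspond to any specific model reviewed here. In practice, most reviewed works instead fine-tune an ImageNet-pretrained backbone such as ResNet or EfficientNet.

```python
import torch
import torch.nn as nn

class SimpleGlaucomaCNN(nn.Module):
    """Toy CNN: stacked conv/pool blocks followed by a fully connected classifier head."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = SimpleGlaucomaCNN()(torch.randn(4, 3, 224, 224))  # -> shape (4, 2)
```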
Wang et al. [109] proposed an end-to-end semi-supervised multi-task learning CNN for classifying OCT B-scan images as glaucoma or normal and investigating the relationship between structural and functional changes in glaucoma eyes. The proposed CNN comprises three components: a shared feature extraction module with a ResNet-18 backbone, a glaucoma classification module, and a VF measurement regression module. To develop and test the proposed method, they also created one of the largest glaucoma OCT image datasets (_i.e.,_ HK dataset) with 975,400 B-scans from 4,877 volumes. Zhao et al. [98] also proposed a Weakly-Supervised Multi-Task Learning (WSMTL) method for accurate evidence identification, OD segmentation, and automated glaucoma diagnosis. WSMTL consists of a skip and densely connected CNN for multi-scale feature representation of fundus structure, a pyramid integration structure for generating high-resolution evidence maps, a constrained clustering branch for OD segmentation, and a fully-connected discriminator for automated glaucoma
diagnosis. The model is trained on weakly labeled data with binary diagnostic labels (normal/glaucoma), and the output is a pixel-level segmentation mask and glaucoma diagnosis label. Liao et al. [108] proposed a clinically interpretable CNN architecture (EAMNet) for glaucoma diagnosis. EAMNet aggregates the features extracted from a CNN backbone with ResBlock at various scales to bridge the gap between semantic and localization information at multiple scales in order to improve glaucoma diagnosis accuracy. Additionally, EAMNet generates refined evidence activation maps that highlight the glaucoma-specific discriminative regions recognized by the network, aiming to provide a more transparent interpretation.
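A schematic version of such a shared-backbone multi-task design, with one head for glaucoma classification and one for VF-measurement regression, is sketched below; it follows the high-level description of [109] but assumes a plain ResNet-18 encoder and a placeholder number of VF values, and is not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskGlaucomaNet(nn.Module):
    """Shared ResNet-18 features with a classification head and a VF-regression head."""
    def __init__(self, num_vf_values: int = 52):
        super().__init__()
        backbone = resnet18()  # randomly initialized backbone for this sketch
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the final FC layer
        self.cls_head = nn.Linear(512, 2)             # normal vs. glaucoma logits
        self.vf_head = nn.Linear(512, num_vf_values)  # e.g., pointwise VF sensitivities

    def forward(self, x):
        h = self.encoder(x).flatten(1)
        return self.cls_head(h), self.vf_head(h)

cls_logits, vf_pred = MultiTaskGlaucomaNet()(torch.randn(2, 3, 224, 224))
```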
Xue et al. [21] proposed a three-phased framework for 1) screening, 2) detecting glaucoma from normal, and 3) classifying glaucoma into four severity levels. In the first phase, they used IOP to screen out patients with glaucoma. Two distinct ResNet architectures, namely DetectionNet and ClassificationNet, were trained independently during the second and third phases, respectively. In the case of DetectionNet, the fusion of fundus and Voronoi VF images served as input for the classification of normal and glaucoma cases. Conversely, ClassificationNet utilized a Voronoi VF image as input to categorize glaucoma severity into mild, moderate, or severe. Jun et al. [97] also proposed a Transferable Ranking Convolutional Neural Network (TRk-CNN) for the multi-class classification of normal, glaucoma suspect, and glaucoma eyes. TRk-CNN employs DenseNet as the backbone CNN model and combines the weights of the primitive classification model to reflect inter-class information in the final classification phase, where there is a high correlation between classes.
### _Autoencoder-based Networks_
An autoencoder is a type of neural network that uses an encoder-decoder structure to extract important features from input data. The encoder maps the input into a lower-dimensional latent space, and the decoder maps the latent representation back to the original input space. This mechanism allows the autoencoder to capture meaningful data representations, which has significant potential for improving glaucoma detection methodologies. For instance, Raghavendra et al. [90] proposed a two-layer sparse autoencoder to extract effective and important features from fundus images for glaucoma detection. The proposed network consists of two cascaded autoencoders for unsupervised feature learning and a Softmax layer for supervised glaucoma classification. Table IV summarizes the papers that used an autoencoder-based architecture for glaucoma diagnosis.
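As a generic illustration of this encoder-decoder idea (not the sparse autoencoder of [90] or any specific reviewed architecture), the sketch below pairs a small convolutional autoencoder with a classification head attached to the latent code; training would minimize a weighted sum of a reconstruction loss on the first output and a cross-entropy loss on the second.

```python
import torch
import torch.nn as nn

class AutoencoderClassifier(nn.Module):
    """Convolutional autoencoder whose latent code also feeds a glaucoma classifier."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(latent_dim, 2)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)  # reconstruction + class logits

recon, logits = AutoencoderClassifier()(torch.randn(2, 3, 64, 64))
```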
Pal et al. [113] proposed a deep learning model (G-EyeNet) for detecting glaucoma using an encoder-decoder framework. G-EyeNet comprises an encoder, decoder, and classifier module. The encoder-decoder structure is used for image reconstruction and unsupervised feature learning, while the classifier module uses the latent-space distribution learnt by the encoder to classify glaucoma.
\begin{table}
\begin{tabular}{l l l l l l l l l l l l} \hline \hline \multirow{2}{*}{**Study**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Data type**} & \multirow{2}{*}{**Dataset**} & \multicolumn{6}{c}{**Classification Performance Measures (\%)**} & \multirow{2}{*}{**Code**} \\ \cline{6-9} & & & & **ACC** & & **SEN** & & **SPE** & & **PRC** & **F1-score** & **AUC** \\ \hline
[88] & 18-layer CNN & Fundus & Private (589 N, 837 G) & 98.13 & 98.00 & 98.30 & - & - & - & - & \\ & & & Private (1,586 G, 2,244 & 93.29 & 96.03 & 91.42 & - & - & 98.29 & Link \\
[106]* & & & Fundus & N), RIGA & 95.81 & 98.40 & 94.22 & - & - & 99.49 & \\ & & & & Private (29,466 N, 2,620 & - & 95.60 & 92.00 & - & - & 98.60 & - \\
[96] & & & Private (403 N, 208 S, & 88.94 & 58.37 & 58.37 & 74.47 (S & 79.55 (S & & \\ & & & & Vs. N), & 89.33 & & Vs. N), & & \\
[97] & TRk-CNN & Fundus & Private (403 N, 208 S, & 88.94 & 90.36 & 94.94 & 94.94 & 92.59 (G & - & - \\ & & & & (G Vs. & (G Vs. & (G Vs. & Vs. N) & & \\ & & & & N) & & & N) & & \\
[106] & & & Private (Training: 1,424 & - & - & - & - & - & 96.50 & - \\ & & & G, 1,818 N) & & - & - & - & - & - & 96.50 & - \\
[107] & 3-layer CNN & Fundus & Private (1,542, 786 N, & 87.90 & - & - & - & - & 94.00 & - \\
[98] & & & Private (1,542, 786 N, & 87.90 & - & - & - & - & 94.00 & - \\
[108] & & & Private (2,966 N, 2,961 N) & - & - & - & - & - & 92.00 & - \\
[109] & & & Private (1,695 N, 1,201 & - & - & - & - & - & 88.00 & - \\ & & & & & 85.50 & & & & & 88.10 & \\ & & & Private (1,695 N, 1,201 & (G Vs. & & & & (G Vs. & \\ & & & mild, 1,607 moderate, & 8), & & - & - & - & - & 95.60 & - \\ & & & & 1,868 severe) & & (Multi- & & & & (Multi- & \\ & & & class1) & & & & & & & (class) & \\
[38]* & 7-layer CNN & VF & Rotterdam & - & - & - & - & 87.40, & & - & \\ & & & Budapest & - & - & - & 98.60 & - & - & \\
[109]* & CNN & OCT & HK (2,926 G, 1,951 N) & 92.70 & - & - & - & 94.10 & 97.70 & - \\ & & & Stanford (806 G, 425 N) & 86.00 & - & - & - & 88.90 & 93.30 & - \\
[99] & ResNet-34 & OCT & Private (2,926 G, 1,951 N) & 91.00 & 89.00 & 96.00 & - & - & 96.90 & - \\
[87] & ResNet-34 & OCT & Private (612 G, 542 N) & - & - & - & - & - & 96.00 & - \\ & & & Private (192 G, 545 N) & 90.40 & - & - & - & - & - & - \\
[110]* & Inception-v3 & OCT & Private (57 G, 44 N) & 91.10 & - & - & - & - & - & - \\
[111] & 6-layer CNN & OCT & Private (1,579 G, 359 N) & - & - & - & - & - & 93.70 & - \\ \hline \hline \end{tabular} \({}^{\ddagger}\) Severity classification between mild, moderate and severe glaucoma
\end{table} TABLE III: Comparison of papers utilizing CNN architectures for glaucoma classification. Studies with (*) mark tested their model on an auxiliary dataset. N: Normal, G: Glaucoma, S: Glaucoma Suspect, A: Advanced glaucoma, E: Early glaucoma, Avg: Average.
The framework is trained using multi-task learning to minimize reconstruction and classification losses. Raja et al. [59] proposed a hybrid autoencoder-based convolutional network framework (RAG-Net\({}_{v2}\)) for segmenting RGC regions and classifying glaucoma. RAG-Net\({}_{v2}\) is first trained to extract RGC regions, particularly the RNFL, GCIPL, and GCC regions. The RAG-Net\({}_{v2}\) encoder end is then used to perform RGC-aware classification of healthy and glaucoma samples.
Hervella et al. [89] proposed a multi-task approach for simultaneous OD and OC segmentation and glaucoma classification. The proposed network consists of an encoder-decoder structure that is shared between tasks and takes advantage of both pixel-level and image-level labels during network training. In addition, they used a multi-adaptive optimization strategy to ensure that both tasks contribute equally to parameter updates during training, avoiding the use of loss weighting hyperparameters. Pascal et al. [112] proposed a multi-task deep learning model to detect glaucoma in retinal fundus images while segmenting the OD and OC and locating the fovea. The proposed model is trained using a U-Net encoder-decoder convolutional network as a backbone architecture and is adapted to handle the four tasks using independent optimizers. The multi-task model outperforms the single task of detecting glaucoma because it leverages related tasks and their similarities to achieve better performance. Ren et al. [114] also proposed a task decomposition framework based on an encoder-decoder architecture for both semantic segmentation of OC and OD and glaucoma classification. The three subsequent subtasks that they proposed are (1) pixel-wise semantic segmentation of fundus images, (2) prediction of OD and OC instance class labels, and (3) classification of glaucoma and normal fundus images. The framework used a sync-regularization to penalize the deviation between the outputs of pixel-wise semantic segmentation and the instance class prediction tasks and outperformed the single-task model.
### _Attention-based Networks_
The attention mechanism in deep learning enables models to selectively focus on the most important parts of input data, enhancing prediction accuracy and efficiency. Using attention in glaucoma screening models intuitively satisfies the need to focus on the key pathological areas rather than other redundant information. In the reviewed literature, the term "attention" is used to describe two distinct concepts. The first concept is attention in terms of models focusing on specific parts of the input data through the use of auxiliary information, such as attention maps obtained from domain experts [92, 24], heatmaps generated by Grad-CAM [115], and segmentations of disease-related regions in the input [40, 59]. The second concept refers to attention-based architectures, which are also the basis of Vision Transformers (ViT) [116]. We discuss these concepts further in the following, and a summary of the reviewed papers is presented in Table V.
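To ground the second notion of attention, the sketch below implements generic channel and spatial attention modules of the kind used in dual-pathway designs (squeeze-and-excitation-style channel weighting followed by a spatial gate); it is an illustrative example, not the module of any reviewed paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweight feature channels using globally pooled statistics (SE-style)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.mlp(x.mean(dim=(2, 3)))  # (B, C) channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Highlight salient spatial locations with a single-channel gate."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

feats = torch.randn(2, 64, 28, 28)
out = SpatialAttention()(ChannelAttention(64)(feats))  # same shape as feats
```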
Li et al. [92] proposed an attention-based CNN for glaucoma detection (AG-CNN), which highlights salient regions by incorporating attention maps to remove redundancy from fundus images. The proposed model comprises three subnets: an attention prediction subnet, a pathological area localization subnet and a glaucoma classification subnet. The attention maps in this paper were collected in the large-scale attention-based glaucoma (LAG) database through a simulated eye-tracking experiment. The ophthalmologists provided attention maps by focusing on the salient regions of the fundus images. The LAG database includes 5,824 fundus images labeled with either positive glaucoma or negative glaucoma. They extended their work in [24] to enlarge the LAG database to 11,760 fundus images. In addition to the supervised method presented in [92], this paper further proposed a weakly supervised learning to incorporate attention maps in a weakly supervised manner for glaucoma detection. George et al. [115] proposed an end-to-end attention-guided 3D CNN model for glaucoma detection and visual field index (VFI) estimation using high-resolution 3D OCT volumes. The proposed model consists of three pathways sharing the same network architecture. The first pathway takes the raw 3D-OCT cube as input and learns global retinal structures relevant to glaucoma detection and VFI estimation. Similarly, the inputs of the other two pathways are computed during training, guided by the 3D Grad-CAM [117] attention heatmaps. Each pathway outputs the class label, and the entire model is trained concurrently to minimize the sum of the three losses. The final output is obtained by fusing the predictions of the three pathways.
Garcia et al. [129] proposed a hybrid neural network with hand-driven features and a deep learning backbone that has skip-connections to include tailored residual and attention modules to refine the automatic features of the latent space. They specifically fed the backbone model with raw OCT B-scans and used a descriptor to extract RNFL thickness-based information as hand-driven features. The model was trained using a few-shot learning technique for discriminating between healthy, early and advanced glaucoma scans. Zhao et al. [91] proposed a student-teacher framework in which both the student and teacher models have an identical network architecture consisting of two separate attention pathways (_i.e.,_ spatial and channel attention modules).
TABLE IV: Comparison of papers utilizing autoencoder-based architectures for glaucoma classification. Studies with (*) mark tested their model on an auxiliary dataset. SAE: Sparse Autoencoder, N: Normal, G: Glaucoma.

| Study | Model | Data type | Dataset | ACC (%) | SEN (%) | SPE (%) | PRC (%) | F1-score (%) | AUC (%) | Code |
|---|---|---|---|---|---|---|---|---|---|---|
| [89]* | U-Net-based | Fundus | REFUGE | - | - | - | - | - | 97.60 | - |
| [112] | U-Net-based | Fundus | REFUGE | - | - | - | - | - | 94.74 | - |
| [90] | SAE | Fundus | Private (589 N, 837 G) | 95.30 | 95.20 | - | 96.80 | 95.00 | - | - |
| [113] | G-EyeNet | Fundus | DRIONS-DB | - | - | - | - | - | 92.30 | - |
| [59] | RAG-Net\({}_{v2}\) | OCT | AFIO | 94.91 | 97.14 | 91.66 | 94.44 | 95.77 | 98.71 | Link |
The spatial attention module identifies salient image regions and prunes feature responses, while the channel attention module identifies salient feature channels to preserve the activations relevant to the glaucoma diagnosis task. The proposed framework aims to improve glaucoma diagnosis on imbalanced data by augmenting feature distribution with feature distilling and re-weighting. Guo et al. [119] proposed a multitask teacher-student framework for unbiased glaucoma screenings and visualizations of model decision-making areas. The teacher network in the proposed framework utilizes a ResNet-34 backbone to extract semantic feature maps of different depths for constructing a multi-scale discrimination module and adopts the self-attention mechanism module to make the network pay attention to spatial information and channel information at the same time. This ensures the quality of the generated evidence map and provides reliable preliminary results for glaucoma classification. Meanwhile, the student network, incorporating a dual-branch CNN structure and a collaborative learning module, simultaneously performs glaucoma diagnosis and generates the corresponding evidence map.
The ViT is a deep learning architecture that utilizes a self-attention mechanism to capture the global characteristics of an image. ViT applies the transformer model, which was originally designed for natural language processing, to computer vision tasks [116]. Wassel et al. [120] evaluated the performance of more than seven different ViT baseline models for glaucoma detection on a combined dataset of six publicly available fundus image datasets. They also proposed an ensemble of the best ViT models. Swin [121] achieved the best standalone performance with 92.57% in sensitivity, 96.94% in specificity, and 97.90% in AUC. Xu et al. [128] proposed a Transfer Induced Attention Network (TIA-Net) for automatic glaucoma detection. TIA-Net leverages the fundus features learned from similar ophthalmic data to extract general features. Channel-wise attention and maximum mean discrepancy are then adopted to extract the discriminative features that fully characterize the glaucoma-related deep patterns. As a result, the proposed method achieves a smooth transition between general and specific features, thus enhancing feature transferability. Song et al. [18] proposed a Deep Relation Transformer (DRT) method for diagnosing glaucoma based on the combined OCT and VF modalities. The proposed framework includes three successive modules: the global relation module, the guided regional relation module, and the interaction transformer module. These modules utilize deep reasoning and transformer mechanisms to explore implicit pairwise relations between OCT and VF information and enhance the representation with complementary information. The proposed DRT approach outperforms existing methods and has the potential to accurately diagnose glaucoma using multimodal data.
### _Generative Adversarial Networks_
Generative Adversarial Networks (GANs) are a type of deep neural network architecture with the ability to generate new samples from a given probability distribution [130].
TABLE V: Comparison of papers utilizing attention-based architectures for glaucoma classification. Studies marked with (*) tested their model on an auxiliary dataset. N: Normal, G: Glaucoma, S: Glaucoma Suspect, A: Advanced glaucoma, E: Early glaucoma, Avg: Average, Att: Attention module.

| Study | Model | Data type | Dataset | ACC | SEN | SPE | PRC | F1-score | AUC |
|---|---|---|---|---|---|---|---|---|---|
| [91]* | CNN + Att | Fundus | LAG | 97.12 | 95.20 | 98.16 | - | 95.47 | 99.28 |
| | | | REFUGE | 95.25 | 80.00 | 96.94 | - | 78.82 | 93.32 |
| | | | RIM-ONE | 93.96 | 89.74 | 97.12 | - | 90.91 | 97.19 |
| [118] | CNN + Att | Fundus | LAG | 97.12 | 97.21 | 97.07 | - | 96.65 | 99.31 |
| [119] | CNN + Att | Fundus | LAG | 96.70 | 96.10 | 97.00 | - | 95.00 | 99.60 |
| [120] | Swin [121] | Fundus | 6 publicly available datasets‡ | 93.20 | 92.57 | 93.43 | - | - | 97.77 |
| | - | | | 94.50 | 89.92 | 96.04 | - | - | 97.90 |
| | CrossViT [123] | | | 94.30 | 86.73 | 96.94 | - | - | 96.87 |
| | XciT [124] | | | 93.55 | 88.60 | 95.23 | - | - | 97.20 |
| | - | | | 91.50 | 85.94 | 93.43 | - | - | 96.00 |
| | - | | | 88.00 | 81.70 | 90.10 | - | - | 94.60 |
| | ViT [116] | | | 87.40 | 77.20 | 90.60 | - | - | 92.60 |
| | - | | | 85.50 | 83.82 | 86.15 | - | - | 92.70 |
| [128]* | TIA-Net | Fundus | Private (1,005 …) | 85.70 | 84.90 | 86.90 | - | - | 92.90 |
| | | | ORIGA | 76.60 | 75.30 | 77.20 | - | - | 83.50 |
| [92]* | AG-CNN | Fundus | LAG | 95.30 | 95.40 | 95.20 | - | 95.10 | 97.50 |
| | | | RIM-ONE | 85.20 | 84.80 | 85.50 | - | 83.70 | 91.60 |
| [24] | AG-CNN | Fundus | LAG | 92.20 | 95.40 | 96.70 | - | 95.40 | 98.30 |
| [129] | CNN + Att | OCT | Private (90 N, 72 E, 57 A) | 87.88 (Avg) | 81.82 (Avg) | 90.91 (Avg) | 81.82 (Avg) | 81.82 (Avg) | - |
| [18] | DRT | OCT, VF | Private (697 G, 698 N) | 88.30 | 93.70 | 82.40 | - | 88.90 | 93.90 |
| [115] | CNN + Grad-CAM | OCT | Private (427 N, 3,355 G) | 91.07 | 95.12 | - | 94.73 | 94.88 | 93.77 |

‡ LAG, ODIR-5K, ORIGA, REFUGE, DRISHTI-GS1, HRF.
Avg: average categorical results for discriminating between healthy, early, and advanced glaucoma samples.
new samples from a given probability distribution [130]. GANs consist of two parts: the generator and the discriminator. The generator is trained to produce realistic samples, while the discriminator is trained to differentiate between synthetic data and real data. Diaz-Pinto et al. [131] proposed a framework for glaucoma assessment by developing a fundus image synthesizer and a semi-supervised learning method using the Deep Convolutional Generative Adversarial Network (DCGAN) [132] architecture. The architectures of the image synthesizer and the semi-supervised learning method are identical, except for the last output layer of the discriminator. In the semi-supervised learning method, the DCGAN discriminator is modified to function as a 3-class classifier capable of distinguishing between Normal, Glaucoma, and Real/Fake classes. The models were trained on 86,926 cropped retinal images obtained from fourteen publicly available databases.
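The semi-supervised variant can be pictured as a DCGAN discriminator whose single real/fake output is replaced by three logits. The sketch below is a simplified reading of that design, assuming 3x64x64 crops; it is not the authors' released implementation.

```python
import torch.nn as nn

class SemiSupervisedDiscriminator(nn.Module):
    """DCGAN-style discriminator whose head emits three logits:
    Normal, Glaucoma, and Fake (synthetic). Input assumed to be 3x64x64 crops."""
    def __init__(self, base_channels: int = 64):
        super().__init__()
        c = base_channels

        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(cout),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.features = nn.Sequential(
            nn.Conv2d(3, c, 4, stride=2, padding=1),   # 64 -> 32
            nn.LeakyReLU(0.2, inplace=True),
            block(c, 2 * c),                           # 32 -> 16
            block(2 * c, 4 * c),                       # 16 -> 8
            block(4 * c, 8 * c),                       # 8  -> 4
        )
        self.classifier = nn.Conv2d(8 * c, 3, 4)       # 4x4 -> 1x1, three logits

    def forward(self, x):
        return self.classifier(self.features(x)).flatten(1)  # (B, 3)
```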
Guo et al. [93] proposed the use of CycleGAN [133] in a teacher-student framework to reduce the appearance differences between labeled source domain and labeled target domain images, aiming to enhance the accuracy of multiracial glaucoma detection. The proposed framework consists of an inter-image tutor, an intra-image tutor, a student model, and a backbone network, which combines the advantages of domain adaptation and semi-supervised learning. The inter-image tutor uses CycleGAN for style transfer and transfers the learned knowledge to the student model by minimizing a knowledge distillation loss. This helps to overcome the domain shift problem and improve the performance of glaucoma detection. The intra-image tutor adopts an exponential moving average to leverage the unlabeled target domain and transfers the knowledge to the student model by minimizing a prediction consistency loss. The student model not only learns directly from the labeled target domain images, but also acquires the intra-image and inter-image knowledge transferred by the two tutors. Furthermore, the backbone integrates the context features of the local OD region and the global fundus image via a modified ResNet-50. Comprehensive experimental results on various datasets are shown in Table VI.
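Two of the ingredients above, exponential-moving-average (EMA) teacher updates and a distillation loss on soft predictions, can be sketched in a few lines; the CycleGAN style transfer and the exact consistency objective of the cited work are omitted for brevity, so this is only an illustration of the general mechanism.

```python
import torch
import torch.nn.functional as F

def update_ema_teacher(student, teacher, momentum: float = 0.999):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for p_s, p_t in zip(student.parameters(), teacher.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits / t, dim=1),
        reduction="batchmean",
    ) * (t * t)

# Typical step on an unlabeled target-domain batch `x_unlabeled`:
#   teacher is initialized as a copy of the student (copy.deepcopy) and then
#   only updated via update_ema_teacher.
# with torch.no_grad():
#     soft_targets = teacher(x_unlabeled)
# loss = distillation_loss(student(x_unlabeled), soft_targets)
# loss.backward(); optimizer.step(); update_ema_teacher(student, teacher)
```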
### _Geometric Deep Learning Networks_
Geometric deep learning is a machine learning approach that focuses on developing algorithms and network architectures capable of analyzing non-Euclidean structured data, such as graphs, manifolds, and point clouds, by integrating geometric principles and deep learning techniques [142]. Thiery et al. [143] proposed using geometric deep learning to diagnose glaucoma from a single OCT scan of the ONH and compared its performance to that of 3D CNN and RNFL thickness. Using a deep learning model, they first segmented seven major neural and connective tissues from OCT images. After that, each ONH was represented as a 3D point cloud with approximately 1,000 points. Geometric deep learning (PointNet [144]) was then used to diagnose glaucoma from a single 3D point cloud. The proposed geometric deep learning model achieved an AUC of 95.00% on a private dataset consisting of 873 glaucomatous and 3,897 non-glaucomatous OCT scans, outperforming that obtained with a 3D CNN (AUC of 87.00% on raw OCT images and 91.00% on segmented OCT images) and that obtained from RNFL thickness alone (AUC = 80.00%). In another study, Braeu et al. [145] proposed to compare the performance of PointNet with dynamic graph convolutional neural network (DGCNN) [146] for diagnosing glaucoma. Following the same procedure as their previous paper [143], each ONH was represented as a 3D point cloud and used to diagnose glaucoma. They demonstrated that both the DGCNN and PointNet could accurately classify 2,259 glaucomatous from 2,247 Non-glaucomatous OCT scans based on 3D ONH point clouds with an AUC of 97.00% and 95.00%, respectively. Furthermore, they identified critical 3D structural features of the ONH for glaucoma diagnosis, which formed an hourglass pattern mostly located within the NRR in the inferior and superior quadrants of the ONH.
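A minimal PointNet-style classifier for ONH point clouds is sketched below: a shared per-point MLP, an order-invariant max pooling, and a small classification head. The input and feature transformation networks of the full PointNet are omitted, so this illustrates the idea rather than reproducing the cited models.

```python
import torch
import torch.nn as nn

class MiniPointNet(nn.Module):
    """Classifies a 3D point cloud (B, N, 3), e.g. ~1,000 ONH surface points,
    as glaucomatous vs. non-glaucomatous. The input/feature transform nets of
    the full PointNet are omitted for brevity."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.point_mlp = nn.Sequential(            # shared MLP applied per point
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, points):                     # points: (B, N, 3)
        x = self.point_mlp(points.transpose(1, 2))   # (B, 1024, N)
        global_feat = x.amax(dim=2)                  # order-invariant max pooling
        return self.head(global_feat)
```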
### _Hybrid Networks_
Hybrid networks can help in glaucoma diagnosis by combining the strengths of different types of neural networks. For example, Chai et al. [19] designed a multi-branch neural network (MB-NN) model to exploit domain knowledge and extract hidden features from retinal fundus images. The entire fundus image and the OD region image are the inputs to the first and second branches of the MB-NN model, respectively. The OD region was extracted using Faster-RCNN [54] trained on a separate dataset. Similarly, they integrated domain knowledge into a one-dimensional feature vector and fed it into the third branch of the proposed model. Fu et al. [147] also proposed a disc-aware ensemble network (DENet) for automatic glaucoma screening that integrates the deep hierarchical context of the global fundus image and the local OD region. The network consists of four deep streams, including a global image stream, a segmentation-guided network, a local Disc region stream, and a Disc polar transformation stream. The segmentation-guided network is based on U-shape convolutional network to detect the OD region and guide glaucoma screening on the whole fundus image. The architecture of the other streams is based on the ResNet-50 model. These streams provide complementary information, and their output probabilities are fused to obtain the final classification result. Yu et al. [148] proposed using raw multi-rater gradings to improve the performance of deep learning models for glaucoma classification. Instead of predicting labels from individual raters, the authors proposed a multi-branch structure that generates three predictions with different sensitivity settings for the input images: one with the best sensitivity, one with the best specificity, and one with a balanced fused result. A consensus loss is introduced to encourage consistent results from the sensitivity and specificity branches for consensus labels and opposite results for disagreement labels. Meanwhile, the consistency or inconsistency between the prediction results of the two branches is used to determine the difficulty level of an image, which is further used to guide the balanced fusion branch to focus more on hard cases. Garcia et al. [149] proposed combining CNN and LSTM networks for glaucoma diagnosis from raw SD-OCT volumes. The proposed model consists of a slide-level feature extractor and a volume-based predictive model. The feature extractor utilized residual and attention convolutional modules combined with
fine-tuning techniques. Also, to incorporate spatial information along with the three-dimensional data, the model used the LSTM networks with a sequential-weighting module. This module helps to optimize the LSTM outputs, resulting in a more stable and efficient model learning process.
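The slice-level extractor plus volume-level recurrent model can be approximated by encoding each B-scan with a CNN and feeding the ordered slice embeddings to an LSTM, as in the sketch below; the residual/attention modules and the sequential-weighting module of the original study are not reproduced, and the backbone choice is an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

class OCTVolumeClassifier(nn.Module):
    """Per-slice ResNet features aggregated over the B-scan sequence by an LSTM.
    Simplified illustration only; B-scans are assumed replicated to 3 channels
    to match the ImageNet-pretrained backbone."""
    def __init__(self, hidden: int = 256, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()                  # 512-d embedding per slice
        self.encoder = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, volume):                       # volume: (B, S, 3, H, W)
        b, s = volume.shape[:2]
        feats = self.encoder(volume.flatten(0, 1))   # (B*S, 512)
        feats = feats.view(b, s, -1)                 # (B, S, 512)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])                   # prediction from last slice state
```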
## 7 Glaucoma Prediction
Glaucoma is a degenerative disease that often exhibits minimal symptoms during its initial stages [150]. Therefore, predicting the risk of developing glaucoma is crucial for early detection and timely intervention to prevent vision loss. Recent studies have demonstrated the potential of deep learning models in predicting the probability of glaucoma development through the analysis of diverse demographic, clinical, and imaging data [150, 151, 22]. Li et al. [150], for example, proposed to use a CNN-based network to predict and stratify the risk of glaucoma onset and progression based on fundus images of 17,497 eyes in 9,346 patients. The model demonstrated the ability to predict patients who may develop glaucoma within a five-year period with an AUC of 90.00%. Thakur et al. [151] also used deep learning models to predict glaucoma development from fundus images in a prospective longitudinal study several years before disease onset. The study reported an AUC of 77.00% for predicting the development of glaucoma 4 to 7 years before the onset of the disease, 88.00% for predicting the development of glaucoma approximately 1 to 3 years before the onset, and 95.00% for detecting glaucoma after the onset.
Li et al. [31] proposed a deep learning model for glaucoma prediction (DeepGF) based on sequential fundus images. They first established a database of sequential fundus images for glaucoma prediction (SIGF), which included an average of 9 images per eye, for a total of 3,671 images. The proposed DeepGF consists of an attention-polar convolutional neural network and a variable time interval long short-term memory (VTI-LSTM) network to learn the spatio-temporal transition at different time intervals across sequential medical images of a person. In addition, a novel active convergence training strategy is proposed to address the imbalanced sample distribution problem in glaucoma forecasting. The proposed method demonstrated an accuracy of 80.70%, a sensitivity of 85.70%, a specificity of 80.60%, and an AUC of 87.00%. Dixit et al. [22] proposed using a convolutional LSTM neural network to detect glaucoma progression from a longitudinal dataset of merged VF and clinical data. The dataset comprises 11,242 eyes with at least four VF results and corresponding baseline clinical data, including CDR, central corneal thickness, and IOP for each sample. The proposed method achieved an AUC of 93.93% for glaucoma progression prediction. The authors of [152] developed a deep archetypal framework to predict glaucoma approximately four years prior to disease onset. The framework utilizes simplex projections to obtain unsupervised convex representations of the visual fields, which proved clinically meaningful and more discriminative than raw VFs or other classical VF analysis approaches. The proposed model achieved an AUC of 71.00% on a dataset of 7,248 VF tests collected only at the baseline.
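A common recipe underlying these longitudinal models is a recurrent network over per-visit feature vectors. The sketch below illustrates that recipe for merged VF and clinical data; the visit dimensionality, architecture, and feature composition are assumptions for illustration and do not correspond to the exact models of the cited studies.

```python
import torch
import torch.nn as nn

class LongitudinalProgressionModel(nn.Module):
    """Predicts glaucoma progression risk from a sequence of visits, where each
    visit is encoded as a feature vector (e.g., flattened VF sensitivities plus
    clinical values such as IOP, CDR, and central corneal thickness).
    Illustrative only, not the exact architecture of the cited studies."""
    def __init__(self, visit_dim: int = 58, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(visit_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, visits, lengths=None):         # visits: (B, T, visit_dim)
        out, _ = self.lstm(visits)
        if lengths is not None:                      # use each eye's last real visit
            idx = (lengths - 1).view(-1, 1, 1).expand(-1, 1, out.size(-1))
            last = out.gather(1, idx).squeeze(1)
        else:
            last = out[:, -1]
        return torch.sigmoid(self.head(last)).squeeze(-1)  # progression probability
```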
## 8 Challenges and Future Directions
The adoption of deep learning for glaucoma diagnosis faces several key challenges that need to be addressed through continued research and cross-disciplinary collaboration. We structure these challenges and promising future directions as follows.
### _Data Challenges_
Developing effective deep learning models for glaucoma diagnosis relies on the availability of comprehensive training datasets. However, assembling high-quality annotated datasets presents several key challenges.
One major challenge is the limited size of publicly available glaucoma datasets compared to common computer vision benchmarks like ImageNet. The prohibitive expertise and cost required for reliable manual annotations further constrains dataset development. Data diversity is another issue, with many datasets lacking varied representations across different demographics, ethnicities, and imaging equipment [153, 154]. These limitations have caused existing public datasets to remain relatively small in scale, which can hinder model performance and restrict real-world applicability. However, alternative training methods
TABLE VI: Comparison of papers utilizing GAN-based architectures for glaucoma classification. Studies marked with (*) tested their model on an auxiliary dataset.

| Study | Model | Data type | Dataset | ACC | SEN | SPE | F1-score | AUC | Code |
|---|---|---|---|---|---|---|---|---|---|
| [131] | DCGAN | Fundus | 14 publicly available datasets‡ | - | 82.90 | 79.86 | 84.29 | 90.17 | Code |
| [93]* | CycleGAN + teacher-student | Fundus | LAG | 98.14 | 98.62 | 98.17 | - | 96.41 | |
| | | | REFUGE | 98.74 | 96.85 | 96.57 | - | 97.06 | |
| | | | ORIGA | 97.64 | 97.13 | 97.62 | - | 96.93 | |
| | | | DRISHTI-GS | 98.45 | 97.66 | 96.84 | - | 97.04 | |
| | | | ACRIMA | 96.52 | 97.62 | 96.72 | - | 97.26 | |
| | | | RIM-ONE-r1 | 97.48 | 98.31 | 97.14 | - | 97.60 | |
| | | | RIM-ONE-r2 | 97.46 | 98.23 | 96.57 | - | 96.40 | |
| | | | RIM-ONE-r3 | 96.46 | 97.18 | 96.82 | - | 96.56 | |

‡ ORIGA-light, DRISHTI-GS1, RIM-ONE, sjchoi86-HRF, HRF, DRIVE [134], MESIDOR [135], DR KAGGLE [136], STARE [137], e-ophtha [138], ONHSD [139], CHASEDB1 [140], DRIONS-DB, SASTRA [141], ACRIMA.
such as transfer learning [128, 48], zero/few-shot learning [129, 155], and knowledge distillation [156, 91, 119] offer potential solutions to mitigate the impact of limited dataset size. Additionally, active learning techniques can optimize the annotation process by selectively identifying the most useful and ambiguous samples for labeling. Incorporating uncertainty estimation into the sample selection process [157] further focuses active learning on improving model performance on underrepresented classes.
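A minimal sketch of uncertainty-driven sample selection is shown below: unlabeled images are ranked by predictive entropy and the most ambiguous ones are queued for expert annotation. The assumption that the data loader yields (index, image) pairs is for illustration only.

```python
import torch

@torch.no_grad()
def select_for_annotation(model, unlabeled_loader, budget: int = 100):
    """Ranks unlabeled fundus images by predictive entropy and returns the
    indices of the most uncertain ones to be sent to graders first."""
    model.eval()
    entropies, indices = [], []
    for idx, images in unlabeled_loader:           # assumed to yield (index, image) pairs
        probs = torch.softmax(model(images), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        entropies.append(entropy)
        indices.append(idx)
    entropies = torch.cat(entropies)
    indices = torch.cat(indices)
    top = torch.topk(entropies, k=min(budget, len(entropies))).indices
    return indices[top].tolist()
```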
Imbalanced class distribution poses another persistent problem, with normal cases dominating many current datasets. This can skew model performance toward low sensitivity and high false negative rates [158]. Techniques like data augmentation [159, 160], weighted sampling [161], and generation of synthetic minority over-samples [162] may help mitigate imbalance. However, augmentation requires judicious implementation with clinician input to prevent medical inaccuracies and ensure fidelity [163].
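Weighted sampling, for instance, can be set up in a few lines with PyTorch's WeightedRandomSampler so that minority (glaucomatous) samples are drawn more often during training; the sketch below assumes integer class labels aligned with the dataset.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_balanced_loader(dataset, labels, batch_size: int = 32):
    """Oversamples the minority class so each mini-batch is roughly class-balanced.
    `labels` is a list or tensor of integer class labels aligned with `dataset`."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels).float()
    sample_weights = (1.0 / class_counts)[labels]   # rarer class -> larger weight
    sampler = WeightedRandomSampler(sample_weights,
                                    num_samples=len(dataset),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```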
Healthcare data also carries ethical and legal obligations around privacy that researchers must proactively address [164]. Laws and cultural norms around medical data sharing vary geographically, necessitating localized considerations. Policymakers have a role in developing balanced frameworks that promote research and innovation while protecting patient privacy. Techniques like federated learning [165] can also enable collaborative model development without raw data sharing. However, the onus remains on researchers to be privacy stewards and ensure compliance with applicable national and regional privacy laws in their research studies [166].
### _Model Development Challenges_
Deep learning models need to detect subtle signs of early-stage glaucoma, which is challenging even for experienced clinicians [167]. Heterogeneity in early disease appearance further complicates this task. Researchers are exploring various model architecture techniques to improve performance for early detection. For instance, attention mechanisms have shown promise in focusing models on salient retinal regions, enhancing informative feature extraction, and representation learning [24].
Integrating multi-modal data and multi-label learning may further improve performance by leveraging interrelated tasks like segmentation and diagnosis [18]. However, this poses challenges such as optimal data fusion, balancing diverse tasks, and continual learning as new data emerges. To address these issues, researchers are exploring specialized multi-modal architectures, adaptive optimization strategies, dynamic network expansion, and meta-learning [168].
Despite the potential benefits, the opaque nature of deep learning models remains a challenge. The importance of employing or proposing explainable AI (XAI) techniques to explain predictions and feature importance from deep learning models using different image modalities like OCT, Fundus, and VF has been recognized by researchers [169, 106, 94]. These studies employed post-hoc explanation techniques like SHAP values [170], locally interpretable model-agnostic explanations [171], integrated gradients [172], occlusion sensitivity [173], saliency maps [174], and contrastive explanations [175] to identify influential segments, patterns, and abnormalities in fundus images that lead the models to predict glaucoma. They found that XAI methods can highlight relevant regions like the optic disc, retinal nerve fiber layer defects, and areas of hemorrhaging that most inform the models' predictions [176, 177]. However, further development of XAI techniques tailored for glaucoma is still necessary, as most methods are model-agnostic and not optimized for glaucomatous features. Most work has focused on post-hoc methods rather than real-time explainability during model development and evaluation. To enhance understanding and trust in AI-assisted glaucoma screening, the field needs to create glaucoma-specific XAI solutions that provide human-interpretable explanations in real time. Another pragmatic approach involves clinician-researcher collaboration to assess XAI techniques and identify medically relevant insights [108].
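Of the post-hoc techniques listed above, occlusion sensitivity is straightforward to sketch without additional libraries: a patch is slid over the fundus image and the drop in the predicted glaucoma probability is recorded as a coarse relevance map. The patch and stride sizes below are arbitrary choices.

```python
import torch

@torch.no_grad()
def occlusion_sensitivity(model, image, target_class: int = 1,
                          patch: int = 32, stride: int = 16):
    """Returns a coarse heatmap of how much masking each region lowers the
    predicted probability of `target_class` (e.g., glaucoma).
    `image` is a normalized tensor of shape (3, H, W)."""
    model.eval()
    _, h, w = image.shape
    base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    heatmap = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = 0.0      # masked region
            prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heatmap[i, j] = (base - prob).clamp(min=0)       # drop in confidence
    return heatmap
```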
### _Clinical Translation Challenges_
For clinical adoption, glaucoma diagnosis models must demonstrate consistent performance across diverse populations and imaging equipment. However, demographic and acquisition differences can affect model generalizability. Wider collaborations between academia, healthcare providers, and industry could facilitate large-scale external
TABLE VII: Comparison of papers utilizing hybrid architectures for glaucoma classification. Studies marked with (*) tested their model on an auxiliary dataset. N: Normal, G: Glaucoma, SINDI: Singapore Indian Eye Study.

| Study | Model | Data type | Dataset | ACC | SEN | SPE | F1-score | AUC | Code |
|---|---|---|---|---|---|---|---|---|---|
| [147]* | DENet | Fundus | ORIGA, SCES | 84.29 | 84.78 | 83.80 | - | 91.83 | Code |
| [19] | MB-NN | Fundus | Private (1,023 G, 1,531 N) | | | | | | |
| [148]* | Multi-rater deep model | Fundus | Private (2,952 …), ORIGA, … | | | | | | |
| [149] | CNN + LSTM | OCT | Private (144 G, 176 N) | | | | | | |
validation across multiple centers and populations to identify failure modes. The models should integrate seamlessly into ophthalmic workflows with efficient computations on edge devices [178]. Physician trust is integral for uptake, requiring initiatives to improve model explainability and transparency. Frameworks for uncertainty estimation could also indicate situations where clinician oversight is necessary.
Careful design of human-AI interaction mechanisms will be vital, allowing physicians to accept, reject or modify model recommendations. Regulatory agencies play a crucial role in establishing standards for rigorous clinical validation of deep learning systems before approval [179]. Beyond accuracy, aspects like usability, interoperability, cybersecurity and patient privacy must be addressed to ensure safety and effectiveness [180]. Overall, a patient-centered approach with clinician partnership in the model development lifecycle will be key for clinical translation.
### _Future Outlook_
Advancements in glaucoma diagnosis will require synergistic progress across multiple disciplines. On the data front, innovations in sensor and imaging technologies can enable the acquisition of informative multi-modal datasets. In parallel, advancements in deep learning architectures, optimization algorithms, and computing hardware will allow more sophisticated analysis of these rich datasets.
Several promising research directions could accelerate the translation of these innovations into clinical practice. Meta-learning approaches may enable rapid model adaptation from limited annotated examples, mitigating data constraints. Adversarial techniques can improve model robustness to input perturbations. Reinforcement learning offers the potential for optimizing glaucoma management policies. Recurrent neural networks could integrate longitudinal patient data for enhanced monitoring.
Ultimately, sustained collaborative efforts spanning medicine, engineering and computer science will be key to realizing artificial intelligence's potential for enhancing glaucoma outcomes. Initiatives to promote open datasets, model repositories and evaluation benchmarks will facilitate collective progress. Interdisciplinary teams should lead technological development, ensuring clinical applicability and integration into practice. With concerted efforts, AI-enabled glaucoma care could soon transition from promise to reality.
## 9 Conclusion
In this paper, we have provided a comprehensive overview of the state-of-the-art research applying deep learning and computer vision techniques for glaucoma diagnosis. Through a systematic literature review methodology, we synthesized the existing work across diverse architectural categories, including CNNs, autoencoders, attention networks, GANs, and geometric deep learning models.
The review highlighted promising capabilities demonstrated by these techniques in analyzing fundus, OCT, and visual field data. Tasks like classification, segmentation, and prediction of glaucoma have shown strong results across a wide range of experiments. We also discussed different feature extraction approaches, covering structural, statistical, and hybrid techniques for identifying informative glaucoma biomarkers from retinal imaging data. However, key challenges remain around limited dataset size and diversity, class imbalance, optimizing models for early disease detection, integrating multi-modal data, and translating solutions to clinical practice.
Ongoing efforts are beginning to address these gaps through transfer learning approaches, data augmentation techniques, attention mechanisms, multi-task learning, model explainability, and physician collaboration. Nonetheless, realizing the full potential of AI in transforming glaucoma care will require sustained cross-disciplinary teamwork.
From our perspective, open datasets, model repositories, and evaluation benchmarks will be critical to accelerate collective progress. Ultimately, an integrated approach spanning medicine, engineering and computer science will be essential for developing and validating solutions ready for real-world clinical deployment. This review highlights the tremendous opportunities for us at the intersection of ophthalmology and artificial intelligence. Overall, we aimed to provide a comprehensive overview and analysis of the state-of-the-art in this exciting and high-impact emerging field.
## Acknowledgment
**Funding:** This work was supported in part by the European Research Council (ERC) through the Horizon 2020 research and innovation program under grant agreement number 101002711. Mohammad Mahdi Dehshibi received partial funding from this source.
**Author Contributions:** All authors contributed to the conception of the idea, defined the scope of the survey, and developed the survey methodology. Mona Ashtari-Majlan conducted the literature review, identified research gaps, analyzed survey data, and wrote the paper. All authors participated in the discussion of results, provided feedback on the manuscript, and assisted in writing and editing.
**Competing Interests:** The authors declare no competing interests.
**Data and Materials Availability:** All data presented in this study is available within the main text.
|
2302.00594 | Inching Towards Automated Understanding of the Meaning of Art: An
Application to Computational Analysis of Mondrian's Artwork | Deep Neural Networks (DNNs) have been successfully used in classifying
digital images but have been less successful in classifying images with
meanings that are not linear combinations of their visualized features, like
images of artwork. Moreover, it is unknown what additional features must be
included into DNNs, so that they can possibly classify using features beyond
visually displayed features, like color, size, and form. Non-displayed features
are important in abstract representations, reasoning, and understanding
ambiguous expressions, which are arguably topics less studied by current AI
methods. This paper attempts to identify capabilities that are related to
semantic processing, a current limitation of DNNs. The proposed methodology
identifies the missing capabilities by comparing the process of understanding
Mondrian's paintings with the process of understanding electronic circuit
designs, another creative problem solving instance. The compared entities are
cognitive architectures that attempt to loosely mimic cognitive activities. The
paper offers a detailed presentation of the characteristics of the
architectural components, like goals, concepts, ideas, rules, procedures,
beliefs, expectations, and outcomes. To explain the usefulness of the
methodology, the paper discusses a new, three-step computational method to
distinguish Mondrian's paintings from other artwork. The method includes in a
backward order the cognitive architecture's components that operate only with
the characteristics of the available data. | Alex Doboli, Mahan Agha Zahedi, Niloofar Gholamrezaei | 2022-12-29T23:34:19Z | http://arxiv.org/abs/2302.00594v1 | Inching Towards Automated Understanding of the Meaning of Art: An Application to Computational Analysis of Mondrian's Artwork
###### Abstract
Deep Neural Networks (DNNs) have been successfully used in classifying digital images but have been less successful in classifying images with meanings that are not linear combinations of their visualized features, like images of artwork. Moreover, it is unknown what additional features must be included into DNNs, so that they can possibly classify using features beyond visually displayed features, like color, size, and form. Non-displayed features are important in abstract representations, reasoning, and understanding ambiguous expressions, which are arguably topics less studied by current AI methods. This paper attempts to identify capabilities that are related to semantic processing, a current limitation of DNNs. The proposed methodology identifies the missing capabilities by comparing the process of understanding Mondrian's paintings with the process of understanding electronic circuit designs, another creative problem solving instance. The compared entities are cognitive architectures that attempt to loosely mimic cognitive activities. The paper offers a detailed presentation of the characteristics of the architectural components, like goals, concepts, ideas, rules, procedures, beliefs, expectations, and outcomes. To explain the usefulness of the methodology, the paper discusses a new, three-step computational method to distinguish Mondrian's paintings from other artwork. The method includes in a backward order the cognitive architecture's components that operate only with the characteristics of the available data.
Keywords: classification of art · computational methods · cognitive architecture.
## 1 Introduction
Classifying items based on their defining characteristics as well as identifying these characteristics has been a major research topic in Machine Learning (ML) [38]. Driven by applications in computer vision and text processing, like text summarization and translation, numerous ML algorithms have been devised including both procedural methods as well as data modeling techniques.
The latter methods compute the parameters of parameterized models to minimize the error between training sets and model predictions. Deep Neural Networks (DNNs) pertain to this approach, including Convolutional Neural Networks (CNNs), a popular type of DNN.
CNNs have been successful in classifying digital images [50]. However, recent work explored the CNN's capability to classify images with meanings that are not linear combinations of their visualized features [103]. This work showed that CNNs cannot correctly distinguish artwork presenting complex, hard-to-grasp non-exhibited properties (NEXP), even though they perform well for art objects described mainly by exhibited properties (EXP). EXP are visual features, like form, color, scene, and NEXP represent meaning, like an artist's intention and an observer's perception [103]. For example, paintings from the Renaissance period display a rich set of EXPs, and abstract paintings, because of their abstraction, include NEXPs that are hard to learn. Even though there has been a hope that DNNs capability to be universal approximators will somehow support the picking-up of EXP combinations that describe well NEXPs too, experiments showed that such descriptions are not learned with current CNNs [103].
It is unclear what features are missing from present CNNs, so that they can effectively identify and use NEXPs in classification of artwork. In spite of artificially generated art [11], e.g., using Genetic Algorithms and other evolutionary algorithms, it is arguable if the produced outputs are similar to the artwork created by humans and analyzed by domains, like aesthetics [10]. It has been argued that art expresses human experiences [10, 61, 62], which is obviously not the case with generated artwork. The meaning of human experiences relates to human intention and understanding, which depend on numerous historical, economic and social factors [10, 61]. For example, intentional historical theory argues that an artist had an intention to create art of a certain kind [52]. Then, this kind must be inferred (understood) by the viewer [12]. It is possible that dissimilar artwork (including dissimilar EXPs) still belongs to the same kind [12]. Hence, it is important to understand how knowledge about meaning (semantics) is used by humans during the process of creating (intention) and understanding (perception) art.
We believe that the significance of creating computational methods towards understanding how NEXPs (e.g., meaning) operate in art is well beyond creating immediate applications, like automatically generating museum inventories, artwork explanations, and interactive avatars to guide visitors. NEXPs are tightly connected to abstract representations, reasoning, and understanding of abstract and ambiguous expressions, which are different in nature than processing and learning EXPs (i.e. using features on form and structure for classification), the focus of current AI methods. Abstract reasoning and understanding ambiguous expressions are critical in human problem solving by individuals and teams of individuals.
Starting from the observation that current DNN approaches are arguably insufficient to tackle semantic abstractions [103], this work focused on identifying the capabilities that must be added to DNNs to improve their capabilities
of semantic processing. The used methodology identifies the missing features by comparing the process of understanding artwork with the process of understanding and creating electronic circuits, a well-known domain of creative problem solving. Our previous work proposed a cognitive architecture, a computational structure loosely based on human cognitive reasoning, to automatically synthesize electronic circuits [53]. We leveraged this work to propose a cognitive architecture meant to understand the abstract paintings of Piet Mondrian. The comparison of the two processes referred to two semantic layers: The first layer has eight elements: goals, concepts, ideas, rules, procedures, beliefs, expectations, and outcomes. These elements are part of the cognitive process during problem solving and can be further linked to more detailed cognitive activities, like memory, concept learning, representation and combination, affect, insight, and so on. The eight elements are discussed and compared for the process of understanding and performing circuit design and the process of understanding Mondrian's paintings. The second layer describes the solving process and includes five elements: the nature of the problem, knowledge representation in the memory, the attention and prediction subsystem, the reasoning subsystem, and knowledge updating. The comparison of the elements of the two semantic layers indicates what capabilities must be modified or added to a cognitive architecture geared towards understanding Mondrian's paintings as compared to the architecture in [53]. The functional requirements and evaluation metrics for these capabilities can be then stated.
We argue that this work is a step towards connecting a computational approach to understanding abstract paintings by Mondrian to cognitive activities, even though it is not a connection to the neural activity of the brain, as in neuroaesthetics [10, 66, 105]. Subsequent work can attempt to relate the cognitive activities to DNN processing, similar to [23]. We believe that Mondrian's work is appropriate for this goal: EXPs are less important as it is visually simple (e.g., uses vertical and horizontal lines and surfaces colored with fundamental colors, e.g., red, blue, yellow, and white), but complex in terms of its meaning defined using NEXPs. Note that other work discusses problem solving in domains, like physics and mathematics [71, 91, 98], but does not propose equivalent algorithmic methods even though they discuss common-sense strategies to solve problems.
The paper has the following structure. Section 2 presents related work. Section 3 offers an overview of the model used to identify the requirements of computational methods to classify using meaning. Section 4 focuses on a case study that compares the computational needs for electronic circuit design and Mondrian's paintings. A discussion follows in Section 5. Conclusions end the paper.
## 2 Related Work
Modern ML methods, like CNNs, have been proposed to automatically analyze artwork, including activities, like style recognition, classification, and generation [49, 86, 95, 106]. As large training sets are often hard to assemble for art, the traditional approach pre-trains a CNN using large databases of images, e.g.,
ImageNet, and then retrains only the output and intermediate layers using art images [49, 73, 95, 106].
Style recognition finds the artistic style of artwork using mainly visual attributes, like color and texture [49, 73, 95, 106]. Seven different CNN models were tested for three art datasets to classify genres (e.g., landscapes, portraits, abstract paintings, etc.), styles (i.e. Renaissance, Baroque, Impressionism, Cubism, Symbolism, etc.), and artists [106]. The method uses mostly color. It achieves for some styles an accuracy similar to human experts. However, other styles are hard to recognize, like Post Impressionism and Impressionism, Abstract Expression and Art Informed, or Mannerism and Baroque [49]. CNN are also suggested to recognize non-traditional art styles, like Outsider Art style [73].
CNNs are used to identify the author from a group of artists by learning the visual features of the artist's work [95]. The method utilizes features, like texture, color, edges, and empty areas [95]. However, it is necessary to also use higher-level features, like localized regions or semantic features, e.g., scene content and composition [78].
Work on uncovering semantic information about artwork intends to understand the content of art objects, like the orientation of an object, the objects in a scene, and the central figures of an object [37, 51, 85]. Object orientation, e.g., deciding if a painting is correctly displayed, uses simple, local cues, image statistics, and explicit rules [51, 57, 104]. The methods have been reported to be as effective as human interpretation for some painting styles [51]. They perform better in portrait paintings than in abstract art. The distinguishing features among classes include localized parts of large objects, low intra-class variability of the parts, and specific semantic parts, such as wheels for cars and windows for buses. Generative Adversarial Networks (GANs) are proposed for hierarchical scene understanding [102]. Early layers are likely to learn physical positions, like the spatial layout and configuration, the intermediate layers detect categorical objects, and the latter layers focus on scene attributes and color schemes.
## 3 Model Components
Creating works of art, like paintings, can be seen as the process of solving open-ended problems [10], similar to open-ended problem-solving in engineering, like imagining and devising new functional capabilities that are beyond those of the current solutions. Both represent instances of human creative activities. Subsection 3.1 compares conceptually the two types of problem-solving endeavors. Next, we presented the elements used in the comparison.
Culture includes the goals, concepts, ideas, rules, procedures, beliefs and outcomes of a certain population, and the expectations of their outcomes [10]. (1) Goals are high-level objectives (problems) in response to needs. For example, it has been mentioned that one of the goals for art is to produce pleasure [66, 105] or to make a donor proud [10]. (2) Concepts are the building blocks of knowledge. They are characterized by common, defining features and by features that distinguish them from other concepts [20, 21]. (3) Ideas are sets of related concepts that serve/produce a certain purpose, e.g., enable a certain situation or create a certain output. (4) Rules indicate the way of relating concepts to each other, and (5) procedures are sequences of rules that produce a desired outcome. (6) Beliefs are ideas and rules that are somewhat constant (invariant) within a certain culture. (7) An outcome is an expression (materialization) of cultural components in an object of art as well as the degree to which the new expression differs from or improves on previous similar expressions. (8) An expectation correlates with the priority associated with a cultural component and, subsequently, with an emotion. Multiple expectations can exist within a population, which results in multiple priorities and emotions. Expectations can be common to a population or can be different depending on the subjects' experiences [10, 66].
**Example**: We referred to Baxandall's discussion of the painting "Baptism of Christ" by Piero della Francesci [10]. Baxandall presents the cultural environment in which the painting was produced. (1) It includes the painter's goal in response to a customer's request, which refers to a place of display (e.g., an altar piece), a topic (i.e. Christ's baptism), and a specific artist to create the artwork. As Baxandall explains in [10], the painting was meant to have three functions: "to narrate scripture clearly, to arouse appropriate feeling about the narrated matter, and to impress that matter on the memory" (page 106). (2) Concepts include all items that form the "language" of a culture, like Christ, angels, baptism, water, and so on, and their associated meanings, including symbolisms. (3) Ideas are a set of concepts, in this painting, the idea that Christ's baptism by a human indicates his humility [10]. The meaning of the idea might result through inference based on its concepts, through the using of analogies and metaphors, and so on. (4) Rules include the application of mathematical principles to painting, such as rules related to perspective, proportions, and Euclidean analysis of forms [10]. Onians offers an interesting presentation of the evolution of the rules embedded into artwork, starting from empirical rules defined by ancient Greeks and Romans, and followed by rules based on geometry, the physiology of the eye and brain, psychology, and neuroscience [66]. (5) Procedures include the established templates and routines for devising a painting, like selection of colors, its design, and composition [10]. Procedures also include the way of assigning purpose and intention to an art object, such as a certain way of interpreting its meaning through causal inference [10]. (6) Beliefs refer to certain widely accepted meanings and facts, like the interpretation of Scripture. (7) The outcome is the actual painting in this example. (8) As Baxandall explains, expectations include the following attributes: "clear, moving, memorable, sacramental, and creditable image of the subject" (page 106 in [10]). Expectations can also refer to the characteristics of the physical place of display, like its size, shape, etc..
Arguably, the state-of-the-art in technology is the equivalent of culture, as it includes the goals pursued by a certain technological domain, its laws, theories and models, procedures, and designs. Expectations are based on the functionality and performance of the previous designs. Similarly to culture, expectations define a priority of the components defining the state-of-the-art, and an emotion originating in the satisfaction of the expectations through the actual designs.
Multiple expectations are possible as community members can have different predictions on how previous designs will shape future outcomes.
**Example**: We referred to the electronic circuit designs discussed in [21, 42, 53, 94]. The state-of-the-art, which describes the context in which the circuits were discussed, includes the following elements that are analogous to the elements discussed for the previous example. (1) The goal was to design circuits with open-ended functions, like the problem description referred to using state-of-the-art technology for embedded systems and a set of building blocks (BBs), like sensors and actuators, to improve campus life [94]. However, once the desired functionality is identified, the associated performance requirements become defined too. Performance requirements are numerical values that must meet minimum (maximum) limits or are minimized (maximized) as part of the design process. (2) Concepts are the BBs used in design, like MOSFET transistors, resistors, capacitances, and subcircuits. Similar to concepts in art, they can have multiple meanings, e.g., as a MOSFET transistor can have multiple functions (behaviors) depending on its region of operation, however, the semantics of these meanings is much simpler than in art. Also, their meanings can be described in a precise, formal way using mathematics or logic. (3) Ideas represent a set of BBs with a precise meaning, like subcircuits built out of MOSFETs. (4) Rules indicate how BBs should be connected to each other in a design, such as the rules that describe the connections of the four MOSFET transistor terminals. (5) Procedures represent the steps to be pursued to solve a design problem, like the steps to create a circuit structure (schematics) and the steps to size the MOSFET transistor. (6) Beliefs refer to the paradigms considered to be valid for the considered state-of-the-art, such as its functional and performance capabilities as well as its advantages and disadvantages compared to other alternative state-of-the-art in technology. (7) Outcomes is the set of all circuit designs that have been created for the given state-of-the-art. (8) Expectations refer to the degree to which a designed electronic circuit meets its functional and performance requirements.
### Semantics of Creative Activities in Painting and Electronic Circuit Design
This section compares the semantic elements of the creative processes in painting and electronic circuit design.
**(1) Goals:** High-level goals are expressed through the set of concrete problems that must be solved. These problems are characterized by four variables: topic, relationship to previous problems, physical constraints, and authorship.
The topics of paintings might encompass a broad range that goes from a well-defined set of ideas and symbolisms, i.e. the meaning that results from Scripture for Christian religious paintings, to a broad range of emotions and meanings that attempt to establish a dialog between the artwork and viewers, like in abstract painting [10]. Therefore, while the end goal of a painting is to produce pleasure to the viewer [10, 66], it can achieve it by using a broad range of meanings, on one
end based on a well-defined meaning (_closed semantic space_) and at the other end based on primary cues that originate a sensation of pleasure in the brain (_physiological space of the brain_), like color [66, 105].
In engineering, problem framing is an important design step and mainly refers to defining the precise functionality of the solution and its expected performance [28]. Problem-framing can co-evolve with the solving process [10, 22]. New problems are often defined with respect to previously developed designs (_incremental design_). While there are relationships to previous work, there is arguably more flexibility in defining the topic of a new art object, as there are no concrete functional or performance requirements other than being original and improving the well-being and pleasure of customers [10] (_physiological space of the brain_). As opposed to art where topic identification is usually decided by the customer and/or artist [10], polls are used in circuit design to identify appealing functions, or the actual opportunities might become evident only after a number of attempts, such as in mobile computing. However, as opposed to art, after the function of a design is well defined, the degree to which the goal is attained can be characterized through well-defined metrics on cost, accuracy, speed, energy and power consumption, reliability, and so on [30].
The place of display of a painting can be well defined, like it was for "Baptism of Christ", or might be initially undefined, as in the case of a painting an artist creates on his / her own. However, in both cases, the physical dimensions of the painting are set before the artist commences the painting. Similarly, the physical attributes of an electronic design, i.e. weight and size, are decided after its functionality and envisioned way of use are decided. As opposed to art, where physical constraints are fixed (_constraint satisfaction problem_), e.g., the painting must entirely occupy the assigned 2D surface, in electronic design, the goal might be to minimize the size and weight (_minimization problem_).
Finally, the author of an artwork is well defined (_fixed_) while the authors of an electronic circuit design do not indicate a certain person but rather refer to a group of persons that possess a required skill set (_constraint satisfaction_).
#### (2) Concepts:
Concepts are collections of items that share common features and are distinguishable from other concepts [21, 25, 59]. Studying concepts has been a main topic in cognitive psychology [16]. There are various concept models, like using prototypes to represent concepts [77], class membership models based on primitives and defining conditions [4, 16, 25, 77], class membership models using graded membership [99], and concept typicality and vagueness [65, 68]. Concept formation and learning discusses topics, like concept alignment [47], concept similarity [33] and dissimilarity [72], perceptual symbols [8], natural categories [75, 76], ad-hoc categories [5], goal-based concepts [6], incremental formation [29], and complex concept formation [64]. Work also studied the difference between physical concepts and its image in the memory [99], situated simulation [9] and thematic relations [54], categorization adaptation [2], and concept typicality [39]. Next, the differences between concepts in paintings and electronic
circuit design were discussed with respect to their possible meanings, descriptions, and meaning understanding.
In [10], Baxandall states that "In systems like classical mythology and Christian theology, matured and elaborated over centuries, almost anything can signify something - trees, rivers, the various colors, groups of twelve, seven, three and even one; many things can signify various things" (page 132). For example, he indicates seven different meanings for Baptist, and the various meanings of a dove, plants, forms, and colors (_meaning transfer from another space_) [10]. The many meanings of the concepts originate a very large of possible meanings for the composition of forms (concepts) in a painting (_size of the semantic space_). Moreover, different concepts can have similar meanings, therefore there is significant redundancy in the semantic space (_redundancy of the semantic space_). In contrast, the concepts (e.g., BBs) in an electronic design can have multiple meanings, but the number of possible meanings is small. For example, a MOSFET transistor can act as a switch that is on or off or as an amplifying element. However, these meanings (behaviors) are few and in general restricted to a certain BB. Moreover, the meaning of the concepts in paintings is tightly integrated with the other concepts to achieve unity and balance, including requirements expressed in the laws of aesthetics [19] and Gestalt theory [3] (_integration_). A concept's meaning must be consistent with that of the entire composition, and new meanings can result in this process. BBs can be tightly coupled to each other, like in electronic designs of small size, like analog circuits, but modularization through local and hierarchical coupling of BBs was proposed to manage the complexity of large size design, thus, to achieve scalability (_scalability requirement_). The meaning of concepts in art can continuously change over time and from one culture to another due to dialog between the artist, participants (_members of the same culture as the artist_), and observers (_members of a different culture_) [10] (_dynamics of meanings_). A circuit design has a precise meaning that rarely changes over time (_unless new applications are discovered for it_), however, there can be a translation process from one state-of-the-art to another when designs are migrated across different fabrication technologies [30]. The translation is then always complete, while certain painting features might be difficult to map to a new culture, like the concept of "commensurazione" as explained in [10].
In design, the meaning of a BB is often specified using formal descriptions, like closed-form mathematical expressions (e.g., differential equations), logic formulas (i.e. first or higher-order logic), and executable / simulatable specifications in a programming language (like VHDL-AMS) [30]. These descriptions are often grounded in laws of physics. When such descriptions are not available, the BB meaning is expressed by enumerating the behavior in the main cases, like the corner points of a MOSFET device. In general, it can be argued that methods have been devised to produce tractable descriptions of behavior, so that they cover as much as possible of the possible behavior of a BB. Unexpected BB behaviors are unwanted as they usual end-up in failures, and additional design features might be incorporated into an electronic circuit to avoid such situations (_maximize the deterministic, fully known semantics_ of BBs). In contrast, the
features of the concepts in painting are rarely fully and well-defined, as enabling the inference of multiple meanings of a painting's concepts and composition is a main goal of art [66, 105]. Having new, unusual concept features, like a purple tree or a strange posture of an angel, can express new meanings, and are part of the creative process in art. Also, the level of detailing of a concept (and vice versa its level of abstract representation) can be important in painting for conveying a certain message, like emphasizing a certain aspect. In contrast, BBs in a complete engineering design are fully specified, even though partial descriptions can be used during the previous drafting stages. The physical representation of concepts in paintings can be subjected to geometrical rules, like placement of objects and perspective, which is similar to devising the structure of a circuit and sizing the BBs in electronic design. However, such rules are not imposed, e.g., in modern art, while they are a strong requirement in circuit design.
Finally, the process of understanding the meaning of a concept in painting involves inference and using symbols, analogies, and metaphors. Insight is gained in the process. Baxandall explains that this process is similar to formulating hypotheses, and then validating them [10]. The process is a sequence of steps, in which the meaning of a composition produces a hypothesis that is applied top-down to understand the meaning of concepts. Any identified inconsistencies serve then bottom-up as cues to modify the hypothesis and re-execute the process. The end result is a story that explains a painting. There are similarities with understanding the meaning of BBs in electronic designs, as humans might formulate hypotheses about the purpose (e.g., function) of unknown BBs, and then verifying the hypotheses, followed by readjusting a hypothesis if needed and repeating the process. However, it can be argued that the formulated hypotheses are simpler due to the significantly less ambiguous meaning of BBs than concepts in painting. This likely impacts the utilized strategies to find valid hypotheses.
#### 3.2.1 (3) Ideas:
Ideas are sets of related concepts assembled to satisfy a certain purpose, like to enable something or to create an output or consequence. Hence, an idea has an associated meaning. The purpose and meaning can be different for different cultures, including changes over time. For example, the idea of Christ's baptism expresses the cleansing of sins in Christian religions with all the associated consequences, like the "unbaptised are damned" (in [10], page 123). Or, the idea that a painting meant to be an altarpiece has a precise function. Similarly, in electronic circuit design, collections of related BBs (concepts) form a design with a precise purpose, like functionality and performance. Idea representation and organization has been studied in psychology [1, 7, 70].
The discussed comparison of ideas in painting and electronic circuit design considered the following dimensions: the type of ideas as defined by the nature of their purpose, meaning, origin, characteristics, grouping, and evolution (change). Moreover, ideas can be explicit or implicit. Explicit ideas are stated and communicated to others, and hence become part of the shared culture or state-of-the-art in circuit design. For example, the laws of proportions and perspective were ideas that were stated and shared by medieval painters. Similarly, textbook descriptions of subcircuits represent ideas on how to obtain a certain functionality and performance. Implicit ideas refer to situations in which the idea produces a well-defined purpose without this purpose being consciously intended by the artist or designer (_emergence_). Also, a painter might prefer a certain color or organization of a composition. Similarly, designers might prefer a specific subcircuit, even though other possibly superior alternatives exist.
Regarding their type, ideas in art can serve different purposes, like inquiry, hypothesizing, explanation (including causality), critique, constraint, generalization, detailing, expectations, purposing, intention, confirmation, support, reinforcement, messaging (propagation), expression of emotions, and so on. For example, ideas can express how a certain intention is produced through the selected colors and organization. Depending on their purpose, e.g., problem solving or knowledge communication, ideas in circuit design can have the same types, with less emphasis on expression of emotions.
It can be argued that meaning can be defined as a black-box model, if it only explains the purpose of an idea (what), or a white-box model, if it explains how the purpose is achieved (how). For example, Gombrich argued the importance of forming a visual image of an idea [35]. The meaning of ideas results from different processes in painting and circuit design. In art, the meaning is tightly dependent on the context, such as the time when an artwork was created and interpreted, and the perspective [36, 66]. Different meanings can result depending on the viewer's perspective, and new insight is likely to result. Therefore, ambiguity is arguably a desirable feature, as it supports novel, constructive, and creative reinterpretations of an art object. Moreover, physiological and psychological reactions can be important, such as body reactions, unconscious reactions to shapes and colors, and empathy [34, 66]. Therefore, it can be argued that the meaning of an artwork is not self-contained, but results from the subjective interpretation of the observer embedded in his/her culture. Instead, the meaning of ideas in circuit design is well-defined, mostly self-contained, and less dependent on possible interpretations. While multiple meanings can exist, alternatives are usually few, known, and precisely defined.
Ideas also differ by the process used to assign them a meaning, i.e., to understand them. Ideas in paintings are understood through processes that involve hypothesis formulation and verification, using symbols, analogies and metaphors, abstraction, utilizing exaggerations, paradoxes and contradictions, satire, allusions, logic inference, identifying associations, and so on [10]. Hence, it is important to formulate, focus, and pursue multiple meanings through cognitive activities, like separation, classification, and understanding the whole before focusing on the parts (top-down reasoning) [66]. The importance of sequential understanding has also been argued, especially for modern art [10]. Note that there is a physiological biasing of the process due to the brain's built-in priority in focusing on forms, like faces and eyes [66]. The importance of idea organization in memory, as well as the differences between experts and amateurs, has also been explained in the literature [66].
Idea characteristics include the following attributes: Precision refers to the degree of ambiguity, such as in the case of metaphors, as well as whether an idea has a quantitative or qualitative evaluation. Ideas can describe different degrees of abstraction and can have different levels of rigidity depending on their flexibility to support changes and to be combined (related) with other ideas. Their invariance describes the degree to which ideas remain unchanged as a result of the idea combination process, e.g., their meaning remains the same in spite of their change. They can have different degrees of validity, like ideas that are always true, ideas that are valid under certain circumstances, and ideas that are always false. They have a certain organization and structure, including a hierarchical structure. Ideas can have different social attributes too, like cogency (degree of authority), visibility, importance, and impact for a culture.
An idea is usually part of a larger group of ideas; therefore, it has characteristics with respect to the ideas of the entire group. Idea similarity describes the similarity of two ideas with respect to their concepts, structure, or meaning. Hence, similarity expresses the degree to which ideas are aligned with each other, including situations in which an idea is a transformed or evolved descendant of another idea. Ideas described by different sets of concepts can have similar meanings (synonyms). Alternatively, distinct ideas are described by their degree of differentiation. The capability to compare two ideas with respect to a metric, such as their utility, supports the definition of an idea's quality. Familiarity is the frequency with which an idea has been repeated within a culture. Ideas can be organized (structured) in an ontology, such as hierarchies of clusters of similar ideas [21]. Groups of ideas can be described by patterns, which express conditions valid for all ideas in the group. Ideas can be characterized by their consistency, e.g., the degree to which their meanings do not logically conflict (contradict) with each other; continuity, which is the possibility of understanding them as an evolving sequence; and integrality, which is the degree to which the set specifies a unitary, complete system (ensemble). Ideas can also be described by a degree of unexpectedness (including oddity), like whether they can be predicted based on the context or other ideas, and a degree of redundancy, such as when their meaning is also articulated by other ideas.
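As a minimal illustration of this attribute model, the following Python sketch encodes an idea as a set of concept labels plus a few of the attributes listed above and computes one possible reading of idea similarity as concept-set overlap; the class, field names, and example values are illustrative assumptions, not part of the model's formal definition.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    """Toy representation of an idea: a set of concept labels plus a few attributes."""
    concepts: frozenset          # the concepts the idea assembles
    precision: float = 0.5       # 0 = highly ambiguous, 1 = quantitative
    abstraction: int = 0         # level of abstraction (0 = concrete instance)
    familiarity: int = 0         # how often the idea has recurred within a culture

def similarity(a: Idea, b: Idea) -> float:
    """One possible reading of idea similarity: overlap of the concept sets (Jaccard)."""
    if not a.concepts and not b.concepts:
        return 1.0
    return len(a.concepts & b.concepts) / len(a.concepts | b.concepts)

baptism = Idea(frozenset({"water", "dove", "cleansing"}), precision=0.8, familiarity=5)
altarpiece = Idea(frozenset({"altar", "devotion", "cleansing"}), precision=0.7)
print(similarity(baptism, altarpiece))   # 0.2
```

Other attributes discussed above, like validity or social cogency, could be added as further fields or as relations between ideas in the same spirit.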
Ideas can be learned through experience, learned from others, or obtained through insight. Ideas can change over time and can be common, opposite, different, or partially different for the members of the same culture or of different cultures. The degree of idea similarity can depend on the conditions of the context. Baxandall explains that there is a continuity over time between ideas in art, in that an idea can be related to previous ideas [10]. Also, ideas in art can be continuously reinterpreted for any new time period [10].
**(4) Rules:** Rules present a way to connect concepts, ideas, or other rules with each other. The latter two kinds are called metarules. Concept combinations have been studied in cognitive psychology as a mechanism to relate separate concepts [63]. There are two types of combinations: property-based and relation-based combinations [100]. Property-based combinations transfer features from
one concept to another while the interpretation remains plausible [67]. Relation-based combinations connect two nouns through modifiers that relate to causes, structure, purpose, and location [26]. The parameters that influence the selection of different relation types have also been studied, like cueing, stimuli sequencing, and memory [27].
Rules are described in our model by the following elements that are detailed next: conditions for applying the rule, including constraints, a rule's structure and elements, the expected goals and real outcomes of applying a rule, the characteristics of a rule, its interpretation, and its origin and gradient (change).
The conditions for applying a rule describe the situations (conditions) under which a rule becomes available to be used and is then selected to be used. In general, models distinguish between making a rule available (cuing / activating it) [26] and selecting (deciding) the rule from the set of available rules [26, 27]. Constraints can cause the activation of a rule, like a certain structure of a painting's physical display (like the specific place of display of an altarpiece in a church), which imposes rules on the structuring of the composition [10]. Rules are also activated by a certain pursued goal and established beliefs about how the goal can be achieved. For example, art was dominated for some time by the belief that paintings must be nature-accurate images, which triggered the need to apply rules on proportion and perspective, so that 3D images were accurately presented on 2D surfaces [10]. The use of rules is also decided by the meaning associated with certain contexts and cultures, like the connection between the importance of an object and the centrality of its representation in a composition. Universal rules are always true, even though they might not always be selected, such as in situations when the author wants to communicate a paradox or an oddity.
A rule's structure and elements indicate its constitution. Rules can connect concepts, features of concepts, and ideas to a certain outcome, or connect rules into higher level structures, like hierarchical structures. For example, rules can express the structure of a composition or design. When higher-level structures are created, the connection can use all the components of the lower elements (lossless case) or only some (loss case) while possibly adding new elements that are not present in the lower structures (extension case).
The expected goals and real outcomes of applying a rule can be global, if they refer to an entire image, or local, if they relate to certain parts and details. Depending on their structural activity, rules can serve to decompose and aggregate a whole, to produce associations between concepts and ideas, to compare and differentiate, to generalize instances, and to de-generalize (instantiate) abstractions. With respect to their cognitive goal, rules can serve as part of producing inquiry, hypothesis, analysis, discussion, pairing and comparison (including alignment, similarity, separation / difference, and prominence / superiority), explanation, necessity, evaluation, insight, articulation and persuasion, reinforcement (memorizing), support, predictions, expectations, achieving a certain sentiment, impression, and emotion, and so on. Depending on the preciseness of their expectations and goals, rules can introduce different levels of ambiguity
(including metaphors, symbols, analogies, and allusions), backward references to old goals, and new meanings and reinterpretations of an existing rule. Specific rules can communicate truth. Rules can be used to communicate repetition, patterns, or movement, such as placing opposite colors at the opposing ends of horizontal and vertical lines [18]. Rules can also suggest correspondences between different images, between an image in nature and its corresponding artistic image, between outputs and goals, between attention and action.
The characteristics of a rule include whether a rule is explicit or implicit, externalized or internalized. Explicit rules consciously identify the involved elements (i.e. concepts, ideas or other rules), while the precise nature of their connection might (i.e. through mathematical expressions) or might not be defined (e.g., through qualitative or approximate expressions). Implicit rules are executed without the awareness of the user. Externalized rules are those described in a communication media, like formal rules, speech, image, and so on. Internalized rules do not have such a description. Rules can be deterministic or stochastic, which is a consequence of their activation and selection processes. Self-contained rules are fully expressed only based on their description. Rules can be also approximate (if they describe reality within a certain error range), precise (if they express the desired relation without unnecessary or redundant information), structuring (if there is an organization of large rules, i.e. using hierarchical structures), robust (if they are supported by a large set of real-world situations suggesting a certain organization of the experience), flexible (if they can be changed into other rules), invariant over conditions, including time (if the rule remains valid for a broad set of changing conditions, i.e. contexts and cultures), and relatable to other rules (if there is a sequence of rules that establishes the connection between two rules, i.e. deductive reasoning, inference, contradiction, or exploring alternatives).
The interpretation of a rule indicates the meaning of a rule, as described by its structure, concepts, effects, and conditions of application. As a rule can have multiple meanings and ambiguity, depending on the degree of consistency (validity) of its meanings, rules are needed to assign and update rigor, address unexpected features, strangeness / peculiarities, oddities, seeming, paradoxes, and their connection to attention. The meaning of a rule can change depending on the broader context and the user's subjective experience and perspective. Rules can also link material, natural elements to internal, subjective representations, such as using lines, positions, and colors to achieve harmony, balance, and ultimately beauty [62], or communicating the feeling of grandiosity through a rule that decides the viewing position of the viewer with respect to the painting [62]. Rules can also decide the target of the viewer's attention during observation, such as through large surfaces of the same color or by placing objects in the central part of a painting. Rules can produce a global viewing of an entire painting (overview) or guide viewing towards local details, such as through the crowding and dimensions of the emerging forms and surfaces. For example, many small patches of color can suggest an interpretive focus on details, while a few large surfaces encourage a global view of the entire painting.
Rules can originate in math and sciences or in various social conventions. For example, it is argued that the rules used in art were grounded over time in geometry, physiology of the eye and brain, psychology, and neuroscience. Alternatively, rules have been grounded in beliefs, like rules on the origin of harmony and beauty [62], or communicating a certain moral or ethical message [10]. Rules can be learned from others, like mentors, or learned through own experience, like using certain brush strokes to convey a certain message or the way of constructing compositions to achieve the desired visual harmony and balance of the composition [62]. Rules can be also set by the client of an art object. Another aspect is the degree to which new rules are embraced by the larger community, the degree of propagation, and the degree to which a rule changes over time as a result of personal and collective experience and depending on the conditions and context it is applied, the rule's flexibility. An interesting interplay arises between the goals of an outcome and rule modification (evolution). For example, the goal of creating nature-accurate paintings supported the continuous search for rules that would accurately represent a 3D image on a 2D surface. Note that this problem requires finding rules to project from a higher dimension space into a space of lower dimension with minimum information loss.
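The two-step use of rules described above, first activation (making a rule available through cuing) and then selection among the available rules, can be sketched as follows; the representation and the example rule are illustrative assumptions rather than a formalization of the model.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

Context = Dict[str, object]   # beliefs, goals, and constraints of the current situation

@dataclass
class Rule:
    """Toy encoding of a rule: an activation condition, an effect, and a priority."""
    name: str
    activates: Callable[[Context], bool]   # when the rule becomes available (cuing)
    apply: Callable[[Context], Context]    # how it connects concepts/ideas to an outcome
    priority: float = 0.0                  # used when several available rules compete

def select_rule(rules: List[Rule], ctx: Context) -> Optional[Rule]:
    """Two-step use of rules: activation (availability) first, then selection."""
    available = [r for r in rules if r.activates(ctx)]
    return max(available, key=lambda r: r.priority, default=None)

# Illustrative rule: central placement of an object signals its importance.
centrality = Rule(
    name="centrality-implies-importance",
    activates=lambda ctx: ctx.get("object_position") == "center",
    apply=lambda ctx: {**ctx, "importance": "high"},
    priority=1.0,
)
ctx = {"object_position": "center"}
chosen = select_rule([centrality], ctx)
if chosen is not None:
    print(chosen.apply(ctx))   # {'object_position': 'center', 'importance': 'high'}
```

Metarules could be expressed in the same scheme by letting a rule's effect add, remove, or reprioritize other rules in the available set.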
**(5) Procedures:**
Procedures represent problem-solving sequence of steps, where each step applies rules to concepts and ideas. Problem solving has been a main research topic in psychology [31, 44, 82, 83, 87]. The studied topics include the nature (e.g., clauses like what (declarative), how (procedural), etc.) and organization of memory structures (including differences between experts and novices [15, 17, 41, 81, 91] and memory cuing [15, 40]), learning [46, 89], such as concept formation [15], new rule induction [15] and priority formation [91], dual-process reasoning [69], solving heuristics and their motivations [14, 31], problem solving processes through problem decomposition based on categorization using similarity with previous problems [15, 55, 81], solution synthesizing through an ordered, sequenced tackling [55] of categorized schemata, templates and selected knowledge snippets [32, 48, 83, 91, 101] and solution verification through redundant perspectives [40], and understanding, including insight [13, 80, 90, 93] and knowledge restructuring [56, 96].
The comparison of the procedures used in problem solving in art and circuit design refers to the following elements: the devising of the step sequence to solve a problem, connection to previous and future problem solving instances, and learning and getting insight while conducting the procedure.
There is abundant literature on problem solving procedures in mathematics, science, and engineering [71, 98]. These approaches are geared towards solving well-defined and ill-defined problems with known requirements and constraints. For example, the steps suggested in [71] require first identifying the problem variables and structure, followed by relating it to similar, previously solved problems. Problem solving using analogies is often used too [97]. If the problem cannot be solved, the process should attempt solving a simpler yet related problem, created either by decomposing the original problem (divide-and-conquer) or by
simplifying the problem while keeping its main variables and unknowns (problem approximation). Depending on how problems are simplified and then their solutions integrated into the final solutions, the process uses various nuances of concept combinations, in which property combinations decide the features of the concepts (BBs) used in the solution [67], and relation-based combinations decide how concept functions are integrated together [100]. Using methods grounded in logic, such as inference, is an example of relation-based combination. However, other ways to achieve relation-based combinations are possible too, as explained in [42, 53]. Verifying the correctness of the solution ends the process. Relation-based combinations have been used in some artistic genres, like commensurazione. Art theory presents rules on how paintings should be executed, like the use of hue and luster [10]. Also, analogies with other paintings were utilized, as well as analogies originating in scientific theories [10]. Artists use nature as inspiration, as they select and amplify certain aspects [62]. Insight is also important. However, procedures in art also pursue a generative process [10, 62], in which existing images are deconstructed and reconstructed to produce a new meaning, like the equivalence between natural and spiritual [62]. Such a procedure is similar to hypothesis design and testing [10, 62], in which a new hypothesis is tested against previous artwork and the artist's rational (intentional) behavior [10]. Intentionality suggests the existence of causal relationships that use the spatial and temporal structure of a composition to support the story told by a painting [10]. This procedure creates not only a new artistic object, but also a new expression language, as explained by Shapiro: "discriminating the good in an unfamiliar form that is often confused by the discouraging mass of insensitive imitations" [79](pp. 16). The pursued constraints guarantee the novelty of the created art [61, 79]. For example, Mondrian used constraints like using only vertical and horizontal lines, pure colors, and opposition of colors to remove vagueness and the tragic, thus offering precise meanings to his paintings [61]. The selection of the features to be highlighted is arguably based on intuition [61] to achieve the artist's intentions related to broad societal goals and beliefs as well as artistic requirements, like balance, harmony, and order [62, 79]. It can be argued that the trial-and-error steps created mutated descendants of previous artistic features [10, 79, 84]; however, these descendants are only tokens of the new expression language that is created. An important characteristic of the language used in abstract art, including Mondrian's work, is its capability to describe ambiguity, so that understanding ambiguity is tractable (solvable).
Depending on how the decomposed subproblems are tackled and integrated, there are two opposite approaches in problem solving: top-down and bottom-up procedures. Both consider a hierarchical description of concepts, ideas, and relations. Top-down procedures assume that there is a general plan that defines the main concepts and their relations. Details are progressively added until the design is completed. Top-down design is popular in circuit design [30, 60]. In painting, commensurazione assumes a three-step, top-down process: devising the general plan that includes profiles and contours, defining relations through proportions between contours, and devising the features, like detailing and coloring [10]. Another instance of top-down design is to define creating artwork as the process of assigning values to the variables of a visual template (e.g., position, dimension, color, and cues to engage the viewer, like expected visual scanning), so that the desired meaning is communicated [62]. Bottom-up procedures focus on devising first the detailed concepts, which are then gradually integrated into the overall solution. A broad range of procedural approaches can be imagined by combining top-down and bottom-up solving of the subproblems into which a problem is decomposed.
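A minimal, generic sketch of the decomposition-based solving discussed above is given below; the divide-and-conquer skeleton and the toy instantiation are illustrative assumptions, not a reproduction of any specific method cited here.

```python
def solve(problem, try_direct, decompose, integrate):
    """Generic divide-and-conquer skeleton: solve directly if possible, otherwise
    split into subproblems (top-down), solve those, and integrate bottom-up."""
    solution = try_direct(problem)
    if solution is not None:
        return solution
    subproblems = decompose(problem)
    subsolutions = [solve(p, try_direct, decompose, integrate) for p in subproblems]
    return integrate(subsolutions)

# Toy usage: reach a target value by summing unit contributions.
print(solve(
    4,
    try_direct=lambda p: p if p <= 1 else None,   # trivially small problems
    decompose=lambda p: [p // 2, p - p // 2],     # split into two halves
    integrate=lambda subs: sum(subs),             # recombine partial solutions
))  # 4
```

The choice of `decompose` and `integrate` determines where a concrete procedure falls on the spectrum between top-down planning and bottom-up assembly.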
Applying a procedure to solve a problem is connected to previous and future work, including the influence of earlier paintings and other traditions [10]. The current application continues previous applications of the procedure and is continued by future applications [79]. Therefore, it has been argued that an artist's work is a logical development of previous art, including the work of others [61, 79]. However, the continuation also includes a distinction between two art objects [10, 61]. Therefore, a procedure extends beyond solving the current problem, like creating a certain painting, to pursuing a broad idea (goal) of the artist [10]. The reinforcement (through repetition) of artistic features with a certain purpose can establish beliefs, like the elimination of form in painting serving the purpose of cementing freedom of expression [61](pp. 38).
Finally, learning and getting insight while conducting the procedure is critical considering the evolutive nature of the process. Learning can be conscious or unconscious [61, 62]. Mondrian argues that art is an evolutionary process during which the artist uses multiple points of view, discovers new ways to express certain ideas, and learns about them by comparing them with known features (including features from other domains, like architecture [62]) and further meditating (generalizing) about their expressive power [62]. For example, the artist can learn about the effect of amplifying or reducing illumination on the expressed meaning [62]. Other learned elements include new constraints, rules, and invariant relationships. A better understanding of the artist's intentions and goals might also result from observing the meanings obtained through the new features and concept associations [10]. More precisely, Mondrian argues the importance of "looking deeply" to "perceive abstractly" [62]. This suggests a repeated analysis of an artwork during which any newly acquired knowledge is used for the next analysis iteration [61]. Learning can also refer to the execution of a physical painting, like materials, textures, detailing, finishing, and so on [62].
**(6) Beliefs - (7) Expectations:**
Beliefs are invariant ideas, rules, or their characteristics. These invariants over a certain time period can apply to an individual, a group, or an entire culture. Hence, the large variety of ideas, rules, and characteristics implies that there is a large variety of beliefs too. The proposed model argues that beliefs are necessary in real life to tackle the huge complexity of the semantic space if all variables are free. Many variables are correlated, and therefore analyzing the entire space of possibilities for a new idea, rule, or routine might be difficult in a reasonable amount of time. Instead, beliefs lock some of the free variables through priorities, importance, specific meanings (including symbolism, like the religious meaning of water and doves), a certain way of solving a problem, and so on, hence significantly reducing the semantic space of possible meanings and making problem solving more tractable. Due to their existence over longer periods of time, beliefs are expected to produce a certain cognitive development of the members, such as adhering to common ideas, and also certain priorities, habits, and skills to be used in the future.
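The claim that beliefs make problem solving more tractable by locking free variables can be illustrated with a toy enumeration; the variables, values, and beliefs below are invented for illustration only.

```python
from itertools import product

# Free variables of a toy "semantic space": every combination is a candidate meaning.
variables = {
    "dove":  ["bird", "peace", "Holy Spirit"],
    "water": ["liquid", "cleansing", "danger"],
    "theme": ["secular", "religious"],
}

# Beliefs lock some variables to fixed values (e.g., religious symbolism of the period).
beliefs = {"dove": "Holy Spirit", "theme": "religious"}

def candidate_meanings(variables, beliefs):
    """Enumerate interpretations consistent with the locked (believed) assignments."""
    names = list(variables)
    for values in product(*(variables[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(assignment[k] == v for k, v in beliefs.items()):
            yield assignment

print(len(list(product(*variables.values()))))            # 18 unconstrained candidates
print(len(list(candidate_meanings(variables, beliefs))))  # 3 once beliefs lock two variables
```

Even in this tiny example, two locked variables shrink the space of interpretations by a factor of six, which is the tractability effect argued for above.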
As they are invariant over a period of time, beliefs are likely to produce a certain set of central objectives (problems, needs), a specific way of communication, a particular set of metrics through which an artwork is evaluated to achieve its set intentions, as well as distinct way in which the observers are expected to interpret an artwork. Moreover, beliefs that exist over longer time periods are likely to produce knowledge that is systematized into top-down sequences of activities for the creation of a new painting, like the painting process that has three steps, general planning, painting the profiles and contours, and coloring [10]. In this top-down problem-solving process, general planning decides the overall composition, the position, size, and relations between the main forms, followed by detailing the overall composition through precise profiles and contours. Finally, coloring completes the detailing of the composition. This top-down process requires less experimenting through trial-and-error to identify the best painting outcome, as many of the related variables, like an optimized way of communicating a message through a composition were already decided. In this scenario, arguably, the main solution in achieving the intention of the customer was finding a painter that possessed the required knowledge and skills, not so much an artist that would innovate. It might explain the importance of prestige and the transmitting of the craft through mentorship.
Beliefs can be explicit, such as the meaning of religious episodes as explained by Scripture, or implicit, i.e., following certain actions of a community even though the beliefs behind those actions might not be known. Beliefs can evolve over time, such as through refinements and adaptations, as long as their future expressions do not conflict with the present forms. Alternatively, beliefs can be replaced over time by newer beliefs that contradict them.
Beliefs can originate in mathematics, sciences, religion, philosophy, and the social norms of a culture. For example, the invariant ideas and rules used in paintings of the romantic period are based in the geometry of planes, proportions, and perspective. Similarly, beliefs used in the paintings of the Viennese school of the XIX century are arguably grounded in insight from psychology [45]. Religion imposed beliefs not only about the precise meaning of the episodes in the Bible, but also about the fact that their representation through art must be visually precise, pious, and memorable [10]. Beliefs about the superiority of rationality over emotions led to goals of eliminating the subjective in an attempt to show absolute beauty [62]. Social norms imposed beliefs, like the preference for religious and historical themes over landscape [10] and depicting in detail rich garments to suggest the status of prestige and power of the clients that ordered a certain artwork. Beliefs can also originate in constraints related to previous artwork, such as the goal of not painting motion [61] and the subsequent beliefs about
the connection between natural, emotion and diminishing beauty, the limitations of natural representations, the importance of lines, colors, and integration of duality to achieve harmony and balance, the purpose of art, the evolution of life from natural to abstract, and the characteristics of the societies of the future [62].
**(8) Outcomes:** Outcomes refer to completed circuit designs or finished paintings in the case of Mondrian's work.
## 4 Case Study: Understanding Mondrian's Paintings Vs. Understanding Electronic Circuit Designs
### Circuit Design
**Addressed problems**. Electronic circuit design usually implies solving ill-defined problems, for which solutions must tackle conflicting requirements, where satisfying or improving one requirement simultaneously worsens another requirement [20]. Conflicting requirements give rise to performance tradeoffs in circuit design and are an essential aspect of the design process [30]. Typical tradeoffs between conflicting requirements are amplification Vs. bandwidth and stability, and speed Vs. low power consumption. The meaning (semantics) of the utilized design elements, e.g., BBs (including MOSFET transistors) and design rules, is defined to a large degree by laws of physics, even though there is some ambiguity due to electrical properties that were previously minor (e.g., effects due to shrinking sizes of MOSFET transistors) or unwanted poles and zeros when connecting subcircuits together [92]. The main economic constraints of design require producing solutions in the shortest amount of time, with the lowest costs, and with as few errors as possible. Utilizing novel design features is not justified unless it has been proven that current solutions cannot address the application needs. Therefore, reuse of previous design features is often pursued. Moreover, design solutions can be objectively compared with each other based on their numerical performance [30], leaving little room for subjective interpretations.
Figure 1 illustrates the cognitive architecture (called InnovA) that was proposed for devising new electronic circuits by computationally mimicking the cognitive activities in problem solving [53].
**Memory**. The cognitive architecture has a memory system organized as three parts: First, the Memory system includes the design knowledge available for problem solving. It is organized as Long-term memory that includes all design knowledge, Short-term memory that keeps the recently used knowledge, and Episodic memory that stores the previous experience of using specific design knowledge in problem solving. Second, the Semantic memory represents the meaning of the design knowledge stored in the memory system. It is represented using three structures: the Associative structure groups the design elements into hierarchical sets based on the similarity of the elements in a set. Associations connect subcircuits with similar functions (synonyms) or similar structures (homonyms). Connections to goals structure indicates the purpose of
each design element, and Causal sequences structure presents how the design elements are linked together to produce a design solution. Third, Subjective memory stores beliefs and preferences about using specific design knowledge (e.g., subcircuits) in a design solution. They are important in assessing the importance and hence in ordering the way in which unwanted features of a solution are addressed, like noise and nonlinear behavior. Emotions module mimics the purpose of human emotions during decision making, like controlling the switching between pursuing a global or a local view of the design problem. For example, addressing very precise design needs focuses the process on certain local parts of a design, similar to how local processing is achieved through negative emotions, like anxiety and frustration. In contrast, positive emotions, like easiness, encourage a global perspective on the design problem [74]. Context-dependent memory module retains the elements of Subjective memory that have been used for solving the current problem.
In addition, the architecture has three subsystems: (i) the attention and prediction subsystem to relate (compare) a new problem or solution to previous similar instances and to predict the impact of the differences both in terms of overall operation and outcomes, (ii) the reasoning subsystem to produce an explained solution (e.g., a circuit structure) for a problem, and (iii) the knowledge extension subsystem, which creates new subcircuits (BBs) for future problem solving and restructures the knowledge to incorporate the new information that was acquired during design. The operation of the cognitive architecture realizes three feedback loops. The three subsystems and the associated loops are presented next.

Figure 1: The cognitive architecture InnovA for the design of new electronic circuits.
**Attention and prediction subsystem**. It is the part highlighted in blue in Figure 1. The process starts by having the Attention window module being activated by unexpected features of the design requirements or circuit design, which is being analyzed as a potential solution to a posed design problem. For example, the design requirements that supported the devising discussed in [43, 58] required creating a solution for low voltage, low power applications and sufficient amplification and speed, e.g., slew rate, requirements. Attention is focused on the conflicting requirements, such as low voltage Vs. low power, and amplification and speed Vs. low power. Addressing one of the requirements adds supplementary constraints on the opposing requirement. For example, a solution might use sub-structures (BBs) from two different circuits: adaptive biasing class AB input and a three-stage with frequency compensation to improve stability. Based on their previous usages, hence the knowledge about the causal connection of the sub-structures (module Predictions about causality) to their outcomes (module Predictions about outcomes), adaptive biasing class AB input with common-mode feedback is expected to offer high amplification and speed (e.g., slew rate). Similarly, the three-stage structure with frequency compensation is expected to improve the stability of the solution. Making predictions about causality and outcomes might involve the Context-dependent memory module, which includes beliefs, priorities, and emotions specific to the application that is being addressed. As the current application might be similar to previous applications, Context-dependent memory module is connected also to Subjective memory module that stores the beliefs, priorities, and emotions previously acquired during design.
**Reasoning subsystem**. The part highlighted in green in Figure 1 indicates the reasoning part to create a solution that is verified to satisfy the requirements. The substructures selected through attention and prediction become part of Population of solutions, which includes all the circuit elements that can be relevant in devising the solution. Module Produce alternatives using incremental operations and combinations modifies the selected substructures to adjust them to the problem requirements, and/or combines them to build new designs (concept combinations). Incremental operations create local changes to a substructure without changing the main nature of the substructure. Module Select alternatives selects among the alternatives available to solve a problem the alternative that is further considered at the current step. Module Simulation produces a complete characterization of the design, hence fully describing its meaning (semantics) with respect to the problem requirements. Module Understanding needs compares the characteristics of the current design with the requirements, and then identifies the causes for their mismatches, including the reasons for the unsatisfied requirements.
**Updating**. Module Identify new BBs recognizes new substructures that are generated by adjusting present BBs. A BB is a subcircuit that can be used for solving a wider set of problems, hence is not customized only for the current
application. Module Knowledge restructuring changes the semantic memory because its current structuring does not lead to finding a solution that addresses the application requirements.
The architecture memory and subsystems implement three nested loops, as shown in the figure. The innermost loop, called loop (i), aims at finding the bottlenecks of a solution. This is critical in ill-defined problem solving, as devising solutions that tackle opposing requirements is the main challenge of such problems. It repeatedly identifies unexpected or unusual features, which are then used through reasoning to understand the next design need to be addressed. This loop can lead to situations in which new subcircuits (BBs) are created as a result of incremental changes to the existing design features because of the needs that must be tackled. A second loop emerges, called loop (ii), because a new subcircuit draws attention and supports new predictions, which can subsequently be used in adjustments by incremental operations and in combinations with other subcircuits. The outermost loop, called loop (iii) in the figure, is executed if the two previous loops fail to address the conflicting requirements by using the available subcircuits or by creating new BBs. In this case, the design process must consider a higher abstraction level that exposes the source causing the irreconcilable performance requirements. The attention and prediction subsystem focuses on the cause and then reasons about solutions that would reduce the impact and nature of the causes on the overall solution characteristics. This loop produces knowledge restructuring.
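For illustration, the control flow of the nested loops can be sketched as a toy program in which requirements are numeric targets and BBs are named bundles of expected gains; the function and data names are placeholder assumptions, not the InnovA implementation, and loop (iii) is only indicated.

```python
def design(requirements, building_blocks, max_iters=20):
    """Toy rendering of the nested loops: meet numeric requirements by composing BBs."""
    perf = {metric: 0.0 for metric in requirements}
    used = []
    for _ in range(max_iters):
        # Loop (i): attention focuses on the requirement with the largest remaining gap.
        gaps = {m: requirements[m] - perf[m] for m in requirements}
        need = max(gaps, key=gaps.get)
        if gaps[need] <= 0:
            return perf, used                      # all requirements satisfied
        # Still loop (i): try to reuse an existing BB predicted to improve the bottleneck.
        helpful = [bb for bb in building_blocks if bb["gains"].get(need, 0) > 0]
        if helpful:
            best = max(helpful, key=lambda bb: bb["gains"][need])
        else:
            # Loop (ii): no known BB helps, so a new BB is created and memorized.
            best = {"name": f"new-bb-for-{need}", "gains": {need: 1.0}}
            building_blocks.append(best)
        for metric, gain in best["gains"].items():
            perf[metric] = perf.get(metric, 0.0) + gain
        used.append(best["name"])
        # Loop (iii), knowledge restructuring when (i) and (ii) both fail, is omitted here.
    return perf, used

bbs = [{"name": "class-AB-input", "gains": {"slew_rate": 2.0}},
       {"name": "three-stage-compensation", "gains": {"stability": 1.5}}]
print(design({"slew_rate": 3.0, "stability": 1.0}, bbs)[1])
```

The example BB names echo the adaptive biasing class AB input and the three-stage compensation discussed above, but their numeric gains are invented for the sketch.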
### Mondrian's Paintings
**Addressed problems**. Mondrian offers a detailed presentation of the ideas behind his work, including the factors that originated the originality of his paintings [61, 62]. For example, he stated in [61] that he "disliked particular movement" (pp. 10) and that he wanted to paint "not bouquets, but a single flower a time" (pp. 10). Mondrian belonged to the abstractionist movement in art, and was inspired by a number of art styles, like Impressionism, Fauvism, and Cubism [61]. However, his goal of creating original artwork led to the observation that Cubism, to which he initially participated, does not eliminate all triggers of subjective feelings, like natural forms, and hence, a logical extension would be to pursue representations that achieve pure, timeless beauty by presenting pure reality that is void of particulars [61, 62]. The goal for art would be to describe human condition in a modern age, such as a mechanized age dominated by materialism [62]. His beliefs, ideas, and their expression in paintings evolved over time, as he explored new ideas and approaches.
[61] argues that art styles follow a continuous evolution process towards a "cleaner content of art" (page 17). The evolution process incorporates new ideas in science, philosophy, and society. Therefore, restructuring should reflect this evolution. It offers legitimacy to a new art style within a specific cultural frame [10]. Moreover, there is a consistency (coherence) of the artwork within a certain style with respect to agreeing with the beliefs and goals of other work of the style, as well as a consistency with the artist's previous work [10]. Finally, there is a
necessity element that justifies the need for restructuring and new beliefs and goals as requirements to express the new characteristics of a society [10].
**Problem-solving process**. It can be argued that Mondrian's effort to create new paintings is an open-ended problem-solving process that was based on a number of constraints on the ideas to be pursued and avoided, as well as his personal goal to produce paintings that are original and contribute to the mission of art as opposed to other domains like architecture and decoration. While his work was not rooted from the beginning in a precise set of rules and elements to be used in constructing new outcomes, it was guided by the elements that were not to be utilized in the creative process, thus by a precise evaluation mechanism. The focus on distinguishing from previous work leads to evolving new elements that articulate this distinction, and which lead to the identification and exploration of a new solution space that can be only partially predicted by the previous work. Solving the open-ended problem posed by the goal of creating original art in tune with the current society meant devising a new expression language (not only particular paintings), including the elements, rules, and objectives of the language. The uncovered solution space is formed by all paintings created using this expression language. The ambiguity of the BBs, ideas, and rules is broad, as Mondrian's work proposes a reinterpretation of the meaning (semantics) of lines, forms, and colors in a way that departs from their traditional meanings based on nature. The interpretation of his effort is subjective, even though newer work attempts to offer a more quantitative evaluation based on the neural processing of the brain [66, 105]. While the new meanings support expressions beyond visual elements, they raise significant challenges in terms of understanding the semantic ambiguity that emerges, so that new meanings are possible based on an observer's interpretation [66]. Hence, an outcome does not have a deterministic meaning anymore, but is rather a guiding template to generate new interpretations by the viewers.
Figure 2: Open-ended problem-solving process corresponding to Mondrian’s painting.
Figure 2 depicts this process of solving open-ended problems. The process continuously uses two kinds of constraints: elements and rules to be avoided as they express limitations of previous artistic styles, and artistic elements and rules to be pursued as they reflect the natural evolution (progress) of the artistic style with which the artist identifies himself with. These two kinds of constraints support then the evolving of new artistic elements and rules that are used by the artist to produce new artwork. The artwork is evaluated to understand its characteristics within the artist's goals, and then analyzed to understand its broader meaning and potential to further support original work. The analysis can lead to an adjustment of the artist's goals, constraints, and meanings of the utilized concepts. Note that the process does not only generate individual art objects, but it produces a new language to express the solution space of the open-ended problem.
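A toy rendering of the open-ended loop in Figure 2 is sketched below, with "avoid" and "pursue" constraints filtering generated compositions and the vocabulary of elements evolving as new work is accepted; the vocabulary items, constraints, and acceptance criteria are illustrative assumptions only.

```python
import random

def create_artwork(pursue, avoid, vocabulary, n_elements=4, attempts=200, seed=0):
    """Toy rendering of the loop in Figure 2: generate compositions from a vocabulary,
    reject those violating the 'avoid' constraints, keep those reflecting the pursued
    direction, and evolve the vocabulary so later work departs from earlier work."""
    rng = random.Random(seed)
    portfolio = []
    for _ in range(attempts):
        candidate = set(rng.sample(sorted(vocabulary), n_elements))
        if candidate & avoid:
            continue                    # uses elements/rules to be avoided
        if not candidate & pursue:
            continue                    # does not reflect the pursued direction
        if candidate in portfolio:
            continue                    # fails the novelty requirement
        portfolio.append(candidate)
        vocabulary |= {f"variation-of-{min(candidate)}"}   # evolve new elements/rules
    return portfolio

vocabulary = {"straight line", "pure color", "curve", "natural form", "grid"}
works = create_artwork(pursue={"straight line", "pure color"},
                       avoid={"natural form"},
                       vocabulary=set(vocabulary))
print(len(works), sorted(works[0]))
```

The sketch only captures the filtering and vocabulary-evolution structure of the process; it does not model the evaluation, goal adjustment, or meaning assignment discussed above.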
Figure 3 depicts a possible cognitive architecture for getting insight into Mondrian's work. It corresponds to the problem-solving flow in Figure 2. To ease the comparison with the architecture for circuit design, the presentation of this architecture was devised similarly to the architecture in Figure 1.
Figure 3: Possible cognitive architecture for insight into Mondrian’s paintings.
**Memory**. The memory system of the cognitive architecture in Figure 3 has the same broad structure as that of the cognitive architecture for electronic design in Figure 1, but the stored knowledge and knowledge organization is different, e.g., shown for the Semantic memory module. Instead of a hierarchical structure organized based on similarities, causal connections to goals, and causal sequences to create a design solution like in Figure 1, the semantic memory is based on Instances of paintings, i.e. paintings P1, P2,..., Pk, which represent the artwork with unique and deciding influence on the artist. For example, Mondrian mentions Futurism, Dadaism, Surrealism, and Cubism as currents that influenced him (page 18 in [61]), so the module would include a sample of paintings of these artistic currents. However, these instances are not organized in a hierarchical structure along features (like in circuit design), but instead are kept as separate instances. Instances materialize the constraints, like features to be pursued and to be avoided, and hence, they can be encoded over time into generalizations and abstractions that can originate new beliefs and goals. Module General beliefs and goals includes explicit ideas and facts largely accepted by a society, and which originate in science (e.g., quantum physics and the theory of relativity), philosophy and aesthetics (like dualism and harmony [62]), and society (i.e. the dual nature, natural and spiritual of men [62]). In addition to explicit beliefs, implicit beliefs are those accepted without a motivation, like trends in a society, like an increased emphasis on material aspects of life. Goals represent the broader goals set for society, such as the desire to build a happier, more intellectual society [62]. This module corresponds to Societal characteristics in Figure 2. Module Beliefs and goals of peers provides focused, art-related beliefs and goals of the art community to which the artist participated, like beliefs about the role and color in artistic expression or the purpose of art, such as its role in the formation and preservation of beauty [62]. Their purpose is to articulate the specifics of current art through constraints, like differences from previous art styles and means to emphasize these differences [88], needs, i.e. limitations that an art trend attempts to address as compared to previous art styles, goals set for the present style, and the meaning assigned to specific forms of representation, like line and color. They store the Artistic purpose and Artistic constraints in Figure 2. From a formal point of view, they are rules expressed using various logic systems.
Modules Subjective memory and Context-dependent memory include the artist's subjective beliefs and goals, which are instantiated, adapted, and modified from the general beliefs and goals as well as those of his/her peers. For example, Mondrian's writings [61, 62] mention a large set of goals, constraints, needs, and meanings, which were pursued in his artwork. Some of his beliefs and goals are as follows: constraints - avoid natural forms and colors as they produce subjective, tragic reactions, and deny normal perspective to force a new way of understanding art; needs - find a new way of expression to avoid the diminishing appreciation of natural beauty, use pure color (like black, white, red, blue, yellow), and devise abstract representations based on lines but no forms; meaning - establish the equivalence of beauty with harmonious, equilibrated duality, use straight lines to avoid the variability in nature, and utilize horizontal and vertical lines to express opposite relations; and goals - remove the individuality to produce universality, describe the immensity of nature by expressing expansion, rest and unity, and express the multiple facets of truth. The two modules store Elements and rules to be avoided and Elements and rules to be pursued in Figure 2.
**Attention and prediction subsystem**. The subsystem is highlighted in blue in the figure. Cues are differences between the current image (e.g., a Mondrian painting) and instances in the semantic memory as well as beliefs in the subjective memory. Cues guide attention (through module Attention window). Cues that draw attention pertain to the embedding of the architecture in the real world as well as to learned and observed features. Cues related to embedding mimic elements that are hardcoded in the brain [105], like cue centrality, size, colors, contrast and opposition, and globality Vs. detailing. Learned cues include situations that have been discussed in the literature, e.g., using unfinished lines in a composition [18], or interpreting a painting as an image observed through a window (analogy). Cues can also represent unexpected observations, like suddenly noticing different shades of white of neighboring surfaces. For example, the granularity of the painting focuses attention on the entire image, while crowded images focus attention on the details. Other cues are large and central elements, like having long, black lines or bright, red patches in the middle of an image. Cues can also be known (in the semantic memory) or previously seen, visual elements (in the subjective memory).
As a result of cuing, concepts are retrieved from the semantic memory (module Cued concepts) together with their associated meaning, also from the semantic memory (module Predictions about meaning). For example, colors introduce a certain feeling, like black, purple and orange can produce a negative feeling of impersonality and lifelessness. Feelings induced by colors can also suggest movement, like the positioning of opposite colors on the diagonal, or 3D stacking, like the placing side by side of black and yellow. Similarly, paintings with few lines and a lot of white spaces create feelings of simplicity, order, and lightness, while crowded images of many colors produce feelings of difficulty and tension. The association of the predicted meanings to the cued concepts happens automatically as stored in module Painting related knowledge, such as Associative structure and Connections to goals. The subsystem corresponds to activities Understand their characteristics within the artist's goals and Analyze their broader potential and meaning in Figure 2.
Cues have an important role in understanding the meaning of an image, as they act as starting points in piecing together that meaning. A cue decomposes an image into components that can have an associated meaning. For example, vertical and horizontal black lines are used to separate yellow, red, or black surfaces, which create feelings, like heat, positivity, or movement in 3D. Lines do not act towards producing a composition made from forms, like in traditional painting, but as separators between elements with meanings.
Each meaning of a painting element acts as a hypothesis that is further validated or modified as the remaining elements of a painting are understood. Predictions about meaning might incorporate laws of logic, geometry, and physics, i.e. the formation of shadows and how shadows relate to the positioning of layered surfaces. Also, using the continuation principle to explain the appearance of surfaces of color can lead to insight about the positioning of the surfaces, such as which surface is on top, and which is at the bottom. However, ambiguity of this meaning can emerge if the continuation principle is partially limited, so that surfaces are stripes. Analyzing the ambiguity can produce multiple meanings for the same image. Cues can force new interpretations, like instead of seeing the intersecting horizontal and vertical lines as a cross, they are used together with color to suggest the idea of movement [18]. This subsystem corresponds to module Evolve new artistic elements and rules in Figure 2.
**Reasoning subsystem**. The subsystem is highlighted in green, and it uses the meanings of the elements identified based on the cues to produce a meaning for the entire painting. Population of meanings module stores the meanings that have been identified for the cued concepts, as explained in the previous paragraphs. A meaning to be further considered in reasoning is selected for integration (Select meaning of concepts module). This meaning can be independently identified for the cued element, such as if a decision was made to sample different parts of the image, or it can be selected in conjunction with the meanings of the previously analyzed elements, such as the considered semantics includes horizontal, stacked layers of surfaces of color. The integration of new meanings into a previously hypothesized meaning serves to reinforce the correctness of the hypothesis. Integration must be coherent with the previously assumed meanings and the meaning of the cued concept (e.g., no contradictions), legitimate with respect to all stored beliefs, and consistent with previous semantic integrations (module Coherent, legitimate & consistent integration?). The subsystem corresponds to Evolve new artistic elements and rules, Understand their characteristics within the artist's goals, and Analyze their broader potential and meaning in Figure 2.
The cognitive architecture implements three feedback loops. The first loop attempts to find the starting points for deciphering the meaning of a painting. If the meaning selection for a concept is not successful, the architecture scans for new cues (module Scan for cues), which can lead to different semantic interpretations. The scanning process can jump to the next dominant cue or follow a systematic scanning of the entire image, such as from left to right and top to bottom. The second feedback loop searches for the structure that integrates the meanings of the cued elements. If the integration of the individual meanings is unsuccessful, the process considers different integrations (combinations) or different meanings for the concepts. The third loop reinterprets the meanings of concepts and restructures their integration, if the first two loops failed or if additional meanings were to be found.
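For illustration, the interplay of the first two loops (with backtracking standing in for the third) can be sketched as a search over candidate meanings per cue that keeps only mutually coherent assignments; the cues, meanings, and coherence rule below are invented for illustration.

```python
def interpret(cues, candidate_meanings, coherent):
    """Toy rendering of the loops: hypothesize a meaning per cue (loop 1), integrate
    only mutually coherent meanings (loop 2), and backtrack when integration fails,
    which stands in for reinterpretation (loop 3)."""
    def integrate(assigned, remaining):
        if not remaining:
            return assigned                          # a coherent story for the painting
        cue, rest = remaining[0], remaining[1:]
        for meaning in candidate_meanings[cue]:      # try alternative meanings for the cue
            if all(coherent(meaning, other) for other in assigned.values()):
                result = integrate({**assigned, cue: meaning}, rest)
                if result is not None:
                    return result
        return None                                  # forces backtracking / reinterpretation
    return integrate({}, list(cues))

candidate_meanings = {
    "long black lines": [("formal", "separators between surfaces"), ("symbolic", "a cross")],
    "large red patch":  [("symbolic", "heat and positivity"), ("formal", "a plane of color")],
}
# Illustrative coherence rule: all selected meanings must belong to the same register.
coherent = lambda a, b: a[0] == b[0]
print(interpret(candidate_meanings.keys(), candidate_meanings, coherent))
```

Returning the first coherent assignment is a simplification; as argued above, an abstract painting typically supports several coherent assignments, which could be enumerated instead of returning only one.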
**Updating**. Figure 4 summarizes the process of updating the semantic memory over time. It refers to two generic art styles, called \(A1\) and \(A2\) in the figure. Artwork is created along the beliefs and goals set for style \(A1\). By evaluating the expression capabilities of the produced artwork, the limitations of style \(A1\) are understood, including the way these limitations originate in the beliefs and goals set for the style. Eventually, a bottleneck for style \(A1\) is reached, when it cannot further evolve to create new, original artwork beyond that already created. Belief and goal restructuring follows after reaching the bottleneck, starting from the identified limitations of style \(A1\) [61]. Restructuring includes identifying constraints on what should and what should not be pursued by a new art style, stating distinguishing features that support differentiating it from previous styles, and stating new goals. The new art style, called style \(A2\), develops its own beliefs and goals that are in sync with the restructuring elements. Within this new art style, an artist adopts his/her own personal beliefs and goals, which match to some degree the beliefs and goals of style \(A1\), but might also include different beliefs and goals based on the subjective interpretation of the artwork within the style. New personal artwork is created to reflect the personal beliefs and goals, which through analysis and evaluation leads to further adaptation (evolution) of the personal beliefs and goals, as well as new insight about the understanding, including limitations, of other artwork. This process corresponds to activities Identify limitations of current art, Set constraints, Identify limitations in peer's work, Identify natural evolution (progress), and Adjust goals, constraints, meanings in Figure 2.

Figure 4: Updating the semantic memory.
## 5 Discussion
This section summarizes the main differences between problem-solving in electronic circuit design and understanding Mondrian's paintings as a starting point in identifying the computational methods that could attempt to characterize Mondrian's work similarly to a human expert. Previous work showed that current Deep Neural Networks, like Convolutional Neural Networks, cannot classify abstract artwork well [103]. There is a large body of work on computational methods for automated circuit design [30], but there is significantly less research on algorithmic approaches to characterizing art. The goal is to identify the new features required to process artwork as compared to other Computer Aided Design (CAD) activities.
**Addressed problems**. There are fundamental differences between the nature of ill-defined problems, like in electronic circuit design, and open-ended problems, such as creating new artwork. Ill-defined problems pose conflicting requirements and constraints, but these are usually known, such as being expressed through numerical thresholds or ranges. Instead, open-ended problems impose a significant departure from the current solution space, as creating novel solutions is a main goal. There are usually no numerical descriptions for this departure. Also, there are no requirements for reducing cost and minimizing errors, like in engineering design.
Due to their numerical descriptions, ill-defined problems in engineering can be objectively evaluated through mathematical methods based on physical models and precisely defined metrics, such as by using numerical simulators. They support introducing precise quality metrics, procedures to compare solutions, and surfaces that reflect the nature of tradeoffs between conflicting requirements, like Pareto surfaces. There are no similar approaches for open-ended problem solving in art. There are currently no equivalent methods based on theories in science that could lead to mathematical evaluations, even though recent work in neuroscience and neuroaesthetics suggests that such an effort might be possible to some degree [45, 105]. Evaluation of artwork, including its meaning, degree of innovation, and comparison between individual paintings, is performed by experts.
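As an illustration of this kind of objective comparison, the following sketch computes a Pareto front over toy amplifier candidates, assuming all metrics are encoded so that higher is better; the candidate designs and their numbers are invented.

```python
def dominates(a, b):
    """True if design a is at least as good as b on every metric and strictly better
    on at least one (all metrics assumed to be encoded as higher-is-better)."""
    return all(a[m] >= b[m] for m in a) and any(a[m] > b[m] for m in a)

def pareto_front(designs):
    """Designs not dominated by any other design: the tradeoff (Pareto) surface."""
    return [name for name, perf in designs.items()
            if not any(dominates(other, perf)
                       for o, other in designs.items() if o != name)]

# Toy amplifier candidates scored on gain and negated power, so both are higher-is-better.
designs = {
    "A": {"gain": 60, "neg_power_mW": -2.0},
    "B": {"gain": 55, "neg_power_mW": -1.0},
    "C": {"gain": 50, "neg_power_mW": -3.0},   # dominated by both A and B
}
print(pareto_front(designs))   # ['A', 'B']
```

No analogous dominance relation is available for artwork, which is the asymmetry argued in this paragraph.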
**Problem solving process**. Solving ill-defined problems in engineering involves mainly searching for the desired features in the space of previous solutions, and if searching fails then creating (generating) new solutions to address the issue that produced the failure. Each solution has a well-defined meaning and purpose, like satisfying precise requirements. The process uses three steps: (i) searching among previous solutions to find the features that are likely to address the problem requirements by finding the best balance between competing requirements, and then integrating these features, (ii) if step (i) fails, using analogies (which are abstractions) previously used in similar problems, e.g., problems that posed the same kind of constraints even though their numerical values were different, and (iii) if steps (i) and (ii) fail, generating new solutions for specific needs guided by the limitations observed in the current solutions. Hence, the solving process does not have to generate novel solutions unless they are required, such as after the current solution space has been exhaustively explored, leading to the conclusion that it cannot tackle a specific need. The nature of problem solving as well as engineering constraints, i.e. requirements rooted in day-to-day operation, minimizing cost and number of errors, emphasize reusing previous work and solutions, hence impose a Bayesian memory character on the problem-solving process.
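To make the three steps concrete, the following minimal Python sketch (ours, with purely illustrative data structures; solutions are reduced to sets of feature labels) mirrors the search, analogy, and generation fallbacks described above.

```python
"""Minimal illustrative sketch (not from the paper) of the three-step solving
loop for ill-defined engineering problems: (i) search previous solutions,
(ii) fall back to analogies, (iii) generate a new solution guided by the
limitations of the current solution space. Real design knowledge is far richer."""

def meets(solution, requirements):
    # A solution is acceptable if it offers every required feature.
    return requirements <= solution["features"]

def solve(requirements, previous_solutions, analogies):
    # Step (i): search previous solutions and reuse one that covers the requirements.
    for sol in previous_solutions:
        if meets(sol, requirements):
            return sol
    # Step (ii): instantiate abstractions (analogies) taken from similar past problems.
    for abstraction in analogies:
        candidate = {"features": set(abstraction["features"]), "origin": "analogy"}
        if meets(candidate, requirements):
            return candidate
    # Step (iii): only now generate a new solution, targeted at the features
    # that the explored solution space cannot provide.
    covered = set().union(*(s["features"] for s in previous_solutions)) if previous_solutions else set()
    return {"features": set(requirements), "origin": "generated", "addresses": requirements - covered}

if __name__ == "__main__":
    previous = [{"features": {"low_noise", "low_power"}, "origin": "library"}]
    analogies = [{"features": {"high_gain"}}]
    print(solve({"low_noise", "high_gain"}, previous, analogies))
```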
In contrast, solving open-ended problems in Mondrian's artwork is less a search among previous solutions than a generative process that produced a new way of visual expression, and thus uncovered an entire solution space represented by the paintings of this space. Each painting has an ambiguous meaning and purpose for an observer, even though the artist had a well-defined purpose when creating the work in his attempt to meet the overall goals of art. Hence, a painting acts as a template that guides the association of new meanings by an observer. The generative process is based on the need to be novel and unique, thus, to be distinguishable from previous artwork. This is achieved through a set of well-defined beliefs, goals, and examples of elements and features that must be avoided. The problem-solving process evolves a sequence of features that are based on previous work by the author while still meeting the avoidance criteria, and are legitimate, consistent, and necessary. The process has less of a Bayesian memory character and more of a restructuring (redefinition) of the expression mechanism used in creating art.
Note that work on computational methods in circuit design has proposed using evolutionary mechanisms, like Genetic Algorithms, to devise new circuits [30]. However, these evolutionary processes do not contemplate the legitimacy, consistency, and necessity of each incremental modification step. These solutions have a low level of creativity [24] and have not been adopted by circuit designers, even though they meet their requirements.
**Memory**. As discussed, the memory system for circuit design stores associations between design elements, connections between design elements and their roles, like the purpose they serve, and causal sequences describing procedures to solve a design problem. Associations indicate design elements with similar functions in a design and similar structures. Connections to roles serve as precise cues to access the memory depending on the specific design needs. The structure and meaning of each element are precisely defined by numerical values, like the values of physical concepts (i.e. charge, current, voltage), and metrics. Hence, each element is unique, and the similarity of their structure and functions also represents an approximation. Design elements are connected with each other through theories and laws in mathematics, science, and previous design experiences. The preference for reusing existing design elements supports understanding their capabilities and limitations. Abstractions in hierarchical knowledge representations are created, in which the abstractions describe general principles, which are instantiated by distinct design elements with different capabilities and limitations. The hierarchical structure supports design through high-level reasoning. Hence, the nature of memory organization supports a design process that evolves from bottom-up design during the initial stages to top-down design during the latter stages, as the hierarchical knowledge structure is created.
The memory system for understanding Mondrian's paintings includes specific instances of paintings and painting features, but they are not grouped in hierarchical structures. Each instance is unique, and representative for a certain objective or subjective purpose. There can be, however, a clustering of the paintings based on their topic, structure (composition), or specific features (i.e.
color). Separately, the memory system stores beliefs and goals from mathematics, science, and society as well as constraints, needs, meanings, and goals of peers and the artist. The last memory component changes through the completed artwork over time, some of the changes being explicitly articulated as a result of new insight while others remain implicit. There is a causality link between certain artistic features and subjective attention, feelings, and emotions, as well as broader meanings about the purpose of art, like pleasure and prestige. The connections to attention serve as cues, and the connections to emotions and feelings serve to activate the broad meanings used in understanding a painting. However, the causality link is less based on explanations. The memory system has fewer solving procedures describing a causal sequence to create a new painting.
The three memory subsystems are not related to each other through laws of mathematics and science (like in circuit design) but through analysis and discussions by experts. However, the connections between the three subsystems might not be fully explainable. The understanding of an artwork is consistent with the main beliefs and goals of the artist, but there can be inconsistencies with other beliefs and goals, including previous features that the artist stopped using, such as curved lines, which Mondrian stopped using in his later work.
Artistic features and paintings can be ambiguous as there is no numerical definition grounded in the theories and laws of mathematics and science. The meaning of a feature can be extended or redefined by using it in a new context or for a new purpose. The degree to which the new meaning is valid depends on social aspects, like its acceptance by peers and public as well as its reusing to create new paintings.
**Attention and prediction**. In circuit design, attention goes to associating the design features, such as BBs, to the performance requirements. BBs are identified based on their structural similarity (e.g., form) to BBs that are agreed on by the design community, hence vetted solutions. The prediction part assigns meaning to the identified BBs by causally connecting the BBs to the expected performance. Attention is also drawn to changes of the BBs from their previous structures, such as adding MOSFET transistors to the BB or sharing MOSFETs between different BBs. Meaning is assigned to the changes by causally associating them with the changes in performance demanded by the requirements. Meaning assignment considers the information learned about capabilities and limitations from previous designs. Considering that new solutions reuse or adapt previous design features, the BB meaning has a Bayesian memory character as previous roles (purposes) in design are critical in understanding future meanings too. Therefore, understanding a design can be done only in the context of the design knowledge of previous solutions. The meaning of BBs is deterministic (i.e. with minimal ambiguity) as it is based on laws and theories in mathematics and sciences. Moreover, a complete meaning for all conditions and situations can be produced. This meaning does not change over time.
Attention for Mondrian's work is drawn by visual cues, like color, size, centrality, contrasts, and unusualness. Recognizing these cues does not require any specific training, even though art training makes cue identification more effective
and robust (e.g., certain cues are not missed). Cues can be of different kinds, including local and global cues, like the granularity of the grid in Mondrian's work. Cues also induce certain emotions and feelings in the observer. Thus, they can induce the overall template used to understand the meaning of a painting, like joy, energy, sadness, or motion. As identifying cues is important in understanding the meaning of a painting, the procedure for cue searching is critical, such as sequential search or search for repetitive patterns.
Cues start the process of decomposing a painting into its composing elements, or force a new perspective by pushing the viewer outside his/her comfort zone, such as contemplating an understanding different from previous ones. Hence, cues might serve to annul the Bayesian memory character. Meaning can also change over time based on new beliefs and goals.
The meaning of the elements composing a painting is ambiguous and depends on subjective interpretation. During the understanding process, hypotheses about their meaning are formulated and validated during the further understanding of a painting. Multiple meanings can result. The composition of the element meanings can follow a hypothesis testing process, in which the main hypothesis of what the composition means integrates the meanings associated to the composing elements, or a process that integrates bottom-up the element meanings, which are separately identified. The meaning of a painting acts as a semantic template that can accommodate different interpretations by distinct observers.
**Reasoning**. In circuit design, reasoning creates and verifies a circuit solution by explaining its operation. Reasoning is mostly performed locally through incremental modifications and combination of existing sub-structures and BBs to solve the requirements not met by the current design. The overall solution structure within which changes are made stays the same for most situations. The incremental steps are selected from a set of alternatives from previously devised solutions. Causal reasoning justifies a solution by indicating how a design feature is needed to accommodate the problem requirements. Restructuring the global structure of a design solution is justified only after the solution space of the existing structure has been completely analyzed, hence changes are driven by understanding the limitations of the existing designs. The repetitive nature of the process can support the replacement over time of the initial, bottom-up design process with a more top-down process.
In art, the reasoning process is guided by the cues on which the observer's attention focuses. The cued concepts have an associated meaning that is used in reasoning and can also suggest a global meaning into which the meaning of other identified concepts is integrated. Reasoning implements hypothesis testing but without having an evaluation method based on mathematical or scientific theories and methods or numerical evaluations using metrics. Instead, the correctness of meaning integration is based on criteria, like legitimacy, consistency, and necessity, which are reinforced by previous, successful integrations.
**Update**. In circuit design, new knowledge about BB and substructure meanings and design procedures is learned by comparing the requirements of the current problem with the requirements of previously solved problems. Knowledge updates are justified by their causal connections with their roles in circuit design. The capabilities and limitations of BBs, substructures, and design procedures are updated as new designs are completed. New BBs and other circuit substructures are also devised and learned. Further understanding of the design challenges of ill-defined problems results from relating the differences in requirements to the specific designs that address these differences. Over time, solving similar problems supports the understanding of the effectiveness of the constructed solutions.
In art, previous use of artistic ideas and features explores the capabilities and limitations of the artist's approach and the style of the paintings, and then connects the capabilities and limitations to beliefs and goals. Knowledge update also includes the constraints about ideas and features that should not be pursued by the artist, hence supporting the identification over time of elements that should be pursued, as part of the artist's goals, beliefs, and expressive language. Note that these constraints are very different in nature from constraints in circuit design, as they do not indicate that the entire solution space eliminated through constraints was already explored.
### ML Approach to Identify Mondrian's Work in a Set of Paintings
Sections 3.1 and 4.2 give a comprehensive description of the characteristics (e.g., ontology) of the eight components and the computational flow of a cognitive architecture that uses the components to understand artwork, such as Mondrian's paintings. This subsection explains how the components and flow can be utilized as a starting point to devise new computational processing, such as distinguishing artwork that is mainly based on non-exhibited properties (NEXP), like in modern art. As explained in [103], NEXPs correlate less to visual features, hence are hard to process using traditional DNNs. Hence, it is important to devise accurate and robust processing flows that can exploit the available data while minimizing the impact of missing information.
Figure 3 illustrates a comprehensive cognitive architecture that we proposed to analyze and understand Mondrian's abstract paintings. However, implementing the architecture is challenging because of the implications that context dependency (e.g., culture, peers, and personal experience), emotions, and implicit processing have on the architecture's operation. These elements influence knowledge organization and recall from the memory, including general beliefs and goals, perceived needs, meaning and goals by peers as well as identified constraints, and the agent's subjective associative connections, connections to goals, and causal sequences. Subsequently, they impact the operation of the attention and prediction subsystem, the reasoning subsystem, and the updating of the architecture. These dependencies suggest the difficulty of devising an ML-based approach to understand and classify abstract paintings, like Mondrian's work.
This subsection presents a possible methodology that attempts to address the problem of distinguishing Mondrian's work from other paintings by incorporating activities that can be reliably performed with arguably less information about context, emotions, and implicit processing. It relies on the observation
that artists, including Mondrian, identified explicit constraints that distinguished them from previous work and guided their work, including the considered topics, overall concepts, and implementations [61, 88]. They are due to the different beliefs and preferences of the artist. These constraints produced observable features of paintings, which can be found by comparing them with previous work. Following the reverse path from the output of the cognitive architecture in Figure 3, the second step uses the observed differences to identify the rules and procedures that the artist likely used to produce them. Finally, the third step identifies invariants and patterns in how the artist used rules and procedures over time and relates them to the constraints that Mondrian used to distinguish his work from previous paintings. Among the characteristics in Section 3.1, the three steps are based only on those that can be extracted from visual features. Figure 5(a) illustrates the three steps.
The first step should identify the differences between a painting by Mondrian and previous paintings by other artists and by Mondrian himself. The identified differences pertain to the category of concepts and ideas, discussed in Subsection 3.1. Without using information about context, emotions, and implicit knowledge, the concept characteristics that can be identified based on visual features relate mainly to concept description, like features and connections between features. It is difficult to precisely identify the concept meaning and the related meaning understanding process. However, some insight about concept meaning can be found, without understanding what that meaning is, since meaning depends on the used visual features and the other co-occurring concepts. Hence, partial information on concept similarity and integration with other concepts can be found. The specific metrics that can be computed include the following: feature similarity and differences with previously used concepts, new features, new concepts, frequency of concept occurrence in previous paintings, frequency of concept co-occurrences, new co-occurrences, and previous co-occurrences that were dropped in future work. The characteristics of ideas enumerated in Section 3.1 are partially covered by the metrics on groups of concepts, even though it is hard to infer more detailed insight on idea purpose, meaning, origin, characteristics, grouping, and evolution.
Figure 5: Proposed ML flow to identify Mondrian’s paintings.
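As a rough illustration of how the step-1 metrics above could be computed, the sketch below (ours; it assumes each painting has already been reduced to a set of concept labels, which is itself a hard vision problem) derives new concepts, reused concepts, occurrence frequencies, and co-occurrence statistics relative to a corpus of previous paintings.

```python
"""Illustrative computation of the step-1 metrics listed above, assuming each
painting is represented as a set of concept labels extracted from visual features."""

from collections import Counter
from itertools import combinations

def step1_metrics(new_painting, previous_paintings):
    previous_concepts = set().union(*previous_paintings) if previous_paintings else set()
    occurrence = Counter(c for p in previous_paintings for c in p)
    cooccurrence = Counter(pair for p in previous_paintings
                           for pair in combinations(sorted(p), 2))
    new_pairs = set(combinations(sorted(new_painting), 2))
    return {
        "new_concepts": new_painting - previous_concepts,
        "reused_concepts": new_painting & previous_concepts,
        "occurrence_frequency": {c: occurrence[c] for c in new_painting},
        "new_co_occurrences": new_pairs - set(cooccurrence),
        "dropped_co_occurrences": set(cooccurrence) - new_pairs,
        "co_occurrence_frequency": {pair: cooccurrence[pair] for pair in new_pairs},
    }

if __name__ == "__main__":
    previous = [{"curved_line", "primary_color"}, {"grid", "primary_color"}]
    print(step1_metrics({"grid", "primary_color", "black_line"}, previous))
```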
The second step should find the rules and procedures that were likely used by Mondrian to produce the observed differences. A set of rules used to create a new painting, e.g., \(Painting(t)\) in Figure 5(b), is expressed by the new concepts and features added to the painting as compared to previous paintings by Mondrian, i.e. \(Painting(t-1)\), and the concepts and features that were eliminated in the new painting conditioned by the concepts and features that kept being used, hence remained invariant. The relations between new and previous concepts and features are also captured. From the rule characteristics enumerated in Section 3.1, the visual painting features utilized over time can be used to monitor when a rule is selected and used, the used concepts and features, presence of new concepts and features, inclusion of unexpected cues that could guide the viewer's attention, its degree of relatedness to other rules, its changes over time, its flexibility, and if it is global or local, deterministic or stochastic, flexible or invariant. Other characteristics about rules and procedures are hard to extract, like cuing, connection to goals and beliefs, achieved decomposition and aggregation, produced effects like associations, comparisons, generalizations, etc., purpose, i.e. inquiry, hypothesis formulation, analysis, explanation, etc., addressing of ambiguity, reinterpretation, rule origin and gradient, degree of subjectivity, and dependency on context.
The third step identifies the invariants of the rules and procedures applied by the artist for his sequence of paintings. Figure 5(c) illustrates the step. Each rule identified in the previous step is characterized by the four shown components: component _what_ refers to the concepts, features and relations that were added and eliminated in a new painting as compared to the previous, component _when_ describes the unchanged concepts, features, and relations that co-occur with the new ones, component _understanding of how_ presents the sequence of individual changes that are the difference of a new painting from the previous paintings, and component _degree flexibility_ expresses the amount of change between the sequence of individual changes for the current painting and the sequences of individual changes for the most similar paintings. Invariants are elements that tend to remain constant for the four components or the co-occurrence of elements from the four categories.
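The following sketch (ours, under the same assumed set-of-concepts representation) shows one possible encoding of steps 2 and 3: a rule records the _what_ (added and eliminated concepts) and _when_ (kept concepts) components for consecutive paintings, and invariants are the elements that remain constant across the extracted rules; the _degree of flexibility_ component is not modeled here.

```python
"""Illustrative encoding of steps 2 and 3: rules as differences between
consecutive paintings, invariants as elements constant across all rules."""

def extract_rule(current, previous):
    return {
        "added": frozenset(current - previous),       # component "what": additions
        "eliminated": frozenset(previous - current),  # component "what": eliminations
        "kept": frozenset(current & previous),        # component "when": invariant context
    }

def rules_and_invariants(paintings):
    # Step 2: one rule per transition Painting(t-1) -> Painting(t).
    rules = [extract_rule(paintings[i], paintings[i - 1]) for i in range(1, len(paintings))]
    # Step 3: invariants are the concepts kept across every transition.
    kept_always = frozenset.intersection(*(r["kept"] for r in rules)) if rules else frozenset()
    return rules, kept_always

if __name__ == "__main__":
    sequence = [
        {"curved_line", "tree_motif", "muted_color"},
        {"straight_line", "tree_motif", "muted_color"},
        {"straight_line", "grid", "primary_color", "muted_color"},
    ]
    rules, invariants = rules_and_invariants(sequence)
    for rule in rules:
        print(rule)
    print("invariant elements:", invariants)
```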
## 6 Conclusions
This paper aims to discuss and identify missing capabilities of popular parameterized computational models in Machine Learning, like Deep Neural Networks (DNNs), so that their semantic processing capabilities can possibly address activities beyond traditional classification tasks. Our previous work showed that existing DNNs cannot handle classification well when it relies on semantic information (like abstractions and ambiguities) that is not a linear combination of visual features. The discussed work identifies the missing features by comparing the process
of understanding artwork with the process of understanding electronic circuit design. Like art, circuit design is also a creative problem-solving activity, and for which our previous work proposed a cognitive architecture, a computational structure that loosely mimics human problem solving. The comparison methodology considers two semantic layers. The first layer tackles eight components, which are discussed in detail: goals, concepts, ideas, rules, procedures, beliefs, expectations, and outcomes. These elements are part of the cognitive process during problem solving and can be tied to the parts of a cognitive architecture for that activity, like memory, concept learning, representation and combination, affect, insight, and so on. The second layer describes a cognitive architecture for problem solving. It incorporates five elements: the nature of the problem, knowledge representation in the memory, the attention and prediction subsystem, the reasoning subsystem, and knowledge updating. The methodology was used to devise a computational method that can separate Mondrian's paintings from other paintings. Future work will further investigate the discussed ideas.
|
2310.20455 | Simple cuspidal representations of symplectic groups: Langlands
parameter | Let $F$ be a non-archimedean local field of odd residual characteristic. We
compute the Jordan set of a simple cuspidal representation of a symplectic
group over $F$, using explicit computations of generators of the Hecke algebras
of covers reflecting the parabolic induction under study. When $F$ is a
$p$-adic field we obtain the Langlands parameter of the representation. | Corinne Blondel, Guy Henniart, Shaun Stevens | 2023-10-31T13:50:27Z | http://arxiv.org/abs/2310.20455v1 | # Simple cuspidal representations of symplectic groups: Langlands parameter
###### Abstract.
Let \(F\) be a non-archimedean local field of odd residual characteristic. We compute the Jordan set of a simple cuspidal representation of a symplectic group over \(F\), using explicit computations of generators of the Hecke algebras of covers reflecting the parabolic induction under study. When \(F\) is a \(p\)-adic field we obtain the Langlands parameter of the representation.
###### Contents
* 1 Framework and method
* 2 Simple cuspidals
* 3 The quadratic or trivial character
* 4 The simple cuspidal of \(\operatorname{GL}(2N,F)\)
* 5 Langlands parameters for simple cuspidals
## Introduction
Let \(F\) be a non-archimedean local field of residual characteristic \(p\), and let \(G\) be the group \(\operatorname{Sp}(2N,F)\). The local Langlands conjecture for \(G\) attaches to a cuspidal representation \(\pi\) of \(G\) a parameter of a Galois nature, or equivalently an irreducible representation \(\Pi\) of \(\operatorname{GL}(2N+1,F)\). When \(F\) has characteristic zero, the conjecture was established by Arthur [2], and Moeglin [20] has shown that \(\Pi\) can be determined via the reducibility points of certain parabolically induced representations involving \(\pi\).
The method presented here to achieve this assumes that \(p\) is odd, and uses types and covers a la Bushnell-Kutzko [10] to obtain the reducibility points. It was tested with success on \(\operatorname{SL}(2,F)\) as early as 2009, in a joint project of the three authors initiated in January 2009 in Vienna. The initial goal was a full description of the L-packets of \(\operatorname{Sp}(4,F)\) containing cuspidal representations by means of types and covers. In those years, say 2009 to 2011, we did quite a lot of computations and completed a nice table presenting all cuspidal representations of \(\operatorname{Sp}(4,F)\), as classified in [6], with the size of their packet and their neighbours in it. Some computations were done, but the expected tediousness of the others made us choose a more conceptual way that we eventually wrote down in [5]. We will explain this more precisely in a moment, let us just say that nonetheless, we accepted the idea that sometimes tedious computations can be useful to produce exact results, and we decided that the case of simple cuspidals of symplectic groups alone deserved such a treatment, along with the necessary work. This is the object of the present paper.
So let \(\pi\) be a cuspidal representation of our symplectic group \(G\). We need first to recall the main result in [5]. The _Jordan set_\(\operatorname{Jord}(\pi)\) of \(\pi\) is the (finite) set of pairs \((\sigma,s)\) made of a self-contragredient cuspidal representation \(\sigma\) of a group \(\operatorname{GL}(k,F)\) for some \(k\), and a real number \(s\geq 1\), such that, viewing \(\operatorname{GL}(k,F)\times G\) as a maximal Levi subgroup of a suitable symplectic group \(H\), the normalised parabolically induced representation of \(\sigma|\det|^{s}\otimes\pi\) to \(H\) is reducible. When \(F\) has characteristic zero, Moeglin has shown that the Jordan set of \(\pi\) determines the Langlands parameter of \(\pi\).
Theoretically \(\operatorname{Jord}(\pi)\) can be computed using types and covers, thanks to the results of Bushnell and Kutzko that transform parabolic induction in the groups into induction of modules over Hecke algebras, from the Hecke algebra of a type for the inertial class of \(\sigma|\det|^{s}\otimes\pi\) to the Hecke algebra of a cover of this type in \(H\)[10]. First of all one can associate to the representation \(\pi\) a finite family \(\mathscr{F}_{\pi}\) of _self-dual simple characters_ and show that if \((\sigma,s)\) belongs to \(\operatorname{Jord}(\pi)\), then \(\sigma\) contains a simple character in the endoclass of the square of an element of \(\mathscr{F}_{\pi}\). Then, having thus restricted the quest, we study the cover of a type for the inertial class of \(\sigma|\det|^{s}\otimes\pi\) for such a \(\sigma\): the Hecke algebra of this cover has two generators which satisfy a quadratic relation computable in a finite Hecke algebra deduced from the situation. Here results of Lusztig in finite reductive groups come into play, and eventually lead to a full description, not of \(\operatorname{Jord}(\pi)\) itself, but of the _inertial Jordan set_ of \(\pi\), which is the multiset \(\operatorname{IJord}(\pi)=\{([\sigma],s)\mid(\sigma,s)\in\operatorname{Jord}( \pi)\}\) (where \([\sigma]\) designates the inertial class of \(\sigma\)). Indeed, the knowledge of the finite reductive groups built from the underlying stratum of the cover, of the level zero part of the cuspidal type and of Lusztig's results (see [18]), produces with a reasonable amount of computations (in particular of some characters with trivial square coming from the compact subgroups involved in the construction) the quadratic relations satisfied by the generators, and eventually the inertial Jordan set.
This is to be compared with the method presented here, that leads to the exact Jordan set if we are willing to pay the price of possibly very long computations, on a case-by-case basis. This explains quite plainly why we changed path towards [5]. Yet obtaining exact results
is definitely a respectable goal, more easily attainable whenever we deal with intertwining operators in one-dimensional spaces, i.e. when \(H^{1}=J^{1}\) (in the standard notation of simple characters etc.), which occurs for simple cuspidals. Actually the computation below equally applies whenever we deal with a stratum for which the extension field \(F[\beta]/F\) is totally ramified of maximal degree \(2N\), and it should apply for other classical groups as well: it has been used successfully in unitary groups in [7].
There are alternatives to the computations that follow, some are described in [5]. Indeed, once we know the inertial Jordan set of \(\pi\) in _loc.cit._, we may sometimes fully determine some parts of the Jordan set itself by working on the Galois side, see [5, SS7] and section 5 below. The computations presented here may nonetheless be necessary in severe cases where the ambiguity cannot be solved.
In the first section we present the method used to find the exact elements of the Jordan set. It is essentially an elaboration on the fundamental commutative diagram of [10] - the heart of the theory of covers - in the case of a maximal Levi subgroup in a classical group. This diagram translates parabolic induction from \(P\) to \(G\) into induction of Hecke algebra modules, relying on a uniquely defined homomorphism of algebras \(t_{P}\). Roughly speaking, when inducing from a maximal parabolic in a classical group, this morphism \(t_{P}\) sends a generator of the Hecke algebra on the fixed Levi component \(M\) of \(P\) to the product of two generators, say \(T_{0}\) and \(T_{1}\), of the Hecke algebra over \(G\). This equality amounts to normalising the corresponding intertwining operators consistently. This normalisation, in turn, allows for pinpointing the self-dual representation with "highest reducibility value" in the inertial class of the inducing representation (Theorem 1.10).
In the second section we recall the definition of simple cuspidal representations in a symplectic group and we fix the notation for the particular simple cuspidal \(\pi\) of \(\operatorname{Sp}(2N,F)\) the Jordan set of which we want to compute. In particular we describe the underlying simple character \(\psi_{\beta}\). We know from [5] (among other sources!) that this Jordan set is
\[\operatorname{Jord}(\pi)=\{(\epsilon_{1},1),(\sigma,1)\}\]
where \(\epsilon_{1}\) is a character of \(F^{\times}\) with trivial square and \(\sigma\) is a cuspidal representation of \(\operatorname{GL}(2N,F)\) attached to the simple character \(\psi_{2\beta}\) (Proposition 2.2).
In the third section we compute the character \(\epsilon_{1}\) and in the fourth the simple cuspidal representation \(\sigma\), using Theorem 1.10 and precise computations of the coefficients of the quadratic relations satisfied by the generators of the Hecke algebra of the cover. At the end of section 4 we explain how the case of a general simple cuspidal of \(\operatorname{Sp}(2N,F)\) is easily deduced from the particular case that we have studied and we state the general result (Theorem 4.16).
In the final section we go from Jordan set to Langlands parameter when \(F\) has characteristic zero, or whenever the known results on the local Langlands correspondence for \(\operatorname{Sp}(2N,F)\)
allow for such a move. We also discuss how our present result for simple cuspidal representations of \(\operatorname{Sp}(2N,F)\) can also be obtained on the basis of the inertial Jordan set produced in [5] together with a result of Lapid giving the \(\varepsilon\)-factor at \(\frac{1}{2}\), whereas there are cuspidal representations for which this additional information is not sufficient.
**Acknowledgements.** The authors take the opportunity to signal the work [22] of Gordan Savin, that determined the Jordan set of generic level zero cuspidal representations of classical groups, a reference which inadvertently was absent from our paper [5].
They wish to thank the organisers of the workshop _Langlands Program: Number Theory and Representation Theory_ in Oaxaca, Mexico, for inviting each of them to give a talk there in December 2022. This gathering gave them the necessary impulse to finish writing this important step of a long overdue project.
The third author was supported by EPSRC grants EP/H00534X/1 and EP/V061739/1.
## 1. Framework and method
### Covers and parabolic induction
We go back to the founding paper by Colin Bushnell and Philip Kutzko [10]. (We use a mild variant of [10] as explained in [3], since we normalize parabolic induction and we use right-modules over the Hecke algebras, defined without contragredients.)
We fix \(F\) a non-archimedean local field of odd residual characteristic \(p\), we fix \(G\) the group of \(F\)-points of a reductive algebraic group defined over \(F\) and write \(\mathfrak{R}(G)\) for the category of smooth complex representations of \(G\). From now on all representations will be implicitly smooth and complex.
We fix \(M\) a Levi subgroup of \(G\), \(P\) a parabolic subgroup of \(G\) with Levi factor \(M\), \(U\) the unipotent radical of \(P\), and \(U^{-}\) the unipotent radical of the parabolic subgroup \(P^{-}\) opposed to \(P\) with respect to \(M\). We fix a cuspidal inertial class \(\mathfrak{s}_{M}=[M,\sigma]_{M}\) in \(M\), which is the set of all twists \(\sigma\chi\) of the irreducible cuspidal representation \(\sigma\) of \(M\) by an unramified character \(\chi\) of \(M\), and denote by \(\mathfrak{s}_{G}=[M,\sigma]_{G}\) the corresponding inertial class in \(G\), containing all pairs \(G\)-conjugate to some \((M,\sigma\chi)\). We consider the functor \(\operatorname{Ind}_{P}^{G}\) of normalized parabolic induction from the Bernstein block \(\mathfrak{R}^{\mathfrak{s}_{M}}(M)\) in \(\mathfrak{R}(M)\) to the Bernstein block \(\mathfrak{R}^{\mathfrak{s}_{G}}(G)\) in \(\mathfrak{R}(G)\).
Assume that we have a type \((J_{M},\lambda_{M})\) for \(\mathfrak{R}^{\mathfrak{s}_{M}}(M)\): so \(J_{M}\) is a compact open subgroup of \(M\), \(\lambda_{M}\) is an irreducible representation of \(J_{M}\), hence finite-dimensional, acting on a space \(V_{\lambda_{M}}\), and all representations in \(\mathfrak{R}^{\mathfrak{s}_{M}}(M)\) are generated by their \(J_{M}\)-isotypic component of type \(\lambda_{M}\). Then by [10, Theorem 4.3], forming the Hecke algebra
\[\mathcal{H}(M,\lambda_{M})=\{f:M\to\operatorname{End}(V_{\lambda _{M}})\mid f\text{ compactly supported and}\\ \forall g\in M,\ \forall j,k\in J_{M},\ f(jgk)=\lambda_{M}(j)f(g) \lambda_{M}(k)\},\]
we have an equivalence of categories
\[\mathfrak{R}^{\mathfrak{s}_{M}}(M)\quad\stackrel{{ E_{\lambda_{M}}}}{{ \longrightarrow}}\quad\text{Mod-}\mathcal{H}(M,\lambda_{M}),\qquad E_{\lambda_{ M}}(\omega)=\text{Hom}_{J_{M}}(\lambda_{M},\omega),\]
where the structure of right-\(\mathcal{H}(M,\lambda_{M})\)-module on \(\text{Hom}_{J_{M}}(\lambda_{M},\omega)\) is given by
\[\phi\cdot f(w)=\int_{M}\omega(g^{-1})\phi(f(g)w)dg\quad(f\in\mathcal{H}(M, \lambda_{M}),\ \phi\in\text{Hom}_{J_{M}}(\lambda_{M},\omega),\ w\in V_{\lambda_{M}}). \tag{1.1}\]
We further assume that we have a \(G\)-cover \((J_{G},\lambda_{G})\) of \((J_{M},\lambda_{M})\): a similar pair in \(G\) with an Iwahori factorization \(J_{G}=(J_{G}\cap U^{-})(J_{G}\cap M)(J_{G}\cap U)\), with \(\lambda_{G}\) trivial on \(J_{G}\cap U^{-}\) and \(J_{G}\cap U\), with \(J_{G}\cap M=J_{M}\) and \((\lambda_{G})_{|J_{M}}=\lambda_{M}\), and with a strong additional condition that provides an explicit injective homomorphism of algebras
\[t_{P}:\mathcal{H}(M,\lambda_{M})\longrightarrow\mathcal{H}(G,\lambda_{G})\]
see [10, Definition 8.1]. Then [10] culminates with the assertion that \((J_{G},\lambda_{G})\) is a type for \(\mathfrak{R}^{\mathfrak{s}_{G}}(G)\)[10, Theorem 8.3] and with the following commutative diagram that transforms parabolic induction from \(\mathfrak{R}^{\mathfrak{s}_{M}}(M)\) to \(\mathfrak{R}^{\mathfrak{s}_{G}}(G)\) into module induction over Hecke algebras [10, Corollary 8.4]:
\[\begin{array}{ccc}\mathfrak{R}^{\mathfrak{s}_{G}}(G)&\stackrel{{ E_{\lambda_{G}}}}{{\longrightarrow}}&\text{Mod-}\mathcal{H}(G,\lambda_{G})\\ \text{Ind}_{P}^{G}\uparrow&&\uparrow(t_{P})_{*}\\ \mathfrak{R}^{\mathfrak{s}_{M}}(M)&\stackrel{{ E_{\lambda_{M}}}}{{ \longrightarrow}}&\text{Mod-}\mathcal{H}(M,\lambda_{M})\end{array} \tag{1.2}\]
where, given a right \(\mathcal{H}(M,\lambda_{M})\)-module \(Y\), the \(\mathcal{H}(G,\lambda_{G})\)-module \((t_{P})_{*}(Y)\) is the module \(\text{Hom}_{\mathcal{H}(M,\lambda_{M})}(\mathcal{H}(G,\lambda_{G}),Y)\).
### The equivalence of categories for cuspidal blocks
We focus on the functor \(E_{\lambda_{M}}\). By definition, irreducible objects in \(\mathfrak{R}^{\mathfrak{s}_{M}}(M)\) form a single orbit under the group \(X(M)\) of unramified characters of \(M\), acting through \((\omega\chi)(g)=\chi(g)\omega(g)\) (\(\chi\in X(M)\), \(\omega\in\mathfrak{s}_{M}\), \(g\in M\)). The underlying space \(E_{\lambda_{M}}(\omega\chi)=\text{Hom}_{J_{M}}(\lambda_{M},\omega\chi)\) is the same as \(E_{\lambda_{M}}(\omega)\) because \(\omega\) and \(\omega\chi\) coincide on \(J_{M}\), but those two spaces differ as modules over \(\mathcal{H}(M,\lambda_{M})\). The group \(X(M)\) also acts on \(\mathcal{H}(M,\lambda_{M})\) by \((\chi f)(g)=\chi(g)f(g)\), the action of \(f\in\mathcal{H}(M,\lambda_{M})\) on \(E_{\lambda_{M}}(\omega)\) is the action of \(\chi f\) on \(E_{\lambda_{M}}(\omega\chi)\).
When \(M\) is a maximal Levi subgroup of a classical group \(G\) and \(p\) is odd, cuspidal representations of \(M\) are known to satisfy the following conditions, slightly stronger than [10, (5.5)]:
**Hypotheses 1.3**.: _The type \((J_{M},\lambda_{M})\) satisfies the following._
* _The intertwining of_ \(\lambda_{M}\) _is contained in a compact mod center subgroup_ \(\hat{J}_{M}\) _of_ \(M\)_, containing_ \(J_{M}\) _as its unique maximal compact subgroup._
* \(\lambda_{M}\) _extends to_ \(\hat{J}_{M}\) _and for any such extension_ \(\widehat{\lambda}_{M}\) _the representation_ \(\text{c-Ind}_{\hat{J}_{M}}^{M}\,\widehat{\lambda}_{M}\) _is irreducible and cuspidal._
* _There exists an element_ \(\Pi_{J_{M}}\) _of_ \(M\) _such that_ \(\hat{J}_{M}=\Pi_{J_{M}}^{\mathbb{Z}}J_{M}\) _and_ \(\Pi_{J_{M}}^{\mathbb{Z}}\cap J_{M}=\{1\}\)_._
From now on we assume that Hypotheses 1.3 hold. Then the Hecke algebra \(\mathcal{H}(M,\lambda_{M})\) is commutative [10, Proposition 5.6]; its irreducible modules are one-dimensional and identify with characters. More precisely \(\mathcal{H}(M,\lambda_{M})\) is supported on \(\varPi_{J_{M}}^{\mathbb{Z}}J_{M}\) and isomorphic to \(\mathbb{C}[\Psi,\Psi^{-1}]\) where \(\Psi\) has support \(\varPi_{J_{M}}J_{M}=J_{M}\varPi_{J_{M}}\). This element \(\Psi\) is unique up to a non-zero scalar and characterized by the intertwining operator \(\Psi(\varPi_{J_{M}})\in\mathrm{End}(V_{\lambda_{M}})\). Furthermore, if we pick an extension \(\widehat{\lambda}_{M}\) of \(\lambda_{M}\) as in (ii), then the restriction of \(\widehat{\lambda}_{M}\) to a compact subset of \(\varPi_{J_{M}}^{\mathbb{Z}}J_{M}\) clearly belongs to \(\mathcal{H}(M,\lambda_{M})\); in other words \(\Psi(\varPi_{J_{M}})\) is a scalar multiple of \(\widehat{\lambda}_{M}(\varPi_{J_{M}})\). We would rather think about this the other way around: the Hecke algebra \(\mathcal{H}(M,\lambda_{M})\) does not depend on a particular choice of extension of \(\lambda_{M}\), so we fix a normalization of its generator \(\Psi\) in an independent way, i.e. we consider the non-zero intertwining operator \(\Psi(\varPi_{J_{M}})\) chosen once and for all; in turn the extensions of \(\lambda_{M}\) can be thought of relative to \(\Psi(\varPi_{J_{M}})\). We introduce the following notation:
**Definition 1.4**.: We fix a normalization of \(\Psi\) through the choice of an intertwining operator \(\Psi(\varPi_{J_{M}})\). Let \(\omega=\mathrm{c}\text{-}\mathrm{Ind}_{\hat{J}_{M}}^{M}\,\widehat{\lambda}_{M}\) be an irreducible cuspidal representation of \(M\) belonging to \(\mathfrak{s}_{M}\), where \(\widehat{\lambda}_{M}\) is an extension of \(\lambda_{M}\). We let \(\zeta(\omega)\) be the scalar such that
\[\widehat{\lambda}_{M}(\varPi_{J_{M}})=\zeta(\omega)\ \Psi(\varPi_{J_{M}}).\]
We observe more closely the bottom line of diagram (1.2). The functor \(E_{\lambda_{M}}\) attaches to each cuspidal representation \(\omega\) in \(\mathfrak{s}_{M}\), a character \(\widetilde{\omega}\) of \(\mathcal{H}(M,\lambda_{M})\) that represents the action of the algebra on \(E_{\lambda_{M}}(\omega)\). This character is uniquely determined by its value on \(\Psi\) that we compute using formula (1.1), with \(\phi\in\mathrm{Hom}_{J_{M}}(\lambda_{M},\omega),\ v\in V_{\lambda_{M}}\):
\[\phi\cdot\Psi(v)=\int_{J_{M}}\omega(\varPi_{J_{M}}^{-1})\omega(g^{-1})\phi( \lambda_{M}(g)\Psi(\varPi_{J_{M}})v)dg=\mathrm{vol}(J_{M})\ \omega(\varPi_{J_{M}}^{-1})\phi(\Psi(\varPi_{J_{M}})v)\]
Since we have \(\omega=\mathrm{c}\text{-}\mathrm{Ind}_{\hat{J}_{M}}^{M}\,\widehat{\lambda}_{M}\), the action of \(\omega(\varPi_{J_{M}}^{-1})\) stabilizes the image of \(\phi\) and acts on it as \(\widehat{\lambda}_{M}(\varPi_{J_{M}}^{-1})\), so that \(\phi\) actually belongs to \(\mathrm{Hom}_{\hat{J}_{M}}(\widehat{\lambda}_{M},\omega)\). We get, after fixing the Haar measure on \(M\) giving \(J_{M}\) volume 1:
\[\phi\cdot\Psi(v)=\omega(\varPi_{J_{M}}^{-1})\phi(\zeta(\omega)^{-1}\widehat{ \lambda}_{M}(\varPi_{J_{M}})v)=\zeta(\omega)^{-1}\phi(v).\]
**Proposition 1.5**.: _Let \(\omega\) be an irreducible cuspidal representation of \(M\) belonging to \(\mathfrak{s}_{M}\) and write \(\omega=\mathrm{c}\text{-}\mathrm{Ind}_{\hat{J}_{M}}^{M}\,\widehat{\lambda}_{M}\) for some extension \(\widehat{\lambda}_{M}\) of \(\lambda_{M}\), characterized by_
\[\widehat{\lambda}_{M}(\varPi_{J_{M}})=\zeta(\omega)\ \Psi(\varPi_{J_{M}}).\]
_The action of \(\mathcal{H}(M,\lambda_{M})\) on \(E_{\lambda_{M}}(\omega)\) is given by the character\({}^{2}\) \(\widetilde{\omega}\) defined by_

\[\widetilde{\omega}(\Psi)=\zeta(\omega)^{-1}.\]

Footnote 2: This notation is convenient in our context but should not be confused with the usual notation for the contragredient representation. We will not use the latter.
Twisting \(\omega=\text{c-Ind}_{J_{M}}^{M}\,\widehat{\lambda}_{M}\) by \(\chi\in X(M)\) amounts to twisting \(\widehat{\lambda}_{M}\) by \(\chi_{|\hat{J}_{M}}\), hence we have \(\zeta(\omega\chi)=\chi(\Pi_{J_{M}})\ \zeta(\omega)\) and:
\[\widetilde{(\omega\chi)}(\Psi)=\chi(\Pi_{J_{M}})^{-1}\ \widetilde{\omega}(\Psi). \tag{1.6}\]
### The normalization of \(\Psi\)
From now on, we restrict to the case studied in [19], when \(G\) is a classical group and \(M\) is a maximal Levi subgroup of \(G\). The Levi subgroup \(M\) identifies with a direct product \(M=\text{GL}(k,F)\times M_{0}\) where \(M_{0}\) is a classical group of the same sort as \(G\). Unramified characters of \(M\) have the form
\[(g,m_{0})\longrightarrow|\det g|^{s}\qquad\text{ with }s\in\mathbb{C},\ g\in \text{GL}(k,F),\ m_{0}\in M_{0}.\]
The representation \(\sigma\) of \(M\) decomposes as \(\sigma=\tau\otimes\pi\) where \(\tau\) is a cuspidal representation of \(\text{GL}(k,F)\) and \(\pi\) a cuspidal representation of \(M_{0}\); the irreducible objects of \(\mathfrak{s}_{M}\) are the \(\tau|\det|^{s}\otimes\pi\) with \(s\in\mathbb{C}\). The type in \(M\) for \(\mathfrak{s}_{M}\) has the form \((J_{M}=J\times J_{0},\lambda_{M}=\lambda\otimes\lambda_{0})\) where \((J_{0},\lambda_{0})\) is a type constructed by the third author for \(\pi\) and \((J,\lambda)\) is a Bushnell-Kutzko type for \(\tau\). In particular, we have \(\pi=\text{c-Ind}_{J_{0}}^{M_{0}}\,\lambda_{0}\), and Hypotheses 1.3 hold for \((J,\lambda)\): there are a compact mod center subgroup \(\widehat{J}\) of \(\text{GL}(k,F)\) and an extension \(\widehat{\lambda}\) of \(\lambda\) to \(\widehat{J}\) such that \(\tau=\text{c-Ind}_{\widehat{J}}^{\text{GL}(k,F)}\,\widehat{\lambda}\), and there is an element \(\varpi_{E}\) such that \(\widehat{J}=\varpi_{E}^{\mathbb{Z}}\times J\) with \(\varpi_{E}^{\mathbb{Z}}\cap J=\{1\}\) (see [9, SS6]: the construction of \((J,\lambda)\) involves a finite extension \(E\) of \(F\) inside \(M_{k}(F)\) such that the intertwining of \(\lambda\) is \(E^{\times}J\); we choose a uniformizing element \(\varpi_{E}\) of \(E\) and remark that the ramification index of \(E\) is uniquely attached to \(\pi\)). Then Hypotheses 1.3 hold for \((J_{M},\lambda_{M})\) with \(\widehat{J}_{M}=\widehat{J}\times J_{0}\), with \(\widehat{\lambda}_{M}=\widehat{\lambda}\otimes\lambda_{0}\) and with \(\Pi_{J_{M}}=(\varpi_{E},1)\). We let \((J_{G},\lambda_{G})\) be the \(G\)-cover of \((J_{M},\lambda_{M})\) built in [19].
We have \(|N_{G}(M)/M|=2\) and we further assume (see also [10, SS11]) that the elements of \(N_{G}(M)\) normalize \(\mathfrak{s}_{M}\), which means that for some \(s\in\mathbb{C}\), the representation \(\tau|\det|^{s}\) is self-dual (that is, equivalent to its contragredient in the symplectic and orthogonal cases, or equivalent to the contragredient of the conjugate representation in the unitary case). Theorem 1.2 in [19] essentially gives the following.
**Proposition 1.7**.:
1. _There are two elements_ \(s_{0}\) _and_ \(s_{1}\) _of_ \(N_{G}(M)\backslash M\)_, belonging to open compact subgroups of_ \(G\)_, that normalize_ \((J_{M},\lambda_{M})\) _and satisfy_ \(s_{0}s_{1}=\Pi_{J_{M}}\)_._
2. _The Hecke algebra_ \(\mathcal{H}(G,\lambda_{G})\) _is a two-dimensional module over_ \(\mathcal{H}(M,\lambda_{M})\) _generated, as an algebra, by elements_ \(T_{0}\) _and_ \(T_{1}\) _of respective supports_ \(J_{G}s_{0}J_{G}\) _and_ \(J_{G}s_{1}J_{G}\)_._
3. _The generators_ \(T_{0}\) _and_ \(T_{1}\) _can be normalized to satisfy quadratic relations of the following shape:_ \[(T_{i}+1)(T_{i}-q^{r_{i}})=0,\quad i=0,1,\text{ with }r_{0},r_{1}\geq 0.\]
Indeed \(T_{0}\) and \(T_{1}\) are defined up to a non-zero scalar by their support; normalizing them means normalizing their values at \(s_{0}\) and \(s_{1}\) respectively, which are intertwining operators in the space of \(\lambda_{M}\). This in turn provides a normalization for \(\Psi\): we impose
\[T_{0}T_{1}=t_{P}(\Psi). \tag{1.8}\]
We make a quick comment here: the generators come from the \(2\)-dimensional Hecke algebras of two finite reductive groups, hence the quadratic relations, that depend on the level zero part of the cover \((J_{G},\lambda_{G})\). They can be obtained through computations _a la Lusztig_ in those finite groups, as in [5, SS5], and computation of a _twisting character_[5, 3.12 Theorem].
Proposition 1.7 implies that \(\mathcal{H}(G,\lambda_{G})\) has four characters, counted with multiplicity: the value of a character at \(T_{i}\) is \(-1\) or \(q^{r_{i}}\), \(i=0,1\). Hence the characters of \(\mathcal{H}(M,\lambda_{M})\) that induce reducibly to \(\mathcal{H}(G,\lambda_{G})\) are exactly the restrictions through \(t_{P}\) of those four characters; their values at \(\Psi\) belong to
\[\{1,-q^{r_{0}},-q^{r_{1}},q^{r_{0}+r_{1}}\}. \tag{1.9}\]
Now we recall what we know about reducibility on the group side. The inertial class \(\mathfrak{s}_{M}\) contains exactly two self-dual representations, say \(\tau_{a}\otimes\pi\) and \(\tau_{b}\otimes\pi\). For each of those there is a unique non negative **real** number \(s_{a}\) or \(s_{b}\) such that, for \(s\in\mathbb{R}\):
\[\text{Ind}_{P}^{G}\,\tau_{a}|\det|^{s}\otimes\pi\text{ reduces }\iff s=\pm s_{a}\]
and the same for \(\tau_{b},s_{b}\). So the four irreducible representations in \(\mathfrak{s}_{M}\) (counted with multiplicities) that do NOT induce irreducibly are
\[\{\tau_{a}|\det|^{-s_{a}}\otimes\pi,\ \tau_{a}|\det|^{s_{a}}\otimes\pi,\ \tau_{b}|\det|^{-s_{b}}\otimes\pi,\ \tau_{b}|\det|^{s_{b}}\otimes\pi\}.\]
By Proposition 1.5 they correspond to the following characters of \(\mathcal{H}(M,\lambda_{M})\), given by their value at \(\Psi\):
\[|\det\varpi_{E}|^{s_{a}}\,\widetilde{\tau_{a}\otimes\pi}(\Psi),\ |\det\varpi_{E}|^{-s_{a}}\,\widetilde{\tau_{a}\otimes\pi}(\Psi),\ |\det\varpi_{E}|^{s_{b}}\,\widetilde{\tau_{b}\otimes\pi}(\Psi),\ |\det\varpi_{E}|^{-s_{b}}\,\widetilde{\tau_{b}\otimes\pi}(\Psi).\]
This set of four values is identical to (1.9). Since the two values attached to each of \(\tau_{a}\) and \(\tau_{b}\) differ by a positive real factor (a power of \(|\det\varpi_{E}|\)), they must match the pairs \(\{1,q^{r_{0}+r_{1}}\}\) and \(\{-q^{r_{0}},-q^{r_{1}}\}\); we deduce that one of \(\tau_{a}\), \(\tau_{b}\), say \(\tau_{a}\), satisfies

\[|\det\varpi_{E}|^{s_{a}}\,\widetilde{\tau_{a}\otimes\pi}(\Psi)=1\text{ and }|\det\varpi_{E}|^{-2s_{a}}=q^{r_{0}+r_{1}}\]
and the other one satisfies
\[|\det\varpi_{E}|^{s_{b}}\,\widetilde{\tau_{b}\otimes\pi}(\Psi)=-q^{\inf(r_{0},r_{1})}\text{ and }|\det\varpi_{E}|^{-2s_{b}}=q^{|r_{0}-r_{1}|}.\]
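For later comparison of \(s_{a}\) and \(s_{b}\) it may help to spell out the consequence of the last two displayed equalities; this is our own remark, under the sole assumption that \(|\det\varpi_{E}|=q^{-m}\) for some positive integer \(m\):

\[q^{2ms_{a}}=q^{r_{0}+r_{1}},\qquad q^{2ms_{b}}=q^{|r_{0}-r_{1}|},\qquad\text{so}\quad s_{a}=\frac{r_{0}+r_{1}}{2m}\ \geqslant\ s_{b}=\frac{|r_{0}-r_{1}|}{2m},\]

with equality if and only if one of \(r_{0},r_{1}\) is \(0\).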
We find the corresponding self-dual representations of \(\operatorname{GL}(k,F)\) with Proposition 1.5:
\[\tau_{a} =\text{c-Ind}_{\widehat{J}}^{\operatorname{GL}(k,F)}\,\widehat{ \lambda}_{a}\text{ with }\widehat{\lambda}_{a}(\varpi_{E})\otimes I_{V_{\lambda_{0}}}=|\det \varpi_{E}|^{s_{a}}\ \Psi(\varPi_{J_{M}}),\] \[\tau_{b} =\text{c-Ind}_{\widehat{J}}^{\operatorname{GL}(k,F)}\,\widehat{ \lambda}_{b}\text{ with }\widehat{\lambda}_{b}(\varpi_{E})\otimes I_{V_{\lambda_{0}}}=-q^{-\inf(r_{0},r_{1})}|\det\varpi_{E}|^{s_{b}}\ \Psi(\varPi_{J_{M}}).\]
We write for convenience \(\equiv\) for "equal up to a positive scalar". The last touch is done by coming back to \(T_{0},T_{1}\), since \(\Psi\) has been normalized by (1.8), which is equivalent, up to a positive scalar that we don't need, to \(T_{0}(s_{0})T_{1}(s_{1})\equiv\Psi(\varPi_{J_{M}})\)[3]. We get finally:
**Theorem 1.10**.: _We fix a cuspidal representation of the classical group \(M_{0}\) and a self-dual cuspidal inertial class \(\mathfrak{s}_{k}\) in \(\operatorname{GL}(k,F)\), hence a cuspidal inertial class in the Levi subgroup \(M=\operatorname{GL}(k,F)\times M_{0}\) of the classical group \(G\). We fix as above a type \((J_{M},\lambda_{M})\) for this inertial class and a cover \((J_{G},\lambda_{G})\). We normalize the two generators \(T_{0}\) and \(T_{1}\) of the Hecke algebra \(\mathcal{H}(G,\lambda_{G})\) as in Proposition 1.7, namely so that they satisfy quadratic relations_
\[(T_{i}+1)(T_{i}-q^{r_{i}})=0,\quad i=0,1,\text{ with }r_{0},r_{1}\geqslant 0.\]
_The self-dual cuspidal representation in \(\mathfrak{s}_{k}\) with the highest reducibility value is the self-dual cuspidal representation \(\operatorname{c-Ind}_{\widehat{J}}^{\operatorname{GL}(k,F)}\widehat{\lambda} _{a}\) characterized by_
\[\widehat{\lambda}_{a}(\varpi_{E})\equiv T_{0}(s_{0})T_{1}(s_{1}).\]
_The other self-dual cuspidal representation in \(\mathfrak{s}_{k}\) is \(\operatorname{c-Ind}_{\widehat{J}}^{\operatorname{GL}(k,F)}\widehat{\lambda} _{b}\) with \(\widehat{\lambda}_{b}(\varpi_{E})\equiv-T_{0}(s_{0})T_{1}(s_{1})\)._
What we mean by "highest reducibility value" is: the representation having reducibility at \(s_{a}\), since \(s_{a}\geqslant s_{b}\), with equality if and only if one of \(r_{0},r_{1}\) is \(0\). We recall that our goal is to determine, for a given cuspidal \(\pi\) of \(M_{0}\), the finite set of pairs \((\tau,s)\) with \(\tau\) a cuspidal representation of some \(\operatorname{GL}(k,F)\) and \(s\in\mathbb{R}\), \(s\geqslant 1\), such that the normalized induced representation of \(\tau|\det|^{s}\otimes\pi\) reduces. We explained in [5] how to construct this set, except possibly for an ambiguity between \(\tau_{a}\) and \(\tau_{b}\), in our notations above, that in some cases we couldn't solve. Theorem 1.10 gives a way to solve the ambiguity. If one of \(r_{0},r_{1}\) is \(0\) we have no ambiguity to solve: indeed \(s_{a}=s_{b}\) so either \((\tau_{a}\otimes\pi,s_{a})\) and \((\tau_{b}\otimes\pi,s_{b})\) both belong to the set or neither of them does. Otherwise \(s_{a}>s_{b}\) and we produce the unique representation with reducibility at \(s_{a}\).
**Corollary 1.11**.: _Theorem 1.10 holds if \(T_{0}\) and \(T_{1}\) are only normalized in such a way that they satisfy quadratic relations \(T_{i}^{2}=b_{i}T_{i}+c_{i}\) with \(b_{i}\geqslant 0\) and \(c_{i}>0\), for \(i=0,1\)._
Indeed such normalizations differ from the previous one by positive constants, that will not change the result, except when a coefficient \(b_{i}\) is \(0\), in which case both self-dual representations in the inertial class have highest reducibility value.
In the next sections, we give examples in which the computation is made easier by the fact that the intertwining operators are scalars.
## 2. Simple cuspidals
### Definitions and notation
We start with the necessary notation to describe simple cuspidal representations of symplectic groups, defined by Gross and Reeder [15, SS9.2].
We let \(F\) be a non-archimedean local field of odd residual characteristic \(p\), with ring of integers \(\mathfrak{o}_{F}\), maximal ideal \(\mathfrak{p}_{F}\), residual field \(k_{F}\) of cardinality \(q_{F}=q\). We write \(x\mapsto\overline{x}\) for the natural quotient map \(\mathfrak{o}_{F}\to k_{F}\), and \(\operatorname{val}(x)\) for the valuation of an element \(x\) in \(F\), normalized so that \(\operatorname{val}\) has image \(\mathbb{Z}\). We fix an additive character \(\psi:F\to\mathbb{C}^{\times}\) with conductor \(\mathfrak{p}_{F}\). We also fix for convenience a uniformizing element \(\varpi_{F}\) of \(F\). We let \(\tilde{G}=\operatorname{GL}(2N,F)\)
with centre \(\tilde{Z}\simeq F^{\times}\), and \(G=\operatorname{Sp}(2N,F)\), the subgroup of \(\tilde{G}\) preserving the alternating form \(h_{2N}\) on \(F^{2N}\) given by:
\[h_{2N}\left(\left(\begin{smallmatrix}x_{1}\\ \vdots\\ x_{N}\\ x_{N+1}\\ \vdots\\ x_{2N}\end{smallmatrix}\right),\left(\begin{smallmatrix}y_{1}\\ \vdots\\ y_{N}\\ y_{N+1}\\ \vdots\\ y_{2N}\end{smallmatrix}\right)\right)=x_{1}y_{2N}+\cdots+x_{N}y_{N+1}-x_{N+1}y_{N}-\cdots-x_{2N}y_{1}.\]
The matrix of the form \(h_{2N}\) written in \(N\times N\) blocks is \(\left(\begin{smallmatrix}0&J_{N}\\ -J_{N}&0\end{smallmatrix}\right)\) where \(J_{N}\) is the \(N\)-by-\(N\) matrix with \(1\)'s on the antidiagonal and \(0\)'s elsewhere. The adjoint of a \(2N\) by \(2N\) matrix written in \(N\times N\) blocks as \(\left(\begin{smallmatrix}A&B\\ C&D\end{smallmatrix}\right)\) is \(\left(\begin{smallmatrix}D^{T}&-B^{T}\\ -C^{T}&A^{T}\end{smallmatrix}\right)\) where \(A\mapsto A^{T}\) is the transposition with respect to the _anti_diagonal.
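As a sanity check of the adjoint formula (a worked example of ours, not in the original), take \(N=1\):

\[h_{2}(x,y)=x_{1}y_{2}-x_{2}y_{1},\qquad h_{2}\!\left(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)x,\,y\right)=h_{2}\!\left(x,\left(\begin{smallmatrix}d&-b\\ -c&a\end{smallmatrix}\right)y\right),\]

in agreement with the block formula, the anti-diagonal transpose of a \(1\times 1\) block being the block itself.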
The standard Iwahori subgroup \(\tilde{I}_{2N}\) of \(\tilde{G}\) is the fixator of the strict lattice chain \(\Sigma_{2N}\) in \(F^{2N}\) consisting of the columns of the order \(\mathfrak{A}_{2N}=\left(\begin{smallmatrix}\mathfrak{o}_{F}&\mathfrak{o}_{F}&\cdots&\mathfrak{o}_{F}&\mathfrak{o}_{F}\\ \mathfrak{p}_{F}&\mathfrak{o}_{F}&\mathfrak{o}_{F}&\cdots&\mathfrak{o}_{F}\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ \mathfrak{p}_{F}&\cdots&\mathfrak{p}_{F}&\mathfrak{o}_{F}&\mathfrak{o}_{F}\\ \mathfrak{p}_{F}&\mathfrak{p}_{F}&\cdots&\mathfrak{p}_{F}&\mathfrak{o}_{F}\end{smallmatrix}\right)\) and their \(\varpi_{F}^{\mathbb{Z}}\)-multiples.
The Jacobson radical of \(\mathfrak{A}_{2N}\) is \(\mathfrak{P}_{2N}=\left(\begin{smallmatrix}\mathfrak{p}_{F}&\mathfrak{o}_{F}&\cdots&\mathfrak{o}_{F}&\mathfrak{o}_{F}\\ \mathfrak{p}_{F}&\mathfrak{p}_{F}&\mathfrak{o}_{F}&\cdots&\mathfrak{o}_{F}\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ \mathfrak{p}_{F}&\cdots&\mathfrak{p}_{F}&\mathfrak{p}_{F}&\mathfrak{o}_{F}\\ \mathfrak{p}_{F}&\mathfrak{p}_{F}&\cdots&\mathfrak{p}_{F}&\mathfrak{p}_{F}\end{smallmatrix}\right)\) with \(\mathfrak{P}_{2N}^{2}=\left(\begin{smallmatrix}\mathfrak{p}_{F}&\mathfrak{p}_{F}&\mathfrak{o}_{F}&\cdots&\mathfrak{o}_{F}\\ \mathfrak{p}_{F}&\mathfrak{p}_{F}&\mathfrak{p}_{F}&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\mathfrak{o}_{F}\\ \mathfrak{p}_{F}&\cdots&\mathfrak{p}_{F}&\mathfrak{p}_{F}&\mathfrak{p}_{F}\\ \mathfrak{p}_{F}^{2}&\mathfrak{p}_{F}&\cdots&\mathfrak{p}_{F}&\mathfrak{p}_{F}\end{smallmatrix}\right)\), giving rise to subgroups \(\tilde{I}_{2N}(1)=1+\mathfrak{P}_{2N}\) and \(\tilde{I}_{2N}(2)=1+\mathfrak{P}_{2N}^{2}\) of \(\tilde{I}_{2N}\).
The successive maps \(\mathrm{I}_{2N}+(x_{i,j})\mapsto(x_{i,j})\) and \((x_{i,j})\mapsto(\overline{x}_{1,2},\overline{x}_{2,3},\cdots,\overline{x}_{ 2N-1,2N},\overline{\varpi_{F}^{-1}x_{2N,1}})\) induce isomorphisms
\[\tilde{I}_{2N}(1)/\tilde{I}_{2N}(2)\stackrel{{\simeq}}{{ \longrightarrow}}\mathfrak{P}_{2N}/\mathfrak{P}_{2N}^{2}\stackrel{{ \simeq}}{{\longrightarrow}}k_{F}^{2N}.\]
Taking now the intersections with \(G\) we get the standard Iwahori subgroup \(I_{2N}\) of \(G\), with two subgroups \(I_{2N}(1)\) and \(I_{2N}(2)\), and an isomorphism:
\[I_{2N}(1)/I_{2N}(2) \stackrel{{\simeq}}{{\longrightarrow}}k_{F}^{N+1}\] \[(x_{i,j}) \longmapsto(\overline{x}_{1,2},\cdots,\overline{x}_{N-1,N}, \overline{x}_{N,N+1},\overline{\varpi_{F}^{-1}x_{2N,1}}).\]
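The exponent \(N+1\) can be read off as follows (our remark): on \(I_{2N}(1)/I_{2N}(2)\) the symplectic condition identifies the superdiagonal coordinates pairwise,

\[\overline{x}_{i,i+1}=-\,\overline{x}_{2N-i,2N-i+1}\qquad(1\leqslant i\leqslant N-1),\]

so only the \(N\) coordinates \(\overline{x}_{1,2},\dots,\overline{x}_{N,N+1}\) together with \(\overline{\varpi_{F}^{-1}x_{2N,1}}\) remain; this is the identification used again in (2.1) below.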
The center of \(G\) is \(Z\simeq\{\pm 1\}\). The _affine generic characters_ of [15, SS9.2] are those characters of \(ZI_{2N}(1)\) whose restrictions to \(I_{2N}(1)\) have the form
\[(x_{i,j})\longmapsto\psi\left(\alpha_{1}x_{1,2}+\cdots+\alpha_{N-1}x_{N-1,N}+ \alpha_{N}x_{N,N+1}+\alpha_{2N}x_{2N,1}\right)\]
with \(\operatorname{val}(\alpha_{i})=0\) for \(i=1,\cdots,N\), and \(\operatorname{val}(\alpha_{2N})=-1\). They compactly induce irreducibly to cuspidal representations of \(G\) called _simple cuspidal_ representations of \(G\).
### Description in terms of strata
The chain \(\Sigma_{2N}\), of period \(2N\), can be scaled and translated into a unique lattice sequence \(\Lambda_{2N}\) in \(F^{2N}\) of period \(4N\) and duality invariant \(d=1\), the usual convention in [26]. That is, for \(k\in\mathbb{Z}\), the dual lattice
\[\Lambda_{2N}(k)^{\sharp}=\{X\in F^{2N}\mid h(X,\Lambda_{2N}(k))\subseteq \mathfrak{p}_{F}\}\]
is equal to \(\Lambda_{2N}(1-k)\). Note that \(\Lambda_{2N}(0)=\Lambda_{2N}(1)=\left(\begin{smallmatrix}\mathfrak{o}_{F}\\ \vdots\\ \mathfrak{o}_{F}\\ \mathfrak{p}_{F}\\ \vdots\\ \mathfrak{p}_{F}\end{smallmatrix}\right)\) (\(N\) entries \(\mathfrak{o}_{F}\), \(N\) entries \(\mathfrak{p}_{F}\)).
According to [11, §2], the natural filtration of \(\mathfrak{A}_{2N}=\mathfrak{A}_{0}(\Lambda_{2N})\) given for integers \(r\) by
\[\mathfrak{A}_{r}(\Lambda_{2N})=\{\phi\in\operatorname{End}(F^{2N})\mid\forall k \in\mathbb{Z}\ \ \phi(\Lambda_{2N}(k))\subseteq\Lambda_{2N}(k+r)\}\]
satisfies \(\mathfrak{A}_{r}(\Lambda_{2N})=\mathfrak{A}_{[\frac{r}{2}]}(\Sigma_{2N})\) and \(\operatorname{val}_{\Lambda_{2N}}=2\operatorname{val}_{\Sigma_{2N}}\), so that actually \(\tilde{I}_{2N}(1)=1+\mathfrak{A}_{1}(\Lambda_{2N})=1+\mathfrak{A}_{2}( \Lambda_{2N})\) and \(\tilde{I}_{2N}(2)=1+\mathfrak{A}_{3}(\Lambda_{2N})\).
We leave aside the classification of affine generic characters and work directly with one whose restriction to \(I_{2N}(1)\) has the form
\[x\longmapsto\psi_{\beta}(x)=\psi\circ\operatorname{tr}(\beta(x-1))\]
for an element \(\beta\) in \(\operatorname{Lie}(\operatorname{Sp}(2N,F))\) such that \(\operatorname{val}_{\Lambda_{2N}}(\beta)=-2\) and \(\beta^{2N}=(-1)^{N}\varpi_{F}^{-1}\). In particular \(E=F[\beta]\) is a totally ramified extension of \(F\) of maximal degree \(2N\). Actually we fix \(\beta\) in \(\mathfrak{A}_{-2}(\Lambda_{2N})\) as follows: \(\beta=\left(\begin{smallmatrix}0&0&...&...&...&0&\varpi_{F}^{-1}\\ -1&0&0&...&...&...&0\\ 0&\ddots&\ddots&\ddots&&\vdots\\ \vdots&\ddots&-1&0&\ddots&&\vdots\\ \vdots&&0&1&0&&\vdots\\ \vdots&&&\ddots&\ddots&\ddots&\vdots\\ 0&0&...&...&0&1&0\\ \end{smallmatrix}\right)\) (with \(N\) entries \(-1\) and \(N-1\) entries \(1\)). The adjoint of \(\beta\) is \(-\beta\) and for \(x=(x_{i,j})\) we have
\[\operatorname{tr}(\beta(x-1))=-x_{1,2}-\cdots-x_{N-1,N}-x_{N,N+1}+x_{N+1,N+2}+ \cdots+x_{2N-1,2N}+\varpi_{F}^{-1}x_{2N,1}.\]
Viewing \(\psi\circ\operatorname{tr}(\beta(x-1))\) as a character of \(I_{2N}(1)/I_{2N}(2)\) we can equate \(x_{1,2}\) to \(-x_{2N-1,2N}\) and so on, getting
\[\psi\circ\operatorname{tr}(\beta(x-1))=\psi(-2x_{1,2}-\cdots-2x_{N-1,N}-x_{N, N+1}+\varpi_{F}^{-1}x_{2N,1}). \tag{2.1}\]
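For illustration, in the case \(N=1\) the element fixed above and the resulting trace form are simply

\[\beta=\left(\begin{smallmatrix}0&\varpi_{F}^{-1}\\ -1&0\end{smallmatrix}\right),\qquad\beta^{2}=-\varpi_{F}^{-1}\,\mathrm{I}_{2},\qquad\operatorname{tr}(\beta(x-1))=-x_{1,2}+\varpi_{F}^{-1}x_{2,1},\]

in agreement with \(\beta^{2N}=(-1)^{N}\varpi_{F}^{-1}\) and with the adjoint of \(\beta\) being \(-\beta\); this special case is recorded only as a sanity check.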
We note that the lattice chain underlying \(\Lambda_{2N}\) is the set of \(\beta^{i}\Lambda_{2N}(k)\) for \(i\in\mathbb{Z}\) and any fixed \(k\), and that the \(\mathfrak{o}_{E}\)-order \(\mathfrak{A}_{0}(\Lambda_{2N})\cap E\) is just the maximal \(\mathfrak{o}_{E}\)-order \(\mathfrak{o}_{E}\).
Thus \((\Lambda_{2N},2,0,\beta)\) is a simple and maximal stratum in \(\operatorname{Lie}(\operatorname{Sp}(2N,F))\), to which we apply the machinery in [26]. Actually \(\beta\) is minimal over \(F\) and we have by [25, §3.1]
\[J^{1}(\beta,\Lambda_{2N})=H^{1}(\beta,\Lambda_{2N})=I_{2N}(1),\qquad J(\beta, \Lambda_{2N})=ZJ^{1}(\beta,\Lambda_{2N})=ZI_{2N}(1).\]
Then \(\psi_{\beta}\) is the unique simple character in \(\mathcal{C}(\beta,\Lambda_{2N})\). The underlying stratum \((\Lambda_{2N},2,0,\beta)\) is simple and maximal (attached to the totally ramified field extension of maximal degree), so we obtain the following from [5, 3.6, 4.4 Theorem].
**Proposition 2.2**.: _For any character \(\chi\) of the center \(Z\simeq\{\pm 1\}\) of \(G\), we consider the beta-extension \(\kappa=\chi\otimes\psi_{\beta}\) of \(\psi_{\beta}\), a representation of \(ZI_{2N}(1)\), and the simple cuspidal representation \(\pi=\mathrm{c}\mathrm{-Ind}_{ZI_{2N}(1)}^{G}\,\chi\otimes\psi_{\beta}\) of \(G\). The Jordan set of \(\pi\) is \(\mathrm{\;Jord}(\pi)=\{(\epsilon_{1},1),(\sigma,1)\}\) where \(\epsilon_{1}\) is a character of \(F^{\times}\) with trivial square and \(\sigma\) is a cuspidal representation of \(\mathrm{GL}(2N,F)\) attached to the simple character \(\psi_{2\beta}\)._
We will discuss in the last section (see Theorem 5.1) the Langlands parameter of \(\pi\).
In the next section we compute the character \(\epsilon_{1}\), viewed as a character of \(F^{\times}=\mathrm{GL}(1,F)\). In section 4 we compute the cuspidal representation \(\sigma\) of \(\mathrm{GL}(2N,F)\). Both computations rely on Theorem 1.10. There is a slight difference between them: in section 3 we use first [5, 4.4 Theorem] to determine the restriction of \(\epsilon_{1}\) to \(\mathfrak{o}_{F}^{\times}\), based on a twisting character (3.3) computed in 3.1, then we proceed to the computation of the coefficients \(b_{0}\) and \(b_{1}\) using this restriction; in section 4 we proceed directly to the computation of \(b_{0}\) and \(b_{1}\) keeping the restriction to \(\mathfrak{o}_{F}^{\times}\) of the central character of \(\sigma\) as a parameter, the value of which then results from the computation. Both ways are possible, we chose to use both.
## 3. The quadratic or trivial character
### The inertial Jordan set relative to the trivial endoclass
The four quadratic or trivial characters of \(F^{\times}=\mathrm{GL}(1,F)\) are self-dual cuspidal representations attached to the null stratum \(((\mathfrak{p}_{F}^{k})_{k\in\mathbb{Z}},1,1,0)\), the trivial character of \(H^{1}((\mathfrak{p}_{F}^{k}),0)=1+\mathfrak{p}_{F}\) and a self-dual beta-extension of this character to \(J((\mathfrak{p}_{F}^{k}),0)=\mathfrak{o}_{F}^{\times}\), which is a quadratic or trivial character \(\tau\) of \(\mathfrak{o}_{F}^{\times}\). In [5, §3.6] we built a cover \((J_{P},\lambda_{P})\) in \(\mathrm{Sp}(2N+2,F)\) of the type \((\mathfrak{o}_{F}^{\times}\times ZI_{2N}(1),\tau\otimes\kappa)\) in the Levi subgroup \(\mathrm{GL}(1,F)\times\mathrm{Sp}(2N,F)\). We recall some features of this cover.
In the notation of [5, §3] we have \(V=F^{2N}\) as described above, \(X=F^{2N+2}\) with elements written in coordinates \((x_{0},x_{1},\cdots,x_{2N},x_{2N+1})^{t}\) and alternating form \(h_{2(N+1)}\), and with \(W=F\) the subspace given by the first coordinate \(x_{0}\) and \(W^{*}\) given by the last coordinate \(x_{2N+1}\). On the space \(W\oplus W^{*}\) we take the unique lattice sequence \(\Lambda_{2}\) built on \(\left(\begin{smallmatrix}\mathfrak{o}_{F}\\ \mathfrak{o}_{F}\end{smallmatrix}\right)\), \(\left(\begin{smallmatrix}\mathfrak{o}_{F}\\ \mathfrak{p}_{F}\end{smallmatrix}\right)\) and their scalar multiples, that has period \(4N\) and duality invariant \(1\), and on \(X=(W\oplus W^{*})\perp V\) we take the direct sum \(\Lambda=\Lambda_{2}\oplus\Lambda_{2N}\). The stratum underlying the cover is \((\Lambda,2,0,0\oplus\beta)\) (meaning: \(0\) in \(\operatorname{Lie}(\operatorname{SL}(2,F))\) and \(\beta\) in \(\operatorname{Lie}(\operatorname{Sp}(V))\)).
We form two lattice sequences \(\mathfrak{M}_{0}\) and \(\mathfrak{M}_{1}\) in \(X\) as follows. The first one \(\mathfrak{M}_{0}\) (resp. the second one \(\mathfrak{M}_{1}\)) is the direct sum of the unique lattice sequence \(\mathfrak{m}_{0}\) (resp. \(\mathfrak{m}_{1}\)) built on \(\left(\begin{smallmatrix}\mathfrak{o}_{F}\\ \mathfrak{o}_{F}\end{smallmatrix}\right)\) (resp. \(\left(\begin{smallmatrix}\mathfrak{o}_{F}\\ \mathfrak{p}_{F}\end{smallmatrix}\right)\)) and its scalar multiples, that has period \(4N\) and duality invariant \(1\), and \(\Lambda_{2N}\).
For the record, we first describe the relevant finite groups in our situation. For \(i=0,1\), the finite group \(P(\Lambda_{\mathfrak{o}_{E}})/P^{1}(\Lambda_{\mathfrak{o}_{E}})\), isomorphic to \(k_{F}^{\times}\times\{\pm 1\}\), is a Levi factor of \(P(\Lambda_{\mathfrak{o}_{E}})/P^{1}(\mathfrak{M}_{i,\mathfrak{o}_{E}})\)
which is a parabolic subgroup of \(\mathcal{G}_{i}=P(\mathfrak{M}_{i,\mathfrak{o}_{E}})/P^{1}(\mathfrak{M}_{i, \mathfrak{o}_{E}})\simeq\mathrm{Sp}(2,k_{F})\times\{\pm 1\}\). Hence the two-dimensional Hecke algebras that arise here are just algebras on \(\mathrm{SL}(2,k_{F})\). We recall that, for a character \(\sigma\) of \({k_{F}}^{\times}\) with trivial square, viewed as a character of the Levi subgroup \({k_{F}}^{\times}\) of \(\mathrm{SL}(2,k_{F})\), the Hecke algebra \(\mathscr{H}(\mathrm{SL}(2,k_{F}),\sigma)\) has a generator \(T\) satisfying the following quadratic relation:
\[\begin{split}(T-1)(T+1)=0\text{ if }\sigma\neq 1,\\ (T-q)(T+1)=0\text{ if }\sigma=1.\end{split} \tag{3.1}\]
Now we apply [5, 4.4 Theorem]. We are actually dealing with that part of the inertial Jordan set of \(\pi\) relative to the trivial endoclass. The theorem says that it is the \(\delta\)-twist of the inertial Jordan set of the trivial representation of the trivial group, for a well-identified character \(\delta\) of \({k_{F}}^{\times}\) which we will address shortly.
The Jordan set of the trivial representation of the trivial group has itself long been known: it has one element, the pair \((\iota,1)\). Indeed the unique self-dual character \(\sigma\) of \(\mathrm{GL}(1,F)\) such that the normalized induced representation \(\mathrm{Ind}_{B}^{\mathrm{SL}(2,F)}\,\sigma|.|^{s}\) reduces for some \(s\geq 1\) (where \(B\) is the standard Borel subgroup of upper triangular matrices) is the trivial character \(\iota\), and then \(s=1\)[12, Corollary 9.3.3].
The character \(\delta\) is given by [5, 4.3 Proposition] and can be computed through [5, 4.2 Lemma]: as a character of \(\mathfrak{o}_{F}^{\times}\), its value at \(x\in\mathfrak{o}_{F}^{\times}\) is the signature of the natural left action of \(x\) on \(\mathfrak{J}_{\mathfrak{M}_{1}}^{1}\cap\mathrm{Hom}_{F}(V,W)/\mathfrak{H}_{ \mathfrak{M}_{1}}^{1}\cap\mathrm{Hom}_{F}(V,W)\). (With the convention of _loc.cit._ the space \(V^{0}\) is the trivial space, hence \(V^{\times 0}=V\).) Implicit here is the stratum \((\mathfrak{M}_{1},2,0,0\oplus\beta)\) where \(-2\) is the valuation of \(0\oplus\beta\) relative to the sequence \(\mathfrak{M}_{1}\), equal to \(\mathrm{val}_{\Lambda_{2N}}(\beta)\).
We must come back to the definitions. We recall that the _jumps_ of a lattice sequence \(\Sigma\) in a vector space \(S\) are those integers \(i\) such that \(\Sigma(i)\neq\Sigma(i+1)\). The set of jumps of \(\Sigma\) is also the image of \(S\backslash\{0\}\) by the valuation map attached to \(\Sigma\), given for \(y\in S\backslash\{0\}\) by \(\mathrm{val}_{\Sigma}(y)=\max\{k\in\mathbb{Z}\mid y\in\Sigma(k)\}\).
Our stratum in \(X=(W\oplus W^{*})\perp V\) is \((\mathfrak{M}_{1},2,0,0\oplus\beta)\) so the easiest way is to follow [25, §3.3]. We obtain the \(\mathfrak{o}_{F}\)-orders \(\mathfrak{H}_{\mathfrak{M}_{1}}\) and \(\mathfrak{J}_{\mathfrak{M}_{1}}\) written in blocks in the decomposition \((W\oplus W^{*})\perp V\):
\[\mathfrak{H}_{\mathfrak{M}_{1}}=\begin{pmatrix}\mathfrak{H}(0,\mathfrak{m}_{1 })&\mathfrak{a}_{2}^{12}(\mathfrak{M}_{1})\\ \mathfrak{a}_{2}^{21}(\mathfrak{M}_{1})&\mathfrak{H}(\beta,\Lambda_{2N}) \end{pmatrix},\qquad\mathfrak{J}_{\mathfrak{M}_{1}}\ \ =\begin{pmatrix}\mathfrak{H}(0,\mathfrak{m}_{1})& \mathfrak{a}_{1}^{12}(\mathfrak{M}_{1})\\ \mathfrak{a}_{1}^{21}(\mathfrak{M}_{1})&\mathfrak{H}(\beta,\Lambda_{2N}) \end{pmatrix}. \tag{3.2}\]
We concentrate on the first line of the upper-right block that corresponds to \(\mathrm{Hom}_{F}(V,W)\). To compare it between \(\mathfrak{H}\) and \(\mathfrak{J}\) we have to describe the lattices explicitly. We check that \(\mathfrak{m}_{1}(t)=\left(\begin{smallmatrix}\mathfrak{p}_{F}^{[\frac{t+2N-1}{4N}]}\\ \mathfrak{p}_{F}^{[\frac{t+6N-1}{4N}]}\end{smallmatrix}\right)\) (period \(4N\), constant on the interval \([-2N+1,2N]\)); the set of jumps of \(\mathfrak{m}_{1}\) is \(2N+4N\mathbb{Z}\). For \(t\in[-2N,2N-1]\) the lattices \(\Lambda_{2N}(t)\) are the columns of the order \(\mathfrak{A}_{2N}\) from right to left, each repeated twice; the set of jumps of \(\Lambda_{2N}\) is the set of odd integers.
The condition for some \(b\in\operatorname{Hom}_{F}(V,W)\) to belong to \(\mathfrak{a}_{1}^{12}(\mathfrak{M}_{1})\) or \(\mathfrak{a}_{2}^{12}(\mathfrak{M}_{1})\) is the following:
\[b\in\mathfrak{a}_{1}^{12}(\mathfrak{M}_{1}) \iff\ \forall t\ \text{odd,}\ b\Lambda_{2N}(t)\subseteq\mathfrak{m}_{1}(t+1) \cap W\iff b\Lambda_{2N}(-2N+1)\subseteq\mathfrak{o}_{F}\] \[b\in\mathfrak{a}_{2}^{12}(\mathfrak{M}_{1}) \iff\ \forall t\ \text{odd,}\ b\Lambda_{2N}(t)\subseteq\mathfrak{m}_{1}(t+2) \cap W\iff\begin{cases}b\Lambda_{2N}(-2N+1)\subseteq\mathfrak{o}_{F}\\ b\Lambda_{2N}(2N-1)\subseteq\mathfrak{p}_{F}\end{cases}\]
So the condition is: all entries of \(b\) in \(\mathfrak{o}_{F}\) for \(\mathfrak{a}_{1}^{12}(\mathfrak{M}_{1})\), the first entry in \(\mathfrak{p}_{F}\) and the others in \(\mathfrak{o}_{F}\) for \(\mathfrak{a}_{2}^{12}(\mathfrak{M}_{1})\). Using [5, 3.11 Lemma] we conclude that
\[\begin{split}&\delta\text{ is the quadratic character of }\mathfrak{o}_{F}^{\times},\text{ in other words:}\\ &\operatorname{IJord}(\pi,\mathbf{1})=([\epsilon_{1}],1)\text{ where }\epsilon_{1}\text{ is a quadratic ramified character of }F^{\times}.\end{split} \tag{3.3}\]
We remark that \(\operatorname{IJord}(\pi,\mathbf{1})\) does not depend on the character \(\chi\) of \(Z\) such that \(\pi=\operatorname{c-Ind}_{ZI_{2N}(1)}^{G}\chi\otimes\psi_{\beta}\).
### The Jordan set relative to the trivial endoclass
We apply the results of the first section to \(M=\operatorname{GL}(1,F)\times\operatorname{Sp}(2N,F)\) and \(P\) the parabolic subgroup of \(G^{+}=\operatorname{Sp}(2N+2,F)\) stabilizing the flag \(\{0\}\subset W\subset W\oplus V\subset X\). Let \(\epsilon\) be a quadratic ramified character of \(F^{\times}\). We are studying normalized parabolic induction from \(M\) to \(\operatorname{Sp}(2N+2,F)\), specifically we are investigating the reducibility of the following representation:
\[I(\pi,\epsilon,s)=\operatorname{Ind}_{P}^{G^{+}}\epsilon|\ |^{s}\otimes\pi\quad(s \in\mathbb{C}).\]
We have a type \((\mathfrak{o}_{F}^{\times}\times ZI_{2N}(1),\delta\otimes\kappa)\) in \(M\) for \(\epsilon\otimes\pi\) and a cover \((J_{P},\lambda_{P})\) of this type in \(G^{+}\). We ease notation by calling the respective Hecke algebras of these types \(\mathcal{H}_{M}=\mathcal{H}(M,\delta\otimes\kappa)\) and \(\mathcal{H}_{G^{+}}=\mathcal{H}(G^{+},\lambda_{P})\).
We consider the generator \(\Psi\) of \(\mathcal{H}_{M}=\mathbb{C}[\Psi,\Psi^{-1}]\) supported on the \(\mathfrak{o}_{F}^{\times}\times ZI_{2N}(1)\)-double coset of \(\varPi_{J_{M}}=\left(\begin{smallmatrix}\varpi_{F}&0&0\\ 0&I_{2N}&0\\ 0&0&\varpi_{F}^{-1}\end{smallmatrix}\right)\). The value of \(\Psi\) at \(\varPi_{J_{M}}\) is a non-zero intertwining operator of \(\delta\otimes\kappa\); it is unique up to scalar, and we take it to be the identity on the space of \(\kappa\) and some non-zero scalar on the space of \(\delta\).
We turn to \(\mathcal{H}_{G^{+}}\). The normalizer of \(M\) in \(G^{+}\) is the union of two \(M\)-cosets, the trivial coset and the common coset of \(t_{0}=\left(\begin{smallmatrix}0&0&1\\ 0&I_{2N}&0\\ -1&0&0\end{smallmatrix}\right)\) and \(t_{1}=\left(\begin{smallmatrix}0&0&-\varpi_{F}^{-1}\\ 0&I_{2N}&0\\ \varpi_{F}&0&0\end{smallmatrix}\right)\). We check that \(t_{0}\) belongs to \(P(\mathfrak{M}_{0,\mathfrak{o}_{E}})\), that \(t_{1}\) belongs to \(P(\mathfrak{M}_{1,\mathfrak{o}_{E}})\) and that \(t_{0}t_{1}=\varPi_{J_{M}}\). The algebra \(\mathcal{H}_{G^{+}}\) has two generators \(\mathcal{T}_{0}\) and \(\mathcal{T}_{1}\) of respective supports \(J_{P}t_{0}J_{P}\) and \(J_{P}t_{1}J_{P}\), images of the corresponding generators of the two Hecke algebras in \(\operatorname{SL}(2,k_{F})\) described in §3.1. In view of (3.1) and [5, 3.14 Proposition], the only possibility for a reducibility at some \(s\) with real part 1 (a fact in our case, once a self-dual base point is chosen) is that both generators satisfy the quadratic relation \((T-q)(T+1)=0\), which defines them uniquely. Hence \(\mathcal{T}_{0}(t_{0})\) and \(\mathcal{T}_{1}(t_{1})\) are uniquely determined by the quadratic relations \((\mathcal{T}_{0}-q)(\mathcal{T}_{0}+1)=0\) and \((\mathcal{T}_{1}-q)(\mathcal{T}_{1}+1)=0\).
By Theorem 1.10 and Corollary 1.11, the quadratic character \(\epsilon_{1}\) in the Jordan set of \(\pi\) is characterised by:
\[\epsilon_{1}(\varpi_{F})\equiv\mathcal{T}_{0}(t_{0})\mathcal{T}_{1}(t_{1}) \tag{3.4}\]
where \(\equiv\) means "equal up to positive constant".
### Computation of the argument of \(\mathcal{T}_{0}(t_{0})\mathcal{T}_{1}(t_{1})\)
We proceed to determine the arguments of \(\mathcal{T}_{0}(t_{0})\) and \(\mathcal{T}_{1}(t_{1})\) providing quadratic relations with positive coefficients. We work this out following [3, §1.d], which applies mutatis mutandis provided \(t_{0}\) and \(t_{1}\) behave well with respect to the Iwahori decomposition of \(J_{P}\), which we check first.
We write \(P=MU\) for the parabolic subgroup defined in the previous subsection, with \(U\) the unipotent radical of \(P\), and we write \(P^{-}=MU^{-}\) for the opposite parabolic with respect to \(M\). We write \(J_{\Lambda}\) for \(J(\Lambda,0\oplus\beta)\) and so on. From [5, §3.6] we have
\[J_{P}=(H^{1}_{\Lambda}\cap U^{-})(\mathfrak{o}_{F}^{\times}\times ZI_{2N}(1))(J_{\Lambda}^{1}\cap U)\]
#### 3.3.1. Some lattice computations
As in (3.2), following [25, §3.3], for the stratum \((\Lambda,2,0,0\oplus\beta)\), we write in blocks in the decomposition \((W\oplus W^{*})\perp V\):
\[\mathfrak{H}_{\Lambda} =\begin{pmatrix}\mathfrak{H}(0,\Lambda_{2})&\mathfrak{a}_{2}^{12} (\Lambda)\\ \mathfrak{a}_{2}^{21}(\Lambda)&\mathfrak{H}(\beta,\Lambda_{2N})\end{pmatrix}, \quad\mathfrak{J}_{\Lambda}=\begin{pmatrix}\mathfrak{H}(0,\Lambda_{2})& \mathfrak{a}_{1}^{12}(\Lambda)\\ \mathfrak{a}_{1}^{21}(\Lambda)&\mathfrak{H}(\beta,\Lambda_{2N})\end{pmatrix},\] \[t_{0} =\begin{pmatrix}\begin{bmatrix}0&1\\ -1&0\end{bmatrix}&0\\ 0&I_{2N}\end{pmatrix},\quad t_{1}=\begin{pmatrix}\begin{bmatrix}0&-\varpi_{F} ^{-1}\\ \varpi_{F}&0\end{bmatrix}&0\\ 0&I_{2N}\end{pmatrix}. \tag{3.5}\]
We write further \(\mathfrak{a}_{i}^{12}(\Lambda)=\begin{pmatrix}R^{1}(i)\\ R^{2}(i)\end{pmatrix}\) where \(R^{1}(i)\), \(R^{2}(i)\) are lattices of row vectors in \(F^{2N}\) and similarly \(\mathfrak{a}_{i}^{21}(\Lambda)=(C^{1}(i)\,\,C^{2}(i))\) with lattices of column vectors. Recalling that \(J^{1}(\beta,\Lambda_{2N})=H^{1}(\beta,\Lambda_{2N})=I_{2N}(1)\) and that \(H^{1}(0,\Lambda_{2})=J^{1}(0,\Lambda_{2})=I_{1}\), we get:
\[J_{P}\cap U =\begin{pmatrix}1&R^{1}(1)&\mathfrak{o}_{F}\\ 0&I_{2N}&C^{2}(1)\\ 0&0&1\end{pmatrix}, J_{P}\cap U^{-} =\begin{pmatrix}1&0&0\\ C^{1}(2)&I_{2N}&0\\ \mathfrak{p}_{F}&R^{2}(2)&1\end{pmatrix}\] \[t_{0}(J_{P}\cap U^{-})t_{0}^{-1} =\begin{pmatrix}1&R^{2}(2)&\mathfrak{p}_{F}\\ 0&I_{2N}&C^{1}(2)\\ 0&0&1\end{pmatrix}, t_{0}(J_{P}\cap U)t_{0}^{-1} =\begin{pmatrix}1&0&0\\ C^{2}(1)&I_{2N}&0\\ \mathfrak{o}_{F}&R^{1}(1)&1\end{pmatrix}\] \[t_{1}(J_{P}\cap U^{-})t_{1}^{-1} =\begin{pmatrix}1&\varpi_{F}^{-1}R^{2}(2)&\mathfrak{p}_{F}^{-1} \\ 0&I_{2N}&\varpi_{F}^{-1}C^{1}(2)\\ 0&0&1\end{pmatrix},\ t_{1}(J_{P}\cap U)t_{1}^{-1} =\begin{pmatrix}1&0&0\\ \varpi_{F}C^{2}(1)&I_{2N}&0\\ \mathfrak{p}_{F}^{2}&\varpi_{F}R^{1}(1)&1\end{pmatrix}\]
We have to describe the lattices explicitly. We have seen before that for \(t\in[-2N,2N-1]\) the lattices \(\Lambda_{2N}(t)\) are the columns of the order \(\mathfrak{A}_{2N}\) from right to left, each repeated twice;
the set of jumps of \(\Lambda_{2N}\) is the set of odd integers. Now \(\Lambda_{2}\) has period \(4N\), has a constant value equal to \(\binom{\mathfrak{o}_{F}}{\mathfrak{p}_{F}}\) on the interval \([-N+1,N]\), and the set of jumps of \(\Lambda_{2}\) is \(N+2N\mathbb{Z}\).
Elements \(B=\begin{pmatrix}B_{1}\\ B_{2}\end{pmatrix}\) of \(\mathfrak{a}_{i}^{12}(\Lambda)\), \(i=1,2\), must satisfy \(B\Lambda_{2N}(t)\subset\Lambda_{2}(t+i)\) for all \(t\), i.e.
\[\begin{array}{ll}\underline{i=1}&B\Lambda_{2N}(-N)\subset\left(\begin{smallmatrix} \mathfrak{o}_{F}\\ \mathfrak{p}_{F}\end{smallmatrix}\right)\quad\text{ and }\quad B\Lambda_{2N}(N) \subset\left(\begin{smallmatrix}\mathfrak{p}_{F}\\ \mathfrak{p}_{F}\end{smallmatrix}\right);\\ \underline{i=2}&B\Lambda_{2N}(-N-1)\subset\left(\begin{smallmatrix} \mathfrak{o}_{F}\\ \mathfrak{p}_{F}\end{smallmatrix}\right)\quad\text{ and }\quad B\Lambda_{2N}(N-1) \subset\left(\begin{smallmatrix}\mathfrak{p}_{F}\\ \mathfrak{p}_{F}\end{smallmatrix}\right).\end{array}\]
The first remark concerns parity. Since the jumps of \(\Lambda_{2N}\) occur at odd integers, we have \(\Lambda_{2N}(N)=\Lambda_{2N}(N-1)\) if and only if \(N\) is odd. Hence if \(N\) is odd we have \(\mathfrak{a}_{1}^{12}(\Lambda)=\mathfrak{a}_{2}^{12}(\Lambda)\). We look at the rows of \(B\) focusing on \(R^{1}(1)\) and \(R^{2}(2)\) which appear in \(J_{P}\) above:
\[B_{1}\in R^{1}(1) \iff B_{1}\Lambda_{2N}(-N)\subset\mathfrak{o}_{F}\text{ and }B_{1}\Lambda_{2N}(N)\subset\mathfrak{p}_{F}\] \[B_{2}\in R^{2}(2) \iff B_{2}\Lambda_{2N}(-N-1)\subset\mathfrak{p}_{F}\text{ (and }B_{2}\Lambda_{2N}(N-1)\subset\mathfrak{p}_{F}).\]
In particular \(\varpi_{F}R^{1}(1)\subset R^{2}(2)\subset R^{1}(1)\subset\varpi_{F}^{-1}R^{ 2}(2)\),
and by duality \(\varpi_{F}C^{2}(1)\subset C^{1}(2)\subset C^{2}(1)\subset\varpi_{F}^{-1}C^{1} (2)\).
Finally:
\[\begin{array}{l}t_{0}(J_{P}\cap U^{-})t_{0}^{-1}\subset J_{P}\cap U\subset t _{1}(J_{P}\cap U^{-})t_{1}^{-1},\\ t_{1}(J_{P}\cap U)t_{1}^{-1}\subset J_{P}\cap U^{-}\subset t_{0}(J_{P}\cap U) t_{0}^{-1}.\end{array} \tag{3.6}\]
From these inclusions, we deduce that \(t_{1}\) satisfies exactly the conditions in [3, (1.3)], whereas for \(t_{0}\) we will only need to exchange the roles of \(U\) and \(U^{-}\). With this the computation in [3, §1.d] applies: we get the coefficients of the quadratic relations \(T^{2}=b_{0}T+c_{0}\mathcal{I}\) and \(T^{2}=b_{1}T+c_{1}\mathcal{I}\), satisfied respectively by \(\mathcal{T}_{0}\) and \(\mathcal{T}_{1}\), from [3, (1.4)]. In particular _loc.cit._ provides immediately:
**Lemma 3.7**.: _The coefficients \(c_{0}\) and \(c_{1}\) are positive if and only if \(\mathcal{T}_{0}(t_{0})\mathcal{T}_{0}(t_{0}^{-1})\) and \(\mathcal{T}_{1}(t_{1})\mathcal{T}_{1}(t_{1}^{-1})\) are positive, or equivalently \(\delta(-1)\mathcal{T}_{0}(t_{0})^{2}\) and \(\delta(-1)\mathcal{T}_{1}(t_{1})^{2}\) are positive._
For the coefficients \(b_{0}\) and \(b_{1}\) the computation based on [3, (1.4)] is more involved.
#### 3.3.2. Computation of the coefficient \(b_{1}\)
We must compute
\[b_{1}=\sum_{j\in(J_{P}\cap U)\backslash\Gamma}\mathcal{T}_{1}(j)\quad\quad \text{ where }\Gamma=t_{1}(J_{P}\cap U^{-})t_{1}^{-1}\cap J_{P}t_{1}J_{P}.\]
We have \(J_{P}t_{1}J_{P}=(J_{P}\cap U^{-})(\mathfrak{o}_{F}^{\times}\times ZI_{2N}(1))t _{1}(J_{P}\cap U^{-})\). The decomposition of an element \(x\) of \(J_{P}t_{1}J_{P}\) as a product
\[x=u^{-}\left(\begin{smallmatrix}\lambda&0&0\\ 0&g&0\\ 0&0&\lambda^{-1}\end{smallmatrix}\right)t_{1}v^{-}\text{ with }u^{-},v^{-}\in J _{P}\cap U^{-},\lambda\in\mathfrak{o}_{F}^{\times},g\in ZI_{2N}(1),\]
is unique and gives
\[\mathcal{T}_{1}(x)=\delta(\lambda)(\chi\otimes\psi_{\beta})(g)\mathcal{T}_{1}( t_{1}).\]
To compute \(b_{1}\) we must work out the matrix product to obtain, by identification, a characterization of \(\Gamma\) as some set of matrices \(j=\left(\begin{smallmatrix}1&B&z\\ 0&I_{2N}&C\\ 0&0&1\end{smallmatrix}\right)\), with \(B\in\varpi_{F}^{-1}R^{2}(2)\), \(C\in\varpi_{F}^{-1}C^{1}(2)\) and \(z\in\mathfrak{p}_{F}^{-1}\), and additional conditions, and compute \(\lambda\) and \(g\) as functions of \(B\), \(C\), \(z\). We want:
\[\left(\begin{smallmatrix}1&0&0\\ D_{1}&I_{2N}&0\\ Z_{1}&H_{1}&1\end{smallmatrix}\right)\left(\begin{smallmatrix}\lambda&0&0\\ 0&g&0\\ 0&0&\lambda^{-1}\end{smallmatrix}\right)\left(\begin{smallmatrix}0&0&-\varpi_{F}^{-1}\\ 0&I_{2N}&0\\ \varpi_{F}&0&0\end{smallmatrix}\right)\left(\begin{smallmatrix}1&0&0\\ D_{2}&I_{2N}&0\\ Z_{2}&H_{2}&1\end{smallmatrix}\right)=\left(\begin{smallmatrix}-\lambda\varpi_{F}^{-1}Z_{2}&-\lambda\varpi_{F}^{-1}H_{2}&-\lambda\varpi_{F}^{-1}\\ -\lambda\varpi_{F}^{-1}D_{1}Z_{2}+gD_{2}&g-\lambda\varpi_{F}^{-1}D_{1}H_{2}&-\lambda\varpi_{F}^{-1}D_{1}\\ y&Y&-\lambda\varpi_{F}^{-1}Z_{1}\end{smallmatrix}\right)=\left(\begin{smallmatrix}1&B&z\\ 0&I_{2N}&C\\ 0&0&1\end{smallmatrix}\right).\]
The obvious condition is that \(z\) must have valuation \(-1\), then we let \(\lambda=-z\varpi_{F}\). Next:
* \(Z_{1}=Z_{2}=z^{-1}\in\varpi_{F}\mathfrak{o}_{F}^{\times}\);
* \(H_{2}=z^{-1}B\in R^{2}(2)\) and \(D_{1}=z^{-1}C\in C^{1}(2)\);
* \(g=I_{2N}-z^{-1}CB\).
We must check \(g\). Conditions on \(B\) and \(C\) are \(\varpi_{F}C\mathfrak{o}_{F}\subset\Lambda_{2N}(N+2)\) and \(B\Lambda_{2N}(-N-1)\subset\mathfrak{o}_{F}\), they are equivalent by duality. From the second condition, the entries in \(B\) are in \(\mathfrak{o}_{F}\) except the last \(k\) ones in \(\mathfrak{p}_{F}^{-1}\), for some \(k\) with \(1\leq k<N\). We will show that \(\varpi_{F}CB\) belongs to \(\mathfrak{A}_{0}(\Lambda_{2N})\) if and only if all the entries of \(B\) belong to \(\mathfrak{o}_{F}\) - this will show that actually \(\varpi_{F}CB\) belongs to \(\mathfrak{A}_{1}(\Lambda_{2N})\).
Recall that \(\left(\begin{smallmatrix}1&B&z\\ 0&I_{2N}&C\\ 0&0&1\end{smallmatrix}\right)\) belongs to \(\operatorname{Sp}(2N+2,F)\) if and only if, writing \(x_{1}\) to \(x_{2N}\) for the entries of \(B\), left to right, and \(c_{1}\) to \(c_{2N}\) for the entries of \(C\), top to bottom, we have \(c_{i}=x_{2N-i+1}\) for \(1\leq i\leq N\) and \(c_{i}=-x_{2N-i+1}\) for \(N+1\leq i\leq 2N\), which we will write as \(C=B^{\tau}\), and \(BC=0\). Assume that one of the last \(k\) entries of \(B\), say \(x_{2N-j+1}\), has valuation \(-1\), then the \((j,2N-j)\) entry of \(\varpi_{F}CB\) has valuation \(-1\), which proves our claim. In particular, when \(g\) belongs to \(\mathfrak{A}_{0}(\Lambda_{2N})\), it belongs to \(I_{2N}(1)\) and \((\chi\otimes\psi_{\beta})(g)=\psi\circ\operatorname{tr}(-\beta z^{-1}CB)\).
We leave aside for the moment the checking of the other coefficients and get on to computing \(b_{1}\), with the following facts:
\[\Gamma=t_{1}(J_{P}\cap U^{-})t_{1}^{-1}\cap J_{P}t_{1}J_{P}=\left\{\left(\begin{smallmatrix}1&B&\varpi_{F}^{-1}u\\ 0&I_{2N}&C\\ 0&0&1\end{smallmatrix}\right)\in\operatorname{Sp}(2N+2,F)\mid u\in\mathfrak{o}_{F}^{\times},B\in\mathfrak{o}_{F}^{2N}\right\},\]
\[\mathcal{T}_{1}\big{(}\left(\begin{smallmatrix}1&B&\varpi_{F}^{-1}u\\ 0&I_{2N}&C\\ 0&0&1\end{smallmatrix}\right)\big{)}=\delta(-u)\,\psi\circ\operatorname{tr}(-\beta u^{-1}\varpi_{F}CB)\,\mathcal{T}_{1}(t_{1})\quad\text{ for }\left(\begin{smallmatrix}1&B&\varpi_{F}^{-1}u\\ 0&I_{2N}&C\\ 0&0&1\end{smallmatrix}\right)\in\Gamma.\]
We continue with the explicit element \(\beta\) given in §2.2; using the relations \(c_{i}=\pm x_{2N-i+1}\) above (in particular \(c_{2N}=-x_{1}\)) and the fact that \(\psi\) is trivial on \(\mathfrak{p}_{F}\), we get
\[\psi\circ\operatorname{tr}(-\beta u^{-1}\varpi_{F}CB) =\psi(u^{-1}\varpi_{F}(2c_{1}x_{2}+\cdots+2c_{N-1}x_{N}+c_{N}x_{N+ 1}-\varpi_{F}^{-1}c_{2N}x_{1}))\] \[=\psi(u^{-1}x_{1}^{2}).\]
We need \(b_{1}\) up to a positive constant, which we write as \(\equiv\):
\[b_{1}\equiv\mathcal{T}_{1}(t_{1})\sum_{u\in k_{F}^{\times}}\delta(-u)\sum_{x\in k _{F}}\psi(u^{-1}x^{2})\equiv\mathcal{T}_{1}(t_{1})\delta(-1)G(\delta,\psi),\]
where \(G(\delta,\psi)\) is the Gauss sum \(\sum_{u\in k_{F}^{\times}}\delta(u)\psi(u)\), known to be the product of \(q^{\frac{1}{2}}\) and a square root of \((-1)^{\frac{q-1}{2}}\), namely
\[\xi(\delta,\psi)=\frac{G(\delta,\psi)}{|G(\delta,\psi)|},\qquad\xi(\delta,\psi) ^{2}=(-1)^{\frac{q-1}{2}}. \tag{3.8}\]
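For concreteness, take \(q=3\), let \(\delta\) be the non-trivial quadratic character of \(k_{F}^{\times}\) and choose \(\psi(x)=e^{2\pi ix/3}\) on \(k_{F}\) (this particular choice of \(\psi\) is made only for the purpose of this illustration); then

\[G(\delta,\psi)=\psi(1)-\psi(2)=e^{2\pi i/3}-e^{4\pi i/3}=i\sqrt{3},\qquad\xi(\delta,\psi)=i,\qquad\xi(\delta,\psi)^{2}=-1=(-1)^{\frac{q-1}{2}}.\]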
**Proposition 3.9**.: _The normalization of \(\mathcal{T}_{1}\) such that the coefficients \(b_{1}\) and \(c_{1}\) of the quadratic relation that it satisfies are positive is given, up to a positive scalar, by_
\[\mathcal{T}_{1}(t_{1})=\xi(\delta,\psi).\]
Indeed, with this normalization the coefficient \(c_{1}\) is also positive, as stated in Lemma 3.7, which stipulated that, up to a positive constant, \(\mathcal{T}_{1}(t_{1})\) was a square root of \(\delta(-1)\). The exact square root is specified by the Gauss sum \(G(\delta,\psi)\).
As for the last checks:
* \(-\lambda\varpi_{F}^{-1}D_{1}Z_{2}+gD_{2}=0\iff z^{-1}C+D_{2}-z^{-1}CBD_{2}=0\), which holds since \(D_{2}=-H_{2}^{\tau}=-z^{-1}B^{\tau}=-z^{-1}C\) and \(BC=0\).
* We have \(Y=H_{1}g+H_{2}=H_{1}-z^{-1}H_{1}CB+z^{-1}B\). Since \(H_{1}^{\tau}=-D_{1}=-z^{-1}C=-z^{-1}B^{\tau}\) we have \(H_{1}=-z^{-1}B\) and \(Y=0\) follows. Then \(y=-z^{-1}+H_{1}gD_{2}+z^{-1}\) is \(0\) for the same reasons.
#### 3.3.3. Computation of the coefficient \(b_{0}\)
As announced it is done in the same way with the roles of \(U\) and \(U^{-}\) being exchanged. We just write down the relevant facts.
\[b_{0}=\sum_{j\in(J_{P}\cap U^{-})\setminus\Gamma^{\prime}}\mathcal{T}_{0}(j)\qquad\text{ where }\Gamma^{\prime}=t_{0}(J_{P}\cap U)t_{0}^{-1}\cap J_{P}t_{0}J_{P}\]
We have \(J_{P}t_{0}J_{P}=(J_{P}\cap U)(\mathfrak{o}_{F}^{\times}\times ZI_{2N}(1))t_{0 }(J_{P}\cap U)\), and
\[\Gamma^{\prime}=\left\{\left(\begin{smallmatrix}1&0&0\\ D&I_{2N}&0\\ u&H&1\end{smallmatrix}\right)\in\operatorname{Sp}(2N+2,F)\mid u\in\mathfrak{o}_{F}^{\times},H\in(\mathfrak{p}_{F},\cdots,\mathfrak{p}_{F},\mathfrak{o}_{F},\cdots,\mathfrak{o}_{F})=\mathfrak{p}_{F}^{N}\times\mathfrak{o}_{F}^{N}\right\},\]
\[\mathcal{T}_{0}(\left(\begin{smallmatrix}1&0&0\\ D&I_{2N}&0\\ u&H&1\end{smallmatrix}\right))=\delta(-u)\,\psi\circ\operatorname{tr}(-\beta u^{-1}DH)\,\mathcal{T}_{0}(t_{0})\quad\text{ for }\left(\begin{smallmatrix}1&0&0\\ D&I_{2N}&0\\ u&H&1\end{smallmatrix}\right)\in\Gamma^{\prime}.\]
Now \(D\) and \(H\) are related by \(D=-H^{\tau}\) so that \(\psi\circ\operatorname{tr}(-\beta u^{-1}DH)=\psi(-u^{-1}d_{N}^{2})\), and
\[b_{0}\equiv\mathcal{T}_{0}(t_{0})\sum_{u\in k_{F}^{\times}}\delta(-u)\sum_{x \in k_{F}}\psi(-u^{-1}x^{2})\equiv\mathcal{T}_{0}(t_{0})G(\delta,\psi).\]
**Proposition 3.10**.: _The normalization of \(\mathcal{T}_{0}\) such that the coefficients \(b_{0}\) and \(c_{0}\) of the quadratic relation that it satisfies are positive is given, up to a positive scalar, by_
\[\mathcal{T}_{0}(t_{0})=\delta(-1)\xi(\delta,\psi).\]
### Conclusion
Putting together (3.4) and the last two Propositions we obtain
\[\epsilon_{1}(\varpi_{F})=(-1)^{\frac{q-1}{2}}\xi(\delta,\psi)^{-2}=1. \tag{3.11}\]
In other words, the Jordan set of \(\pi=\mathrm{c}\text{-}\mathrm{Ind}_{ZI_{2N}(1)}^{G}\,\chi\,\otimes\,\psi_{\beta}\) relative to the trivial endoclass is \((\epsilon_{1},1)\) where \(\epsilon_{1}\) is the ramified quadratic character such that \(\epsilon_{1}(\varpi_{F})=1\). In terms of \(\beta\), from §2.2 we replace \(\varpi_{F}^{-1}\) by \((-1)^{N}\beta^{2N}=(-1)^{N+1}N_{E/F}(\beta)\) and get: \(\epsilon_{1}((-1)^{N+1}N_{E/F}(\beta))=1\), or
\[\epsilon_{1}(N_{E/F}(\beta))=(-1)^{(N+1)\frac{q-1}{2}}.\]
We remark that the result does not depend on \(\chi\), and conclude:
**Proposition 3.12**.: _The Jordan set of \(\pi=\mathrm{c}\text{-}\mathrm{Ind}_{ZI_{2N}(1)}^{G}\,\chi\,\otimes\,\psi_{\beta}\) relative to the trivial endoclass is \((\epsilon_{1},1)\) where_
* \(\epsilon_{1}\) _is the ramified quadratic character that is trivial on the norms of_ \(F[\beta]\) _if_ \(\frac{q-1}{2}\) _is even or if_ \(N\) _is odd;_
* \(\epsilon_{1}\) _is the ramified quadratic character that is non-trivial on the norms of_ \(F[\beta]\) _if_ \(N\) _is even and_ \(\frac{q-1}{2}\) _is odd._
## 4. The simple cuspidal of \(\mathrm{GL}(2N,F)\)
We now apply the same method to determine the simple cuspidal of \(\mathrm{GL}(2N,F)\) that gives a reducibility with real part 1. We know from [5] the simple character underlying this representation: the square of the self-dual simple character extending \(\psi_{\beta}\). For the level zero part, section 5 in [5] would give the result, but we don't use it here. We compute the generators of the Hecke algebra in order to describe the simple cuspidal completely.
### The simple character and the cover
We start again with the symplectic space \((V,h)=(F^{2N},h_{2N})\) from section 2. We work in the symplectic space \(X=V\oplus V\oplus V\) equipped with the following symplectic form:
\[\mathbf{h}(\left(\begin{smallmatrix}a\\ b\\ c\end{smallmatrix}\right),\left(\begin{smallmatrix}a^{\prime}\\ b^{\prime}\\ c^{\prime}\end{smallmatrix}\right))=h(a,c^{\prime})+h(b,b^{\prime})+h(c,a^{ \prime})\qquad(a,b,c,a^{\prime},b^{\prime},c^{\prime}\in V).\]
We let \(W=\left\{\left(\begin{smallmatrix}a\\ 0\\ 0\end{smallmatrix}\right)\mid a\in V\right\}\) and \(W^{*}=\left\{\left(\begin{smallmatrix}0\\ 0\\ c\end{smallmatrix}\right)\mid c\in V\right\}\), and we make the identification \(V=\left\{\left(\begin{smallmatrix}0\\ b\\ 0\end{smallmatrix}\right)\mid b\in V\right\}\): this is the symplectic space on which our original group \(G=\mathrm{Sp}(2N,F)\) operates. For an endomorphism \(Z\) of \(V\) we denote by \({}^{a}Z\) the adjoint endomorphism, as described in §2.1. For an endomorphism \(Z\) of \(X\) we denote by \(Z\mapsto{}^{A}\!Z\) the adjoint map with respect to \(\mathbf{h}\). We have
\[{}^{A}\!\left(\begin{smallmatrix}g_{1}&&\\ &g_{2}&\\ &&g_{3}\end{smallmatrix}\right)=\left(\begin{smallmatrix}{}^{a}g_{3}&&\\ &{}^{a}g_{2}&\\ &&{}^{a}g_{1}\end{smallmatrix}\right);\qquad{}^{A}\!\left(\begin{smallmatrix}&&g_{1}\\ &g_{2}&\\ g_{3}&&\end{smallmatrix}\right)=\left(\begin{smallmatrix}&&{}^{a}g_{1}\\ &{}^{a}g_{2}&\\ {}^{a}g_{3}&&\end{smallmatrix}\right);\]
\[{}^{A}\!\left(\begin{smallmatrix}I&X&Z\\ &I&Y\\ &&I\end{smallmatrix}\right)=\left(\begin{smallmatrix}I&{}^{a}Y&{}^{a}Z\\ &I&{}^{a}X\\ &&I\end{smallmatrix}\right).\]
We let \(H=\operatorname{Sp}(X)\simeq\operatorname{Sp}(6N,F)\) and we consider the embedding
\[\operatorname{GL}(W)\times G \longrightarrow H\] \[(x,g) \longmapsto \mathbf{m}\,(x,g)=\begin{pmatrix}x&&\\ &g&\\ &&{}^{a}x^{-1}\end{pmatrix},\qquad x\in\operatorname{GL}(W),\ g\in G.\]
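With the adjoint formulas recalled above, one checks at once that the image of \(\mathbf{m}\) does lie in \(H\); we record this routine verification for convenience:

\[{}^{A}\mathbf{m}(x,g)\,\mathbf{m}(x,g)=\left(\begin{smallmatrix}{}^{a}({}^{a}x^{-1})&&\\ &{}^{a}g&\\ &&{}^{a}x\end{smallmatrix}\right)\left(\begin{smallmatrix}x&&\\ &g&\\ &&{}^{a}x^{-1}\end{smallmatrix}\right)=\left(\begin{smallmatrix}1&&\\ &{}^{a}g\,g&\\ &&1\end{smallmatrix}\right),\]

which is the identity exactly when \({}^{a}g\,g=1\), that is, when \(g\in\operatorname{Sp}(V)\).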
The image of \(\mathbf{m}\) is a Levi subgroup \(M\) of \(H\). We let \(P\) be the parabolic subgroup of \(H\) stabilizing the flag \(\{0\}\subset W\subset W\oplus V\subset X\) and we write \(P=MU\), with \(U\) the unipotent radical of \(P\), and \(P^{-}=MU^{-}\) for the opposite parabolic with respect to \(M\).
Each subspace \(W\), \(V\), \(W^{*}\) of \(X\) bears a natural identification coordinate-wise with \(F^{2N}\) through which we identify \(\Lambda_{2N}\) to lattice sequences \(\Lambda_{W}\), \(\Lambda_{V}\), \(\Lambda_{W^{*}}\). Note that \(\Lambda_{W^{*}}\) is also the dual lattice sequence to \(\Lambda_{W}\) when identifying \(W^{*}\) to the dual of \(W\) through \(\mathbf{h}\), i.e.
\[\Lambda_{W^{*}}(t)=\left\{z\in W^{*}\ |\ \forall x\in\Lambda_{W}(1-t)\ \ h(z,x)\in\mathfrak{p}_{F}\right\}.\]
We recall our type in \(V\):
\[(J_{V},\lambda_{V})=(J(\beta,\Lambda_{V}),\chi\otimes\psi_{\beta}),\ \text{with}\ J(\beta,\Lambda_{V})=ZI_{2N}(1),\]
and consider the following data in \(W\):
* the simple and maximal stratum \((\Lambda_{W},2,0,2\beta)\),
* the associated compact open subgroups \(\tilde{J}^{1}(\beta,\Lambda_{W})\) and \(\tilde{J}(\beta,\Lambda_{W})=\mathfrak{o}_{F}^{\times}\tilde{J}^{1}(\beta, \Lambda_{W})\);
* the simple character \(\psi_{2\beta}\) of \(\tilde{J}^{1}(\beta,\Lambda_{W})\);
* a character \(\delta\) of \(\mathfrak{o}_{F}^{\times}\) with trivial square;
* the self-dual type \((\tilde{J}_{W},\tilde{\lambda}_{W})=(\tilde{J}(\beta,\Lambda_{W}),\delta \otimes\psi_{2\beta})\) in \(\operatorname{GL}(W)\).
We form the type \((J_{M}=\tilde{J}_{W}\times J_{V},\lambda_{M}=\tilde{\lambda}_{W}\otimes \lambda_{V})\) in \(M\).
We need a lattice sequence in \(X\) which, together with \(\beta_{X}=\beta\oplus\beta\oplus\beta\), will form a skew-simple stratum underlying an \(H\)-cover of \((J_{M},\lambda_{M})\). The attached groups \(H^{1}\), \(J^{1}\) and \(J\) must have Iwahori decomposition with respect to \(P=MU\). This will hold if the decomposition \(X=W\oplus V\oplus W^{*}\) is properly subordinate to the stratum [26, Corollaries 5.10, 5.11], i.e.
* the lattices in the sequence are direct sums of lattices in \(W,V,W^{*}\);
* from one lattice to the next, at most one of the three parts changes.
Using the definitions in [11, §2], we let
\[\Lambda_{X}=(3\Lambda_{W}-2)\oplus(3\Lambda_{V})\oplus(3\Lambda_{W*}+2)\]
where \((3\Lambda_{W}-2)(t)=3\Lambda_{W}(t-2)=\Lambda_{W}([\frac{(t-2)+2}{3}])\), and so on. The period of \(\Lambda_{X}\) is \(12N\). The dual of \(\Lambda_{X}(t)\) is (with \(1-[\frac{x}{3}]=[\frac{1-x+4}{3}]\)):
\[\Lambda_{W}(1-[\frac{(t+2)+2}{3}])\oplus\Lambda_{V}(1-[\frac{t+2 }{3}])\oplus\Lambda_{W*}(1-[\frac{(t-2)+2}{3}])\] \[\qquad=\Lambda_{W}([\frac{(1-t-2)+2}{3}])\oplus\Lambda_{V}([ \frac{1-t+2}{3}])\oplus\Lambda_{W*}([\frac{(1-t+2)+2}{3}])=\Lambda_{X}(1-t)\]
so \(\Lambda_{X}\) has duality invariant \(1\). The jumps of the sequence in \(W\), resp. \(V\), resp. \(W^{*}\), occur for \(t\equiv 5\), resp. \(t\equiv 3\), resp. \(t\equiv 1\), mod \(6\). We have \(\Lambda_{X}(2t)=\Lambda_{X}(2t+1)\) for any \(t\in\mathbb{Z}\), which implies \(\mathfrak{A}_{2t-1}(\Lambda_{X})=\mathfrak{A}_{2t}(\Lambda_{X})\) for \(t\geqslant 1\).
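For instance, the first of these congruences can be verified directly from the convention above, the other two being similar:

\[(3\Lambda_{W}-2)(t)\neq(3\Lambda_{W}-2)(t+1)\iff\left[\tfrac{t+1}{3}\right]=\left[\tfrac{t}{3}\right]+1\ \text{ and }\ \left[\tfrac{t}{3}\right]\ \text{odd}\iff t\equiv 5\ (\mathrm{mod}\ 6),\]

since \((3\Lambda_{W}-2)(t)=\Lambda_{W}([\tfrac{t}{3}])\) and the jumps of \(\Lambda_{W}\) are the odd integers.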
We form in \(X\) the skew-simple stratum \((\Lambda_{X},6,0,\beta_{X}=\beta\oplus\beta\oplus\beta)\). We check the condition in [26, §6.2]: the decomposition \(X=W\oplus V\oplus W^{*}\) is exactly subordinate to the stratum, we have \(\Lambda_{X}(1)=\Lambda_{X}(0)\) and \(\Lambda_{X}(1)\cap W^{*}\supsetneq\Lambda_{X}(2)\cap W^{*}\). We stick to the conventions and notations of _loc.cit._ and let \(W=W^{(-1)}\), \(W^{*}=W^{(1)}\), with \(q_{1}=1\) and \(q_{-1}=-1\); our parabolic subgroup \(P\) is the same as in _loc.cit._.
We use the cover of \((J_{M},\lambda_{M})\) constructed by the third author [26, §6.2, §7.2.2]. Since \(\beta\) is minimal over \(F\) and \(\mathfrak{A}_{3}(\Lambda_{X})=\mathfrak{A}_{4}(\Lambda_{X})\), we have \(J^{1}(\Lambda_{X},\beta_{X})=H^{1}(\Lambda_{X},\beta_{X})\) [25, §3.1]. The skew-simple character \(\psi_{\beta_{X}}\) of \(J^{1}_{X}=H^{1}_{X}=H^{1}(\Lambda_{X},\beta_{X})\) restricts through \(\mathbf{m}\,\) to the character \(\psi_{\beta_{X}}\circ\mathbf{m}\ =\psi_{2\beta}\otimes\psi_{\beta}\) of \(\tilde{J}^{1}(\beta,\Lambda_{W})\times J^{1}(\beta,\Lambda_{2N})\) and is trivial on the intersections with \(U\) and \(U^{-}\). We have
\[J_{X}:=J(\Lambda_{X},\beta_{X})=(H^{1}_{X}\cap U^{-})\ \mathbf{m}(\tilde{J}( \beta,\Lambda_{W})\times J(\beta,\Lambda_{V}))\ (H^{1}_{X}\cap U).\]
We get an \(H\)-cover \((J_{X},\lambda_{X})\) of \((J_{M},\lambda_{M})\) by letting \(\lambda_{X}\) be trivial on \(U\), \(U^{-}\) and putting \(\lambda_{X}\circ\mathbf{m}\,=\lambda_{M}\).
### The Hecke algebra
We turn to \(\mathcal{H}_{X}=\mathcal{H}(\operatorname{Sp}(X),\lambda_{X})\). The normalizer of \(M\) in \(H\) is the union of two \(M\)-cosets, the trivial coset and the coset of the elements \(s_{1}\) and \(s_{1}^{\varpi}\) from [26, §6.2]:
\[s_{1}=w_{0}=\begin{pmatrix}0&0&I_{2N}\\ 0&I_{2N}&0\\ I_{2N}&0&0\end{pmatrix},\quad s_{1}^{\varpi}=w_{1}=\begin{pmatrix}0&0&\beta\\ 0&I_{2N}&0\\ -\beta^{-1}&0&0\end{pmatrix},\]
where we use \(\beta^{-1}\) as a uniformizing element for \(E\), in other words we let \(\beta^{-1}=\varpi_{E}\).
In [26, §7.2.2], the third author constructs self-dual lattice sequences \(\mathfrak{M}_{0}\) and \(\mathfrak{M}_{1}\), of period \(2\) over \(E\), such that \(w_{0}\) belongs to \(P(\mathfrak{M}_{0,\mathfrak{o}_{E}})\) and \(w_{1}\) belongs to \(P(\mathfrak{M}_{1,\mathfrak{o}_{E}})\). They are defined by
\[\mathfrak{M}_{0}(2k+r)=\begin{cases}\varpi_{E}^{k}\Lambda_{X}(0)\ \text{if}\ r=0,\\ \varpi_{E}^{k}\Lambda_{X}(1)\ \text{if}\ r=1,\end{cases}\qquad\mathfrak{M}_{1}(2k+r)= \begin{cases}\varpi_{E}^{k}\Lambda_{X}(-2)\ \text{if}\ r=0,\\ \varpi_{E}^{k}\Lambda_{X}(3)\ \text{if}\ r=1.\end{cases}\]
The algebra \(\mathcal{H}_{X}\) has two generators \(T_{0}\) and \(T_{1}\) of respective supports \(J_{X}w_{0}J_{X}\) and \(J_{X}w_{1}J_{X}\). Furthermore \(P_{E}(\Lambda_{X})/P_{E}^{1}(\mathfrak{M}_{i})\) is a maximal Levi subgroup of the finite reductive group \(P_{E}(\mathfrak{M}_{i})/P_{E}^{1}(\mathfrak{M}_{i})\) and there is a quadratic character \(\epsilon_{\mathfrak{M}_{i}}\) of \(P_{E}(\Lambda_{X})/P_{E}^{1}(\mathfrak{M}_{i})\), depending only on \(\mathfrak{M}_{i}\), \(M\), \(U\), such that \(T_{i}\) satisfies a quadratic relation computed in
\[\mathcal{H}(P(\mathfrak{M}_{i,\mathfrak{o}_{E}})/P^{1}(\mathfrak{M}_{i,\mathfrak{o}_{E}}),\epsilon_{\mathfrak{M}_{i}}(\delta\otimes\chi)).\]
Actually we are in the situation of [5, §3.16]: the finite reductive groups obtained are \(O(2,1)(k_{F})\) and \(\operatorname{SL}(2,k_{F})\times\{\pm 1\}\). In the first one the quadratic relation is always \(T^{2}=(q-1)T+q\), the quotient of the roots is \(-q\) (i.e. \(r_{0}=1\)). In the second one, we get either
the previous relation or \(T^{2}=1\), the quotient of the roots is \(-q\) or \(-1\) (i.e. \(r_{1}=1\) or \(0\)). Reducibility at \(\pm 1\) corresponds to both relations equal to \(T^{2}=(q-1)T+q\). We will come back to this later.
Now \(w_{0}\) and \(w_{1}\) normalize \(J_{X}\cap M\) and exchange \(U\) and \(U^{-}\), and Lemma 7.11 in [26] gives
\[w_{0}(J_{X}\cap U^{-})w_{0}^{-1}\subseteq J_{X}\cap U\qquad\text{ and }\qquad w_{1}(J_{X}\cap U)w_{1}^{-1}\subseteq J_{X}\cap U^{-}\]
hence for \(w_{0}\) :
\[\begin{split}& J_{X}w_{0}J_{X}=(J_{X}^{1}\cap U)w_{0}J_{M}(J_{X}^{1}\cap U),\\ & J_{X}\cap w_{0}J_{X}w_{0}^{-1}=(J_{X}\cap U^{-})J_{M}w_{0}(J_{X}\cap U^{-})w_{0}^{-1},\\ &\Omega_{0}:=J_{X}/J_{X}\cap w_{0}J_{X}w_{0}^{-1}\simeq J_{X}\cap U/w_{0}(J_{X}\cap U^{-})w_{0}^{-1}\simeq J_{X}^{1}\cap U/w_{0}(H_{X}^{1}\cap U^{-})w_{0}^{-1};\end{split} \tag{4.1}\]
and for \(w_{1}\) :
\[\begin{split}& J_{X}w_{1}J_{X}=(H_{X}^{1}\cap U^{-})w_{1}J_{M}(H_{X} ^{1}\cap U^{-}),\\ & J_{X}\cap w_{1}J_{X}w_{1}^{-1}=w_{1}(J_{X}\cap U)w_{1}^{-1}J_{ M}(J_{X}\cap U),\\ &\Omega_{1}:=J_{X}/J_{X}\cap w_{1}J_{X}w_{1}^{-1}\simeq J_{X} \cap U^{-}/w_{1}(J_{X}\cap U)w_{1}^{-1}\simeq H_{X}^{1}\cap U^{-}/w_{1}(J_{X} ^{1}\cap U)w_{1}^{-1}.\end{split} \tag{4.2}\]
We already know the possible forms of the quadratic relations satisfied by the generators, up to normalization. What we have to do is:
* when two forms are possible, determine which one is obtained in terms of \(\chi\) and \(\delta\);
* in other words, choose the intertwining operator \(T_{i}(w_{i})\) up to a positive scalar.
Then Theorem 1.10 and the Corollary that follows will give us the result.
We proceed, following the framework in [3, §1.d]. The relations are \(T_{i}^{2}=b_{i}T_{i}+c_{i}\mathbf{1}\) where the scalars \(b_{i}\) and \(c_{i}\) are given by the following formulae (simpler than in [3] since the space of \(\lambda_{X}\) has dimension 1):
\[\begin{split} c_{i}&\ =\ |\Omega_{i}|\ T_{i}(w_{i})\ T_{i}(w_{i}^{-1}),\\ b_{i}&\ =\ \sum_{x\in\Omega_{i}}\ T_{i}(w_{i}^{-1}x^{-1 }w_{i})=\ \sum_{x\in Y_{i}}\ T_{i}(x),\end{split} \tag{4.3}\]
where we let \(Y_{0}=(H_{X}^{1}\cap U^{-})\backslash w_{0}^{-1}(J_{X}^{1}\cap U)w_{0}\) and \(Y_{1}=(J_{X}^{1}\cap U)\backslash w_{1}^{-1}(H_{X}^{1}\cap U^{-})w_{1}\).
In the expression of \(b_{i}\), the support of the sum on \(Y_{i}\) is the intersection of (a system of representatives of) \(Y_{i}\) with the support of \(T_{i}\). From the uniqueness of the Iwahori decomposition, the decomposition of some element as a product in \(Uw_{i}MU\) or \(U^{-}w_{i}MU^{-}\) is unique (same reason: \(P\cap U^{-}=\{1\}\)). Let \(x\in Y_{0}\cap\operatorname{Supp}T_{0}\) and write \(x=uw_{0}d_{0}(x)u^{\prime}\) with \(u,u^{\prime}\in J_{X}^{1}\cap U\) and \(d_{0}(x)\in J_{M}\), and similarly for \(Y_{1}\) mutatis mutandis, consequently:
\[\begin{split} b_{i}&\ =\ T_{i}(w_{i})\sum_{x\in Y_{i} \cap\operatorname{supp}\ T_{i}}\ \lambda_{X}(d_{i}(x)).\end{split} \tag{4.4}\]
### Relevant matrix decompositions
We have to solve equations such as
\[\begin{pmatrix}I&&\\ D&I&\\ Z&H&I\end{pmatrix}=\begin{pmatrix}I&B_{1}&E_{1}\\ I&F_{1}\\ &&I\end{pmatrix}\begin{pmatrix}&I\\ I&\\ I&&\end{pmatrix}\begin{pmatrix}m&&\\ &g&\\ &&{}_{a}m^{-1}\end{pmatrix}\begin{pmatrix}I&B_{2}&E_{2}\\ &I&F_{2}\\ &&I\end{pmatrix} \tag{4.5}\]
and
\[\begin{pmatrix}I&H&Z\\ &I&D\\ &&I\end{pmatrix}=\begin{pmatrix}I&&\\ F_{1}&I&\\ E_{1}&B_{1}&I\end{pmatrix}\begin{pmatrix}&&\beta\\ &&I&\\ -\beta^{-1}&&\end{pmatrix}\begin{pmatrix}m&&\\ &g&\\ &&{}_{a}m^{-1}\end{pmatrix}\begin{pmatrix}I&&\\ F_{2}&I&\\ E_{2}&B_{2}&I\end{pmatrix} \tag{4.6}\]
in order to determine the intersections \(Y_{i}\cap\operatorname{Supp}T_{i}\), \(i=0,1\). By uniqueness of the Iwahori decomposition, if the LHS belongs to the symplectic group, so do the elements in the RHS. We want the LHS to belong to \(Y_{i}\) and the elements in the RHS to belong to the relevant subgroups in the decomposition of \(J_{X}w_{i}J_{X}\), in particular we need \(m\in\tilde{J}_{W}\), \(g\in J_{V}\).
(We remark that these equations are the ones considered by Shahidi in [24], for orthogonal groups. They actually hold for \(\operatorname{GL}(N^{\prime})\times\operatorname{Sp}(2N)\), as do the solutions below. Shahidi studies the relationship between \(m\) and \(g\) in (4.5): \(m\) is almost \(\,{}^{a}Z^{-1}\) or \(Z\), and \(g\) is related to the inverse of the "norm" of \(m\), namely \(-m^{-1}\,{}^{a}m\).)
We recall that the adjoint of \(\begin{pmatrix}I&&\\ D&I&\\ Z&H&I\end{pmatrix}\) is \(\begin{pmatrix}I&&\\ {}^{a}H&I&\\ {}^{a}Z&{}^{a}D&I\end{pmatrix}\), so for such a matrix, belonging to \(\operatorname{Sp}(X)\) amounts to \(H=-\,{}^{a}D\) and \(Z+\,{}^{a}Z+\,{}^{a}DD=0\).
To facilitate further checks, we expand the product on the RHS of (4.5):
\[\begin{pmatrix}E_{1}m&E_{1}mB_{2}+B_{1}g&E_{1}mE_{2}+B_{1}gF_{2}+\,{}^{a}m^{-1}\\ F_{1}m&F_{1}mB_{2}+g&F_{1}mE_{2}+gF_{2}\\ m&mB_{2}&mE_{2}\end{pmatrix}\]
We see that (4.5) has a solution if and only if \(Z\) is invertible, given by
\[\begin{split} m&=\,Z;\quad B_{2}=-Z^{-1}\,{}^{a}D;\quad E_{2}=Z ^{-1};\quad F_{1}=DZ^{-1};\quad E_{1}=Z^{-1};\\ g&=I-(DZ^{-1})Z(-Z^{-1}\,{}^{a}D)=I+DZ^{-1}\,{}^{a}D.\end{split} \tag{4.7}\]
As in [24, Corollary 3.2] we have \(gD=D+DZ^{-1}\,{}^{a}DD=D-DZ^{-1}(Z+\,{}^{a}Z)=-DZ^{-1}\,{}^{a}Z\) so, when \(D\) is invertible:
\[g=-DZ^{-1}\,{}^{a}ZD^{-1}. \tag{4.8}\]
Similarly, the adjoint of \(\begin{pmatrix}I&H&Z\\ &I&D\\ &&I\end{pmatrix}\) is \(\begin{pmatrix}I&{}^{a}D&{}^{a}Z\\ &I&{}^{a}H\\ &&I\end{pmatrix}\) so belonging to \(\operatorname{Sp}(X)\) amounts to \(H=-\,{}^{a}D\) and \(Z+\,{}^{a}Z+\,{}^{a}DD=0\). The product on the RHS of (4.6) is:
\[\begin{pmatrix}\beta\,{}^{a}m^{-1}E_{2}&\beta\,{}^{a}m^{-1}B_{2}&\beta\,{}^{a}m^{-1}\\ gF_{2}+F_{1}\beta\,{}^{a}m^{-1}E_{2}&g+F_{1}\beta\,{}^{a}m^{-1}B_{2}&F_{1}\beta\,{}^{a}m^{-1}\\ -\beta^{-1}m+B_{1}gF_{2}+E_{1}\beta\,{}^{a}m^{-1}E_{2}&B_{1}g+E_{1}\beta\,{}^{a}m^{-1}B_{2}&E_{1}\beta\,{}^{a}m^{-1}\end{pmatrix}\]
so the general solution for (4.6) is given, for an invertible \(Z\), by
\[{}^{a}\!m^{-1} =\beta^{-1}\,Z;\quad B_{2}=-Z^{-1\,a}D;\quad E_{2}=Z^{-1};\quad F_{ 1}=DZ^{-1};\quad E_{1}=Z^{-1};\] \[g =I-(DZ^{-1})Z(-Z^{-1\,a}D)=I+DZ^{-1\,a}D. \tag{4.9}\]
Again \(gD=D+DZ^{-1\,a}\,DD=D+DZ^{-1}(-Z-\,^{a}Z)=-DZ^{-1\,a}\,Z\) and, when \(D\) is invertible:
\[g=-DZ^{-1\,a}ZD^{-1}. \tag{4.10}\]
To proceed, we must describe the blocks in \(J^{1}_{X}\cap U\) and other relevant subgroups. This is done in [4, Proposition 1] (for a lattice chain, but the lattice sequence \(\Lambda_{X}\) is obtained by homothety-translation from the one in [4] and has the same \(\mathfrak{A}_{1}\), \(\tilde{H}^{1}\) and \(\tilde{J}^{1}\)). Here we have \(t=3\) and an especially simple situation since \(\mathfrak{H}^{1}=\mathfrak{H}^{1}(\beta,\Lambda_{2N})=\mathfrak{J}^{1}(\beta,\Lambda_{2N})=\mathfrak{A}_{1}(\Lambda_{2N}):=\mathfrak{A}_{1}\). So:
\[\tilde{J}^{1}_{X}=\tilde{H}^{1}_{X}=I+\begin{pmatrix}\mathfrak{A}_{1}& \mathfrak{o}_{E}+\mathfrak{A}_{1}&\varpi_{E}^{-1}\mathfrak{A}_{1}\\ \mathfrak{A}_{1}&\mathfrak{A}_{1}&\mathfrak{o}_{E}+\mathfrak{A}_{1}\\ \mathfrak{p}_{E}+\varpi_{E}\mathfrak{A}_{1}&\mathfrak{A}_{1}&\mathfrak{A}_{1} \end{pmatrix}=I+\begin{pmatrix}\mathfrak{A}_{1}&\mathfrak{o}_{F}+\mathfrak{A}_ {1}&\mathfrak{A}_{0}\\ \mathfrak{A}_{1}&\mathfrak{A}_{1}&\mathfrak{o}_{F}+\mathfrak{A}_{1}\\ \mathfrak{p}_{E}+\varpi_{E}\mathfrak{A}_{1}&\mathfrak{A}_{1}&\mathfrak{A}_{1} \end{pmatrix}. \tag{4.11}\]
### Computation of \(T_{0}\)
We are looking for solutions (4.7) of (4.5) such that
* \(x=\left(\begin{smallmatrix}I&\\ D&I\\ Z&H&I\end{smallmatrix}\right)\) is in \(w_{0}^{-1}(J^{1}_{X}\cap U)w_{0}\), i.e. \(Z\in\mathfrak{A}_{0}\) and \(D\in\mathfrak{o}_{F}+\mathfrak{A}_{1}\), modulo \(H^{1}_{X}\cap U^{-}\);
* \(x\) belongs to \(J_{X}w_{0}J_{X}\), namely \(B_{1},B_{2}\in\mathfrak{o}_{E}+\mathfrak{A}_{1}\), \(E_{1},E_{2}\in\mathfrak{A}_{0}\), \(m\in\tilde{J}_{W}\) and \(g\in J_{V}\).
The first condition for existence is \(Z\in\tilde{J}_{W}\). Then other constraints are obviously satisfied except the one for \(g\). But since \(\tilde{J}_{W}=\mathfrak{o}_{F}^{\times}+\mathfrak{A}_{1}\), the condition \(Z+\,^{a}Z+\,^{a}DD=0\) implies \({}^{a}\!DD\in\mathfrak{o}_{F}^{\times}+\mathfrak{A}_{1}\), which, added to \(D\in\mathfrak{o}_{F}+\mathfrak{A}_{1}\), implies \(D\in\tilde{J}_{W}\). Then \(g=-DZ^{-1\,a}ZD^{-1}\) belongs to \(\tilde{J}_{W}\cap\operatorname{Sp}(V)=J_{V}\).
We use (4.4) with notation in (4.5) and (4.7). The general term in the sum is
\[\lambda_{X}(d_{0}(x))=(\delta\otimes\psi_{2\beta})(Z)\ (\chi\otimes\psi_{ \beta})(-DZ^{-1\,a}ZD^{-1}).\]
We write \(Z=a(1+z)\) and \(D=u(1+d)\) with \(a,u\in\mathfrak{o}_{F}^{\times}\) and \(z,d\in\mathfrak{A}_{1}\) and get:
\[\lambda_{X}(d_{0}(x))=\delta(a)\psi_{2\beta}(1+z)\ \chi(-1)\psi_{\beta}((1+d)(1+z)^{-1}( 1+\,^{a}z)(1+d)^{-1})=\delta(a)\chi(-1)\]
since \(\psi\circ\operatorname{tr}(\beta\,^{a}\!z)=\psi\circ\operatorname{tr}(-\beta z)\). Now the sum in (4.4) is on elements \(Z,D\in\tilde{J}_{W}\) with \(Z+\,^{a}\!Z+\,^{a}\!DD=0\), or equivalently on \(a,u\in\mathfrak{o}_{F}^{\times}\), \(d,z\in\mathfrak{A}_{1}\), such that \(2a+a(z+\,^{a}\!z)+u^{2}+u^{2}(d+\,^{a}\!d)+u^{2}\,^{a}\!dd=0\), in particular \(2a+u^{2}\equiv 0\) mod \(\mathfrak{p}_{F}\). Moreover, for each \(a,u\) satisfying this congruence, the number of pairs \((d,z)\) satisfying the conditions is constant, independent of \(a,u\).
\[b_{0}\equiv T_{0}(w_{0})\sum_{\begin{subarray}{c}a,u\in k_{F}^{\times}\\ 2a+u^{2}=0\end{subarray}}\ \delta(a)\chi(-1)\equiv T_{0}(w_{0})\chi(-1)\sum_{u\in k_{F} ^{\times}}\ \delta(-u^{2}/2).\]
Since \(\delta\) is trivial on squares we have \(b_{0}\equiv T_{0}(w_{0})\chi(-1)\delta(-2)\). We know there is a normalisation of \(T_{0}\) such that \(b_{0}=q-1\) and \(c_{0}=q\). Since \(c_{0}\ =\ |\Omega_{0}|\ T_{0}(w_{0})^{2}\), this normalisation satisfies
\[T_{0}(w_{0})\equiv\chi(-1)\delta(-2). \tag{4.12}\]
### Computation of \(T_{1}\)
We look for solutions (4.9) with:
* \(x=\left(\begin{smallmatrix}I&H&Z\\ I&D\\ &I\end{smallmatrix}\right)\) is in \(w_{1}^{-1}(H_{X}^{1}\cap U^{-})w_{1}\), that is \(Z\in\beta(\mathfrak{o}_{F}+\mathfrak{A}_{1})\) and \(D\in\beta\mathfrak{A}_{1}\), mod \(J_{X}^{1}\cap U\) ;
* \(x\) belongs to \(J_{X}w_{1}J_{X}\), that is \(B_{1},B_{2}\in\mathfrak{A}_{1}\), \(E_{1},E_{2}\in\beta^{-1}(\mathfrak{o}_{F}+\mathfrak{A}_{1})\), \(m\in\tilde{J}_{W}\) and \(g\in J_{V}\).
The first condition is \(m=-\beta\,^{a}Z^{-1}\in\tilde{J}_{W}\), that is \(Z\in\beta\tilde{J}_{W}\). Then the other constraints are obviously satisfied, except the one for \(g\) that we postpone. We recall that \(\varpi_{E}=\beta^{-1}\).
The summation in (4.4) is over the \((J_{X}^{1}\cap U)\)-cosets of the intersection of \(w_{1}^{-1}(H_{X}^{1}\cap U^{-})w_{1}\) with the support of \(T_{1}\). An element of \(w_{1}^{-1}(H_{X}^{1}\cap U^{-})w_{1}\) can be written \(\left(\begin{smallmatrix}I&-\beta R&-\beta U\beta\\ I&S\beta\\ &I\end{smallmatrix}\right)\) with \(U\in\mathfrak{p}_{E}+\varpi_{E}\mathfrak{A}_{1}\) and \(S\in\mathfrak{A}_{1}\), that is \(\left(\begin{smallmatrix}I&H&\beta z+t\\ I&D\\ &I\end{smallmatrix}\right)\) with \(z\in\mathfrak{o}_{E}\), \(t\in\mathfrak{A}_{0}\), \(D\in\mathfrak{A}_{0}\). The intersection with \(J_{X}w_{1}J_{X}\) corresponds to \(z\in\mathfrak{o}_{E}^{\times}\). We obtain a system of representatives of the quotient \(Y_{1}\) as follows:
\(\left(\begin{array}{ccc}I&-\,^{a}D&\varpi_{E}^{-1}z-\frac{1}{2}\,^{a}DD\\ &I&D\\ &&I\end{array}\right)\), with \(z\in k_{F}^{\times}\) and \(D\) in a system of representatives \(\mathfrak{R}\) that we detail later. We get
\(m=\,^{a}[\varpi_{E}(\varpi_{E}^{-1}z-\frac{1}{2}\,^{a}DD)]^{-1}=\,z^{-1}\,^{a} (1-\frac{1}{2}\varpi_{E}z^{-1}\,^{a}DD)^{-1}\) and
\(g=1+D(1-\frac{1}{2}\varpi_{E}z^{-1}\,^{a}DD)^{-1}z^{-1}\varpi_{E}\,^{a}D\equiv 1+ Dz^{-1}\varpi_{E}\,^{a}D\) modulo \(\mathfrak{A}_{3}\) (recall that \(\Lambda_{V}\) has period 2 over \(E\)).
A term in the sum (4.4) can be computed as follows:
\[\tilde{\lambda}_{W}(z^{-1}\,^{a}(1-\frac{1}{2}\varpi_{E}z^{-1}\, ^{a}DD)^{-1})\,\otimes\lambda_{V}(1+Dz^{-1}\varpi_{E}\,^{a}D)\] \[=\ \delta(z^{-1})\,\psi_{2\beta}(\,^{a}(1-\frac{1}{2}\varpi_{E}z^{- 1}\,^{a}DD)^{-1})\,\psi_{\beta}(1+Dz^{-1}\varpi_{E}\,^{a}D)\] \[=\ \delta(z)\,\psi_{2\beta}(1-\frac{1}{2}\varpi_{E}z^{-1}\,^{a}DD) \,\psi_{\beta}(1+Dz^{-1}\varpi_{E}\,^{a}D)\] \[=\ \delta(z)\,\psi\circ\mathrm{tr}(2\beta(-\frac{1}{2}\varpi_{E}z ^{-1}\,^{a}DD))\psi\circ\mathrm{tr}(\beta Dz^{-1}\varpi_{E}\,^{a}D)\] \[=\ \delta(z)\,\psi\circ\mathrm{tr}(\beta z^{-1}(-\varpi_{E}\,^{a} DD+D\varpi_{E}\,^{a}D)).\]
Remember that we took \(\beta=\varpi_{E}^{-1}\) so
\[b_{1}\ =\ T_{1}(w_{1})\sum_{D\,\in\,\mathfrak{R},\,z\in k_{F}^{\times}}\ \delta(z)\,\psi\circ\operatorname{tr}(z^{-1}(-\,^{a}DD+\varpi_{E}^{-1}D\varpi_{ E}\,^{a}D)).\]
Now \(\mathfrak{R}\) is a system of representatives of \(\mathfrak{A}_{0}/(\mathfrak{o}_{E}+\mathfrak{A}_{1})\), whereas for \(D\in\mathfrak{o}_{E}\) the trace under \(\psi\) is zero. We can use the bigger quotient \(\mathfrak{A}_{0}/\mathfrak{A}_{1}\) that has dimension \(2N\) (see §2.1) and use for \(\mathfrak{R}\) the diagonal matrices \(D=\operatorname{diag}(d_{1},\ldots,d_{2N})\) with coefficients in \(\mathfrak{o}_{F}\) (mod \(\mathfrak{p}_{F}\)). Then
\[{}^{a}D=\operatorname{diag}(d_{2N},\ldots,d_{1}),\] \[\varpi_{E}^{-1}D\varpi_{E}=\operatorname{diag}(d_{2N},d_{1}, \ldots,d_{2N-1}),\] \[\operatorname{tr}(\,^{a}DD)=2(d_{1}d_{2N}+\cdots+d_{N}d_{N+1}),\] \[\operatorname{tr}(\varpi_{E}^{-1}D\varpi_{E}\,^{a}D)=d_{2N}^{2}+ d_{N}^{2}+2(d_{1}d_{2N-1}+\cdots+d_{N-1}d_{N+1}).\]
Working up to positive constant we get
\[b_{1}\ \equiv\ T_{1}(w_{1})\sum_{d_{1},\cdots,d_{2N}\,\in\,k_{F},\, z\in k_{F}^{\times}}\ \delta(z)\,\psi(z^{-1}(d_{2N}^{2}+d_{N}^{2}+2(d_{1}d_{2N-1}+\cdots+d_{N-1}d_{N+ 1})\\ -2(d_{1}d_{2N}+\cdots+d_{N}d_{N+1}))).\]
Fixing all variables except one, say \(d_{k}\) with \(k\neq N,2N\), we can factor out a sum \(\sum_{d_{k}\in k_{F}}\psi(ud_{k})\), equal to \(q\) if \(u\in\mathfrak{p}_{F}\) and to \(0\) if \(\operatorname{val}(u)=0\). So we are left with a sum with conditions \(d_{2N}=d_{2N-1}=\cdots=d_{N+1}\) and \(d_{N}=d_{N-1}=\cdots=d_{1}\) and, always up to positive constant, to:
\[\begin{split}b_{1}&\equiv\ T_{1}(w_{1})\sum_{d_{N},d_{2N}\,\in\,k_{F},\,z\in k_{F}^{\times}}\ \delta(z)\,\psi(z^{-1}(d_{2N}^{2}+d_{N}^{2}-2d_{N}d_{2N}))\\ &\equiv\ T_{1}(w_{1})\sum_{d_{N},d_{2N}\,\in\,k_{F},\,z\in k_{F}^{\times}}\ \delta(z)\,\psi(z^{-1}(d_{2N}-d_{N})^{2})\\ &\equiv\ T_{1}(w_{1})\sum_{d\,\in\,k_{F},\,z\in k_{F}^{\times}}\delta(z)\,\psi(z^{-1}d^{2}).\end{split}\]
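As a sanity check, in the smallest case \(2N=2\) (so \(N=1\)) the sum has this reduced form from the start: for \(D=\operatorname{diag}(d_{1},d_{2})\) we have

\[{}^{a}D=\varpi_{E}^{-1}D\varpi_{E}=\operatorname{diag}(d_{2},d_{1}),\qquad-\operatorname{tr}({}^{a}DD)+\operatorname{tr}(\varpi_{E}^{-1}D\varpi_{E}\,{}^{a}D)=d_{1}^{2}+d_{2}^{2}-2d_{1}d_{2}=(d_{2}-d_{1})^{2},\]

so the exponent is indeed \(z^{-1}(d_{2}-d_{1})^{2}\). This check is not needed for the sequel.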
If \(\delta\) is trivial the sum over \(z\) for a fixed \(d\) is \(q-1\) if \(d=0\) and \(-1\) if \(d\neq 0\), so \(b_{1}=0\). Therefore we have reducibility at \(1\) if and only if \(\delta\) is quadratic. If so, for a fixed \(d\), the sum in \(z\) is zero if \(d=0\), independent of \(d\) if \(d\) is non-zero. We obtain
\[b_{1}\ \equiv\ T_{1}(w_{1})\sum_{z\in k_{F}^{\times}}\ \delta(z)\,\psi(z)\equiv\ T_{1}(w_{1}) \xi(\delta,\psi).\]
where \(\xi(\delta,\psi)\) is the normalised (modulus \(1\)) Gauss sum defined in (3.8).
We know that, if \(\delta\) is quadratic, there is a normalisation of \(T_{1}\) such that \(b_{1}=q-1\) and \(c_{1}=q\). Since \(c_{1}\ =\ |\Omega_{1}|\ \delta(-1)T_{1}(w_{1})^{2}\) this normalisation satisfies
\[T_{1}(w_{1})\equiv\xi(\delta,\psi)^{-1}. \tag{4.13}\]
### The answer
We now fix \(\delta\) as the (non-trivial) quadratic character of \(\mathfrak{o}_{F}^{\times}\). The cuspidal type \((\tilde{J}_{W},\tilde{\lambda}_{W})\) extends to the compact mod center subgroup \(E^{\times}\tilde{J}_{W}\) by choosing a character \(\tau\) of \(E^{\times}\) extending \(\delta\). This is equivalent to choosing the value of \(\tau\) on a uniformizing element of \(E\). The induced representation of \(\tau\otimes\psi_{2\beta}\) to \(\operatorname{GL}(W)\) is then irreducible and cuspidal.
There is exactly one of these representations, say \(\sigma=\operatorname{c-Ind}_{E^{\times}\tilde{J}_{W}}^{\operatorname{GL}(2N,F)}\tau\otimes\psi_{2\beta}\), such that \(\sigma\) is self-dual and \(\operatorname{Ind}_{P}^{H}\sigma|\det|\otimes\pi\) is reducible. This representation is characterized by the value of \(\tau\) on a uniformizing element given by Theorem 1.10. Since we have
\[w_{0}w_{1}=\begin{pmatrix}-\beta^{-1}&0&0\\ 0&I_{2N}&0\\ 0&0&\beta\end{pmatrix}\]
we must have, up to a positive constant:
\[\tau(-\beta^{-1})\equiv\chi(-1)\delta(-2)\xi(\delta,\psi)^{-1}\]
But the representation must be self-dual and the inducing character also, hence
\[\tau(\beta^{-1})\equiv\chi(-1)\delta(2)\xi(\delta,\psi)^{-1}\]
**Proposition 4.14**.: _The Jordan set of \(\pi=\operatorname{c-Ind}_{ZI_{2N}(1)}^{G}\chi\otimes\psi_{\beta}\) relative to the endoclass of the simple character \(\psi_{2\beta}\) of \(\tilde{I}_{2N}(1)\) is \(\operatorname{Jord}(\pi,\psi_{2\beta})=\{(\sigma,1)\}\) with_
\[\sigma=\operatorname{c-Ind}_{E^{\times}\tilde{I}_{2N}(1)}^{\operatorname{GL}(2 N,F)}\tau\otimes\psi_{2\beta}\]
_where \(\tau_{|\mathfrak{o}_{E}^{\times}}\) is the quadratic character of \(\mathfrak{o}_{E}^{\times}\) and_
\[\tau(\beta)=\chi(-1)\delta(2)\xi(\delta,\psi).\]
We notice that \(\tau(-\beta^{2})=\tau(-1)\xi(\delta,\psi)^{2}=1\) and that
\[\tau(-\beta^{2N})=\delta(-1)\xi(\delta,\psi)^{2N}=[(-1)^{\frac{q-1}{2}}]^{N+1}\]
is trivial if \(N\) is odd and equal to \((-1)^{\frac{q-1}{2}}\) if \(N\) is even.
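For the reader's convenience, the last display follows from \(\tau(\beta)=\chi(-1)\delta(2)\xi(\delta,\psi)\), the equality \(\tau(-1)=\delta(-1)\) (the residue fields of \(E\) and \(F\) coincide since \(E/F\) is totally ramified), and the classical identity \(\xi(\delta,\psi)^{2}=\delta(-1)\) for the quadratic character \(\delta\):
\[\tau(-\beta^{2N})=\tau(-1)\,\tau(\beta)^{2N}=\delta(-1)\bigl[\chi(-1)\delta(2)\xi(\delta,\psi)\bigr]^{2N}=\delta(-1)\,\xi(\delta,\psi)^{2N}=\delta(-1)^{N+1}=\bigl[(-1)^{\frac{q-1}{2}}\bigr]^{N+1}.\]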
### Other simple cuspidals
So far we have computed the Jordan sets of the simple cuspidal representations of \(G=\operatorname{Sp}(2N,F)\) whose restriction to \(I_{2N}(1)\) is given by the element \(\beta\) of §2.2. Note, however, that \(\beta\) depends on the choice of the uniformizer \(\varpi_{F}\) of \(F\), which we had fixed but is otherwise arbitrary. So varying \(\varpi_{F}\), hence \(\beta\), gives other simple cuspidal representations of \(G\), and our results apply equally to them.
However varying \(\varpi_{F}\) does not give all the simple cuspidal representations attached to the more general affine generic characters of §2.1. Let us analyze the situation. We first note that an arbitrary Iwahori subgroup \(I\) of \(G\) is conjugate in \(G\) to our fixed Iwahori subgroup \(I_{2N}\), and that its subgroups \(I(1)\) and \(I(2)\) are sent onto \(I_{2N}(1)\) and \(I_{2N}(2)\) by a conjugation sending \(I\) to \(I_{2N}\): indeed \(I=G_{x,0}\) is the parahoric subgroup attached to the barycenter \(x\) of an alcove of the Bruhat-Tits building of \(G\), \(I(1)\) is the Moy-Prasad subgroup \(G_{x,0+}\) and \(I(2)\)
the Moy-Prasad subgroup \(G_{x,(\frac{1}{2N})^{+}}\). So we don't get more simple cuspidal representations by choosing an Iwahori subgroup other than \(I_{2N}\), and we may restrict to the ones attached to the affine generic characters of §2.1.
Let \(\lambda\) be the affine generic character of \(I_{2N}(1)\) with given parameters \(\alpha_{i}\) for \(i=1,\ldots,N\) (which are units in \(F\)) and \(\alpha_{2N}\) (which has valuation \(-1\) in \(F\)). Let \(\lambda^{\prime}\) be another affine generic character, with parameters \(\alpha^{\prime}_{i}\).
The same reasoning that shows that our representation \(\pi\) of §2.2 is irreducible (hence cuspidal) also shows that the intertwining of \(\lambda\) and \(\lambda^{\prime}\) is restricted to \(ZI_{2N}\), a group that normalizes \(I_{2N}(1)\). So we need to examine when \(\lambda\) and \(\lambda^{\prime}\) are conjugate under \(I_{2N}\), a result that was stated without proof in [21, p. 21]. We will moreover get that there are \(4(q_{F}-1)\) isomorphism classes of simple cuspidal representations of \(G\) (_loc. cit._).
Of course \(I_{2N}(1)\) acts trivially on \(\lambda\) and \(\lambda^{\prime}\), so it is enough to look at the conjugation action of the diagonal elements \(d=\operatorname{diag}(d_{1},\ldots,d_{N},1/d_{N},\ldots,1/d_{1})\) in \(I_{2N}\). Such an element \(d\) acts on \(\lambda\) by multiplying \(\alpha_{i}\) (for \(i=1,\ldots,N-1\)) by \(d_{i}/d_{i+1}\), \(\alpha_{N}\) by \((d_{N})^{2}\) and \(\alpha_{2N}\) by \((1/d_{1})^{2}\). Thus conjugation by \(d\) preserves the classes of \(\alpha_{N}\) and \(\alpha_{2N}\) modulo squares in \(\mathfrak{o}_{F}^{\times}\), and also preserves \((\alpha_{1}\cdots\alpha_{N-1})^{2}\alpha_{N}\alpha_{2N}\) (which matters only modulo \(1+\mathfrak{p}_{F}\)). We easily deduce that \(\lambda^{\prime}\) is the conjugate of \(\lambda\) by such a diagonal element \(d\) if and only if:
1. \(\alpha^{\prime}_{N}\) is equal to \(\alpha_{N}\) modulo squares in \(\mathfrak{o}_{F}^{\times}\).
2. \(\alpha^{\prime}_{2N}\) is equal to \(\alpha_{2N}\) modulo squares in \(\mathfrak{o}_{F}^{\times}\).
3. \((\alpha^{\prime}_{1}\cdots\alpha^{\prime}_{N-1})^{2}\alpha^{\prime}_{N}\alpha^ {\prime}_{2N}\) is equal to \((\alpha_{1}\cdots\alpha_{N-1})^{2}\alpha_{N}\alpha_{2N}\) modulo \(1+\mathfrak{p}_{F}\).
(Note that given (iii), (i) is equivalent to (ii)).
The number of conjugacy classes of \(\lambda\)'s is \(2(q_{F}-1)\). Indeed let \(\epsilon\) be a non-square in \(\mathfrak{o}_{F}^{\times}\). Conjugating as above we may assume that \(\alpha_{i}=-1\) for \(i=1,\ldots,N-1\), and that \(\alpha_{N}=-1\) or \(-\epsilon\), and then (iii) allows \(q_{F}-1\) choices for \(\alpha_{2N}\). Taking the central character into account shows that indeed \(G\) has \(4(q_{F}-1)\) simple cuspidal representations up to isomorphism.
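In summary, the count obtained above reads
\[\#\{\lambda\}/\!\!\sim\;=\;\underbrace{2}_{\alpha_{N}\ \mathrm{mod}\ (\mathfrak{o}_{F}^{\times})^{2}}\;\times\;\underbrace{(q_{F}-1)}_{\alpha_{2N}\ \text{allowed by (iii)}}\;=\;2(q_{F}-1),\qquad\#\{\pi\}\;=\;2(q_{F}-1)\times\underbrace{2}_{\text{central characters}}\;=\;4(q_{F}-1).\]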
**Remark 4.15**.: Changing the additive character \(\psi\) into the character \(\psi^{a}\) sending \(x\) to \(\psi(ax)\) amounts to taking \(\alpha_{i}=a\) for \(i=1,\ldots,N\) and \(i=2N\).
The results in sections 3 and 4 therefore apply directly to half of the simple cuspidal representations of \(G\).
To see that our results still apply to the other half, let us look at the conjugation action of \(\operatorname{GSp}(2N,F)\) on \(\operatorname{Sp}(2N,F)\). More precisely take the diagonal elements \(d_{\epsilon}\) in \(\operatorname{GSp}(2N,F)\) of the form \(\operatorname{diag}(\epsilon,\ldots,\epsilon,1,\ldots,1)\) where \(\epsilon\) (a non-square in \(\mathfrak{o}_{F}^{\times}\)) appears \(N\) times. Then conjugation by \(d_{\epsilon}\) preserves \(I_{2N}\) and \(I_{2N}(1)\), and transforms \(\psi_{\beta}\) into the affine generic character with \(\alpha_{i}=-2\) for \(i=1,\ldots,N-1\), \(\alpha_{N}=-\epsilon\) and \(\alpha_{2N}=\frac{1}{\epsilon\varpi_{F}}\). Varying \(\varpi_{F}\) we see that we get all missing cuspidals that way.
But the reducibility points are the same for our cuspidal representation \(\pi\) and its conjugate by \(d_{\epsilon}\): indeed on \(\operatorname{Sp}(2M+2N,F)\) we can consider the action of the similar matrix
\(\operatorname{diag}(\epsilon,\ldots,\epsilon,1,\ldots,1)\), but this time with \(N+M\) occurrences of \(\epsilon\). Conjugating by that matrix on the Levi subgroup \(\operatorname{GL}(M,F)\times\operatorname{Sp}(2N,F)\) induces the previous conjugation on \(\operatorname{Sp}(2N,F)\), but the identity on \(\operatorname{GL}(M,F)\).
A consequence of the preceding analysis is the following result, which follows from Propositions 3.12 and 4.14 by conjugation inside \(\operatorname{GSp}(2N,F)\):
**Theorem 4.16**.: _Let \(\pi\) be a simple cuspidal representation of \(G\), written as \(\pi=\operatorname{c-Ind}_{ZI_{2N}(1)}^{G}\chi\otimes\psi_{\beta}\), where \(\chi\) is a character of the center \(Z\simeq\{\pm 1\}\) of \(G\) and \(\beta^{-1}\) is a uniformizer of a totally ramified extension \(E\) of \(F\) of degree \(2N\) normalizing \(I_{2N}(1)\). The Jordan set of \(\pi\) is \(\operatorname{Jord}(\pi)=\{(\epsilon_{1},1),(\sigma,1)\}\) where_
* \(\epsilon_{1}\) _is the ramified quadratic character of_ \(F^{\times}\) _characterized by_ \[\epsilon_{1}(N_{E/F}(\beta))=(-1)^{(N+1)\frac{q-1}{2}};\]
* \(\sigma\) _is the simple cuspidal representation of_ \(\operatorname{GL}(2N,F)\) _defined by_ \[\sigma=\operatorname{c-Ind}_{E^{\times}\tilde{I}_{2N}(1)}^{\operatorname{GL}( 2N,F)}\tau\otimes\psi_{2\beta}\] _where_ \(\tau_{|\mathfrak{o}_{E}^{\times}}\) _is the quadratic character of_ \(\mathfrak{o}_{E}^{\times}\) _and_ \[\tau(\beta)=\chi(-1)\delta(2)\xi(\delta,\psi).\]
### A remark on epsilon factors
For use in the next paragraph, let us remark about the \(\varepsilon\)-factor at \(s=1/2\) of \(\epsilon_{1}\) and of \(\sigma\) in Theorem 4.16 above. Since \(\epsilon_{1}\) is quadratic (equal to \(\delta\)) on restriction to \(\mathfrak{o}_{F}^{\times}\), we have \(\varepsilon(\epsilon_{1},\frac{1}{2},\psi)=\xi(\delta,\psi)\). On the other hand the factor \(\varepsilon(\sigma,\frac{1}{2},\psi)\) is computed in [8, Lemma 2.2] and is equal to \(\frac{1}{\tau(2\beta)}\) (remarking that the trace of the matrix \(\beta\) is \(0\)). But by Proposition 4.14 we have \(\tau(2\beta)=\chi(-1)\xi(\delta,\psi)\), so \(\varepsilon(\epsilon_{1},\frac{1}{2},\psi)\varepsilon(\sigma,\frac{1}{2},\psi )=\chi(-1)\).
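Explicitly, using Proposition 4.14 and \(\chi(-1)=\pm 1\), the computation of the product of local constants is
\[\tau(2\beta)=\tau(2)\,\tau(\beta)=\delta(2)\cdot\chi(-1)\delta(2)\xi(\delta,\psi)=\chi(-1)\,\xi(\delta,\psi),\qquad\varepsilon(\epsilon_{1},\tfrac{1}{2},\psi)\,\varepsilon(\sigma,\tfrac{1}{2},\psi)=\xi(\delta,\psi)\cdot\frac{1}{\chi(-1)\,\xi(\delta,\psi)}=\chi(-1).\]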
## 5. Langlands parameters for simple cuspidals
### The characteristic zero case
Let us now assume that \(F\) has characteristic \(0\). In that case the local Langlands correspondence has been established by Arthur, and our results about reducibility points allow us to give the parameter of a simple cuspidal representation of \(G\), thus completing, in the special case of simple cuspidal representations, the results of [5].
**Theorem 5.1**.: _Let \(\pi\) be a simple cuspidal representation of \(\operatorname{Sp}(2N,F)\) as in Theorem 4.16. Then the parameter of \(\pi\) is the direct sum of the quadratic character \(\omega\) of \(W_{F}\) corresponding to \(\epsilon_{1}\) and an irreducible orthogonal representation of dimension \(2N\), corresponding via the local Langlands correspondence for \(\operatorname{GL}(2N,F)\) to the cuspidal representation \(\sigma\) of Proposition 4.14._
**Remark 5.2**.: Once a local Langlands correspondence for \(G\) is established when \(F\) has characteristic \(p\), we get the result in that case too. There has been recent progress on establishing this correspondence when \(F\) has characteristic \(p\) (see Ganapathy-Varma [14], Gan-Lomeli [13], and current work of Aubert and Varma). Besides, for a generic cuspidal representation \(\pi\) of \(G\) (in particular for a simple cuspidal one), Lomeli [17] has used converse theorems to produce a parameter for \(\pi\). On another occasion we shall show that the arguments of the present section still apply to make this parameter explicit, giving the exact same statement.
**Remark 5.3**.: Conjugating \(\pi\) inside \(\operatorname{GSp}(2N,F)\) by \(d_{\epsilon}\) as in 4.7 gives a representation with the same parameter.
### An alternative proof: method
In fact, the analysis and results of [5], supplemented by an identity due to Lapid, are enough to get the previous theorem, without using the computations of sections 3 and 4, as we show presently. That gives a consistency check on those very computations, when \(F\) has characteristic \(0\).
Let \(\pi\) be our simple cuspidal representation as in §4.7. From [5] we know already that the parameter \(\rho\) of \(\pi\) is the direct sum of a quadratic character \(\omega\) of \(W_{F}\) and an irreducible orthogonal representation \(\tau\) of dimension \(2N\), corresponding to a simple cuspidal representation \(\sigma\) of \(\operatorname{GL}(2N,F)\) constructed from the stratum attached to \(2\beta\). In particular, \(\tau\) has Swan exponent \(1\), hence is not tame, and has trivial stabilizer under character twists. In principle the results of [5] allow us to determine the restrictions of \(\omega\) and \(\tau\) to the inertia group, so the only ambiguity left is small: we could twist \(\omega\) and \(\tau\) by unramified quadratic characters (see [5], section 6, in particular 6.6 Proposition) and have an equally plausible parameter after the results of [5].
To remove that ambiguity we note two things. The first is that, for an unramified character \(\eta\) of \(W_{F}\) of order \(2\), \(\tau\) and \(\eta\tau\) have equal determinant, since \(\dim(\tau)\) is even. So \(\omega\) is determined by \(\omega=\det(\tau)=\det(\eta\tau)\): there is no ambiguity in \(\omega\). The second is that the \(\varepsilon\)-factor of \(\tau\) is sensitive to that character twist, because \(\tau\) has Swan exponent \(1\) hence Artin exponent \(2N+1\): we have \(\varepsilon(\eta\tau,\frac{1}{2},\psi)=-\varepsilon(\tau,\frac{1}{2},\psi)\). But the main result of Lapid [16] gives us precisely the necessary information. Indeed, the representation \(\pi\) is generic, so its Langlands-Shahidi factors \(\varepsilon(\pi,s,\psi)\) are defined. But the local Langlands correspondence for \(\operatorname{Sp}(2N)\) preserves the \(\varepsilon\)-factors, in the sense that \(\varepsilon(\pi,s,\psi)=\varepsilon(\omega,s,\psi)\varepsilon(\tau,s,\psi)\) (for that preservation, see Appendices A and B in [1]). Similarly by the local Langlands correspondence for \(\operatorname{GL}(2N,F)\), we have \(\varepsilon(\tau,s,\psi)=\varepsilon(\sigma,s,\psi)\). The result of Lapid says that \(\varepsilon(\pi,\frac{1}{2},\psi)\) is the value \(\chi(-1)\) of the central character of \(\pi\) at \(-1\). Thus we deduce \(\varepsilon(\sigma,\frac{1}{2},\psi)\varepsilon(\omega,\frac{1}{2},\psi)=\chi (-1)\), which resolves the ambiguity in \(\rho\).
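The sign change under the unramified twist invoked above is the standard behaviour of local constants: for an unramified character \(\eta\) of \(W_{F}\),
\[\varepsilon(\eta\tau,\tfrac{1}{2},\psi)=\eta(\varpi_{F})^{\,a(\tau)+n(\psi)\dim\tau}\,\varepsilon(\tau,\tfrac{1}{2},\psi),\]
and here \(a(\tau)=2N+1\) is odd while \(\dim\tau=2N\) is even, so for the quadratic unramified \(\eta\) (with \(\eta(\varpi_{F})=-1\)) the exponent is odd, whatever the level \(n(\psi)\) of \(\psi\), and the \(\varepsilon\)-factor changes sign.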
### An alternative proof: results
Let us identify \(\omega\) and the character corresponding to it via class field theory, also written \(\omega\). Let us show that \(\omega\) is the character \(\epsilon_{1}\) of Theorem 4.16. We first show that \(\omega\) is ramified. Indeed \(\tau\) has Artin exponent \(2N+1\) and the orthogonal representation \(\rho\) has trivial determinant. Then \(\rho\) has even Artin exponent by an old result of Serre [23], and that implies that \(\omega\) has odd Artin exponent, hence is quadratic ramified.
The cuspidal representation \(\sigma\) of \(\operatorname{GL}(2N,F)\) has central character \(\omega\) and is constructed from the affine generic character \(\psi_{2\beta}\) of the subgroup \(J^{1}=\tilde{I}_{2N}(1)\) of \(\operatorname{GL}(2N,F)\). It is induced from an extension \(\theta\) of \(\psi_{2\beta}\) to its normalizer \(J\) in \(\operatorname{GL}(2N,F)\), which is the group \((2\beta)^{\mathbb{Z}}F^{\times}J^{1}\), and that extension is \(\omega\) on \(F^{\times}\), so is determined by its value \(a\) on \(2\beta\), subject to \(a^{2N}=\omega((2\beta)^{2N})\).
However \(\tau\) is self-dual, which imposes a condition on \(a\). The contragredient of \(\sigma\) is induced from the character \(\theta^{-1}\) of \(J\). Saying that \(\tau\) (equivalently \(\sigma\)) is self-dual therefore means that \(\theta^{-1}\) intertwines with \(\theta\) in \(\operatorname{GL}(2N,F)\). But the restriction of \(\theta^{-1}\) to \(\tilde{I}_{2N}(1)\) is the affine generic character \(\psi_{-2\beta}\), so it is sent to \(\psi_{2\beta}\) by conjugation by the diagonal matrix \(\operatorname{diag}(1,-1,1,-1,\ldots,1,-1)\), which conjugates \(\beta\) to \(-\beta\). The condition on \(a\) is therefore that \(\theta(-2\beta)=\frac{1}{\theta(2\beta)}\), that is \(a^{2}=\omega(-1)\). Thus a fortiori \(\omega(\beta^{2N})=a^{2N}=\omega(-1)^{N}\). But, as seen in §3.4, \(N_{E/F}(\beta)=-\beta^{2N}\), so \(\omega(N_{E/F}(\beta))=\omega(-1)^{N-1}\). We happily find exactly the same recipe as in Proposition 3.12, so that indeed \(\omega=\epsilon_{1}\). It now also follows from §4.8 that \(\sigma\) is given by the recipe of Theorem 4.16.
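Spelling out the last step (using \(\omega^{2}=1\)):
\[\omega\bigl(N_{E/F}(\beta)\bigr)=\omega(-\beta^{2N})=\omega(-1)\,\omega(\beta^{2N})=\omega(-1)\cdot\omega(-1)^{N}=\omega(-1)^{N+1}=\omega(-1)^{N-1}.\]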
### The case of non-simple cuspidals for \(\operatorname{Sp}(4,F)\)
Let us briefly comment on what Lapid's result brings to the analysis of the examples in [5, §6.9]. When \(N=1\), it gives supplementary information which determines the parameter of a cuspidal representation of \(\operatorname{SL}(2,F)\) (of course, that case can also be deduced from the local Langlands correspondence for \(\operatorname{GL}(2,F)\)).
Let us look at the more interesting case where \(N=2\). We do not consider parameters with an occurrence of \(\operatorname{St}_{3}\): the corresponding packets contain non-cuspidal discrete series; they have been determined explicitly by Suzuki and Xu [27], thus confirming guesses of the second author decades ago (Lettre aux espequatrophiles).
An ambiguous case in [5] was that of a parameter involving \(3\) quadratic characters and an irreducible orthogonal representation \(\rho\) of dimension \(2\), induced from a quadratic ramified extension. In that case the Artin exponent of \(\rho\) is odd, so choosing between \(\rho\) and the other possibility \(\rho^{\prime}\) (the twist of \(\rho\) by the unramified order \(2\) character) is done using Lapid's result. However when the parameter contains two ambiguous components of dimension \(2\), adding Lapid's result does not resolve all ambiguities.
|
2309.03489 | Sub-Finsler geometry and nonholonomic mechanics | In this paper, we discuss a variational approach to the length functional and
its relation to sub-Hamiltonian equations on sub-Finsler manifolds. Then, we
introduce the notion of the nonholonomic sub-Finslerian structure and prove
that the distributions are geodesically invariant concerning the Barthel
non-linear connection. We provide necessary and sufficient conditions for the
existence of the curves that are abnormal extremals; likewise, we provide
necessary and sufficient conditions for normal extremals to be the motion of a
free nonholonomic mechanical system, and vice versa.
Moreover, we show that a coordinate-free approach for a free particle is a
comparison between the solutions of the nonholonomic mechanical problem and the
solutions of the Vakonomic dynamical problem for the nonholonomic
sub-Finslerian structure. In addition, we provide an example of the
nonholonomic sub-Finslerian structure. Finally, we show that the sub-Laplacian
measures the curvature of the nonholonomic sub-Finslerian structure. | Layth M. Alabdulsada | 2023-09-07T05:55:57Z | http://arxiv.org/abs/2309.03489v1 | # Sub-Finsler geometry and nonholonomic mechanics
###### Abstract.
In this paper, we discuss a variational approach to the length functional and its relation to sub-Hamiltonian equations on sub-Finsler manifolds. Then, we introduce the notion of the nonholonomic sub-Finslerian structure and prove that the distributions are geodesically invariant concerning the Barthel non-linear connection. We provide necessary and sufficient conditions for the existence of the curves that are abnormal extremals; likewise, we provide necessary and sufficient conditions for normal extremals to be the motion of a free nonholonomic mechanical system, and vice versa. Moreover, we show that a coordinate-free approach for a free particle is a comparison between the solutions of the nonholonomic mechanical problem and the solutions of the Vakonomic dynamical problem for the nonholonomic sub-Finslerian structure. In addition, we provide an example of the nonholonomic sub-Finslerian structure. Finally, we show that the sub-Laplacian measures the curvature of the nonholonomic sub-Finslerian structure.
Key words and phrases:Sub-Finsler geometry, Sub-Hamiltonian vector field, Sub-Hamiltonian equations, Non-linear connection, Nonholonomic Free Particle, Sub-Laplacian 2020 Mathematics Subject Classification: 53C05, 53C60, 70F25, 53C17
## 1. Introduction
Sub-Finsler geometry and nonholonomic mechanics have attracted more attention recently; they are rich subjects with many applications.
Sub-Finsler geometry is a natural generalization of sub-Riemannian geometry. The sub-Riemannian metric was initially referred to as the Carnot-Caratheodory metric. J. Mitchell [23] investigated the Carnot-Caratheodory distance between two points by considering a smooth Riemannian \(n\)-manifold \((M,g)\) equipped with a \(k\)-rank distribution \(\mathcal{D}\) of the tangent bundle \(TM\). A decade later, M. Gromov [17] provided a comprehensive study of the above concepts. V. N. Berestovskii [9] identified the Finsler counterpart of this metric, now commonly known as the sub-Finsler metric. In this study, our definition of the sub-Finsler metric closely aligns with the definition presented in previous works [3, 14]. The motivation behind studying sub-Finsler geometry lies in its pervasive presence within various branches of pure mathematics, particularly in differential geometry, and in applied fields like geometric mechanics, control theory, and robotics. We refer the readers to [1, 4, 8, 20].
Nonholonomic mechanics is currently a very active area of the so-called geometric mechanics [21]. Constraints on mechanical systems are typically classified into two categories: integrable and nonintegrable constraints. _Nonholonomic mechanics_ deals with constraints that are not holonomic; these might be constraints expressed in terms of the velocities that cannot be derived from constraints on the coordinates alone (thereby nonintegrable), or constraints that are not given as an equation at all [19]. Nonholonomic control systems exhibit unique characteristics, allowing control of underactuated systems due to constraint nonintegrability. These problems arise in physical contexts like wheeled systems, cars, robotics, and manipulators, with more insights found in [10, 21].
In [18], B. Langerock considered a general notion of connections over a vector bundle map and applied it to the study of mechanical systems with linear nonholonomic constraints and a Lagrangian of kinetic energy type. A. D. Lewis [19] investigated various consequences of a natural restriction of a given affine connection to a distribution. The basic construction comes from the dynamics of a class of mechanical systems with nonholonomic constraints. In a previous paper in collaboration with L. Kozma [3], we constructed, via the Legendre transformation, a generalized non-linear connection for a sub-Finslerian manifold, called the \(\mathcal{L}\)-connection, which characterizes normal extremals of a sub-Finsler structure as geodesics of this connection. In this paper, [3] and [4] play an important role in establishing our main results. These results are divided into two parts: sub-Hamiltonian systems and nonholonomic sub-Finslerian structures on nonintegrable distributions.
The paper is organized as follows: In Section 2, we review some standard facts about sub-Finslerian settings. In Section 3, we define a sub-Finsler metric on \(\mathcal{D}\) by using a sub-Hamiltonian function \(\eta(x,p)\) and show the correspondence between the solutions of the sub-Hamiltonian equations and the solutions of a variational problem. Section 4 introduces the notion of nonholonomic sub-Finslerian structures and presents the main results, including conditions for the motion of a free mechanical system under linear nonholonomic constraints to be a normal extremal with respect to the associated sub-Finslerian structure. Section 5 provides an example of the nonholonomic sub-Finslerian structure, and Section 6 discusses the curvature of the sub-Finslerian structure.
## 2. Preliminaries
Let \(M\) be an \(n\)-dimensional smooth (\(C^{\infty}\)) manifold, and let \(T_{x}M\) represent its tangent space at a point \(x\in M\). We denote the module of vector fields over \(C^{\infty}(M)\) by \(\mathfrak{X}(M)\), and the module of \(1\)-forms by \(\mathfrak{X}^{*}(M)\).
Consider \(\mathcal{D}\), a _regular distribution_ on \(M\), defined as a subbundle of the tangent bundle \(TM\) with a constant rank of \(k\). Locally, in coordinates, this distribution can be expressed as \(\mathcal{D}=\mathrm{span}\{X_{1},\ldots,X_{k}\}\), where \(X_{i}(x)\in\mathfrak{X}(M)\) are linearly independent vector fields.
A non-negative function \(F:\mathcal{D}\to\mathbb{R}_{+}\) is called a _sub-Finsler metric_ if it satisfies the following conditions:
1. **Smoothness**: \(F\) is a smooth function over \(\mathcal{D}\setminus 0\);
2. **Positive Homogeneity**: \(F(\lambda v)=|\lambda|F(v)\) for all \(\lambda\in\mathbb{R}\) and \(v\in\mathcal{D}\setminus 0\);
3. **Positive Definiteness**: The Hessian matrix of \(F^{2}\) is positive definite at every \(v\in\mathcal{D}_{x}\setminus 0\).
A differential manifold \(M\) equipped with a sub-Finsler metric \(F\) is recognized as a _sub-Finsler manifold_, denoted by \((M,\mathcal{D},F)\).
A piecewise smooth curve, denoted as \(\sigma:[0,1]\to M\), is considered _horizontal_ if its tangent vector field \(\dot{\sigma}(t)\) lies within \(\mathcal{D}_{\sigma(t)}\) for all \(t\in[0,1]\), whenever it is defined. This condition reflects the nonholonomic constraints imposed on the curve.
The length functional of such a horizontal curve \(\sigma\) possesses a derivative for almost all \(t\in[0,1]\), with the components of the derivative, \(\dot{\sigma}\), representing measurable curves. The _length_ of \(\sigma\) is usually defined as:
\[\ell(\sigma)=\int_{0}^{1}F(\dot{\sigma}(t))dt.\]
This length structure gives rise to a _distance function_, denoted as \(d:M\times M\to\mathbb{R}_{+}\), defined by:
\[d(x_{0},x_{1})=\inf\ell(\sigma),\qquad x_{0},x_{1}\in M,\]
and the infimum is taken over all horizontal curves connecting \(\sigma(0)=x_{0}\) to \(\sigma(1)=x_{1}\). This distance metric captures the minimal length among all possible horizontal paths between two points on the manifold \(M\).
A _geodesic_, also known as a _minimizing geodesic_, refers to a horizontal curve \(\sigma:[0,1]\to M\) that realizes the distance between two points, i.e., \(\ell(\sigma)=d(\sigma(0),\sigma(1))\).
Throughout this paper, it is consistently assumed that \(\mathcal{D}\) is bracket-generating. A distribution \(\mathcal{D}\), is characterized as _bracket-generating_ if every local frame \(X_{i}\) of \(\mathcal{D}\), along with all successive Lie brackets involving these frames, collectively span the entire tangent bundle \(TM\). If \(\mathcal{D}\) represents a bracket-generating distribution on a connected manifold \(M\), it follows that any two points within \(M\) can be joined by a horizontal curve. This foundational concept was initially established by C. Caratheodory [12] and later reaffirmed by W. L. Chow [13] and P. K. Rashevskii [25]. However, for a comprehensive explanation of the bracket-generating concept, one can turn to R. Montgomery's book, [24].
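A standard illustration of the bracket-generating condition (the Heisenberg distribution, recalled here only as an elementary example): on \(M=\mathbb{R}^{3}\) with coordinates \((x,y,z)\), consider the rank-\(2\) distribution \(\mathcal{D}=\operatorname{span}\{X_{1},X_{2}\}\) with
\[X_{1}=\frac{\partial}{\partial x}-\frac{y}{2}\frac{\partial}{\partial z},\qquad X_{2}=\frac{\partial}{\partial y}+\frac{x}{2}\frac{\partial}{\partial z},\qquad[X_{1},X_{2}]=\frac{\partial}{\partial z}.\]
Since \(X_{1}\), \(X_{2}\) and \([X_{1},X_{2}]\) span \(T_{p}\mathbb{R}^{3}\) at every point \(p\), the distribution is bracket-generating, and the theorem just cited guarantees that any two points of \(\mathbb{R}^{3}\) can be joined by a horizontal curve.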
## 3. Sub-Hamiltonian associated with sub-Finslerian manifolds
### The Legendre transformation and Finsler dual of sub-Finsler metrics
Let \(\mathcal{D}^{*}\) be a rank-\(s\) codistribution on a smooth manifold \(M\), assigning to each point \(x\in U\subset M\) a linear subspace \(\mathcal{D}^{*}_{x}\subset T^{*}_{x}M\). This codistribution is a smooth subbundle, and spanned locally by \(s\) pointwise linearly independent smooth differential \(1\)-forms:
\[\mathcal{D}^{*}_{x}=\operatorname{span}\{\alpha_{1}(x),\dots,\alpha_{s}(x)\}, \ \ \text{with}\ \alpha_{i}(x)\in\mathfrak{X}^{*}(M).\]
We define the annihilator of a distribution \(\mathcal{D}\) on \(M\) as \((\mathcal{D}^{\perp})^{0}\), a subbundle of \(T^{*}M\) consisting of covectors that vanish on \(\mathcal{D}\):
\[(\mathcal{D}^{\perp})^{0}=\{\alpha\in T^{*}M:\alpha(v)=0\ \text{for all}\ v\in \mathcal{D}\},\]
such that \(\langle v,\alpha\rangle:=\alpha(v)\). Similarly, we define the annihilator of the orthogonal complement of \(\mathcal{D}\), denoted by \(\mathcal{D}^{0}\), as the subbundle of \(T^{*}M\) consisting of covectors that vanish on \(TM^{\perp}\).
Using these notions, we can define a sub-Finslerian function denoted by \(F^{*}\in\mathcal{D}^{*}\sim T^{*}M\setminus\mathcal{D}^{0}\), where \(F^{*}\) is a positive function. This function shares similar properties with \(F\), but is based on \(\mathcal{D}^{*}\) instead of \(\mathcal{D}\).
In our previous work [4], we established the relationship:
\[F^{*}(p)=F(v),\ \text{where}\ p=\mathcal{L}_{L}(v),\quad\text{for every}\quad p \in\mathcal{D}^{*}_{x}\quad\text{and}\quad v\in\mathcal{D}_{x}, \tag{1}\]
such that \(\mathcal{L}_{L}\) is the Legendre transformation of the sub-Lagrangian function \(L:\mathcal{D}\subset TM\to\mathbb{R}\), a diffeomorphism between \(\mathcal{D}\) and \(\mathcal{D}^{*}\).
In this context, to express \(F^{*}\) in terms of \(F\), we consider the Legendre transformation of \(F\) with respect to the sub-Lagrangian function \(L(v)=\frac{1}{2}F(v,v)\), where \(F(v,v)\) is the square of the Finsler norm of \(v\). The Legendre transformation \(\mathcal{L}_{L}\) maps \(v\in\mathcal{D}\) to \(p=\frac{\partial L}{\partial v}(v)\).
Utilizing the definition of the Legendre transformation, we observe that
\[p=\frac{\partial L}{\partial v}(v)=\frac{\partial}{\partial v}\left(\frac{1} {2}F(v,v)\right)=F(v,\cdot),\]
where \(F(v,\cdot)\) denotes the differential of \(F\) with respect to its first argument evaluated at \(v\). Note that \(F(v,\cdot)\) is a linear function on \(\mathcal{D}_{x}\).
Given a covector \(p\in\mathcal{D}^{*}\), we find that
\[F^{*}(p)=\sup_{v\in\mathcal{D}_{x}}\biggl{\{}\langle p,v\rangle-L(v)\biggr{\}},\]
where \(\langle p,v\rangle\) represents the inner product between the covector \(p\) and the vector \(v\). Substituting the expression for \(L(v)\) and employing the Legendre transformation \(\mathcal{L}_{L}(v)=F(v,\cdot)\), we get
\[F^{*}(p)=\sup_{v\in\mathcal{D}_{x}}\biggl{\{}\langle p,v\rangle-\frac{1}{2}F( v,v)\biggr{\}}=\sup_{v\in\mathcal{D}_{x}}\biggl{\{}\langle p,v\rangle-\frac{1}{2}|F (v,\cdot)|^{2}\biggr{\}}.\]
Since \(|F(v,\cdot)|^{2}=F(v,v)\), we can express the Finsler dual \(F^{*}\) in terms of \(F\) as
\[F^{*}(p)=\sup_{v\in\mathcal{D}_{x}}\biggl{\{}\langle p,v\rangle-\frac{1}{2}F( v,v)\biggr{\}}.\]
Therefore, when we have a sub-Finsler metric \(F\) on \(\mathcal{D}\), the Finsler dual \(F^{*}\) is a function on \(\mathcal{D}^{*}\) defined by
\[F^{*}(p)=\sup_{v\in\mathcal{D}_{x}}\biggl{\{}\langle p,v\rangle-F(v)\biggr{\}},\quad\text{for}\quad p\in\mathcal{D}_{x}^{*},\]
where \(x\) is the base point of \(\mathcal{D}\).
### The Sub-Hamiltonian Function and Sub-Hamilton's Equations for Sub-Finsler Manifolds
The sub-Hamiltonian function associated with a sub-Finsler metric \(F\) is given by
\[\eta:=\frac{1}{2}(F^{*})^{2}.\]
Here, \(F^{*}\) denotes the dual metric to \(F\), defined by
\[F^{*}(p)=\sup_{v\in\mathcal{D}_{x},F(v)=1}\langle p,v\rangle, \tag{2}\]
where \(p\) represents a momentum vector in \(\mathcal{D}_{x}^{*}\) associated with the point \(x\) in the manifold \(M\), and \(\langle\cdot,\cdot\rangle\) denotes the inner product induced by a Riemannian metric \(g\). The sub-Finslerian metric defined by (2) is known as the Legendre transform of \(F\), i.e., satisfying the relationship in (1). It is worth noting that the sub-Hamiltonian function associated with a Finsler metric is not unique, and different choices of Hamiltonians may lead to different dynamics for the associated geodesics.
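As a simple illustration (the flat case, stated under the assumption that \(X_{1},\ldots,X_{k}\) is a local frame of \(\mathcal{D}\) in which \(F\) is the standard Euclidean norm): if \(p\in\mathcal{D}_{x}^{*}\) has components \(p_{i}\) in the dual coframe, then (2) and the Cauchy-Schwarz inequality give
\[F^{*}(p)=\Bigl(\sum_{i=1}^{k}p_{i}^{2}\Bigr)^{1/2},\qquad\eta(x,p)=\frac{1}{2}\bigl(F^{*}(p)\bigr)^{2}=\frac{1}{2}\sum_{i=1}^{k}p_{i}^{2},\]
which agrees with the coordinate expression of \(\eta\) in Remark 1 below when \(g^{ij}=\delta^{ij}\).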
The sub-Hamiltonian formalism is a method of constructing a sub-Finsler metric on a subbundle \(\mathcal{D}\) by defining a sub-Hamiltonian function \(\eta(x,p)\) on the subbundle
\(\mathcal{D}^{*}\), where \(x\) denotes a point in \(M\) and \(p\) denotes a momentum vector in \(\mathcal{D}^{*}\), as explained in the following remark:
**Remark 1**.: The sub-Finsler vector bundle, introduced in [4] and expanded upon in [5], plays a pivotal role in formulating sub-Hamiltonians in sub-Finsler geometry. Consider the covector subbundle \((\mathcal{D}^{*},\tau,M)\) with projection \(\tau:\mathcal{D}^{*}\rTo M\), forming a rank-\(k\) subbundle of the cotangent bundle \(T^{*}M\). The pullback bundle \(\tau^{*}(\tau)=(\mathcal{D}^{*}\times\mathcal{D}^{*},\mathrm{pr}_{1},\mathcal{D}^{*})\) is obtained by pulling back \(\tau\) through itself and is denoted as the sub-Finsler bundle over \(\mathcal{D}^{*}_{x}\). This bundle allows the introduction of \(k\) orthonormal covector fields \(X_{1},X_{2},\ldots,X_{k}\) with respect to the induced Riemannian metric \(g\). The sub-Hamiltonian \(\eta\) induces a metric \(g\) on the sub-Finsler bundle. In terms of this metric, the sub-Hamiltonian function \(\eta\) can be expressed as a function of the components \(p_{i}\). Specifically, \(\eta(x,p)=\frac{1}{2}\sum_{i,j=1}^{n}g^{ij}p_{i}p_{j}\), where \(g^{ij}\) is the inverse of the metric tensor \(g_{ij}\) for the extended Finsler metric \(\hat{F}\) on \(TM\) (see Remark 2). This defines a sub-Finsler metric on a subbundle \(\mathcal{D}\) of \(TM\) that is determined by a distribution on \(M\).
The sub-Finsler metric \(F\) is then defined as follows:
\[F_{x}(v)=\sup_{p\in\mathcal{D}^{*}_{x}}\{\langle p,v\rangle-\eta(x,p)\}, \tag{3}\]
where \(v\) is a tangent vector at \(x\).
Now fixing a point \(x\in M\), for any covector \(p\in\mathcal{D}^{*}\), there exists a unique _sub-Hamiltonian vector field_ on \(\mathcal{D}^{*}\), denoted by \(\vec{H}\), described by
\[\vec{H}=\frac{\partial\eta}{\partial p_{i}}\frac{\partial}{\partial x^{i}}- \frac{\partial\eta}{\partial x^{i}}\frac{\partial}{\partial p_{i}}. \tag{4}\]
where the partial derivatives are taken with respect to the local coordinates \((x^{i},p_{i})\) on \(\mathcal{D}^{*}\subset T^{*}M\).
**Definition 1**.: The sub-Hamiltonian equations on \(\mathcal{D}^{*}\) are then given by
\[\dot{x}^{i} =\frac{\partial\eta}{\partial p_{i}}=g^{ij}p_{j}, \tag{5a}\] \[\dot{p}_{i} =-\frac{\partial\eta}{\partial x^{i}}=-\frac{1}{2}\frac{\partial g^{jk}}{\partial x^{i}}p_{j}p_{k}, \tag{5b}\]
where dot denotes differentiation with respect to time.
These equations express the fact that the sub-Hamiltonian vector field \(\vec{H}\) preserves the sub-Finsler metric \(F^{*}\) on \(\mathcal{D}^{*}\). If the Hamiltonian is independent of the cotangent variables \(p_{i}\), then the second equation above reduces to the Hamilton-Jacobi equation for the sub-Finsler manifold \((M,\mathcal{D},F)\).
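In particular, the conservation statement can be checked directly from (5a)-(5b): along any solution,
\[\frac{d}{dt}\,\eta\bigl(x(t),p(t)\bigr)=\frac{\partial\eta}{\partial x^{i}}\dot{x}^{i}+\frac{\partial\eta}{\partial p_{i}}\dot{p}_{i}=\frac{\partial\eta}{\partial x^{i}}\frac{\partial\eta}{\partial p_{i}}-\frac{\partial\eta}{\partial p_{i}}\frac{\partial\eta}{\partial x^{i}}=0,\]
so \(\eta\), and hence \(F^{*}=\sqrt{2\eta}\), is constant along the flow of \(\vec{H}\).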
**Remark 2**.: We extended sub-Finsler metrics to full Finsler metrics using an orthogonal complement subbundle in [3]. However, here are more details and evidence.
Given a subbundle \(\mathcal{D}\) of the tangent bundle \(TM\), its direct complement \(\mathcal{D}^{\perp}\) is a subbundle of \(TM\) such that \(TM=\mathcal{D}\oplus\mathcal{D}^{\perp}\), and at every point \(x\in M\), \(\mathcal{D}_{x}\cap\mathcal{D}^{\perp}_{x}=0\) and \(\mathcal{D}_{x}+\mathcal{D}^{\perp}_{x}=T_{x}M\).
One canonical way to obtain a direct complement to \(\mathcal{D}\) is to use the notion of an orthogonal complement. Given a subbundle \(\mathcal{D}\) of \(TM\), we define the orthogonal
complement bundle \(\mathcal{D}^{\perp}\) as follows:
\[\mathcal{D}^{\perp}_{x}=\{v\in T_{x}M:\langle v,w\rangle=0\text{ for all }w\in \mathcal{D}_{x}\},\]
such that \(v,w\) are orthogonal with respect to the inner product induced by the Riemannian metric. It can be shown that \(\mathcal{D}^{\perp}\) is a subbundle of \(TM\) and satisfies the conditions for being a direct complement to \(\mathcal{D}\). Moreover, it can be shown that any two direct complements to \(\mathcal{D}\) are isomorphic bundles, so the orthogonal complement is unique up to bundle isomorphism.
Note that if \(M\) is equipped with a sub-Finsler metric, then the metric induces a non-degenerate inner product on \(\mathcal{D}\), so we can use this inner product to define the orthogonal complement. However, if \(M\) is not equipped with a Riemannian metric, then the notion of an orthogonal complement may not be well-defined. So, to extend a given sub-Finsler metric \(F\) on a subbundle \(\mathcal{D}\) of \(TM\) to a full Finsler metric on \(TM\), one can use an orthogonal complement subbundle \(\mathcal{D}^{\perp}\). This is a regular subbundle of \(TM\) that is orthogonal to \(\mathcal{D}\) with respect to the Riemannian metric \(g_{ij}\). Locally, \(\mathcal{D}^{\perp}\) can be written as:
\[\mathcal{D}^{\perp}=\operatorname{span}\{X^{\prime}_{1},\dots,X^{\prime}_{n-k}\}, \tag{6}\]
where \(k\) is the rank of the subbundle \(\mathcal{D}\) and \(X^{\prime}_{1},\dots,X^{\prime}_{n-k}\) are local vector fields that form a basis for \(\mathcal{D}^{\perp}\). Then, one can define a Finsler metric \(\hat{F}\) on \(TM\) by:
\[\hat{F}(v)=\sqrt{F^{2}(P(v))+\widetilde{F}^{2}(P^{c}(v))}, \tag{7}\]
where \(P\) is the projection onto \(\mathcal{D}\), \(P^{c}\) is the projection onto \(\mathcal{D}^{\perp}\), and \(\widetilde{F}\) is a Finsler metric on \(\mathcal{D}^{\perp}\). This construction yields a full Finsler metric on \(TM\) that extends the sub-Finsler metric \(F\) on \(\mathcal{D}\). Note that the Finsler metric \(\widetilde{F}\) on \(\mathcal{D}^{\perp}\) is not unique, so the choice of \(\widetilde{F}\) is arbitrary. However, the resulting Finsler metric \(\hat{F}\) on \(TM\) is unique and independent of the choice of \(\widetilde{F}\).
To see this, suppose we have two choices of Finsler metrics \(\widetilde{F}\) and \(\widetilde{F}^{\prime}\) on \(\mathcal{D}^{\perp}\). Let \(\hat{F}\) and \(\hat{F}^{\prime}\) be the corresponding extensions of \(F\) to \(TM\) using Equation 7. Then for any \(v\in TM\), we have
\[\hat{F}^{2}(v) =F^{2}(P(v))+\widetilde{F}^{2}(P^{c}(v))\] \[\hat{F}^{\prime 2}(v) =F^{2}(P(v))+\widetilde{F}^{\prime 2}(P^{c}(v)).\]
Subtracting these two equations, we obtain
\[\hat{F}^{\prime 2}(v)-\hat{F}^{2}(v)=\widetilde{F}^{\prime 2}(P^{c}(v))- \widetilde{F}^{2}(P^{c}(v)).\]
Since \(v\) can be decomposed uniquely as \(v=v_{\parallel}+v_{\perp}\) with \(v_{\parallel}\in\mathcal{D}\) and \(v_{\perp}\in\mathcal{D}^{\perp}\), we have \(P^{c}(v)=v_{\perp}\), and the right-hand side of the above equation depends only on \(v_{\perp}\). Since the choice of \(\widetilde{F}\) on \(\mathcal{D}^{\perp}\) is arbitrary, we can choose \(\widetilde{F}\) and \(\widetilde{F}^{\prime}\) to be equal except on a single vector \(v_{\perp}\), in which case \(\widetilde{F}^{\prime 2}(P^{c}(v))-\widetilde{F}^{2}(P^{c}(v))\) will be nonzero only for that vector. Therefore, we have \(\hat{F}^{\prime 2}(v)-\hat{F}^{2}(v)\neq 0\) only for that vector, and hence \(\hat{F}=\hat{F}^{\prime}\).
Therefore, we have shown that the resulting Finsler metric \(\hat{F}\) on \(TM\) is unique and independent of the choice of \(\widetilde{F}\).
Let us turn to define the normal and abnormal extremals:
The projection \(x(t)\) to \(M\) of a solution \((x(t),p(t))\) of the sub-Hamiltonian equations is called a _normal extremal_. One can see that every sufficiently short subarc of the normal extremal \(x(t)\) is a minimizing sub-Finslerian geodesic. This subarc is the unique minimizer joining its endpoints (see [4, 7]). In the sub-Finslerian manifold, not all sub-Finslerian geodesics are normal (contrary to the Finslerian case). This is because a minimizing sub-Finslerian geodesic might not solve the sub-Hamiltonian equations. Those minimizers that are not normal extremals are called _singular_ or _abnormal extremals_ (see for instance [24]). Even in the sub-Finslerian case, Pontryagin's maximum principle implies that every minimizer of the arc length of the horizontal curves is a normal or abnormal extremal.
### Non-Linear Connections on a sub-Finsler manifold
**Definition 2**.: An \(\mathcal{L}\)-_connection_\(\nabla\) on a sub-Finsler manifold is a generalized non-linear connection over the induced mapping
\[E:T^{*}M\rTo TM,\quad E(\alpha(x))=\mathbf{i}(\mathcal{L}_{\eta}(\mathbf{i}^{*}( \alpha(x))))\in TM, \tag{8}\]
constructed by Legendre transformation \(\mathcal{L}_{\eta}:\mathcal{D}^{*}\subset T^{*}M\rTo TM\) by (8), where \(\mathbf{i}^{*}:T^{*}M\rTo\mathcal{D}^{*}\) is the adjoint mapping of \(\mathbf{i}:\mathcal{D}\to TM\), i.e. for any \(\alpha(x)\in\mathfrak{X}^{*}(M)\), \(\mathbf{i}^{*}(\alpha(x))\) is determined by
\[\langle X(x),\mathbf{i}^{*}(\alpha(x))\rangle=\langle\mathbf{i}(X(x)), \alpha(x)\rangle\text{ for all }X(x)\in\mathfrak{X}(M),\]
such that \(\langle v,\alpha\rangle:=\alpha(v)\) for all \(v\in\mathcal{D},\alpha\in\mathcal{D}^{*}\). For more details about the settings of the \(\mathcal{L}\)- connection \(\nabla\), we refer the reader to [3]. Obviously, \(E\) is a bundle mapping whose image set is precisely the subbundle \(\mathcal{D}\) of \(TM\) and whose kernel is the annihilator \(\mathcal{D}^{0}\) of \(\mathcal{D}\).
Moreover, we recall the _Barthel non-linear connection_\(\overline{\nabla}^{B}\) of the cotangent bundle as follows
\[\overline{\nabla}^{B}_{X}\alpha(Y)=X(\alpha(Y))-\alpha(\nabla^{B}_{X}Y),\]
where the Berwald connection \(\nabla^{B}\) on the tangent bundle was locally given by
\[N^{i}_{j}=\frac{1}{2}\frac{\partial G^{i}}{\partial v^{j}};\quad G^{i}=g^{ij} \left(\frac{\partial^{2}L}{\partial v^{j}\partial x^{k}}v^{k}-\frac{\partial L }{\partial x^{j}}\right). \tag{9}\]
The Barthel nonlinear connection plays the same role in the positivity homogeneous case as the Levi-Civita connection in Riemannian geometry, see [22].
**Definition 3**.: A curve \(\alpha:[0,1]\rTo T^{*}M\) is said to be \(E\)-_admissible_ if \(E(\alpha(t))=\dot{\sigma}(t)\) for all \(t\in[0,1]\), where \(\sigma=\pi_{M}\circ\alpha\) and \(\pi_{M}:T^{*}M\rTo M\) is the natural cotangent bundle projection. An _auto-parallel_ curve is an \(E\)-admissible curve that satisfies \(\nabla_{\alpha}\alpha(t)=0\) for all \(t\in[0,1]\) with respect to the \(\mathcal{L}\)-connection \(\nabla\). The _geodesic_ of \(\nabla\) is just the base curve \(\gamma=\pi_{M}\circ\alpha\) of the auto-parallel curve.
In coordinates, an auto-parallel curve \(\alpha(t)=(x^{i}(t),p_{i}(t))\) satisfies the equations
\[\dot{x}^{i}(t)=g^{ij}(x(t),p(t))p_{j}(t),\qquad\dot{p}_{i}(t)=-\Gamma^{jk}_{i} (x(t),p(t))p_{j}(t)p_{k}(t),\]
where \(g^{ij}\) and \(\Gamma^{ik}_{j}\) are, respectively, the local components of the contravariant tensor field of \(TM\otimes TM\rTo M\) associated with the sub-Hamiltonian structure and the connection coefficients of \(\nabla\). In fact, given a non-linear \(\mathcal{L}\)-connection \(\nabla\) we can always introduce a smooth vector field \(\Gamma^{\nabla}\) on \(\mathcal{D}^{*}\) whose integral curves are auto-parallel curves with respect to \(\nabla\). In canonical coordinates, this vector field is given by
\[\Gamma^{\nabla}(x,p)=g^{ij}(x,p)p_{j}\frac{\partial}{\partial x^{i}}-\Gamma^{ ik}_{j}(x,p)p_{i}p_{k}\frac{\partial}{\partial p_{j}}.\]
In [3], we proved that every geodesic of \(\nabla\) is a normal extremal, and vice versa. More precisely, we have shown that the coordinate expression for the sub-Hamiltonian vector field (this is another form of (4)) \(\vec{H}\) equals:
\[\vec{H}(x,p)=g^{ij}(x,p)p_{j}\frac{\partial}{\partial x^{i}}-\frac{1}{2}\frac{ \partial g^{ij}}{\partial x^{k}}(x,p)p_{i}p_{j}\frac{\partial}{\partial p_{k}}.\]
Comparing the latter formula with the definition of \(\Gamma^{\nabla}\) yields that \(\Gamma^{\nabla}(x,p)=\vec{H}(x,p)\).
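Explicitly, the identification amounts to the coefficient identity
\[\Gamma^{ik}_{j}(x,p)\,p_{i}p_{k}=\frac{1}{2}\,\frac{\partial g^{ik}}{\partial x^{j}}(x,p)\,p_{i}p_{k}\qquad\text{on }\mathcal{D}^{*},\]
obtained by matching the \(\partial/\partial p_{j}\)-components of \(\Gamma^{\nabla}\) and \(\vec{H}\); their \(\partial/\partial x^{i}\)-components already coincide.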
### Variational approach to the length functional and its relation to sub-Hamiltonian equations on sub-Finsler manifolds
We can consider a small variation \(\psi(s,t)\) of the curve \(\sigma(t)\) such that \(\psi(s,0)\) and \(\psi(s,1)\) are fixed at \(x_{0}\) and \(x_{1}\), respectively, and \(\psi(0,t)=\sigma(t)\) for all \(t\in[0,1]\). We can think of \(\psi(s,t)\) as a one-parameter family of curves in the set of all curves joining \(x_{0}\) and \(x_{1}\), and we can consider the variation vector field \(v(t)=\frac{\partial\psi}{\partial s}(0,t)\), which is tangent to the curve \(\sigma(t)\).
Then, we can define the directional derivative of the length functional \(\ell\) along the variation vector field \(v\) as
\[\mathbf{d}\ell(\sigma)\cdot v=\frac{d}{ds}\Big{|}_{s=0}\ell(\psi(s,\cdot)). \tag{10}\]
Note that \(\ell(\psi(s,\cdot))\) is the length of the curve \(\psi(s,\cdot)\), which starts at \(x_{0}\) and ends at \(x_{1}\). Therefore, \(\left.\frac{d}{ds}\right|_{s=0}\ell(\psi(s,\cdot))\) is the rate of change of the length of the curve as we vary it along the vector field \(v\).
By chain rule, we can write
\[\frac{d}{ds}\Big{|}_{s=0}\ell(\psi(s,\cdot))=\int_{0}^{1}\frac{\partial\ell}{ \partial x^{a}}(\sigma(t))\frac{\partial\psi^{a}}{\partial s}(0,t)dt,\]
where \(\frac{\partial\ell}{\partial x^{a}}\) is the gradient of the length functional. Using the fact that \(\psi(s,t)\) is a variation of \(\sigma(t)\) and \(\psi(0,t)=\sigma(t)\), we can express \(\frac{\partial\psi^{a}}{\partial s}(0,t)\) in terms of the variation vector field \(v\) as
\[\frac{\partial\psi^{a}}{\partial s}(0,t)=\frac{\partial}{\partial s}\Big{|}_{s=0}\psi^{a}(s,t)=\frac{\partial}{\partial s}\Big{|}_{s=0}\bigl(\sigma^{a}(t)+s\,v^{a}(t)\bigr)=v^{a}(t).\]
Therefore, we obtain
\[\frac{d}{ds}\Big{|}_{s=0}\ell(\psi(s,\cdot))=\int_{0}^{1}\frac{\partial\ell}{ \partial x^{a}}(\sigma(t))v^{a}(t)dt=\mathbf{d}\ell(\sigma)\cdot v,\]
which gives the desired equation (10).
Let us clarify the correct relationship between the sub-Hamiltonian equations and the length functional.
Given a sub-Finsler manifold \((M,\mathcal{D},F)\), the sub-Hamiltonian equations on \(M\) are given by
\[\frac{d}{dt}\left(\frac{\partial F}{\partial p_{a}}(\sigma(t))\right)=-\frac{ \partial F}{\partial x^{a}}(\sigma(t)), \tag{11}\]
where \(\sigma:[0,1]\to M\) is a piecewise smooth curve in \(M\) with \(\sigma(0)=x_{0}\) and \(\sigma(1)=x_{1}\).
On the other hand, the length functional on \(M\) is defined as
\[\ell(\sigma)=\int_{0}^{1}F(\sigma(t),\dot{\sigma}(t))\,dt,\]
where \(\sigma\) is a piecewise smooth curve in \(M\) with \(\sigma(0)=x_{0}\) and \(\sigma(1)=x_{1}\).
It is a well-known fact that a curve \(\sigma\) is a solution to the sub-Hamiltonian equations if and only if it is a critical point of the length functional \(\ell\). In other words, if \(\sigma\) satisfies the sub-Hamiltonian equations, then \(\mathbf{d}\ell(\sigma)=0\), and conversely, if \(\sigma\) is a critical point of \(\ell\), then it satisfies the sub-Hamiltonian equations.
**Proposition 1**.: _A piecewise smooth curve \(\sigma:[0,1]\to M\) joining \(\sigma(0)=x_{0}\) with \(\sigma(1)=x_{1}\) in \(M\) is a solution to the sub-Hamiltonian equations if and only if it is a critical point of the length functional \(\ell\). That is, if and only if \(\mathbf{d}\ell(\sigma)=0\)._
Proof.: We will begin by proving the first direction:
Assume that \(\sigma\) satisfies the sub-Hamiltonian equations. Then, we have
\[\frac{d}{dt}\left(\frac{\partial F}{\partial p_{a}}(\sigma(t))\right)=-\frac{ \partial F}{\partial x^{a}}(\sigma(t)),\]
for all \(a=1,\ldots,m\) and \(t\in[0,1]\). Note that \(\frac{\partial F}{\partial p_{a}}\) is the conjugate momentum of \(x^{a}\), and we can write the sub-Finsler Lagrangian as
\[L(x,\dot{x})=F(x,\dot{x})\sqrt{\det(g_{ij}(x))},\]
where \(g_{ij}(x)=\frac{\partial^{2}F^{2}}{\partial\dot{x}^{i}\partial\dot{x}^{j}}(x,\dot{x})\) is the sub-Finsler metric tensor. Then, the length functional can be written as
\[\ell(\sigma)=\int_{0}^{1}L(\sigma(t),\dot{\sigma}(t))\,dt.\]
Using the Euler-Lagrange equation for the Lagrangian \(L\), we have
\[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{x}^{a}}(\sigma(t),\dot{\sigma }(t))\right)-\frac{\partial L}{\partial x^{a}}(\sigma(t),\dot{\sigma}(t))=0,\]
for all \(a=1,\ldots,m\) and \(t\in[0,1]\). Since \(L\) depends only on \(\dot{x}\) and not on \(x\) explicitly, we can write this as
\[\frac{d}{dt}\left(\frac{\partial F}{\partial\dot{x}^{a}}(\sigma(t))\sqrt{\det (g_{ij}(\sigma(t)))}\right)-\frac{\partial F}{\partial x^{a}}(\sigma(t))\sqrt {\det(g_{ij}(\sigma(t)))}=0,\]
for all \(a=1,\ldots,m\) and \(t\in[0,1]\). Using the chain rule and the fact that \(\sigma\) is piecewise smooth, we can write this as
\[\frac{d}{dt}\left(\frac{\partial F}{\partial p_{a}}(\sigma(t))\right)-\frac{ \partial F}{\partial x^{a}}(\sigma(t))=0,\]
for all \(a=1,\ldots,m\) and \(t\in[0,1]\). This is exactly the condition for \(\sigma\) to be a critical point of \(\ell\), i.e., \(\mathbf{d}\ell(\sigma)=0\).
Now, let us proceed to prove the second direction:
Assume that \(\sigma\) is a critical point of \(\ell\), i.e., \(\mathbf{d}\ell(\sigma)=0\). Then, for any smooth variation \(\delta\sigma:[0,1]\to TM\) with \(\delta\sigma(0)=\delta\sigma(1)=0\), we have
\[0=\mathbf{d}\ell(\sigma)(\delta\sigma)=\int_{0}^{1}\left\langle\frac{\partial F }{\partial x^{a}}(\sigma(t)),\delta x^{a}(t)\right\rangle dt,\]
where \(\delta x^{a}(t)=\frac{d}{ds}\bigg{|}_{s=0}x^{a}(\sigma(t)+s\delta\sigma(t))\) is the variation of the coordinates \(x^{a}\) induced by \(\delta\sigma\). Note that we have used the fact that \(\delta\sigma(0)=\delta\sigma(1)=0\) to get rid of boundary terms.
Since \(\delta\sigma\) is arbitrary, this implies that
\[\frac{\partial F}{\partial x^{a}}(\sigma(t))=0,\]
for all \(a=1,\ldots,m\) and \(t\in[0,1]\). Using the sub-Hamiltonian equations, we can write this as
\[\frac{d}{dt}\left(\frac{\partial F}{\partial p_{a}}(\sigma(t))\right)=0,\]
for all \(a=1,\ldots,m\) and \(t\in[0,1]\). This implies that \(\frac{\partial F}{\partial p_{a}}\) is constant along \(\sigma\). Since \(\sigma\) is piecewise smooth, we can choose a partition \(0=t_{0}<t_{1}<\cdots<t_{n}=1\) such that \(\sigma\) is smooth on each subinterval \([t_{i-1},t_{i}]\). Let \(c_{a}\) be the constant value of \(\frac{\partial F}{\partial p_{a}}\) on \(\sigma\).
Then, for each \(i=1,\ldots,n\), we have
\[\frac{d}{dt}\left(\frac{\partial F}{\partial p_{a}}(\sigma(t))\right)=0,\]
for all \(a=1,\ldots,m\) and \(t\in[t_{i-1},t_{i}]\). This implies that
\[\frac{\partial F}{\partial p_{a}}(\sigma(t))=c_{a},\]
for all \(a=1,\ldots,m\) and \(t\in[t_{i-1},t_{i}]\). Since \(\frac{\partial F}{\partial p_{a}}\) is the conjugate momentum of \(x^{a}\), this implies that \(\sigma\) satisfies the sub-Hamiltonian equations on each subinterval \([t_{i-1},t_{i}]\).
Therefore, \(\sigma\) satisfies the sub-Hamiltonian equations on the whole interval \([0,1]\), which completes the proof of the second direction.
**Corollary 1**.: _If \(\sigma:[0,1]\to M\) is a piecewise smooth horizontal curve that minimizes the length functional \(\ell\) between two points \(x_{0}\) and \(x_{1}\) on a sub-Finsler manifold \((M,\mathcal{D},F)\), then \(\sigma\) is a smooth geodesic between \(x_{0}\) and \(x_{1}\). Conversely, if \(\sigma\) is a smooth geodesic between \(x_{0}\) and \(x_{1}\), then its length \(\ell(\sigma)\) is locally minimized._
Proof.: The proof of this corollary follows directly from Proposition 1.
Proposition 1 establishes the significance of the results in the context of sub-Hamiltonian equations and curve optimization on a sub-Finsler manifold. The corollary highlights the connection between curve optimization, geodesics, and the length functional on sub-Finsler manifolds. Collectively, these results provide deep insights into the geometric behavior of curves on sub-Finsler manifolds, linking the sub-Hamiltonian equations, length minimization, and the concept of geodesics in this context.
## 4. Nonholonomic sub-Finslerian structure
A sub-Finslerian structure is a generalization of a Finslerian structure, where the metric on the tangent space at each point is only required to be positive-definite on a certain subbundle of tangent vectors.
A _nonholonomic sub-Finslerian structure_ is a triple \((M,\mathcal{D},F)\) where \(M\) is a smooth manifold of dimension \(n\), \(\mathcal{D}\) is a non-integrable distribution of rank \(k<n\) on \(M\), which means that it is not involutive, i.e., not closed under the Lie bracket of its sections. This property leads to the nonholonomicity of the structure and has important implications for the geometry and dynamics of the system. The regularity condition on \(\mathcal{D}\) means that it can be locally generated by smooth vector fields, and the nonholonomic condition means that it cannot be integrated to a smooth submanifold of \(M\). The sub-Finslerian metric \(F\) is a positive-definite inner product on the tangent space of \(\mathcal{D}\) at each point of \(M\). It is often expressed as a norm that satisfies the triangle inequality but does not necessarily have the homogeneity property of a norm. The metric \(F\) induces a distance function on \(M\), known as the sub-Riemannian distance or Carnot-Caratheodory distance, which is a natural generalization of the Riemannian distance. Mechanically, sub-Riemannian manifolds \((M,\mathcal{D},g)\) and their generalization, sub-Finslerian manifolds \((M,\mathcal{D},F)\), are classified as configuration spaces [6].
Nonholonomic sub-Finslerian structures arise in the study of control theory and robotics, where they model the motion of nonholonomic systems, i.e., systems that cannot achieve arbitrary infinitesimal motions despite being subject to arbitrary small forces. The motivation for this generalization comes from the need to provide a framework that captures the complexities of motion in such systems beyond what sub-Riemannian geometry alone can achieve. The study of these structures involves geometric methods, such as the theory of connections and curvature, and leads to interesting mathematical problems. This generalization not only extends the applicability of the theory to a wider class of problems but also paves the way for new insights into the geometric mechanics of nonholonomic systems.
### Nonholonomic Free Particle Motion under a Non-Linear Connection and Projection Operators
We have the projection operator \(P^{*}:T^{*}M\rTo\mathcal{D}^{0}\) that projects any covector \(\alpha\in T^{*}M\) onto its horizontal component with respect to the non-linear connection induced by the distribution \(\mathcal{D}\). More precisely, for any \(Y\in TM\), we define \(P(Y)\) to be the projection of \(Y\) onto \(\mathcal{D}\), and then \(P^{*}(\alpha)(Y)=\alpha(Y-P(Y))\).
Next, we have the complement projection \((P^{*})^{c}:T^{*}M\rTo(\mathcal{D}^{\perp})^{0}\), which projects any covector \(\alpha\in T^{*}M\) onto its vertical component with respect to the non-linear connection induced by the distribution \(\mathcal{D}\). More precisely, for any \(Y\in TM\), we define \(P^{\perp}(Y)\) to be the projection of \(Y\) onto \(\mathcal{D}^{\perp}\), and then \((P^{*})^{c}(\alpha)(Y)=\alpha(P^{\perp}(Y))\).
Now, we consider a nonholonomic free particle moving along a piecewise smooth horizontal curve \(\sigma:[0,1]\rTo M\). Let \(\overline{\nabla}^{B}\) be a Barthel non-linear connection, (see [3, 18]), and the condition \(P^{*}(\overline{\nabla}^{B}_{\dot{\sigma}(t)}\dot{\sigma}(t))=0\) expresses the fact that the velocity vector \(\dot{\sigma}(t)\) is constrained to be horizontal, while the constraint condition \(\dot{\sigma}(t)\in\mathcal{D}^{0}\) expresses the fact that the velocity vector lies in the distribution \(\mathcal{D}\).
Using the fact that \(T^{*}M\) can be decomposed into its horizontal and vertical components with respect to the non-linear connection induced by the distribution \(\mathcal{D}\), we can express any covector \(\alpha\in T^{*}M\) as \(\alpha=P^{*}(\alpha)+(P^{*})^{c}(\alpha)\). Then, the constraint condition \(\dot{\sigma}(t)\in\mathcal{D}^{0}\) can be written as \((P^{*})^{c}(\mathrm{d}\sigma/\mathrm{d}t)=0\).
Using the above decomposition of \(\alpha\), we can rewrite the condition \(P^{*}(\overline{\nabla}^{B}_{\dot{\sigma}(t)}\dot{\sigma}(t))=0\) as \(P^{*}(\overline{\nabla}^{B}_{\dot{\sigma}(t)}\dot{\sigma}(t))=P^{*}(\mathrm{d} \dot{\sigma}/\mathrm{d}t)=\mathrm{d}(P^{*}(\dot{\sigma}))/\mathrm{d}t=0\), where we have used the fact that \(P^{*}(\mathrm{d}\dot{\sigma}/\mathrm{d}t)\) is the derivative of the horizontal component of \(\dot{\sigma}\) with respect to time, and hence is zero if \(\dot{\sigma}\) is constrained to be horizontal.
Therefore, the conditions \(P^{*}(\overline{\nabla}^{B}_{\dot{\sigma}(t)}\dot{\sigma}(t))=0\) and \((P^{*})^{c}(\mathrm{d}\sigma/\mathrm{d}t)=0\) together express the fact that the velocity vector \(\dot{\sigma}(t)\) of the nonholonomic free particle is constrained to be horizontal and lie in the distribution \(\mathcal{D}\), respectively.
Since \(T^{*}M\) is identified with \(TM\) via a Riemannian metric \(g\), we have a natural isomorphism between \((\mathcal{D}^{\perp})^{0}\) and \(\mathcal{D}^{0}\) given by the orthogonal projection. In particular, we have a direct sum decomposition of the cotangent bundle \(T^{*}M\) as
\[T^{*}M\cong(\mathcal{D}^{\perp})^{0}\oplus\mathcal{D}^{0}.\]
Note that any covector \(\alpha\in T^{*}M\) can be uniquely decomposed as \(\alpha=(P^{*})^{c}(\alpha)+P^{*}(\alpha)\).
We can define a new _non-linear connection_\(\overline{\nabla}\) on \((M,\mathcal{D},F)\) according to
\[\overline{\nabla}_{X}(P^{*}(\alpha))(Y)=\overline{\nabla}_{X}^{B}(P^{*}( \alpha))(Y)+\overline{\nabla}_{X}^{B}((P^{*})^{c}(\alpha))(Y) \tag{12}\]
for all \(X\in\mathfrak{X}(M)\) and \(\alpha\in\mathfrak{X}^{*}(M)\). We restrict this connection to \(\mathcal{D}^{0}\) and the equations of motion of the nonholonomic free particle can be re-written as \(\overline{\nabla}_{\dot{\sigma}(t)}\dot{\sigma}(t)=0\), together with the initial velocity taken in \(\mathcal{D}^{0}\) (see [18, 19]).
Given a nonholonomic sub-Finsler structure \((M,\mathcal{D},F)\) one can always construct a normal and \(\mathcal{D}\)-adapted \(\mathcal{L}\)-connection [3, Proposition 16]. Furthermore, we can construct a generalized non-linear connection over the vector bundle \(\mathbf{i}:\mathcal{D}\to TM\); we will set \(X\in\Gamma(\mathcal{D})\) with \(\mathcal{L}_{\eta}(\mathbf{i}\circ X)\in\mathfrak{X}^{*}(M)\). So, attached to \((M,\mathcal{D},F)\) there is a non-linear connection \(\nabla^{H}:\Gamma(\mathcal{D})\times\Gamma(\mathcal{D}^{0})\to\Gamma(\mathcal{D}^{0})\) called the _nonholonomic connection_ over the adjoint mapping \(\mathbf{i}:\mathcal{D}\to TM\) on the natural projection \(\tau:\mathcal{D}^{0}\to M\) given by
\[\nabla_{X}^{H}\alpha(Y)=P^{*}(\overline{\nabla}_{X}^{B}\alpha(Y)).\]
Moreover, one can check that this indeed determines a non-linear connection, namely,
\[\nabla_{X}^{H}\alpha(Y)=\overline{\nabla}_{X}(P^{*}(\alpha))(Y),\]
such that \(\overline{\nabla}\) is the non-linear connection given in (12), for all \(X\in\Gamma(\mathcal{D})\) and \(\alpha\in\Gamma(\mathcal{D}^{0})\). In the nonholonomic setting, the horizontal curves are the curves \(\hat{\sigma}\) in \(\mathcal{D}\) that are extensions of curves in \(M\), i.e. \(\hat{\sigma}(t)=\dot{\sigma}(t)\) for some curve \(\sigma\) in \(M\).
**Definition 4**.: Let \((M,\mathcal{D},F)\) be a nonholonomic sub-Finsler structure. A _nonholonomic bracket_
\[[\cdot,\cdot]:\Gamma(\pi_{\mathcal{D}})\otimes\Gamma(\tau)\rTo\Gamma(\tau)\]
is defined as \([X,\alpha]=(P^{*})^{c}[X,\alpha]\) for all \(X\in\Gamma(\pi_{\mathcal{D}})\), \(\alpha\in\Gamma(\tau)\), and \(\tau:\mathcal{D}^{*}\rTo M\). This Lie bracket satisfies all the regular properties of the Lie bracket with the exception of the Jacobi identity. It may happen that the nonholonomic bracket \([X,\alpha]\notin\Gamma(\tau)\) because \(\mathcal{D}^{*}\) is nonintegrable.
Now, we can formally define the torsion operator
\[T(X,\alpha):=\nabla_{X}^{H}\alpha-\nabla_{\alpha}^{H}X-P^{*}[X,\alpha].\]
In this setting, due to the symmetry of the non-linear connection \(\nabla_{X}^{H}\alpha=\nabla_{\alpha}^{H}X\), the torsion \(T(X,\alpha)=0\) for all \(X\in\Gamma(\mathcal{D})\) and \(\alpha\in\Gamma(\mathcal{D}^{0})\). Moreover, [4, Lemma 5], implies that the non-linear connection \(\nabla^{H}\) preserves the sub-Finsler metric \(F\) on \(\mathcal{D}\), i.e. \(\nabla_{X}^{H}F=0\) for all \(X\in\Gamma(\mathcal{D})\). Therefore, there exists a unique conservative homogeneous nonlinear connection \(\nabla^{H}\) with zero torsion and we can write the equations of motion for the given nonholonomic problem as \(\nabla_{\dot{\sigma}(t)}^{H}\dot{\sigma}(t)=0\), in such a way that \(\sigma\) is a curve in \(M\) tangent to \(\mathcal{D}\).
There is a close relationship between nonholonomic constraints and the controllability of non-linear systems. More precisely, there is a beautiful link between optimal control of nonholonomic systems and sub-Finsler geometry. In the case of
a large class of physically interesting systems, the optimal control problem is reduced to finding geodesics with respect to the sub-Finslerian metric. The geometry of such geodesic flows is exceptionally rich and provides guidance for the design of control laws, for more information see Montgomery [24]. We have seen in Section 2 that for each point \(x\in M\), we have the following distribution of rank \(k\)
\[\mathcal{D}=\operatorname{span}\{X_{1},\ldots,X_{k}\},\qquad X_{i}(x)\in T_{x}M,\]
such that for any control function \(u(t)=(u_{1}(t),\ldots,u_{k}(t))\in\mathbb{R}^{k}\) the control system defined as
\[\dot{x}=\sum_{i=1}^{k}u_{i}X_{i}(x),\qquad x\in M,\]
is called a _nonholonomic control system_ or _driftless control system_ in the quantum mechanical sense, see [6].
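To make the notion of a driftless control system concrete, the following minimal Python sketch (our own illustration; the specific vector fields and the control law are choices made only for this example) integrates a rank-two distribution on \(\mathbb{R}^{3}\) of Heisenberg type, \(X_{1}=\partial_{x}-\frac{y}{2}\partial_{z}\) and \(X_{2}=\partial_{y}+\frac{x}{2}\partial_{z}\), under a prescribed control \(u(t)\):

```python
import numpy as np

def X1(q):
    # X1 = d/dx - (y/2) d/dz evaluated at q = (x, y, z)
    x, y, z = q
    return np.array([1.0, 0.0, -y / 2.0])

def X2(q):
    # X2 = d/dy + (x/2) d/dz evaluated at q = (x, y, z)
    x, y, z = q
    return np.array([0.0, 1.0, x / 2.0])

def controls(t):
    # Example control law u(t) = (u1(t), u2(t)); any choice keeps the velocity in D
    return np.array([np.cos(t), np.sin(t)])

def integrate(q0, T=2 * np.pi, dt=1e-3):
    """Euler integration of the driftless system qdot = u1 X1(q) + u2 X2(q)."""
    q = np.array(q0, dtype=float)
    for k in range(int(T / dt)):
        u = controls(k * dt)
        q = q + dt * (u[0] * X1(q) + u[1] * X2(q))
    return q

if __name__ == "__main__":
    # The chosen controls drive (x, y) around a closed loop, yet produce a net
    # displacement in z, reflecting the non-integrability of the distribution D.
    print(integrate([0.0, 0.0, 0.0]))
```

The nonzero final \(z\)-coordinate is the hallmark of a nonholonomic constraint: directions outside \(\mathcal{D}\) become reachable by composing motions inside \(\mathcal{D}\).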
### Results
The following results deepen our understanding of nonholonomic sub-Finslerian structures and of their relevance in geometric mechanics. In particular, they shed light on the behavior of such structures and provide essential tools for analyzing constrained motion in mathematical and physical settings.
**Remark 3**.: We call the distribution \(\mathcal{D}\) a _geodesically invariant_ if for every geodesic \(\sigma:[0,1]\rTo M\) of \(\overline{\nabla}^{B}\), \(\dot{\sigma}(0)\in\mathcal{D}_{\sigma(0)}\) implies that \(\dot{\sigma}(t)\in\mathcal{D}_{\sigma(t)}\) for every \(t\in(0,1]\).
One can prove the following: if \((M,\mathcal{D},F)\) is a sub-Finslerian manifold such that, for any \(x\in M\), \(\mathcal{D}_{x}\) is a vector subspace of \(T_{x}M\), then the distribution \(\mathcal{D}\) is geodesically invariant if and only if, for any \(x\in M\) and any \(v\in\mathcal{D}_{x}\), the Jacobi field along any geodesic \(\gamma(t)\) with initial conditions \(\gamma(0)=x\) and \(\dot{\gamma}(0)=v\) is also in \(\mathcal{D}\).
In other words, if the Jacobi fields along any geodesic with initial conditions in \(\mathcal{D}\) remain in \(\mathcal{D}\), then \(\mathcal{D}\) is geodesically invariant. Conversely, if \(\mathcal{D}\) is geodesically invariant, then any Jacobi field along a geodesic with initial conditions in \(\mathcal{D}\) must also remain in \(\mathcal{D}\). We leave the proof of this statement for future work.
The following Proposition implies, in particular, that \(\mathcal{D}\) is geodesically invariant with respect to Barthel's non-linear connection \(\overline{\nabla}^{B}\).
**Proposition 2**.:
1. _For each_ \(X\in\mathfrak{X}(M)\) _and_ \(\alpha\in\Gamma(\mathcal{D}^{0})\)_,_ \(\overline{\nabla}_{X}(P^{*}(\alpha))(Y)\in\Gamma(\mathcal{D}^{0})\)_._
2. _For each_ \(X\in\mathfrak{X}(M)\) _and_ \(\alpha\in\Gamma(\mathcal{D}^{0})\)_,_ \(\overline{\nabla}_{X}^{B}((P^{*})^{c}(\alpha))(Y)\in\Gamma(\mathcal{D}^{0})\)_._
3. _For each_ \(X\in\mathfrak{X}(M)\) _and_ \(\alpha\in\Gamma(\mathcal{D}^{\perp})^{0}\)_,_ \(\overline{\nabla}_{X}^{B}((P^{*})^{c}(\alpha))(Y)\in\Gamma(\mathcal{D}^{\perp} )^{0}\)_._
Proof.:
1. Let \(X\in\mathfrak{X}(M)\) and \(\alpha\in\Gamma(\mathcal{D}^{0})\). Then, by the definition of the pullback connection, given in (12), and the Leibniz rule, we have \[\overline{\nabla}_{X}(P^{*}(\alpha))(Y) =X(P^{*}(\alpha)(Y))-P^{*}(\alpha)(\overline{\nabla}_{X}(Y))\] \[=X(\alpha(P(Y)))-\alpha(\overline{\nabla}_{X}(Y))\] \[=\alpha(X(P(Y)))-\alpha(\overline{\nabla}_{X}(Y))\] \[=\alpha(P(\mathcal{L}_{X}(Y)))-\alpha(\overline{\nabla}_{X}(Y))\] \[=P(\alpha(\mathcal{L}_{X}(Y)))-\alpha(\overline{\nabla}_{X}(Y))\] \[=P(\mathcal{L}_{X}(\alpha(Y)))-\alpha(\overline{\nabla}_{X}(Y))\] \[=P(\mathcal{L}_{X}(P^{*}(\alpha)(Y)))-\alpha(\overline{\nabla}_{X }(Y))\] \[=P(\overline{\nabla}_{X}(P^{*}(\alpha))(Y))-\alpha(\overline{ \nabla}_{X}(Y)).\] Since \(P(\overline{\nabla}_{X}(P^{*}(\alpha))(Y))\) and \(\alpha(\overline{\nabla}_{X}(Y))\) both lie in \(\Gamma(\mathcal{D}^{0})\), it follows that \(\overline{\nabla}_{X}(P^{*}(\alpha))(Y)\) also lies in \(\Gamma(\mathcal{D}^{0})\).
2. Using the definition of the connection \(\overline{\nabla}^{B}\), we have: \[\overline{\nabla}^{B}_{X}((P^{*})^{c}(\alpha))(Y) =X((P^{*})^{c}(\alpha)(Y))-(P^{*})^{c}(\alpha)(\nabla^{B}_{X}Y)\] \[\quad+(P^{\perp})^{c}(\alpha)(\nabla^{B}_{X}Y).\] Now, let us analyze each term on the right-hand side individually: First, consider \(X((P^{*})^{c}(\alpha)(Y))\). Since \((P^{*})^{c}(\alpha)(Y)\) is a section of \(\mathcal{D}^{0}\) and \(X\) is a vector field on \(M\), \(X((P^{*})^{c}(\alpha)(Y))\) is a section of \(\mathcal{D}^{0}\). Next, we have \(-(P^{*})^{c}(\alpha)(\nabla^{B}_{X}Y)\). Here, \((P^{*})^{c}(\alpha)\) is a bundle map from \(\mathcal{E}\) to \(\mathcal{D}^{0}\), so \((P^{*})^{c}(\alpha)(\nabla^{B}_{X}Y)\) is a section of \(\mathcal{D}^{0}\). The negative sign in front ensures that the result remains in \(\mathcal{D}^{0}\). Finally, we consider \((P^{\perp})^{c}(\alpha)(\nabla^{B}_{X}Y)\). Since \((P^{\perp})^{c}(\alpha)\) is a bundle map from \(\mathcal{E}\) to the orthogonal complement of \(\mathcal{D}^{0}\), \((P^{\perp})^{c}(\alpha)(\nabla^{B}_{X}Y)\) is a section of \(\Gamma(\mathcal{D}^{\perp})\). However, we need it to be a section of \(\Gamma(\mathcal{D}^{0})\). To ensure that \((P^{\perp})^{c}(\alpha)(\nabla^{B}_{X}Y)\) lies in \(\Gamma(\mathcal{D})^{\prime}\), we can use the projection operator \(P\) to project it back onto \(\mathcal{D}^{0}\). This projection ensures that the final result remains within \(\Gamma(\mathcal{D}^{0})\). Combining these results, we see that \(\overline{\nabla}^{B}_{X}((P^{*})^{c}(\alpha))(Y)\) is a section of \(\Gamma(\mathcal{D}^{0})\), as desired.
3. Using the definition of the connection \(\overline{\nabla}^{B}\), we have \[\overline{\nabla}^{B}_{X}((P^{*})^{c}(\alpha))(Y) =X[(P^{*})^{c}(\alpha)(Y)]-(P^{*})^{c}(\alpha)(\overline{\nabla}_ {X}Y)+(P^{*})^{c}(\overline{\nabla}^{B}_{X}\alpha)(Y)\] \[=X[(P^{*})^{c}(\alpha)(Y)]-(P^{*})^{c}(\alpha)(\overline{\nabla}_ {X}Y)+(P^{*})^{c}((\overline{\nabla}_{X}\alpha)^{\top})(Y)\] \[=X[(P^{*})^{c}(\alpha)(Y)]-(P^{*})^{c}(\alpha)(\nabla^{B}_{X}Y)+( P^{*})^{c}((\overline{\nabla}_{X}\alpha)^{\top})(Y)\] where in the last step we used the fact that \[(P^{*})^{c}(\alpha)(\nabla^{B}_{X}Y)=-(P^{*})^{c}(\alpha)(\overline{\nabla}_ {X}Y),\] which follows from the definition of the codifferential operator and the fact that \((P^{*})^{c}=-(P^{*})^{c}\). Now we need to show that the three terms on the right-hand side of this expression lie in \(\Gamma(\mathcal{D}^{\perp})^{0}\). We will do this term by term. First, note
that \((P^{*})^{c}(\alpha)(Y)\in\Gamma(\mathcal{D}^{\perp})^{0}\) since \((P^{*})^{c}(\alpha)\) maps \(\Gamma(\mathcal{D}^{\perp})\) to itself and \(Y\in\Gamma(\mathcal{D}^{\perp})^{0}\).
Next, we need to show that \((P^{*})^{c}(\alpha)(\nabla^{B}_{X}Y)\in\Gamma(\mathcal{D}^{\perp})^{0}\). Note that
\[(P^{*})^{c}(\alpha)(\nabla^{B}_{X}Y)=-(P^{*})^{c}(\alpha)(\overline{\nabla}_{ X}Y),\]
so it suffices to show that \((P^{*})^{c}(\alpha)(\overline{\nabla}_{X}Y)\in\Gamma(\mathcal{D}^{\perp})^{0}\). To see this, note that \(\overline{\nabla}_{X}Y\in\Gamma(\mathcal{D}^{\perp})^{0}\) since \(X\) and \(Y\) are both sections of \(\mathcal{D}^{\perp}\), and that \((P^{*})^{c}(\alpha)\) maps \(\Gamma(\mathcal{D}^{\perp})^{0}\) to itself.
Finally, we need to show that \((P^{*})^{c}((\overline{\nabla}_{X}\alpha)^{\top})(Y)\in\Gamma(\mathcal{D}^{ \perp})^{0}\). To see this, note that \((\overline{\nabla}_{X}\alpha)^{\top}\) is a tensor of type \((1,1)\) that maps vectors tangent to \(M\) to vectors tangent to \(M\), so \((P^{*})^{c}((\overline{\nabla}_{X}\alpha)^{\top})(Y)\) is a section of \(\mathcal{D}^{\perp}\). Moreover, \((P^{*})^{c}((\overline{\nabla}_{X}\alpha)^{\top})\) maps \(\Gamma(\mathcal{D}^{\perp})^{0}\) to itself since \((\overline{\nabla}_{X}\alpha)^{\top}\) maps \(\Gamma(TM)\) to itself and \((P^{*})^{c}\) maps \(\Gamma(\mathcal{D}^{\perp})\) to itself.
Therefore, we have shown that \(\overline{\nabla}_{X}\alpha\in\Gamma(\mathcal{D}^{\perp})^{0}\), which implies that \(\alpha\) is a harmonic one-form with respect to the induced metric on \(\partial M\).
To summarize, we showed that if \(\alpha\) is a closed one-form on \(M\) such that \(\alpha|_{\partial M}=0\), then \(\alpha\) is a harmonic one-form with respect to the induced metric on \(\partial M\).
In the following, we present our results on nonholonomic sub-Finslerian structures. To begin, we give coordinate-independent conditions for the motion of a free mechanical system subjected to linear nonholonomic constraints to be a normal extremal with respect to the connected sub-Finslerian manifold, and vice versa. Then, we address the problem of characterizing the normal and abnormal extremals that satisfy both the nonholonomic and the Vakonomic equations for a free particle subjected to certain kinematic constraints.
Let \((M,\mathcal{D},F)\) be a nonholonomic sub-Finslerian structure and \(\sigma:[0,1]\rTo M\) be a piecewise smooth horizontal curve tangent to \(\mathcal{D}\). Then \(\sigma\) is said to be a normal extremal if there exists an \(E\)-admissible curve \(\alpha\) with base curve \(\sigma\) that is auto-parallel with respect to a normal \(\mathcal{L}\)-connection (Definition 3). The curve \(\sigma\) is said to be an abnormal extremal if there exists \(\gamma\in\Gamma(\mathcal{D}^{0})\) along \(\sigma\) such that \(\nabla_{\alpha}\gamma(t)=0\) for all \(t\in[0,1]\), with \(\alpha\) an \(E\)-admissible curve with base curve \(\sigma\).
**Remark 4**.: Cortes et al. [15] compared the solutions of the nonholonomic mechanical problem with the solutions of the Vakonomic dynamical problem for a general Lagrangian system. The Vakonomic dynamical problem associated with a free particle with linear nonholonomic constraints consists of finding normal extremals with respect to the sub-Finslerian structure \((M,\mathcal{D},F)\). It is an interesting comparison because the equations of motion for the mechanical problem are derived by means of d'Alembert's principle, while the normal extremals are derived from a variational principle. Our next results provide an alternative, coordinate-free approach to the results of Cortes for the free-particle case in the sub-Finslerian setting.
**Definition 5**.: Let \((M,\mathcal{D},F)\) be a nonholonomic sub-Finslerian structure, one can establish new tensorial operators according to the following:
\[T^{B}:\Gamma(\mathcal{D})\otimes\Gamma(\mathcal{D}^{*})\rTo\Gamma(\mathcal{D}^{0}),\quad(X,\alpha)\mapsto P^{*}(\overline{\nabla}^{B}_{X}\alpha);\] \[T:\Gamma(\mathcal{D})\otimes\Gamma(\mathcal{D}^{0})\rTo\Gamma\big{(}(\mathcal{D}^{\perp})^{0}\big{)},\quad(X,\gamma)\mapsto(P^{*})^{c}(\delta_{X}\gamma);\]
such that
\[\delta:\Gamma(\mathcal{D})\times\Gamma(\mathcal{D}^{0})\rTo\mathfrak{X}^{*}(M),\quad(X,\gamma)\mapsto\delta_{X}\gamma=i_{X}d\gamma.\]
In addition, these tensorial operators have the following properties:
1. \(T^{B}\) and \(T\) are \(\mathcal{F}(M)\)-bilinear in their independent variables;
2. The behavior of \(T^{B}\) and \(T\) can be identified pointwise;
3. \(T^{B}_{x}(X,\alpha)\) and \(T_{x}(X,\gamma)\) have a clear and unequivocal meaning for all \(X\in\mathcal{D},\alpha\in\mathcal{D}^{*}\) and \(\gamma\in\mathcal{D}^{0}\).
In the following, we show the relation between the operator \(T\) and the curvature of the distribution \(\mathcal{D}\) using the following condition:
Suppose \(X\in\mathcal{D},\alpha\in\mathcal{D}^{*}\), then one has
\[\langle T(X,\gamma),\alpha\rangle=\langle\delta_{X}\gamma,\alpha\rangle=- \langle\gamma,[X,\alpha]\rangle,\]
for any \(\gamma\in\Gamma(\mathcal{D}^{0})\). Therefore, \(T\) is trivial if and only if \(\mathcal{D}\) is involutive.
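As a hedged worked example (ours, using only the identity displayed above together with the Cartan formula \(d\gamma(X,Y)=X(\gamma(Y))-Y(\gamma(X))-\gamma([X,Y])\)), consider again the Heisenberg-type distribution used in the control sketch above, \(\mathcal{D}=\operatorname{span}\{X_{1},X_{2}\}\) on \(\mathbb{R}^{3}\) with \(X_{1}=\partial_{x}-\frac{y}{2}\partial_{z}\), \(X_{2}=\partial_{y}+\frac{x}{2}\partial_{z}\), and the annihilating one-form \(\gamma=dz+\frac{y}{2}dx-\frac{x}{2}dy\in\Gamma(\mathcal{D}^{0})\). A direct computation gives

\[d\gamma=-dx\wedge dy,\qquad \delta_{X_{1}}\gamma=i_{X_{1}}d\gamma=-dy,\qquad d\gamma(X_{1},X_{2})=-\gamma([X_{1},X_{2}])=-\gamma(\partial_{z})=-1\neq 0,\]

so \(\delta_{X}\gamma\) does not vanish and \(\mathcal{D}\) is not involutive (indeed \([X_{1},X_{2}]=\partial_{z}\notin\mathcal{D}\)); accordingly, by the criterion just stated, the operator \(T\) cannot be trivial for this distribution.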
**Definition 6**.: Let \(\nabla^{T}\) denote the non-linear connection over \(i:\mathcal{D}\rTo TM\) on \(\mathcal{D}^{0}\) by the following formula
\[\nabla^{T}_{X}\gamma=P^{*}(\delta_{X}\gamma),\]
such that \(X\in\Gamma(\mathcal{D})\) and \(\gamma\in\Gamma(\mathcal{D}^{0})\).
**Proposition 3**.: _Let \((M,\mathcal{D},F)\) be a nonholonomic sub-Finslerian structure, assume that \(\sigma:[0,1]\rTo M\) is a horizontal curve on \(\mathcal{D}\) and let \(\nabla\) be a \(\mathcal{D}\)-adapted \(\mathcal{L}\)-connection. Then, the following properties are satisfied:_
1. _If_ \(p_{0}\in\mathcal{D}^{*}_{\sigma(0)}\) _is a given initial point, then_ \(p(t)=\tilde{p}(t)\) _for each_ \(t\in[0,1]\) _if and only if_ \(T^{B}(\dot{\sigma}(t),\tilde{p}(t))=0\)_, such that_ \(p(t)\) _and_ \(\tilde{p}(t)\) _are parallel transported curves along_ \(\sigma\) _w.r.t._ \(\overline{\nabla}^{B}\) _and_ \(\nabla^{H}\)_, respectively._
2. _If_ \(\gamma_{0}\in\mathcal{D}^{0}_{\sigma(0)}\) _is a given initial point, then_ \(\gamma(t)=\tilde{\gamma}(t)\) _for each_ \(t\in[0,1]\) _if and only if_ \(T(\dot{\sigma}(t),\tilde{\gamma}(t))=0\)_, such that_ \(\gamma(t)\) _and_ \(\tilde{\gamma}(t)\) _are parallel transported curves along_ \(\sigma\) _w.r.t._ \(\nabla\) _and_ \(\nabla^{T}\)_, respectively._
Proof.: It is sufficient to prove the first case; the second one follows by a similar argument.
As a consequence of the definition of the tensorial operator \(T^{B}\), for any section \(S(t)\) of \(\mathcal{D}^{*}\) along \(\sigma\), the next expression is true
\[\nabla^{H}_{\dot{\sigma}(t)}S(t)=\overline{\nabla}^{B}_{\dot{\sigma}(t)}S(t)- T^{B}(\dot{\sigma}(t),S(t)).\]
Now, suppose that \(S(t)=\tilde{p}(t)=p(t)\), then we get,
\[T^{B}(\dot{\sigma}(t),S(t))=0.\]
Conversely, it is well known that, regarding any connection, the parallel transported curves are uniquely determined by their initial conditions.
It is clear that the second property of the above Proposition yields necessary and sufficient conditions for the existence of abnormal extremals. In other words, \(\sigma\) is an abnormal extremal if and only if there exists a parallel transported section \(\tilde{\gamma}\) of \(\mathcal{D}^{0}\) along \(\sigma\) with respect to \(\nabla^{T}\) such that \(T(\dot{\sigma}(t),\tilde{\gamma}(t))=0\). Now, by the following Lemma and Theorem, one can derive the necessary and sufficient condition for normal extremals to be motions of a free nonholonomic mechanical system and vice versa.
**Lemma 1**.: _Let \((M,\mathcal{D},F)\) be nonholonomic sub-Finslerian structures, and \(\nabla\) be a normal non-linear \(\mathcal{L}\)-connection. Then for any \(\alpha\in\mathfrak{X}^{*}(M)\) we have that \(\nabla_{\alpha}\alpha=0\) if and only if_
\[\nabla^{H}_{E(\alpha)}\alpha(P)= -T(E(\alpha),(P^{*})^{c}(\alpha));\] \[\nabla^{T}_{E(\alpha)}P^{*}(\alpha)= -T^{B}(E(\alpha),\alpha(P)).\]
Proof.: We proved in [3], that \(\nabla_{\alpha}\alpha=0\) if and only if \(\nabla_{\alpha}\alpha=\overline{\nabla}^{B}_{E(\alpha)}(P^{*})^{c}(\alpha)+ \delta_{E(\alpha)}P^{*}(\alpha)=0.\) Moreover, \(P^{*}(\alpha)=\alpha(P)\) and the Barthel non-linear connection preserves the metric, i.e. \(\nabla^{B}\circ\mathcal{L}_{\eta}=\mathcal{L}_{\eta}\circ\overline{\nabla}^{B}\), therefore
\[\overline{\nabla}^{B}_{E(\alpha)}P^{*}(\alpha)= \nabla^{H}_{E(\alpha)}P^{*}(\alpha)+T^{B}(E(\alpha),P^{*}(\alpha )),\] \[\delta_{E(\alpha)}P^{*}(\alpha)= \nabla^{T}_{E(\alpha)}P^{*}(\alpha)+T(E(\alpha),(P^{*})^{c}( \alpha)).\]
Since \(T^{*}M\) can be written as the direct sum of \((\mathcal{D}^{\perp})^{0}\) and \(\mathcal{D}^{0}\), the equivalence follows.
**Theorem 1**.: _If \(\sigma:[0,1]\to M\) is a solution of a free nonholonomic system given by nonholonomic sub-Finslerian structures, then it is also a solution of the corresponding Vakonomic problem, and vice versa, if and only if there exists \(\gamma\in\Gamma(\mathcal{D}^{0})\) along \(\sigma\) such that_
\[\nabla^{T}_{\dot{\sigma}}\gamma(t)=-T^{B}(\dot{\sigma}(t),\mathcal{L}_{L}( \dot{\sigma}(t))), \tag{13}\]
_further, for all \(t\), \(\gamma(t)\in\left(\mathcal{D}_{\sigma(t)}+[\dot{\sigma}(t),\mathcal{D}_{\sigma (t)}]\right)^{0}\)._
Proof.: \(\nabla_{\alpha}\alpha(t)=0\) is the condition for any \(E\)-admissible curve \(\alpha(t)=\mathcal{L}_{L}(\dot{\sigma}(t))+\gamma(t)\) to be parallel transported with respect to a normal \(\mathcal{L}\)-connection. In other words,
\[\nabla^{H}_{\dot{\sigma}}\mathcal{L}_{L}(\dot{\sigma}(t))=-T(\dot{\sigma}(t),\gamma(t))\]
and
\[\nabla^{T}_{\dot{\sigma}}\gamma(t)=-T^{B}(\dot{\sigma}(t),\mathcal{L}_{L}( \dot{\sigma}(t))).\]
Therefore, \(\nabla^{H}_{\dot{\sigma}}\mathcal{L}_{L}(\dot{\sigma}(t))=0\) if and only if \(T(\dot{\sigma}(t),\gamma(t))=0\), where \(\gamma(t)\) is a solution of (13). Since Remark 3 and Proposition 2 guarantee that \(\mathcal{D}\) is geodesically invariant, for any \(\gamma(t)\) in \(\left(\mathcal{D}_{\sigma(t)}+[\dot{\sigma}(t),\mathcal{D}_{\sigma(t)}]\right)^{0}\), equation (13) ensures that a solution exists for all \(t\in[0,1]\), and not only for \(\gamma(0)\) in \(\left(\mathcal{D}_{\sigma(0)}+[\dot{\sigma}(0),\mathcal{D}_{\sigma(0)}]\right)^{0}\).
## 5. Examples from Robotics
Typically, nonholonomic systems occur when velocity restrictions are applied, such as the constraint that bodies move on a surface without slipping. Bicycles, cars, unicycles, and anything with rolling wheels are all examples of nonholonomic sub-Finslerian structures.
We will discuss the simplest wheeled mobile robot: a single upright rolling wheel, or unicycle, known as a kinematic penny rolling on a plane. Assume this wheel has radius \(1\) and does not allow sideways sliding. Its configuration consists of the heading angle \(\phi\), the wheel's contact position \((x_{1},x_{2})\), and the rolling angle \(\psi\) (see Figure 1). Consequently, the configuration space is four-dimensional, i.e., \(M=\mathbb{R}^{2}\times S^{1}\times S^{1}\). There are two control functions driving the wheel [14, 21]:
(I) \(u_{1}\) [rolling speed], the forward-backward rolling angular velocity;
(II) \(u_{2}\) [turning speed], the speed of turning the heading direction \(\phi\).
With these controls, the rate of change of the coordinates can be expressed as follows:
\[\dot{M}=\begin{bmatrix}\dot{\phi}\\ \dot{x}_{1}\\ \dot{x}_{2}\\ \dot{\psi}\end{bmatrix}=\begin{bmatrix}0&1\\ \cos\phi&0\\ \sin\phi&0\\ 1&0\end{bmatrix}\begin{bmatrix}u_{1}\\ u_{2}\end{bmatrix}=X(M)u. \tag{14}\]
As we generally do not worry about the wheel's rolling angle, we could drop the fourth row from the above equation to get a simpler control system
\[\dot{M}=\begin{bmatrix}\dot{\phi}\\ \dot{x}_{1}\\ \dot{x}_{2}\end{bmatrix}=\begin{bmatrix}0&1\\ \cos\phi&0\\ \sin\phi&0\end{bmatrix}\begin{bmatrix}u_{1}\\ u_{2}\end{bmatrix}=X(M)u, \tag{15}\]
which can be written as the following equation:
\[X(M)u=X_{1}(M)u_{1}+X_{2}(M)u_{2},\]
such that \(u_{1},u_{2}\) are called the controls and \(X_{1}(M),X_{2}(M)\) are called vector fields. Moreover, each vector field assigns a velocity to every point of the configuration space, so these vector fields are sometimes called velocity vector fields. Hence the velocity vector of any solution curve must lie in the distribution \(\mathcal{D}\) spanned by the following vector fields:
\[X_{1}(M) =\cos\phi\frac{\partial}{\partial x_{1}}+\sin\phi\frac{\partial} {\partial x_{2}}+\frac{\partial}{\partial\psi}\] \[X_{2}(M) =\frac{\partial}{\partial\phi}.\]
In a natural way, a sub-Riemannian metric on \(\mathcal{D}\) is gained by asserting the vector fields \(X_{1}(M),X_{2}(M)\) to be orthonormal vectors,
\[\langle u_{1}X_{1}(M)+u_{2}X_{2}(M),u_{1}X_{1}(M)+u_{2}X_{2}(M)\rangle=u_{1}^{ 2}+u_{2}^{2}.\]
The integral of this quadratic form measures the work done in turning the heading angle \(\phi\) at the rate \(\dot{\phi}\) and propelling the wheel ahead at the rate \(\dot{\psi}\). The sub-Riemannian structure can be adjusted to reflect the notion that curvature is costly: namely, it takes more effort to steer the wheel in a tight circle with little forward or backward movement than to steer it in a wide arc. Therefore, the
Figure 1. A kinematic penny rolling on a plane
curvature of the projection \(\sigma\), given by \(\kappa=\dot{\phi}/\dot{\psi}\), leads us to consider sub-Finsler metrics of the form
\[F=f(\kappa)\sqrt{d\psi^{2}+d\phi^{2}},\]
such that \(f\) grows but remains bounded as \(\mid\kappa\mid\) increases. After checking the sub-Finslerian property, one obtains the nonholonomic model of the rolling wheel, often known as a unicycle, from the equation \(\dot{M}=X(M)u\), which is the kinematic model of the unicycle.
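As a quick numerical companion to the kinematic model (15) (a sketch of our own; the control law, step size and horizon are arbitrary illustrative choices), one can integrate the unicycle equations directly and accumulate the sub-Riemannian cost \(\int(u_{1}^{2}+u_{2}^{2})\,dt\) along the resulting trajectory:

```python
import numpy as np

def unicycle_step(state, u, dt):
    """One Euler step of the kinematic unicycle of Eq. (15): state = (phi, x1, x2)."""
    phi, x1, x2 = state
    u1, u2 = u                      # u1: rolling speed, u2: turning speed
    return np.array([phi + dt * u2,
                     x1 + dt * np.cos(phi) * u1,
                     x2 + dt * np.sin(phi) * u1])

def simulate(control_sequence, state0=(0.0, 0.0, 0.0), dt=1e-3):
    """Integrate the unicycle and accumulate the cost int (u1^2 + u2^2) dt."""
    state = np.array(state0, dtype=float)
    cost = 0.0
    for u in control_sequence:
        state = unicycle_step(state, u, dt)
        cost += (u[0] ** 2 + u[1] ** 2) * dt
    return state, cost

if __name__ == "__main__":
    T, dt = 5.0, 1e-3
    times = np.arange(0.0, T, dt)
    # Constant rolling speed with a slowly varying turning speed (a wide arc).
    controls = [(1.0, 0.5 * np.sin(t)) for t in times]
    final_state, cost = simulate(controls, dt=dt)
    print("final (phi, x1, x2):", final_state, "  cost:", cost)
```

Replacing the integrand \(u_{1}^{2}+u_{2}^{2}\) by a curvature-weighted expression of the type \(f(\kappa)\) discussed above would turn the same loop into an evaluation of a sub-Finsler cost instead.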
## 6. The sub-Laplacian associated with nonholonomic sub-Finslerian structures
_The sub-Laplacian_ is a differential operator that arises naturally in the study of nonholonomic sub-Finslerian structures. These are geometric structures that generalize Riemannian manifolds, allowing for non-integrable distributions of tangent spaces.
On a sub-Finslerian manifold \(M\), there is a distinguished distribution of tangent spaces \(\mathcal{D}\), which corresponds to the directions that are accessible by moving along curves with bounded sub-Finsler length. The sub-Finsler metric \(F\) on \(M\) measures the sub-Finsler length of curves with respect to this distribution.
The sub-Laplacian is defined as a second-order differential operator that acts on functions on \(M\) and is defined in terms of the metric \(F\) and the distribution \(\mathcal{D}\). It is given by
\[\Delta_{F}=\operatorname{div}_{\mathcal{D}}(\operatorname{grad}_{F}),\]
where \(\operatorname{grad}_{F}\) is the gradient vector field associated with \(F\) which is the unique vector field satisfying \(\operatorname{d}\!F(\operatorname{grad}_{F},X)=X(F)\) for all vector fields \(X\) on \(M\), and \(\operatorname{div}_{\mathcal{D}}\) is the divergence operator with respect to the distribution \(\mathcal{D}\), which is defined as the trace of the tangential part of the connection on \(\mathcal{D}\).
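As an orienting example (standard in the sub-Riemannian literature, quoted here rather than derived from the definitions above), consider the Heisenberg group \(\mathbb{R}^{3}\) with the horizontal frame already used in the examples above, \(X_{1}=\partial_{x}-\frac{y}{2}\partial_{z}\) and \(X_{2}=\partial_{y}+\frac{x}{2}\partial_{z}\), declared orthonormal, together with the Lebesgue volume. The associated sub-Laplacian acting on a function \(h\) takes the familiar "sum of squares" form

\[\Delta h=X_{1}^{2}h+X_{2}^{2}h=\Big{(}\partial_{x}-\tfrac{y}{2}\partial_{z}\Big{)}^{2}h+\Big{(}\partial_{y}+\tfrac{x}{2}\partial_{z}\Big{)}^{2}h,\]

an operator that is not elliptic (its principal symbol vanishes on the annihilator of the horizontal distribution) but is hypoelliptic, since \(X_{1}\), \(X_{2}\) and \([X_{1},X_{2}]=\partial_{z}\) span the tangent space at every point.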
Our goal in this section is to show that the sub-Laplacian measures the curvature of the sub-Finslerian structure. It captures the interplay between the sub-Finsler metric \(F\) and the distribution \(\mathcal{D}\), and plays a crucial role in many geometric and analytic problems on nonholonomic sub-Finslerian manifolds.
For example, the heat kernel associated with the sub-Laplacian provides a way to study the long-term behavior of solutions to the heat equation on sub-Finslerian manifolds. The Hodge theory on sub-Finslerian manifolds is also intimately related to the sub-Laplacian, and involves the study of differential forms that are harmonic with respect to the sub-Laplacian.
**Remark 5**.: To see that the sub-Laplacian measures the curvature of the sub-Finslerian structure, let us first recall some basic facts about Riemannian manifolds, see [16]. On a Riemannian manifold \((M,g)\), the Laplace-Beltrami operator is defined as
\[\Delta_{g}=\operatorname{div}(\operatorname{grad}_{g}),\]
where \(\operatorname{grad}_{g}\) is the gradient vector field associated with the Riemannian metric \(g\), and \(\operatorname{div}\) is the divergence operator. It is a well-known fact that the Laplace-Beltrami operator measures the curvature of the Riemannian structure in the sense that it is zero if and only if the Riemannian manifold is flat.
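For concreteness (a standard coordinate expression quoted here, not derived), in local coordinates the Laplace-Beltrami operator acting on a function \(h\) reads

\[\Delta_{g}h=\frac{1}{\sqrt{|g|}}\,\partial_{i}\Big{(}\sqrt{|g|}\,g^{ij}\,\partial_{j}h\Big{)},\]

which reduces to the ordinary Laplacian \(\sum_{i}\partial_{i}^{2}h\) in Euclidean coordinates.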
The sub-Finslerian case is more complicated due to the presence of the distribution \(\mathcal{D}\) that is not integrable in general. However, the sub-Laplacian \(\Delta_{F}\) can still
be understood as a curvature operator. To see this, we need to introduce the notion of a horizontal vector field.
A vector field \(X\) on \(M\) is called horizontal if it is tangent to the distribution \(\mathcal{D}\). Equivalently, \(X\) is horizontal if it is locally of the form \(X=\sum_{i=1}^{k}h_{i}X_{i}\), where \(h_{i}\) are smooth functions and \(X_{1},\ldots,X_{k}\) are smooth vector fields that form a basis for \(\mathcal{D}\).
Given a horizontal vector field \(X\), we can define its sub-Finsler length \(|X|_{F}\) as the infimum of the lengths of horizontal curves that are tangent to \(X\) at each point. Equivalently, \(|X|_{F}\) is the supremum of the scalar products \(g(X,Y)\) over all horizontal vector fields \(Y\) with \(|Y|_{F}\leq 1\).
With these definitions in place, we can now show that the sub-Laplacian measures the curvature of the sub-Finslerian structure. More precisely, we have the following result:
**Theorem 2**.: _The sub-Laplacian \(\Delta_{F}\) is zero if and only if the sub-Finslerian manifold \((M,F,\mathcal{D})\) is locally isometric to a Riemannian manifold._
Proof.: First, suppose that \((M,F,\mathcal{D})\) is locally isometric to a Riemannian manifold \((M,g)\). Then we can choose a local frame of orthonormal horizontal vector fields \(X_{1},\ldots,X_{k}\) with respect to the Riemannian metric \(g\). In this frame, we have
\[\mathrm{grad}_{F}h=\sum_{i=1}^{k}g(\mathrm{grad}_{h},X_{i})X_{i}\]
for any function \(h\) on \(M\), and hence
\[\Delta_{F}h=\sum_{i=1}^{k}\mathrm{div}_{\mathcal{D}}(g(\mathrm{grad}_{h},X_{i} )X_{i}).\]
Using the fact that the \(X_{i}\) form a basis for \(\mathcal{D}\), we can rewrite this as
\[\Delta_{F}h=\mathrm{div}(\mathrm{grad}_{h})=\Delta_{g}h,\]
where \(\Delta_{g}\) is the Laplace-Beltrami operator associated with the Riemannian metric \(g\). Since \(\Delta_{g}\) is zero if and only if \((M,g)\) is flat, it follows that \(\Delta_{F}\) is zero if and only if \((M,F,\mathcal{D})\) is locally isometric to a Riemannian manifold, which implies that the sub-Finslerian structure is also flat.
Conversely, suppose that \(\Delta_{F}\) is zero. Let \(X_{1},\ldots,X_{k}\) be a local frame of horizontal vector fields such that \(F(X_{i})=1\) for all \(i\), and let \(\omega_{ij}=g(X_{i},X_{j})\) be the Riemannian metric induced by \(F\) on \(\mathcal{D}\). Using the definition of the sub-Laplacian and the fact that \(\Delta_{F}\) is zero, we have
\[0=\Delta_{F}F=\mathrm{div}_{\mathcal{D}}(\mathrm{grad}_{F}F)=\sum_{i=1}^{k}\sum_{j=1}^{k}\frac{\partial^{2}F}{\partial x_{i}\partial x_{j}}\omega_{ij},\]
where \(x_{1},\ldots,x_{k}\) are local coordinates on \(M\) that are adapted to \(\mathcal{D}\) (i.e., \(X_{1},\ldots,X_{k}\) form a basis for the tangent space at each point). This implies that the Hessian of \(F\) with respect to the Riemannian metric \(\omega\) is zero, so \(F\) is locally affine with respect to \(\omega\). In other words, \((M,F,\mathcal{D})\) is locally isometric to a Riemannian manifold.
**Remark 6**.: In the above Theorem 2, we have shown that the sub-Laplacian \(\Delta_{F}\) measures the curvature of the sub-Finslerian structure. If \(\Delta_{F}\) is zero, then the sub-Finslerian manifold is locally isometric to a Riemannian manifold, and hence the
sub-Finslerian structure is flat. If \(\Delta_{F}\) is nonzero, then the sub-Finslerian manifold is not locally isometric to a Riemannian manifold, and the sub-Finslerian structure is curved. This means that the shortest paths between two points on the manifold are not necessarily straight lines, and the geometry of the manifold is more complex than that of a Riemannian manifold.
|
2305.19704 | Tutorial: projector approach to master equations for open quantum
systems | Most quantum theorists are familiar with different ways of describing the
effective quantum dynamics of a system coupled to external degrees of freedom,
such as the Born-Markov master equation or the adiabatic elimination.
Understanding the deep connection between these -- sometimes apparently
unrelated -- methods can be a powerful tool, allowing us to derive effective
dynamics in unconventional systems or regimes. This tutorial aims at providing
quantum theorists across multiple fields (e.g., quantum and atom optics,
optomechanics, or hybrid quantum systems) with a self-contained practical
toolbox to derive effective quantum dynamics, applicable to systems ranging
from N-level emitters to mechanical resonators. First, we summarize the
projector approach to open quantum systems and the derivation of the
fundamental Nakajima-Zwanzig equation. Then, we show how three common effective
equations, namely the Brownian master equation, the Born-Markov master
equation, and the adiabatic elimination used in atom and molecular optics, can
be derived from different perturbative expansions of the Nakajima-Zwanzig
equation. We also solve in detail four specific examples using this formalism,
namely a harmonic oscillator subject to displacement noise, the effective
equations of a mechanical resonator cooled by an optical cavity, the Purcell
effect for a qubit coupled to an optical cavity, and the adiabatic elimination
in a Lambda system. | C. Gonzalez-Ballestero | 2023-05-31T10:00:22Z | http://arxiv.org/abs/2305.19704v4 | # Tutorial: projector approach to open quantum systems
###### Abstract
Most quantum theorists are familiar with different ways of describing the effective quantum dynamics of a system coupled to external degrees of freedom, such as the Born-Markov master equation or the adiabatic elimination. Understanding the deep connection between these apparently unrelated methods can be a powerful tool, allowing us to derive effective dynamics in unconventional systems or regimes. This tutorial aims at providing quantum theorists across multiple fields (e.g. quantum and atom optics, optomechanics, or hybrid quantum systems) with a self-contained practical toolbox to derive effective quantum dynamics, applicable to systems ranging from \(N-\)level emitters to mechanical resonators. First, we summarize the projector approach to open quantum systems and the derivation of the fundamental Nakajima-Zwanzig equation. Then, we show how three common effective equations, namely the Born-Markov Master Equation, the adiabatic elimination used in atom physics, and a different adiabatic elimination used in sideband cooling, can be derived from different perturbative expansions of the Nakajima-Zwanzig equation. We also solve in detail two specific examples using this formalism, namely the adiabatic elimination in a Lambda system and the effective equations of a mechanical resonator cooled by an optical cavity.
## I Introduction
A core tool in theoretical quantum physics is the ability to derive, starting from the dynamical equation of a large system, a consistent quantum dynamical equation only for a subsystem of interest. Many textbooks only focus on specific physical systems and often use mathematical shortcuts to reach the desired result, thus failing to provide a general picture of the open quantum system problem. As a consequence, it is often assumed that different physical systems or parameter regimes require different and fully unrelated methods to derive reduced dynamics. For instance, "adiabatic elimination" can be used to trace out highly detuned levels of a lossless \(N-\)level system, whereas a "Born-Markov master equation" can be used to describe open systems coupled to a rapidly decaying bath. However, as known in open quantum systems theory [1; 2], all these methods are just particular cases of a single general equation describing the quantum dynamics of subsystems. This equation is typically derived using a beautiful mathematical formulation in terms of projection superoperators. Getting familiar with this formulation can be a powerful asset. Not only does it allow us to derive reduced dynamics in cases beyond the standard systems and parameter regimes, but also helps clarify unfortunate nomenclature conflicts across different fields [3].
This tutorial aims at providing a rigorous yet practical guide for quantum theorists on how to derive reduced dynamics in common physical scenarios and regimes. It summarizes well-known results in open quantum systems theory, and assumes the reader is familiar with open quantum systems and the derivation of basic master equations. We emphasize that, despite the apparent mathematical complexity, this tutorial aims only at providing a practical toolbox. It is thus not intended as a comprehensive review, nor as a complete bibliographical resource, nor as the most general analysis of the vast field of open quantum systems.
This tutorial is organized as follows. First, we summarise the derivation of the fundamental equation governing any open system, namely the Nakajima-Zwanzig equation, in Sec. II. Then, in Secs. III, IV, and V, we show how to derive the three most common reduced equations in applied quantum theory (Born-Markov master equation, atomic physics version of adiabatic elimination, and sideband-cooling version of adiabatic elimination, respectively) from different perturbative expansions of the Nakajima-Zwanzig equation. Sections IV and V also contain the detailed derivation of well-known examples, specifically the adiabatic elimination in a Lambda system and optomechanical sideband cooling, respectively.
## II Statement of the problem and the Nakajima-Zwanzig equation.
We consider a quantum system with density matrix \(\rho\), whose evolution is governed by a possibly time-dependent Liouvillian \(\mathcal{L}(t)\),
\[\dot{\rho}=\mathcal{L}(t)\rho. \tag{1}\]
Our goal, as in any open quantum systems scenario, is to determine the effective dynamics of only a part of the whole system. Typically we call the part of interest "system" \(S\) and the remaining part "bath" \(B\). The effective dynamics of the system under the action of the bath depends heavily on the case under consideration but, at the fundamental level, can be formulated in the same way for all cases. In this section we present this general formulation, which is the starting point to derive, in the following sections, the most common forms of effective dynamics found in quantum science.
More specifically, we aim at solving the following problem (Fig. 1). Let us define a projector \(\mathcal{P}=\mathcal{P}^{2}\) in the space of density matrices, and its complementary projector \(\mathcal{Q}=\mathcal{Q}^{2}\):
\[\mathcal{Q}\equiv\mathfrak{I}-\mathcal{P}. \tag{2}\]
Here, \(\mathfrak{I}\) represents the identity superoperator. Note that, throughout this tutorial, we will use normal font for operators acting on the space of kets/wavefunctions (e.g., \(\hat{H},\mathbb{1}...\)) and cursive font for the superoperators acting on the space of density matrices (e.g., \(\mathcal{P}\), \(\mathcal{Q}\), \(\mathfrak{I}...\)). These two projectors are orthogonal, i.e.,
\[\mathcal{P}\mathcal{Q}=\mathcal{Q}\mathcal{P}=0, \tag{3}\]
and define two contributions to the density matrix,
\[\rho=\mathcal{P}\rho+\mathcal{Q}\rho\equiv v+w. \tag{4}\]
Our goal is to derive a dynamical equation for one of the contributions, namely \(v\), that depends only on \(v\) itself. As we will see below, when applying this to physical problems we will identify \(v\) with the density matrix projected onto the subspace containing the degrees of freedom of the system \(S\). We remark that the projector \(\mathcal{P}\) must be chosen differently for each situation depending on the relations between the system and bath energy scales.
To derive an equation of evolution for \(v\), we first introduce the identity superoperator \(\mathcal{P}+\mathcal{Q}\) both to the right and to the left of the Liouvillian in Eq. (1). This allows us to split this equation into two orthogonal coupled equations,
\[\dot{v}=\mathcal{P}\mathcal{L}v+\mathcal{P}\mathcal{L}w, \tag{5}\]
\[\dot{w}=\mathcal{Q}\mathcal{L}v+\mathcal{Q}\mathcal{L}w. \tag{6}\]
Here we have assumed the projectors are time-independent so that \(\mathcal{P}\dot{\rho}=\dot{v}\). Our second step is to formally solve Eq. (6) and introduce it in Eq. (5). We do so by defining a propagator \(\mathcal{G}(t)\) fulfilling
\[\dot{\mathcal{G}}(t)=\mathcal{Q}\mathcal{L}(t)\mathcal{G}(t)\quad;\quad \mathcal{G}(0)=\mathfrak{I}. \tag{7}\]
From the identity \(\mathcal{G}(t)\mathcal{G}^{-1}(t)=\mathfrak{I}\) we can also derive the properties of its inverse,
\[\dot{\mathcal{G}}^{-1}(t)=-\mathcal{G}^{-1}(t)\mathcal{Q}\mathcal{L}(t)\quad; \quad\mathcal{G}^{-1}(0)=\mathfrak{I}. \tag{8}\]
Using these definitions one can cast Eq. (6) in the form
\[\frac{d}{dt}(\mathcal{G}^{-1}(t)w(t))=\mathcal{G}^{-1}(t)\mathcal{Q}\mathcal{ L}(t)v(t), \tag{9}\]
and formally solve it as
\[w(t)=w(0)+\mathcal{G}(t)\int_{0}^{t}dt^{\prime}\mathcal{G}^{-1}(t^{\prime}) \mathcal{Q}\mathcal{L}(t^{\prime})v(t^{\prime}). \tag{10}\]
Hereafter we will assume
\[w(0)=0, \tag{11}\]
which, for the type of projectors \(\mathcal{P}\) we consider in this tutorial, is equivalent to assuming no initial correlations between system and bath [1]. Analyzing the implications of initial system-bath correlations is beyond the scope of this tutorial, but we remark that this is a relevant and complex topic in the field of open quantum systems (see e.g. [4] and references therein). Introducing Eq. (11) in Eq. (5) we obtain the desired equation,
\[\dot{v}(t)=\mathcal{P}\mathcal{L}(t)v(t)+\\ +\mathcal{P}\mathcal{L}(t)\mathcal{G}(t)\int_{0}^{t}dt^{\prime} \mathcal{G}^{-1}(t^{\prime})\mathcal{Q}\mathcal{L}(t^{\prime})v(t^{\prime}). \tag{12}\]
We can write a more explicit expression by explicitly solving for the propagator \(\mathcal{G}(t)\). Formally integrating Eq. (7) yields
\[\mathcal{G}(t)=\mathfrak{I}+\int_{0}^{t}dt^{\prime}\mathcal{Q}\mathcal{L}(t^{ \prime})\mathcal{G}(t^{\prime}). \tag{13}\]
By repeatedly reintroducing this equation onto itself we obtain a solution in terms of an infinite series,
Figure 1: In the projector approach, the time evolution of a system density matrix (blue curve) is projected by a projector \(\mathcal{P}\) onto a chosen subspace. The dynamical equation within this subspace describes the evolution of these degrees of freedom – the “system” \(S\) (orange curve) – influenced by the remaining ones – the “bath” \(B\). This general equation can be perturbatively expanded to recover various conventional methods to describe reduced dynamics, e.g. the Born-Markov equation for a system coupled to a continuum (left box), the adiabatic elimination of levels in an \(N-\)level system (lower box), or the cooling of a mechanical resonator coupled to an optical cavity (right box).
\[\mathcal{G}(t) =\left(\mathfrak{I}+\int_{0}^{t}dt^{\prime}\mathcal{QL}(t^{\prime}) +\int_{0}^{t}dt^{\prime}\int_{0}^{t^{\prime}}dt^{\prime\prime}\mathcal{QL}(t^{ \prime})\mathcal{QL}(t^{\prime\prime})+...\right)= \tag{14}\] \[=\left(\mathfrak{I}+\int_{0}^{t}dt^{\prime}\mathcal{T}_{+}\left[ \mathcal{QL}(t^{\prime})\right]+\frac{1}{2!}\int_{0}^{t}dt^{\prime}\int_{0}^{ t}dt^{\prime\prime}\mathcal{T}_{+}\left[\mathcal{QL}(t^{\prime})\mathcal{QL}(t^{ \prime\prime})\right]+...\right)=\mathcal{T}_{+}\left[\exp\int_{0}^{t}dt^{ \prime}\mathcal{QL}(t^{\prime})\right],\]
where \(\mathcal{T}_{+}\) is the usual time-ordering superoperator (for a detailed definition and discussion in the context of quantum time-evolution see e.g. Ref. [5], Chapter 3). Similarly,
\[\mathcal{G}^{-1}(t)=\mathcal{T}_{-}\left[\exp\left(-\int_{0}^{t}dt^{\prime} \mathcal{QL}(t^{\prime})\right)\right] \tag{15}\]
with \(\mathcal{T}_{-}\) the time anti-ordering superoperator. Introducing the above two expressions into Eq. (12) we find the explicit closed equation for the density matrix of interest, \(v\),
\[\boxed{\dot{v}(t)=\mathcal{PL}(t)v(t)+\mathcal{PL}(t)\int_{0}^{t}d\tau \mathcal{T}_{+}\left[\exp\int_{0}^{t}dt^{\prime}\mathcal{QL}(t^{\prime}) \mathcal{Q}\right]\mathcal{T}_{-}\left[\exp\left(-\int_{0}^{\tau}dt^{\prime} \mathcal{QL}(t^{\prime})\mathcal{Q}\right)\right]\mathcal{QL}(\tau)v(\tau).} \tag{16}\]
Note that we have added trivial \(\mathcal{Q}\) projectors to the exponentials to obtain a symmetric expression. This equation is known as the Nakajima-Zwanzig equation [6; 7], and is completely general aside from the two assumptions, namely \(w(0)=0\) and time-independent projector \(\mathcal{P}\). Note that the Nakajima-Zwanzig equation is trivially fulfilled for the two limiting cases of \(\mathcal{P}=\mathfrak{I}\) (i.e. the subspace of interest is the whole space of density matrices) and \(\mathcal{P}=0\) (i.e. the complementary subspace is is the whole space of density matrices). Last but not least, we emphasize that the solution of the above equation involves the full solution of the whole system, that is, the Nakajima-Zwanzig equation is just a recasting of Eq. (1) and has the same complexity. The true strength of the Nakajima-Zwanzig equation relies on its potential for a perturbative expansion of the Liouvillian in terms of the relevant timescales, as we will see below.
### Usual form of the Liouvillian and simplification of the Nakajima-Zwanzig equation
In the context of open quantum systems, the total Hilbert space \(\mathbb{H}\) is usually split as \(\mathbb{H}=\mathbb{H}_{S}\otimes\mathbb{H}_{B}\) where \(\mathbb{H}_{S}\) and \(\mathbb{H}_{B}\) describe the individual Hilbert spaces for the system and the bath respectively. The Liouvillian can thus also be written as a sum of three terms,
\[\mathcal{L}(t)=\mathcal{L}_{S}(t)+\mathcal{L}_{B}(t)+\mathcal{L}_{\text{Int}}( t), \tag{17}\]
corresponding to the Liouvillians of the system degrees of freedom (whose reduced dynamics we aim at deriving), the bath degrees of freedom, and their interaction respectively. We choose to define \(\mathcal{L}_{S}(t)\) and \(\mathcal{L}_{B}(t)\) as the terms in the Liouvillian that act only on the subspaces of system and bath, respectively. That is, if one writes a general operator \(\{\hat{\mathcal{O}}:\mathbb{H}\rightarrow\mathbb{H}\}\) as \(\hat{\mathcal{O}}=\sum_{\alpha}\hat{\mathcal{S}}_{\alpha}\otimes\hat{B}_{\alpha}\) with \(\hat{\mathcal{S}}_{\alpha}\) and \(\hat{B}_{\alpha}\) arbitrary system and bath operators (i.e., \(\hat{\mathcal{S}}_{\alpha}:\mathbb{H}_{S}\rightarrow\mathbb{H}_{S}\) and \(\hat{B}_{\alpha}:\mathbb{H}_{B}\rightarrow\mathbb{H}_{B}\)) and \(\alpha\) an arbitrary index, then by definition
\[\mathcal{L}_{S}(t)\sum_{\alpha}\hat{\mathcal{S}}_{\alpha}\otimes\hat{B}_{ \alpha}=\sum_{\alpha}\left(\mathcal{L}_{S}(t)\hat{\mathcal{S}}_{\alpha}\right) \otimes\hat{B}_{\alpha}, \tag{18}\]
and
\[\mathcal{L}_{B}(t)\sum_{\alpha}\hat{\mathcal{S}}_{\alpha}\otimes\hat{B}_{ \alpha}=\sum_{\alpha}\hat{\mathcal{S}}_{\alpha}\otimes\left(\mathcal{L}_{B}(t )\hat{B}_{\alpha}\right). \tag{19}\]
Conversely, all the terms acting non-trivially on both \(S\) and \(B\) are by definition contained in \(\mathcal{L}_{\text{Int}}(t)\).
Usually, the projector \(\mathcal{P}\) is chosen such that it projects onto product state density matrices with a common bath state, i.e,
\[\mathcal{P}:\rightarrow\mathcal{P}(*)=\rho_{B}\otimes\text{Tr}_{B}\left[(*) \right], \tag{20}\]
where \((*)\) indicates the argument operator and \(\text{Tr}_{B}\) a partial trace over the bath degrees of freedom. Note that by definition
\[v(t)=\mathcal{P}\rho=\rho_{B}\otimes\rho_{S}(t), \tag{21}\]
where \(\rho_{S}(t)\) is the reduced density matrix of the system. The choice of \(\rho_{B}\) in Eq. (20) is physically motivated by the problem at hand. The most popular choice is a steady state of the bath Liouvillian, i.e.,
\[\mathcal{L}_{B}(t)\rho_{B}=0. \tag{22}\]
This choice will be assumed throughout all this tutorial. However, we remark that other, more complicated systems may require the choice of more involved projectors [8; 9; 10].
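As a minimal numerical sanity check of this projector (a sketch we add for illustration; the two-qubit dimensions and the particular \(\rho_{B}\) are arbitrary choices), one can implement \(\mathcal{P}(*)=\rho_{B}\otimes\mathrm{Tr}_{B}[(*)]\) on a small bipartite space, with the bath as the first tensor factor as in Eq. (21), and verify \(\mathcal{P}^{2}=\mathcal{P}\) and \(\mathcal{P}\mathcal{Q}=0\) on a correlated state:

```python
import numpy as np

dB, dS = 2, 2                       # bath and system dimensions (arbitrary choice)

def random_density_matrix(d, seed):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T             # positive semidefinite
    return rho / np.trace(rho)       # unit trace

def ptrace_bath(rho):
    """Partial trace over the bath for an operator on H_B (x) H_S (bath factor first)."""
    return np.einsum('isit->st', rho.reshape(dB, dS, dB, dS))

def P(rho, rho_B):
    """Projector of Eq. (20): P(rho) = rho_B (x) Tr_B[rho]."""
    return np.kron(rho_B, ptrace_bath(rho))

if __name__ == "__main__":
    rho_B = np.diag([1.0, 0.0]).astype(complex)            # bath reference state
    # A correlated total state, so that Q rho = rho - P rho is nonzero.
    rho = (0.7 * np.kron(random_density_matrix(dB, 1), random_density_matrix(dS, 2))
           + 0.3 * random_density_matrix(dB * dS, 3))

    Prho = P(rho, rho_B)
    print("|| P^2 rho - P rho || =", np.linalg.norm(P(Prho, rho_B) - Prho))   # ~ 0
    print("|| P (rho - P rho) || =", np.linalg.norm(P(rho - Prho, rho_B)))    # ~ 0
```

The same construction scales to larger bath dimensions, and writing \(\mathcal{P}\) as an explicit matrix acting on vectorized density matrices gives direct numerical access to \(\mathcal{Q}=\mathfrak{I}-\mathcal{P}\) as well.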
For the projector defined in Eq. (20), the following general properties can be demonstrated:
\[\mathcal{L}_{S}(t)\mathcal{P}=\mathcal{P}\mathcal{L}_{S}(t), \tag{23}\]
\[\mathcal{L}_{B}(t)\mathcal{P}=0=\mathcal{PL}_{B}(t). \tag{24}\]
The first of these properties follows from Eq. (18). Regarding the second property, the first equality is a consequence of Eq. (22), whereas the second equality can be derived assuming \(\mathcal{L}_{B}\) is a physical (i.e. trace- and positivity-preserving) Liouvillian [11]. In many cases, as we will see in the following sections, it is also possible to demonstrate that \(\mathcal{PL}_{\text{Int}}(t)\mathcal{P}=0\), but since this is not always the case we will not assume this equality.
The above properties allow for a simplification of the Nakajima-Zwanzig equation [1]. After taking the partial trace over the bath we find
\[\dot{\rho}_{S}(t) =\mathcal{L}_{S}(t)\rho_{S}(t)+\text{Tr}_{B}\left[\mathcal{PL}_{ \text{Int}}(t)v(t)\right]+ \tag{25}\] \[+\text{Tr}_{B}\mathcal{L}_{\text{Int}}(t)\int_{0}^{t}d\tau \mathcal{T}_{+}\left[\exp\int_{0}^{t}dt^{\prime}\mathcal{QL}(t^{\prime}) \mathcal{Q}\right]\mathcal{T}_{-}\left[\exp\left(-\int_{0}^{\tau}dt^{\prime} \mathcal{QL}(t^{\prime})\mathcal{Q}\right)\right]\mathcal{QL}_{\text{Int}}( \tau)v(\tau).\]
This equation is still generally involved, but can be expanded perturbatively. In this procedure only the second line of Eq. (25), that is, only the reduced evolution of the reduced density matrix \(\rho_{S}(t)\) induced by the bath, is expanded perturbatively. In the following sections, we will address three particular cases of such expansion.
## III Particular case 1: Born-Markov master equation
An example _par excellence_ of reduced dynamics is the Born-Markov Master Equation, obtained whenever a system is weakly coupled to an environment with rapidly decaying correlations. The weak system-bath coupling is encoded in a small perturbative parameter \(0<\epsilon\ll 1\), such that the Liouvillian Eq. (17) reads
\[\mathcal{L}(t)=\mathcal{L}_{S}(t)+\mathcal{L}_{B}+\epsilon\mathcal{L}_{\text {Int}}(t). \tag{26}\]
Generally we allow both system and bath to be open systems themselves, that is, they are described by a coherent and a dissipative evolution,
\[\mathcal{L}_{S}(t)(*)=-\frac{i}{\hbar}\left[\hat{H}_{S}(t),(*)\right]+ \mathcal{D}_{S}(t)[(*)], \tag{27}\]
\[\mathcal{L}_{B}(*)=-\frac{i}{\hbar}\left[\hat{H}_{B},(*)\right]+\mathcal{D}_{B }[(*)], \tag{28}\]
where \(\mathcal{D}_{S}[(*)]\) and \(\mathcal{D}_{B}[(*)]\) are arbitrary dissipators. For simplicity we assume the bath Liouvillian to be time-independent and the interaction Liouvillian to be purely conservative, i.e.,
\[\mathcal{L}_{\text{Int}}(t)(*)=-\frac{i}{\hbar}\left[\hat{V}(t),(*)\right], \tag{29}\]
although these assumptions are not necessary.
It is usual to transform to the interaction picture with respect to \(\hat{H}_{S}(t)+\hat{H}_{B}\), where the Liouvillians simplify to
\[\mathcal{L}_{j}^{(i)}(*)=\mathcal{D}_{j}^{(i)}(t)[(*)]\ \ \left(j=S,B\right), \tag{30}\]
and the index \((i)\) denotes the interaction picture. Introducing the above Liouvillians in Eq. (25) we find
\[\dot{\rho}_{S}^{(i)}(t)=\mathcal{D}_{S}^{(i)}(t)\rho_{S}^{(i)}(t)+ \epsilon\text{Tr}_{B}\left[\mathcal{L}_{\text{Int}}^{(i)}(t)\left(\rho_{B}^{(i )}\otimes\rho_{S}^{(i)}(t)\right)\right]+\epsilon^{2}\text{Tr}_{B}\mathcal{L} _{\text{Int}}^{(i)}(t)\int_{0}^{t}d\tau \tag{31}\] \[\mathcal{T}_{+}\left[\exp\int_{0}^{t}dt^{\prime}\Big{(}\mathcal{L }_{B}^{(i)}(t^{\prime})+\epsilon\mathcal{QL}_{\text{Int}}^{(i)}(t^{\prime}) \mathcal{Q}\Big{)}\right]\mathcal{T}_{-}\left[\exp\left(-\int_{0}^{\tau}dt^{ \prime}\Big{(}\mathcal{L}_{B}^{(i)}(t^{\prime})+\epsilon\mathcal{QL}_{\text {Int}}^{(i)}(t^{\prime})\mathcal{Q}\Big{)}\right)\right]\mathcal{QL}_{\text{ Int}}^{(i)}(\tau)v^{(i)}(\tau),\]
where we have used the identity \(\mathcal{L}_{B}=\mathcal{QL}_{B}\mathcal{Q}\). Our goal is to obtain a reduced equation to second order in the small coupling parameter \(\epsilon\). To do so, we can expand the time-ordered exponentials as
\[\mathcal{T}_{\eta}\left[\eta\exp\int_{0}^{T}dt^{\prime}\Big{(}\mathcal{L}_{B}^ {(i)}(t^{\prime})+\epsilon\mathcal{QL}_{\text{Int}}^{(i)}(t^{\prime}) \mathcal{Q}\Big{)}\right]\approx e^{\eta\mathcal{L}_{B}^{(i)}T}\left( \mathfrak{I}+\mathcal{O}(\epsilon)\right), \tag{32}\]
for \(\eta=\pm\) and \(T\) an arbitrary time argument. Since the last term in Eq. (31) is already second order in \(\epsilon\), only the identity terms in Eq. (32) are retained, resulting in
\[\dot{\rho}_{S}^{(i)}(t)=\mathcal{D}_{S}^{(i)}(t)\rho_{S}^{(i)}(t)+\epsilon \text{Tr}_{B}\left[\mathcal{L}_{\text{Int}}^{(i)}(t)\left(\rho_{B}^{(i)} \otimes\rho_{S}^{(i)}(t)\right)\right]+\epsilon^{2}\text{Tr}_{B}\mathcal{L} _{\text{Int}}^{(i)}(t)\int_{0}^{t}d\tau e^{\mathcal{L}_{B}^{(i)}(t-\tau)} \mathcal{QL}_{\text{Int}}^{(i)}(\tau)v^{(i)}(\tau)+\mathcal{O}(\epsilon^{3}), \tag{33}\]
where we have used the property \(\mathcal{QL}_{B}=\mathcal{L}_{B}\mathcal{Q}\). This equation is the general form of a Born master equation.
### Reducing to common expression
To reduce Eq. (33) to the more common "textbook" version, we first write the interaction potential in general form as
\[\hat{V}^{(i)}(t)=\sum_{\alpha}\hat{S}^{(i)}_{\alpha}(t)\otimes\hat{B}^{(i)}_{ \alpha}(t), \tag{34}\]
with \(\hat{S}_{\alpha}\) and \(\hat{B}_{\alpha}\) arbitrary system and bath operators and \(\alpha\) an arbitrary index. When the bath operators have nonzero expectation value, i.e., \(\langle\hat{B}_{\alpha}(t)\rangle\equiv\text{Tr}_{B}[\rho_{B}\hat{B}_{\alpha}( t)]\neq 0\), the integral in Eq. (33) might diverge (this occurs, for instance, in the case of non-dissipative baths, \(\mathcal{L}^{(i)}_{B}(t)=0\)). This divergence stems from the fact that the perturbative expansion includes a part of the interaction \(\hat{V}^{(i)}(t)\) which is purely a system operator, namely the driving term \(\text{Tr}_{B}[\rho_{B}\hat{V}(t)]=\sum_{\alpha}\hat{S}_{\alpha}(t)\langle\hat {B}_{\alpha}(t)\rangle\). To avoid this divergence, we redefine the system and interaction Liouvillians in the Schrodinger picture in the following way,
\[\mathcal{L}^{\prime}_{S}(t)(*)=-\frac{i}{\hbar}\left[\hat{H}_{S}(t)+\hat{S}_{ V}(t),(*)\right]+\mathcal{D}_{S}(t)[(*)], \tag{35}\]
\[\mathcal{L}^{\prime}_{\text{Int}}(t)(*)=-\frac{i}{\hbar}\left[\hat{V}(t)-\hat {S}_{V}(t),(*)\right]\equiv-\frac{i}{\hbar}\left[\hat{V}^{\prime}(t),(*) \right], \tag{36}\]
where we have defined the system operator
\[\hat{S}_{V}(t)\equiv\text{Tr}_{B}[\rho_{B}\hat{V}(t)], \tag{37}\]
and the modified interaction potential
\[\hat{V}^{\prime}(t)\equiv\hat{V}(t)-\hat{S}_{V}(t)=\sum_{\alpha}\hat{S}_{ \alpha}\otimes\left[\hat{B}_{\alpha}-\langle\hat{B}_{\alpha}(t)\rangle\right]. \tag{38}\]
Note that, since the contributions added to \(\mathcal{L}_{S}\) and \(\mathcal{L}_{\text{Int}}\) are equal in magnitude and have opposite sign, the global Liouvillian remains unchanged, i.e. \(\mathcal{L}_{S}(t)+\mathcal{L}_{\text{Int}}(t)=\mathcal{L}^{\prime}_{S}(t)+\mathcal{L}^{\prime}_{\text{Int}}(t)\). However, by rearranging the interaction and the system Liouvillian in this way, we can truly perform the perturbative expansion on system-bath couplings while avoiding divergences. Note that after these redefinitions the interaction Liouvillian fulfills \(\mathcal{P}\mathcal{L}^{\prime}_{\text{Int}}(t)\mathcal{P}=0\). Using this property and transforming to the interaction picture with respect to \(\hat{H}_{S}(t)+\hat{S}_{V}(t)+\hat{H}_{B}\) we can cast the master equation Eq. (33) as
\[\hat{\rho}^{(i)}_{S}(t)=\mathcal{D}^{(i)}_{S}(t)\rho^{(i)}_{S}(t) -\frac{1}{\hbar^{2}}\text{Tr}_{B}\int_{0}^{t}d\tau\\ \Big{[}\hat{V}^{{}^{\prime}(i)}(t),e^{\mathcal{L}^{(i)}_{B}\tau }\Big{[}\hat{V}^{{}^{\prime}(i)}(t-\tau),\rho^{(i)}_{B}\otimes\rho^{(i)}_{S}( t-\tau)\Big{]}\Big{]}, \tag{39}\]
where we have neglected terms of order \(\epsilon^{3}\) or higher, reabsorbed the constants \(\epsilon\) into the interaction Liouvillians, and changed integration variable from \(\tau\) to \(t-\tau\). Equation (39) is the usual expression for the Born Master equation for a dissipative system coupled to a dissipative bath.
In many scenarios one further simplifies the above equation by assuming that (i) the whole system+bath ensemble is a closed system, that is, neither the system nor the bath are subject to dissipation, i.e. \(\mathcal{D}_{S}(t)=\mathcal{D}_{B}(t)=0\); (ii) the bath correlation functions decay much faster than the system evolution and than the timescales associated to the interaction Liouvillian, allowing to approximate \(\rho^{(i)}_{S}(t-\tau)\approx\rho^{(i)}_{S}(t)\)[12] and to extend the upper integration limit to infinity (Markov approximation). This results in the simplest version of the Born-Markov master equation used e.g. in quantum optics [1],
\[\hat{\rho}^{(i)}_{S}(t)=\\ =-\frac{1}{\hbar^{2}}\text{Tr}_{B}\int_{0}^{\infty}d\tau\Big{[} \hat{V}^{{}^{\prime}(i)}(t),\Big{[}\hat{V}^{{}^{\prime}(i)}(\tau),\rho^{(i)}_{ B}\otimes\rho^{(i)}_{S}(t)\Big{]}\Big{]}. \tag{40}\]
This shows how to obtain the usual Born-Markov master equation from a perturbative expansion of the Nakajima-Zwanzig equation.
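To close this section with a concrete (and deliberately simple) numerical illustration: for a two-level emitter coupled to the electromagnetic vacuum, carrying out the trace in Eq. (40) yields, up to a Lamb-shift renormalization of the transition frequency, the textbook spontaneous-emission Lindblad equation. The sketch below (our own; the decay rate, drive strength and integration scheme are arbitrary choices, and we set \(\hbar=1\)) integrates that resulting master equation for a resonantly driven qubit:

```python
import numpy as np

# Pauli/ladder operators in the basis {|e>, |g>}
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)      # lowering operator |g><e|

def lindblad_rhs(rho, H, gamma):
    """d(rho)/dt for a single decay channel: -i[H, rho] + gamma D[sm](rho)."""
    comm = -1j * (H @ rho - rho @ H)
    sp = sm.conj().T
    dissipator = gamma * (sm @ rho @ sp - 0.5 * (sp @ sm @ rho + rho @ sp @ sm))
    return comm + dissipator

def rk4_step(rho, H, gamma, dt):
    k1 = lindblad_rhs(rho, H, gamma)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, gamma)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, gamma)
    k4 = lindblad_rhs(rho + dt * k3, H, gamma)
    return rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

if __name__ == "__main__":
    Delta, Omega, gamma = 0.0, 1.0, 0.2     # detuning, Rabi frequency, decay rate
    H = 0.5 * Delta * sz + 0.5 * Omega * sx
    rho = np.array([[0, 0], [0, 1]], dtype=complex)   # start in |g>
    dt = 0.01
    for _ in range(5000):
        rho = rk4_step(rho, H, gamma, dt)
    # Damped Rabi oscillations relax towards the driven steady state.
    print("excited-state population:", rho[0, 0].real)
```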
## IV Particular case 2: adiabatic elimination of "fast baths"
We now focus on a different situation, sometimes referred to as the Brownian Master Equation or as adiabatic elimination in atomic, molecular, and optical physics (AMO) [13; 14; 1]. It is the quantum extension of well-known methods to eliminate fast variables in classical mechanics [15]. Specifically, we consider a bath whose degrees of freedom evolve much faster than any degree of freedom in the system, i.e., we can write
\[\mathcal{L}(t)=\xi^{2}\mathcal{L}_{B}(t)+\xi\mathcal{L}_{\text{Int}}(t)+ \mathcal{L}_{S}(t), \tag{41}\]
in terms of a large expansion parameter \(\xi\gg 1\). The dependences of \(\mathcal{L}_{\text{Int}}(t)\) and \(\mathcal{L}_{S}(t)\) on \(\xi\) do not need to be linear, but should grow slower than \(\xi^{2}\) for the following expansion to be valid. There are two common physical realizations of the above timescale hierarchy. First, a very lossy bath, whose dissipation rate is much larger than any other timescale, and can thus be assumed to always be in its steady state. This situation is analogous to the Born-Markov master equation in Sec. III. Second, a bath whose intrinsic energy scales are far detuned with respect to those of the system and to the system-bath coupling rates. This is a standard scenario in AMO physics [16] and the focus of the example below. In both cases the dynamics can be approximated by its projection on the bath steady state.
For simplicity we will assume \(\mathcal{L}_{B}\), \(\mathcal{L}_{\text{Int}}\), and \(\mathcal{L}_{S}\) to be time-independent, as the generalization to time-dependent Liouvillians is straightforward. Our aim is to approximate the Nakajima-Zwanzig equation Eq. (25)
to leading order in \(\xi\). By expanding the time-ordered exponentials as
\[\mathcal{T}_{\eta}\left[\eta\exp\int_{0}^{T}dt^{\prime}\mathcal{Q} \mathcal{L}(t^{\prime})\mathcal{Q}\right]\approx\\ \approx e^{\xi^{2}\mathcal{L}_{B}T}\left(\mathfrak{I}+\mathcal{O}(1/\xi) \right), \tag{42}\]
for \(\eta=\pm\) and \(T\) an arbitrary time argument, we reduce the Nakajima-Zwanzig equation to
\[\dot{\rho}_{S}(t)=\mathcal{L}_{S}(t)\rho_{S}(t)+\xi\mathrm{Tr}_{B} \left[\mathcal{L}_{\mathrm{Int}}(t)\left(\rho_{B}\otimes\rho_{S}(t)\right)\right] \\ +\xi^{2}\mathrm{Tr}_{B}\mathcal{L}_{\mathrm{Int}}(t)\int_{0}^{t}d \tau e^{\xi^{2}\mathcal{L}_{B}(t-\tau)}\mathcal{Q}\mathcal{L}_{\mathrm{Int}}( \tau)v(\tau)+\mathcal{O}(\xi), \tag{43}\]
which is the desired perturbative expansion. Note that this equation is formally equivalent to the weak-coupling expansion obtained in the previous section, Eq. (33). However, we have arrived at this expression under different assumptions, which will allow us to apply it to systems beyond the usual Born-Markov paradigm (see example below).
Sometimes, the expansion at large \(\xi\) also justifies a Markov approximation in the above equation. To illustrate this, we reabsorb the parameter \(\xi\) into the interaction Liouvillians and change the integration variable from \(\tau\) to \(t-\tau\), obtaining
\[\dot{\rho}_{S}(t)=\mathcal{L}_{S}(t)\rho_{S}(t)+\mathrm{Tr}_{B} \left[\mathcal{L}_{\mathrm{Int}}(t)\left(\rho_{B}\otimes\rho_{S}(t)\right)\right] \\ +\mathrm{Tr}_{B}\mathcal{L}_{\mathrm{Int}}(t)\int_{0}^{t}d\tau e^{ \mathcal{L}_{B}\tau}\mathcal{Q}\mathcal{L}_{\mathrm{Int}}(t-\tau)v(t-\tau). \tag{44}\]
Assuming the steady state of \(\mathcal{L}_{B}\) exists, the real part of its eigenvalues is negative, and hence \(\exp(\mathcal{L}_{B}\tau)\) decays with \(\tau\). Moreover, it decays much faster than any other timescale, as by assumption \(\mathcal{L}_{B}\) is the largest energy scale of the problem (proportional to \(\xi^{2}\)). Since the density matrix \(v\) evolves on a slower timescale (\(\sim\xi\)), it practically remains unchanged during this decay. This justifies our approximation to remove the \(\tau\) dependence in \(v(t-\tau)\) and extend the upper limit of the integral to infinity, obtaining
\[\dot{\rho}_{S}(t)=\mathcal{L}_{S}(t)\rho_{S}(t)+\mathrm{Tr}_{B} \left[\mathcal{L}_{\mathrm{Int}}(t)\left(\rho_{B}\otimes\rho_{S}(t)\right)\right] \\ +\mathrm{Tr}_{B}\mathcal{L}_{\mathrm{Int}}(t)\int_{0}^{\infty}d \tau e^{\mathcal{L}_{B}\tau}\mathcal{Q}\mathcal{L}_{\mathrm{Int}}(t-\tau)v(t), \tag{45}\]
This is the usual Brownian Master Equation found in the literature [13; 14].
### Example: Adiabatic elimination in a Lambda System
For this example we choose a system where the adiabatic elimination is enabled not due to high dissipation but to high detuning. The system we consider is a three-level system in a Lambda configuration, formed by two lower energy states \(|a\rangle\) and \(|b\rangle\) and an excited state \(|e\rangle\) [see Fig. (2)]. We follow the notation of Ref. [17], where this problem has been discussed in depth. We define the frequency of the transitions \(|a\rangle\rightarrow|e\rangle\) and \(|b\rangle\rightarrow|e\rangle\) as \(\omega_{a}\) and \(\omega_{b}\), respectively. A coherent drive is applied to each of these transitions, with respective rates \(\Omega_{a}\) and \(\Omega_{b}\) and respective frequencies \(\omega_{La}\) and \(\omega_{Lb}\). We assume the system experiences no dissipation. In the rotating wave approximation, the Hamiltonian of this system reads
\[\hat{H}/\hbar=\omega_{a}|e\rangle\langle e|+(\omega_{a}-\omega_{b })|b\rangle\langle b|+\\ +\sum_{j=a,b}\left(\frac{\Omega_{j}}{2}e^{-i\omega_{Lj}t}|e \rangle\langle j|+\mathrm{H.c.}\right). \tag{46}\]
Notice we take the level \(|a\rangle\) as the origin of energies. We simplify the Hamiltonian by applying the unitary transformation
\[\hat{U}=\exp\left[it\Big{(}(\delta/2)|a\rangle\langle a|+\right.\\ +\left.(\omega_{a}-\omega_{b}-\delta/2)|b\rangle\langle b|+(\omega_{a}-\Delta)|e\rangle\langle e|\Big{)}\right] \tag{47}\]
where we define the detunings between each coherent drive and the corresponding transition frequencies, \(\delta_{j}\equiv\omega_{j}-\omega_{Lj}\), their difference \(\delta=\delta_{a}-\delta_{b}\), and the average detuning \(\Delta\equiv(\delta_{a}+\delta_{b})/2\). Under this transformation the
Figure 2: Example 1: the two transitions of a three-level system in a Lambda configuration are coherently driven. When the two drivings are far detuned, the excited state remains practically unpopulated, but can mediate transitions between the two ground states \(|a\rangle\) and \(|b\rangle\). In this limit one can obtain, via adiabatic elimination, an effective Hamiltonian for the ground-state manifold in which the effect of the drivings and the excited state is mapped into frequency shifts and an effective coupling term [17].
Hamiltonian becomes time-independent,
\[\hat{H}/\hbar=-\frac{\delta}{2}|a\rangle\langle a|+\frac{\delta}{2}|b \rangle\langle b|+\Delta|e\rangle\langle e|\\ +\sum_{j=a,b}\left(\frac{\Omega_{j}}{2}|e\rangle\langle j|+\text{ H.c.}\right). \tag{48}\]
We now focus on the case where the excited level is far detuned with respect to the drivings, i.e., we assume
\[|\Delta|\gg|\delta|,|\Omega_{j}|. \tag{49}\]
In this limit the level \(|e\rangle\) does not play an active role in the dynamics, always remaining unpopulated. However, it can mediate virtual transitions between the levels \(|a\rangle\) and \(|b\rangle\), resulting in an effective coupling between them. Our goal is thus to adiabatically eliminate the level \(|e\rangle\) and derive a reduced equation of motion in the subspace spanned by the states \(|a\rangle\) and \(|b\rangle\). We thus split the Hamiltonian into system, bath, and interaction parts, \(\hat{H}=\hat{H}_{S}+\hat{H}_{B}+\hat{V}\), where
\[\hat{H}_{S}/\hbar=-\frac{\delta}{2}|a\rangle\langle a|+\frac{\delta}{2}|b \rangle\langle b|, \tag{50}\]
\[\hat{H}_{B}/\hbar=\Delta|e\rangle\langle e|, \tag{51}\]
\[\hat{V}/\hbar=|e\rangle\sum_{j=a,b}\frac{\Omega_{j}}{2}\langle j|+\text{H.c.} \tag{52}\]
The corresponding Liouvillians are time-independent and given by
\[\mathcal{L}_{\{S,B\}}(*)=-\frac{i}{\hbar}\left[\hat{H}_{\{S,B\}},(*)\right], \tag{53}\]
\[\mathcal{L}_{\text{Int}}=-\frac{i}{\hbar}\left[\hat{V},(*)\right]. \tag{54}\]
A necessary preliminary step to use the formalism of this tutorial is to write the interaction Hamiltonian Eq. (52) in the form of a sum of tensor products of system and bath operators, as in Eq. (34). To do so, we _artificially_ enlarge our Hilbert space, i.e., assume it is formed by three subsystems \(A\), \(B\), and \(E\), each with its own vacuum state \(|0_{j}\rangle\), with \(j=a,b,e\), and define operators \(\hat{j}\) and \(\hat{j}^{\dagger}\) such that
\[\hat{j}^{\dagger}|0_{j}\rangle=|j\rangle\ \ ;\ \ \hat{j}|j\rangle=|0_{j}\rangle \tag{55}\]
and
\[\hat{j}|0_{j}\rangle=\hat{j}^{\dagger}|j\rangle=0. \tag{56}\]
These operators are artificial constructs with no well-defined physical meaning (only products of two, four, etc of these operators have physical sense [18]). They are just helpful representations that allow us to compute the reduced dynamics. In terms of these operators the interaction Hamiltonian Eq. (52) can be cast in the desired form as
\[\hat{V}/\hbar=\hat{e}^{\dagger}\otimes\hat{S}+\hat{e}\otimes\hat{S}^{\dagger}, \tag{57}\]
with a global system jump operator
\[\hat{S}\equiv\sum_{j=a,b}\frac{\Omega_{j}}{2}\hat{j}. \tag{58}\]
Using this representation, the partial trace over the bath degrees of freedom, namely the system \(E\), reads
\[\text{Tr}_{B}[(*)]=\langle e|(*)|e\rangle+\langle 0_{e}|(*)|0_{e}\rangle. \tag{59}\]
We define the projector as
\[\mathcal{P}[(*)]\equiv\rho_{B}\otimes\text{Tr}_{B}[(*)]\equiv|0_{e}\rangle \langle 0_{e}|\otimes\text{Tr}_{B}[(*)], \tag{60}\]
The choice of the bath state \(\rho_{B}=|0_{e}\rangle\langle 0_{e}|\) is physically motivated, as it is the only state in the subspace spanned by \(\{|0_{e}\rangle,|e\rangle\}\) fulfilling the following conditions: first, the probability of the whole system being in state \(|e\rangle\) is zero, \(\langle e|\rho_{B}|e\rangle=0\). This is consistent with our assumption that the excited state \(|e\rangle\) is never populated during the dynamics. Second, the state \(\rho_{B}\) is a stationary state of the bath Liouvillian, \(\mathcal{L}_{B}(|0_{e}\rangle\langle 0_{e}|)=0\).
We are now ready to proceed with the adiabatic elimination. Due to the assumption Eq. (49), the Liouvillian of our system obeys the hierarchy of energy scales of the general Liouvillian Eq. (41), allowing us to use the expansion Eq. (44). Note that in this case the Markovian version Eq. (45) cannot be used as the exponential \(\exp(\mathcal{L}_{B}\tau)\) does not decay due to the absence of losses. One can, however, notice that this exponential oscillates very fast (at a frequency \(\Delta\)), and therefore the integral averages out to zero except for small delays \(\tau\). This allows us to approximate \(v(t-\tau)\approx v(t)\) in Eq. (44) and obtain the time-local expression
\[\dot{\rho}_{S}(t)\approx-\frac{i}{\hbar}\left[\hat{H}_{S},\rho_{S}\right]-\frac{1}{\hbar^{2}}\int_{0}^{t}d\tau\\ \text{Tr}_{B}\left[\hat{V},e^{\mathcal{L}_{B}\tau}\left[\hat{V},\rho_{B}\otimes\rho_{S}(t)\right]\right], \tag{61}\]
where we have used the fact that the interaction Liouvillian fulfills \(\mathcal{PL}_{\text{Int}}\mathcal{P}=0\). We can compute the second line by introducing the explicit expression of \(\hat{V}\), Eq. (57), and carefully applying each superoperator sequentially. This requires computing the action of the superoperator \(\exp[\mathcal{L}_{B}\tau]\) on bath density matrices. Since the bath space \(E\) has dimension 2, we only need to compute the action on the four matrices \(|e\rangle\langle e|\), \(|0_{e}\rangle\langle e|\), \(|e\rangle\langle 0_{e}|\), and \(|0_{e}\rangle\langle 0_{e}|\). From the definition of \(\mathcal{L}_{B}\) it is straightforward to prove that
\[\mathcal{L}_{B}|e\rangle\langle e|=\mathcal{L}_{B}|0_{e}\rangle\langle 0_{e}|=0, \tag{62}\]
\[\mathcal{L}_{B}|0_{e}\rangle\langle e|=i\Delta|0_{e}\rangle\langle e|\ \ \ ;\ \ \ \mathcal{L}_{B}|e\rangle\langle 0_{e}|=-i\Delta|e\rangle\langle 0_{e}|, \tag{63}\]
from which the identities
\[e^{\mathcal{L}_{B}\tau}|e\rangle\langle e|=|e\rangle\langle e|, \tag{64}\]
\[e^{\mathcal{L}_{B}\tau}|0_{e}\rangle\langle 0_{e}|=|0_{e}\rangle\langle 0_{e}|, \tag{65}\]
\[e^{\mathcal{L}_{B}\tau}|0_{e}\rangle\langle e|=e^{i\Delta\tau}|0_{e}\rangle \langle e|, \tag{66}\]
\[e^{\mathcal{L}_{B}\tau}|e\rangle\langle 0_{e}|=e^{-i\Delta\tau}|e\rangle \langle 0_{e}|, \tag{67}\]
immediately follow. Using these identities, we obtain the master equation
\[\dot{\rho}_{S}(t)=-\frac{i}{\hbar}\left[\hat{H}_{S},\rho_{S} \right]+\frac{i}{\Delta}\Big{[}-\rho_{S}\hat{S}^{\dagger}\hat{S}\left(1-e^{i \Delta t}\right)\\ +\hat{S}^{\dagger}\hat{S}\rho_{S}\left(1-e^{-i\Delta t}\right)- 2i\sin(\Delta t)\hat{S}\rho_{s}\hat{S}^{\dagger}\Big{]}. \tag{68}\]
In this form, we are ready to dispose of the operator notation and rewrite the equation in terms of the original states \(|a\rangle\) and \(|b\rangle\). The term proportional to \(\hat{S}\rho_{S}\hat{S}^{\dagger}\) can be neglected as it does not act on the subspace spanned by \(|a\rangle\) and \(|b\rangle\) (or, equivalently, \(\langle j|\hat{S}\rho_{S}\hat{S}^{\dagger}|k\rangle=0\) for all \(j,k=a,b\)). Furthermore, the terms oscillating at frequency \(\Delta\) can be discarded under a rotating wave approximation. This approximation is valid provided that the oscillation frequency \(|\Delta|\) is much larger than the oscillation frequencies of \(\hat{S}^{\dagger}\hat{S}\), i.e. \(|\Delta|\gg|\delta|\), and provided that \(|\Delta|\) is much larger than the oscillation amplitude, namely \(|\Delta|\gg\Omega_{a,b}^{2}/|\Delta|\). Both these conditions are true by assumption, see Eq. (49). Finally, by writing explicitly the operator \(\hat{S}^{\dagger}\hat{S}\) in the basis \(\{|a\rangle,|b\rangle\}\), i.e.,
\[\hat{S}^{\dagger}\hat{S}=\sum_{j,k=a,b}\frac{\Omega_{j}\Omega_{k}^{*}}{4}|j \rangle\langle k| \tag{69}\]
we can write the reduced dynamics of the system in the compact form
\[\dot{\rho}_{S}(t)=-\frac{i}{\hbar}\left[\hat{H}_{S}-\sum_{j,k=a,b}\frac{ \Omega_{j}\Omega_{k}^{*}}{4\Delta}|j\rangle\langle k|,\rho_{S}\right]. \tag{70}\]
In other words, the far-detuned level \(|e\rangle\) effectively modifies the Hamiltonian of the states \(|a\rangle\) and \(|b\rangle\) to
\[\hat{H}_{S}^{\prime}/\hbar=\left(-\frac{\delta}{2}-\frac{|\Omega_{a}|^{2}}{4\Delta}\right)|a\rangle\langle a|+\\ +\left(\frac{\delta}{2}-\frac{|\Omega_{b}|^{2}}{4\Delta}\right)|b\rangle\langle b|-\\ -\left[\frac{\Omega_{a}\Omega_{b}^{*}}{4\Delta}|a\rangle\langle b|+\text{H.c.}\right] \tag{71}\]
which is the same result obtained in Ref. [17]. This evidences the power of projection operator techniques beyond describing dissipation induced by a bath.
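Since both Eq. (48) and Eq. (71) are time-independent, this result admits a quick numerical consistency check. The sketch below (an added illustration; the parameter values are arbitrary choices satisfying Eq. (49), not taken from the text) propagates an initial state \(|a\rangle\) under the full three-level Hamiltonian and under the effective two-level Hamiltonian using plain NumPy/SciPy. The populations of \(|a\rangle\) and \(|b\rangle\) agree up to small corrections, and the population of \(|e\rangle\) remains of order \((\Omega_{j}/2\Delta)^{2}\), as expected from the adiabatic elimination.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters (hbar = 1), chosen so that |Delta| >> |delta|, |Omega_j|, cf. Eq. (49).
delta, Delta = 0.02, 50.0
Om_a, Om_b = 1.0, 0.8          # real drive amplitudes for simplicity

# Full three-level Hamiltonian, Eq. (48), in the basis (|a>, |b>, |e>).
H3 = np.array([[-delta/2, 0.0,      Om_a/2],
               [0.0,      delta/2,  Om_b/2],
               [Om_a/2,   Om_b/2,   Delta ]], dtype=complex)

# Effective ground-state Hamiltonian after adiabatic elimination, Eq. (71).
H2 = np.array([[-delta/2 - Om_a**2/(4*Delta), -Om_a*Om_b/(4*Delta)],
               [-Om_a*Om_b/(4*Delta),          delta/2 - Om_b**2/(4*Delta)]],
              dtype=complex)

psi3 = np.array([1, 0, 0], dtype=complex)   # start in |a>
psi2 = np.array([1, 0], dtype=complex)

for t in [0.0, 100.0, 200.0, 300.0]:
    p3 = np.abs(expm(-1j * H3 * t) @ psi3)**2   # populations of (|a>, |b>, |e>)
    p2 = np.abs(expm(-1j * H2 * t) @ psi2)**2   # populations of (|a>, |b>)
    print(f"t={t:6.1f}  full (a,b,e)={np.round(p3, 4)}  effective (a,b)={np.round(p2, 4)}")
```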
## V Particular case 3: Sideband Cooling Master Equation
As a last case of interest, we will derive what is usually referred to as adiabatic elimination, e.g. in sideband cooling in AMO physics [19] and optomechanics [20]. The timescale hierarchy in this case is very closely related to the adiabatic elimination in Sec. IV, where a system is coupled to a bath which evolves into its steady state very fast. However, in this case we focus on the following, more complex Liouvillian,
\[\mathcal{L}(t)=\xi^{2}\mathcal{L}_{B}+\xi\mathcal{L}_{\text{Int}}(\xi^{2}t)+ \mathcal{L}_{S}(\xi^{2}t), \tag{72}\]
with \(\xi\gg 1\) a large expansion parameter. Although the above expression can apply to other scenarios, it is usually better understood assuming it describes a Liouvillian in the interaction picture. In such a scenario, the terms \(\mathcal{L}_{B}\) and \(\mathcal{L}_{S}\) would only contain dissipative terms, and the time dependence of both \(\mathcal{L}_{S}\) and \(\mathcal{L}_{\text{Int}}\) would stem from the transformation to the interaction picture, i.e. it would usually take the form of a sum of exponentials \(e^{i\omega t}\), with \(\omega\) the coherent evolution frequencies of system and bath. Therefore, the following perturbative analysis can apply to situations where _both_ the system and bath coherent energy scales, as well as the bath dissipation rate, are comparable and dominate over the remaining energy scales. For more insight into the physical picture, we refer the reader to the example below.
Our aim is to perturbatively expand the Nakajima-Zwanzig equation Eq. (25) to leading orders in \(\xi\). For simplicity we will assume that \(\mathcal{P}\mathcal{L}_{\text{Int}}\mathcal{P}=0\) for this derivation, although this is not necessary. We start by expanding the time ordered exponential as
\[\mathcal{T}_{+}\left[\exp\int_{0}^{t}dt^{\prime}\mathcal{QL}(t^{\prime}) \mathcal{Q}\right]\approx e^{\xi^{2}\mathcal{L}_{B}t}\left[1+\mathcal{O} \left(\xi^{-1}\right)\right]. \tag{73}\]
Proceeding analogously with the anti-ordered exponential and introducing the result in the Nakajima-Zwanzig equation, we obtain a similar expression to Eq. (43),
\[\dot{\rho}_{S}(t)=\mathcal{L}_{S}(t)\rho_{S}(t)+\xi^{2}\text{Tr}_ {B}\mathcal{L}_{\text{Int}}(\xi^{2}t)\\ \times\int_{0}^{t}d\tau e^{\xi^{2}\mathcal{L}_{B}(t-\tau)}\mathcal{ L}_{\text{Int}}(\xi^{2}\tau)v(\tau)+\mathcal{O}(\xi). \tag{74}\]
Changing integration variable from \(\tau\) to \(\xi^{2}(t-\tau)\), we obtain
\[\dot{\rho}_{S}(t)=\mathcal{L}_{S}(t)\rho_{S}(t)+\text{Tr}_{B} \mathcal{L}_{\text{Int}}(\xi^{2}t)\int_{0}^{\xi^{2}t}d\tau\\ \times e^{\mathcal{L}_{B}\tau}\mathcal{L}_{\text{Int}}(\xi^{2}t- \tau)v(t-\tau/\xi^{2})+\mathcal{O}(\xi). \tag{75}\]
We are now ready to take the limit \(\xi\to\infty\). This results in an effective Markov approximation, as it allows us to drop the dependence of \(v\) on \(\tau\) and to extend the upper integration limit to infinity. Moreover, it is customary in this step to take a Rotating Wave Approximation [20]. As we will show in the example below, this approximation consists in neglecting, in the product of the two interaction Liouvillians \(\sim\mathcal{L}_{\text{Int}}(\xi^{2}t)(...)\mathcal{L}_{\text{Int}}(\xi^{2}t-\tau)\), all the terms that oscillate rapidly in the variable \(t\). In most cases, and in analogy to the elimination of "fast time" dependence in classical equations [15], this makes the equation independent of \(\xi^{2}t\), as one only retains terms in the product \(\sim\mathcal{L}_{\text{Int}}(\xi^{2}t)(...)\mathcal{L}_{\text{Int}}(\xi^{2}t-\tau)\) whose \(t-\)dependencies cancel each other (see an example in the section below). We remark that, precisely because such dependencies can cancel each other, the dependence on \(\tau\) might become relevant and thus it is in general not correct to approximate \(\mathcal{L}_{\text{Int}}(\xi^{2}t-\tau)\approx\mathcal{L}_{\text{Int}}(\xi^{2}t)\) even in the large \(\xi\) limit. After the Rotating Wave Approximation, one finds the adiabatic elimination equation,
\[\dot{\rho}_{S}(t)=\mathcal{L}_{S}(t)\rho_{S}(t)+\text{Tr}_{B}\mathcal{L}_{\text{Int}}(\xi^{2}t)\int_{0}^{\infty}d\tau\\ \times e^{\mathcal{L}_{B}\tau}\mathcal{L}_{\text{Int}}(\xi^{2}t-\tau)v(t)\Big{|}_{\text{RWA}}. \tag{76}\]
### Example: cavity cooling of a mechanical resonator
As a relevant example we consider the optomechanical system depicted in Fig. (3). A mechanical mode (the system) with frequency \(\Omega_{m}\) is coupled via radiation pressure to a cavity (the bath) with frequency \(\omega_{c}\), which is coherently driven by a laser detuned by \(\delta\). Both degrees of freedom experience dissipation through their coupling to thermal reservoirs. Specifically, the mechanical mode experiences absorption and decay at rates \(\gamma_{m}\bar{n}\) and \(\gamma_{m}(\bar{n}+1)\), with \(\gamma_{m}\) the mechanical linewidth and \(\bar{n}\) the average (Bose-Einstein) occupation of the reservoir at frequency \(\Omega_{m}\) and temperature \(T\). Due to its high (e.g. optical) frequency, the cavity experiences only decay at a rate \(\kappa\), as its reservoir has near-zero occupation at such frequency. In a frame rotating at the cavity driving frequency and after linearization of the optomechanical interaction [21], the Liouvillians of this problem take the form
\[\mathcal{L}_{S}[(*)]=-i\left[\Omega_{m}\hat{b}^{\dagger}\hat{b},(*)\right]+ \gamma_{m}\mathcal{D}_{S}[(*)] \tag{77}\]
\[\mathcal{L}_{B}[(*)]=-i\left[\delta\hat{a}^{\dagger}\hat{a},(*)\right]+\kappa \mathcal{D}_{B}[(*)] \tag{78}\]
\[\mathcal{L}_{\text{Int}}[(*)]=-i\left[(g\hat{a}+g^{*}\hat{a}^{\dagger})(\hat{b }+\hat{b}^{\dagger}),(*)\right] \tag{79}\]
with \(g\) the linearized optomechanical coupling, and with mechanical and optical dissipative terms given by
\[\mathcal{D}_{S}[(*)]=\bar{n}\mathfrak{D}_{\hat{b}^{\dagger}}[(*)]+(\bar{n}+1) \mathfrak{D}_{\hat{b}}[(*)], \tag{80}\]
and
\[\mathcal{D}_{B}[(*)]=\mathfrak{D}_{\hat{a}}[(*)], \tag{81}\]
in terms of the Lindblad dissipator
\[\mathfrak{D}_{\hat{A}}[(*)]\equiv\hat{A}(*)\hat{A}^{\dagger}-\frac{1}{2}\{ \hat{A}^{\dagger}\hat{A},(*)\}, \tag{82}\]
and where curly brackets denote the anticommutator. For convenience, we will work in the interaction picture with respect to the free mechanical Hamiltonian \(\hbar\Omega_{m}\hat{b}^{\dagger}\hat{b}\), where the system and interaction Liouvillians reduce to
\[\mathcal{L}_{S}[(*)]=\gamma_{m}\mathcal{D}_{S}[(*)], \tag{83}\]
\[\mathcal{L}_{\text{Int}}[(*),t]=-i\left[(g\hat{a}+\text{H.c.})(\hat{b}e^{-i \Omega_{m}t}+\text{H.c.}),(*)\right]. \tag{84}\]
It is well known in optomechanics [21] that, under certain conditions, the cavity mode can be used to cool the mechanical mode way below the temperature of its surrounding environment, \(T\). One regime where this occurs is the regime \(\kappa,\Omega_{m}\gg\gamma_{m},g\), which will be our regime of interest. Since the cavity decay \(\kappa\) is very fast, the full dynamics must be well approximated by their projection onto the cavity steady state, namely its vacuum state. Hence, the cavity can be adiabatically eliminated. However, the "AMO adiabatic elimination" studied in Sec. IV is not adequate, as another frequency scale of the Liouvillian, namely \(\Omega_{m}\), can - and should, as we will see below - be comparable to or larger than \(\kappa\). We must thus use the perturbative expansion of the present section. Here, the large parameter \(\xi\) represents both the mechanical frequency \(\Omega_{m}\) and the cavity linewidth \(\kappa\), which are the dominant energy scales of the problem.
Figure 3: Under certain conditions, an optical cavity mode can be adiabatically eliminated and act as an effective environment for the mechanical motion, inducing cooling and heating rates \(\Gamma_{c}\) and \(\Gamma_{h}\). By tuning the parameters one can fix \(\Gamma_{c}\gg\Gamma_{h}\), resulting in cavity cooling of the mechanical mode.
We start by defining the projector as usual,
\[\mathcal{P}[(*)]\equiv\rho_{B}\otimes\mathrm{Tr}_{B}[(*)], \tag{85}\]
where \(\rho_{B}=|0\rangle\langle 0|\) is the vacuum state of the cavity mode \(\hat{a}\). It is straightforward to prove that for this projection
\[\mathcal{P}\mathcal{L}_{\mathrm{Int}}\mathcal{P}=0. \tag{86}\]
Our starting equation is the perturbative expansion Eq. (76), which for this problem reads
\[\dot{\rho}_{S}(t)=\gamma_{m}\mathcal{D}_{S}[\rho_{S}]+\mathrm{Tr} _{B}\mathcal{L}_{\mathrm{Int}}(\xi^{2}t)\\ \times\int_{0}^{\infty}d\tau e^{\mathcal{L}_{B}\tau}\mathcal{L}_{ \mathrm{Int}}(\xi^{2}t-\tau)\rho_{B}\otimes\rho_{S}(t)\Big{|}_{\mathrm{RWA}}. \tag{87}\]
where \(\rho_{S}(t)\) is the reduced density matrix of the mechanical mode. Let us develop the above expression to derive the final master equation for the motion.
We first aim at performing the rotating wave approximation. To do that, we cast the Interaction Liouvillian as
\[\mathcal{L}_{\mathrm{Int}}(t)\equiv\sum_{\lambda=\pm}e^{i\lambda\Omega_{m}t}\mathcal{L}_{\lambda} \tag{88}\]
with time-independent superoperators
\[\mathcal{L}_{+}\equiv\hat{b}^{\dagger}\otimes(g\hat{a}+\mathrm{H.c.})\equiv \hat{b}_{+}\otimes\hat{Q}_{a}, \tag{89}\]
\[\mathcal{L}_{-}\equiv\hat{b}\otimes(g\hat{a}+\mathrm{H.c.})\equiv\hat{b}_{-} \otimes\hat{Q}_{a}, \tag{90}\]
where the notation \(\hat{b}_{\pm}\) is used for convenience in the manipulation of the expressions. After introducing the above expressions into the second term of Eq. (87) we obtain four terms: two of them with time dependence \(\propto\exp[\tau(\mathcal{L}_{B}\pm i\Omega_{m})]\), and two of them with time dependence \(\propto\exp[\tau(\mathcal{L}_{B}\pm i\Omega_{m})]\times\exp[\pm i2\Omega_{m}t]\). Since by assumption \(\Omega_{m}\gg\gamma_{m},g\), the latter oscillate very fast as compared to the free evolution of the system density matrix or to the system-bath interaction timescales. Thus, these terms can be discarded under a rotating wave approximation. After this approximation we can write Eq. (87) in compact form as
\[\dot{\rho}_{S}(t)=\gamma_{m}\mathcal{D}_{S}[\rho_{S}]+\sum_{\lambda =\pm}\mathrm{Tr}_{B}\mathcal{L}_{-\lambda}\\ \times\int_{0}^{\infty}d\tau e^{\tau(\mathcal{L}_{B}-i\lambda \Omega_{m})}\mathcal{L}_{\lambda}\rho_{B}\otimes\rho_{S}(t). \tag{91}\]
The second step is to apply all the superoperators inside the integral to the projected density matrix \(\rho_{B}\otimes\rho_{S}(t)\). The following three useful identities are straightforward to demonstrate:
\[\mathcal{L}_{\lambda}\rho_{B}\otimes\rho_{S}(t)=-i(\hat{b}_{\lambda}\rho_{S}( t)g^{*}|1\rangle\langle 0|-\mathrm{H.c.}), \tag{92}\]
\[\mathcal{L}_{B}|1\rangle\langle 0|=\left(-i\delta-\frac{\kappa}{2}\right)|1\rangle\langle 0|, \tag{93}\]
\[\mathcal{L}_{B}|0\rangle\langle 1|=\left(i\delta-\frac{\kappa}{2}\right)|0\rangle\langle 1|, \tag{94}\]
where \(|1\rangle\) is the \(n=1\) Fock state of the cavity mode. Using these identities, Eq. (91) reduces to
\[\dot{\rho}(t)=\gamma_{m}\mathcal{D}_{S}[\rho]\\ -|g|^{2}\sum_{\lambda=\pm}E_{\lambda}(\Omega_{m})\left[\hat{b}_{ -\lambda}\hat{b}_{\lambda}\rho-\hat{b}_{\lambda}\rho\hat{b}_{-\lambda}\right] \\ +|g|^{2}\sum_{\lambda=\pm}E_{\lambda}^{*}(-\Omega_{m})\left[\hat{b }_{-\lambda}\rho\hat{b}_{\lambda}-\rho\hat{b}_{\lambda}\hat{b}_{-\lambda} \right]. \tag{95}\]
where we have defined
\[E_{\lambda}(\omega)\equiv\int_{0}^{\infty}d\tau e^{\tau[-i( \delta+\lambda\omega)-\kappa/2]}=\\ =\frac{1}{i(\delta+\lambda\omega)+\kappa/2}. \tag{96}\]
Note that in the literature the above rates are often expressed in terms of the power spectral densities of the bath, \(S_{\lambda}(\omega)=(2\pi)^{-1}|g|^{2}E_{\lambda}(\omega)\). Finally, by rearranging the terms and using the identity \(E_{\pm}(\omega)=E_{\mp}(-\omega)\), we can cast the above equation into Lindblad form,
\[\dot{\rho}(t)=\gamma_{m}\mathcal{D}_{S}[\rho]-i\left[\Delta_{m} \hat{b}^{\dagger}\hat{b},\rho\right]+\Gamma_{h}\mathfrak{D}_{\hat{b}^{ \dagger}}[\rho]+\Gamma_{c}\mathfrak{D}_{\hat{b}}[\rho]\\ =-i\left[\Delta_{m}\hat{b}^{\dagger}\hat{b},\rho\right]+\left( \gamma_{m}\bar{n}+\Gamma_{h}\right)\mathfrak{D}_{\hat{b}^{\dagger}}[\rho]\\ +\left(\gamma_{m}(1+\bar{n})+\Gamma_{c}\right)\mathfrak{D}_{\hat{b} }[\rho], \tag{97}\]
The adiabatically eliminated cavity thus induces, first, a frequency shift of the mechanical mode given by
\[\Delta_{m}\equiv|g|^{2}\mathrm{Im}\left[E_{+}(\Omega_{m})+E_{-}(\Omega_{m}) \right]. \tag{98}\]
Second, it introduces additional heating and cooling rates given by
\[\Gamma_{h}\equiv 2|g|^{2}\mathrm{Re}[E_{+}(\Omega_{m})]=\frac{|g|^{2}\kappa}{( \delta+\Omega_{m})^{2}+(\kappa/2)^{2}} \tag{99}\]
\[\Gamma_{c}\equiv 2|g|^{2}\mathrm{Re}[E_{-}(\Omega_{m})]=\frac{|g|^{2}\kappa}{( \delta-\Omega_{m})^{2}+(\kappa/2)^{2}}. \tag{100}\]
This completes the adiabatic elimination.
Let us briefly discuss the physical implications of the above expression. The steady-state occupation of the mechanical mode can be easily derived from the master equation Eq. (97),
\[\langle\hat{b}^{\dagger}\hat{b}\rangle_{\mathrm{ss}}=\frac{\gamma_{m}\bar{n}+ \Gamma_{h}}{\gamma_{m}+\Gamma_{c}-\Gamma_{h}}. \tag{101}\]
In the absence of the cavity (\(g=0\)) this occupation reduces to \(\langle\hat{b}^{\dagger}\hat{b}\rangle_{\mathrm{ss}}=\bar{n}\), i.e. the system reaches thermal equilibrium with its reservoir. For efficient cooling and
minimal heating of the mechanical motion, the first condition is that the cavity driving is blue-detuned with respect to the mechanical mode by exactly the mechanical frequency, i.e. \(\delta=\Omega_{m}\). In these conditions one has
\[\Gamma_{c}=\frac{4|g|^{2}}{\kappa} \tag{102}\]
and
\[\Gamma_{h}=\frac{|g|^{2}\kappa}{4\Omega_{m}^{2}+(\kappa/2)^{2}}. \tag{103}\]
The second well-known condition for efficient cooling is to work in the "resolved sideband regime"
\[\kappa\ll\Omega_{m}, \tag{104}\]
where one has \(\Gamma_{h}\approx\Gamma_{c}[\kappa/(4\Omega_{m})]^{2}\ll\Gamma_{c}\). In this limit the occupation reduces to
\[\langle\hat{b}^{\dagger}\hat{b}\rangle_{\rm ss}\approx\frac{\bar{n}}{1+C}, \tag{105}\]
where we have defined the cooperativity
\[C\equiv\frac{\Gamma_{c}}{\gamma_{m}}=\frac{4|g|^{2}}{\kappa\gamma_{m}}. \tag{106}\]
The final condition for efficient optomechanical cooling is to work in the high-cooperativity regime \(C\gg 1\), where one can reach occupations \(\langle\hat{b}^{\dagger}\hat{b}\rangle_{\rm ss}\ll\bar{n}\), and even below unity (ground state cooling).
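To put numbers to these conditions, the short sketch below (an added illustration; the parameter values are assumptions chosen for a sideband-resolved, high-cooperativity operating point, not values from the text) evaluates Eqs. (98)-(100) at the optimal detuning \(\delta=\Omega_{m}\) and compares the exact steady-state occupation of Eq. (101) with the approximation \(\bar{n}/(1+C)\) of Eq. (105).

```python
# Illustrative parameters in units of the mechanical frequency (hbar = 1).
Omega_m = 1.0
kappa   = 0.05 * Omega_m      # resolved sideband: kappa << Omega_m
g       = 0.01 * Omega_m      # linearized optomechanical coupling
gamma_m = 1e-6 * Omega_m      # mechanical linewidth
nbar    = 100.0               # thermal occupation of the mechanical reservoir
delta   = Omega_m             # optimal detuning discussed above

def E(lam, omega):
    """Eq. (96): E_lambda(omega) = 1 / (i*(delta + lambda*omega) + kappa/2)."""
    return 1.0 / (1j * (delta + lam * omega) + kappa / 2)

Gamma_h = 2 * g**2 * E(+1, Omega_m).real                   # heating rate, Eq. (99)
Gamma_c = 2 * g**2 * E(-1, Omega_m).real                   # cooling rate, Eq. (100)
Delta_m = g**2 * (E(+1, Omega_m) + E(-1, Omega_m)).imag    # frequency shift, Eq. (98)

C        = Gamma_c / gamma_m                                          # cooperativity, Eq. (106)
n_exact  = (gamma_m * nbar + Gamma_h) / (gamma_m + Gamma_c - Gamma_h) # Eq. (101)
n_approx = nbar / (1 + C)                                             # Eq. (105)

print(f"Gamma_c = {Gamma_c:.3e}, Gamma_h = {Gamma_h:.3e}, cooperativity C = {C:.0f}")
print(f"mechanical frequency shift Delta_m = {Delta_m:.3e}")
print(f"steady-state occupation: exact = {n_exact:.4f}, approximate nbar/(1+C) = {n_approx:.4f}")
```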
## VI Conclusions
We hope to have convinced the readers of the power of projection methods and perturbative expansions of the Nakajima-Zwanzig equation, not only due to their mathematical beauty and physical depth, but also due to their usefulness. Each physical system is a world in itself, and more often than not we find our description involves more degrees of freedom than we need. The tools of this tutorial enable us to eliminate the extra degrees of freedom in a wide variety of situations, by finding the proper perturbative expansion of the Nakajima-Zwanzig equation. Even more, they allow us to take conventional expansions (e.g. adiabatic elimination) to higher order in the perturbation parameter, uncovering more complex bath-induced dynamics in a systematic way. Although we consider that the contents of this tutorial are more than sufficient for most applied quantum theorists, they are just the tip of the iceberg in the world of open quantum systems. We hope that young scientists will find in this tutorial an open door to such a wonderful world.
## VII Acknowledgments
The author acknowledges Katja Kustura, Andreu Riera-Campeny, Oriol Romero-Isart, Oriol Rubies-Bigorda, and Cosimo Rusconi for their support, their revisions and improvements of the final manuscript, and their encouragement to make this tutorial publicly available.
|
2309.11099 | Borel-de Siebenthal Positive Root Systems | Let G be a connected simple Lie group with finite centre, K be a maximal
compact subgroup of G, and rank(G)= rank(K). Let \frak{g}_0=Lie(G),
\frak{k}_0=Lie(K) \subset \frak{g}_0, \frak{t}_0 be a maximal abelian
subalgebra of \frak{k}_0, \frak{g}=\frak{g}_0^\mathbb{C},
\frak{k}=\frak{k}_0^\mathbb{C}, and \frak{h}=\frak{t}_0^\mathbb{C}. In this
article, we have determined all Borel-de Siebenthal positive root systems of
\Delta=\Delta(\frak{g}, \frak{h}), the number of unitary equivalence classes of
all discrete series representations of G with trivial infinitesimal character,
the number of unitary equivalence classes of all holomorphic discrete series
representations of G (if G/K is Hermitian symmetric) with trivial infinitesimal
character, and the number of unitary equivalence classes of all Borel-de
Siebenthal discrete series representations of G (if G/K is not Hermitian
symmetric) with trivial infinitesimal character. | Pampa Paul | 2023-09-20T07:10:14Z | http://arxiv.org/abs/2309.11099v1 | # Borel-de Siebenthal positive root systems
###### Abstract.
Let \(G\) be a connected simple Lie group with finite centre, \(K\) be a maximal compact subgroup of \(G,\) and \(\operatorname{rank}(G)=\operatorname{rank}(K).\) Let \(\mathfrak{g}_{0}=\)Lie\((G),\mathfrak{k}_{0}=\)Lie\((K)\subset\mathfrak{g}_{0},\mathfrak{t}_{0}\) be a maximal abelian subalgebra of \(\mathfrak{k}_{0},\mathfrak{g}=\mathfrak{g}_{0}^{\mathbb{C}},\mathfrak{k}= \mathfrak{k}_{0}^{\mathbb{C}},\) and \(\mathfrak{h}=\mathfrak{t}_{0}^{\mathbb{C}}.\) In this article, we have determined all Borel-de Siebenthal positive root systems of \(\Delta=\Delta(\mathfrak{g},\mathfrak{h}),\) the number of unitary equivalence classes of all discrete series representations of \(G\) with trivial infinitesimal character, the number of unitary equivalence classes of all holomorphic discrete series representations of \(G\) (if \(G/K\) is Hermitian symmetric) with trivial infinitesimal character, and the number of unitary equivalence classes of all Borel-de Siebenthal discrete series representations of \(G\) (if \(G/K\) is not Hermitian symmetric) with trivial infinitesimal character.
2020 _Mathematics Subject Classification._ 17B10, 17B20, 17B22, 17B25, 22E46. Keywords and phrases: Equi-rank Lie algebra, root system, positive root system, Dynkin diagram, discrete series representation
## 1. Introduction
Let \(G\) be a connected simple Lie group with finite centre, \(K\) be a maximal compact subgroup of \(G,\) and \(\operatorname{rank}(G)=\operatorname{rank}(K).\) Let \(\mathfrak{g}_{0}\) be the Lie algebra of \(G,\mathfrak{k}_{0}\) be the subalgebra of \(\mathfrak{g}_{0}\) associated with the Lie subgroup \(K\) of \(G,\) and \(\mathfrak{t}_{0}\) be a maximal abelian subalgebra of \(\mathfrak{k}_{0}.\) Since \(\operatorname{rank}(G)=\operatorname{rank}(K),\mathfrak{h}=\mathfrak{t}_{0}^{\mathbb{C}}\) is a Cartan subalgebra of \(\mathfrak{g}=\mathfrak{g}_{0}^{\mathbb{C}}\) as well as of \(\mathfrak{k}=\mathfrak{k}_{0}^{\mathbb{C}}.\) Borel and de Siebenthal [1] have proved the existence of a positive root system of \(\Delta=\Delta(\mathfrak{g},\mathfrak{h})\) such that the associated set of simple roots contains exactly one non-compact simple root \(\nu,\) and the coefficient \(n_{\nu}(\delta)\) of \(\nu\) in the highest root \(\delta,\) when expressed as the sum of simple roots, is \(1,\) if \(G/K\) is Hermitian symmetric, and \(n_{\nu}(\delta)=2,\) if \(G/K\) is not Hermitian symmetric. Borel-de Siebenthal positive root systems have many applications in the representation theory of equi-rank Lie groups. For example, if \(G/K\) is Hermitian symmetric, then a special positive root system, which is used in [2] to define holomorphic discrete series representations, is a Borel-de Siebenthal positive root system. If \(G/K\) is not Hermitian symmetric, Orsted and Wolf [6] have defined Borel-de Siebenthal discrete series representations of \(G\) analogous to holomorphic discrete series representations, using a Borel-de Siebenthal positive root system. Also Borel-de Siebenthal positive root systems are used intrinsically to classify equi-rank non-compact real forms of a complex simple Lie algebra. See [3, Ch. X], [4, Ch. VI].
Let \(W_{\mathfrak{g}}\) (respectively, \(W_{\mathfrak{k}}\)) be the Weyl group of \(\mathfrak{g}\) (respectively, \(\mathfrak{k}\)) with respect to the Cartan subalgebra \(\mathfrak{h}.\) If \(P\) is a Borel-de Siebenthal positive root system of \(\Delta,\) then so is \(w(P)\) for all \(w\in W_{\mathfrak{k}}.\) Thus to determine the Borel-de Siebenthal positive root systems of \(\Delta,\) it is sufficient to determine the Borel-de Siebenthal positive root systems containing
a fixed positive root system \(P_{\mathfrak{t}}\) of \(\Delta_{\mathfrak{t}}=\Delta(\mathfrak{k},\mathfrak{h}).\) Now to describe the main results of this article, we will introduce some more notations.
Let \(\mathfrak{g}_{0}=\mathfrak{k}_{0}\oplus\mathfrak{p}_{0}\) be the Cartan decomposition associated with the maximal compact subgroup \(K\) of \(G,\) and \(\mathfrak{p}=\mathfrak{p}_{0}^{\mathbb{C}}.\) Let \(P\) be a Borel-de Siebenthal positive root system of \(\Delta\) containing the fixed positive root system \(P_{\mathfrak{t}}\) of \(\Delta_{\mathfrak{t}},\Phi\) be the set of all simple roots in \(P,\) and \(\nu\in\Phi\) be the unique non-compact root. Then we have a gradation of \(\mathfrak{g}\) with \(\mathfrak{p}=\mathfrak{l}_{-1}\oplus\mathfrak{l}_{1},\)
\[\mathfrak{k}=\begin{cases}\mathfrak{l}_{0}&\text{if $\mathfrak{k}$ has non-zero centre,}\\ \mathfrak{l}_{-2}\oplus\mathfrak{l}_{0}\oplus\mathfrak{l}_{2}&\text{if $\mathfrak{k}$ is semisimple;}\end{cases}\]
and \([\mathfrak{l}_{0},\mathfrak{l}_{i}]\subset\mathfrak{l}_{i}\) for all \(i.\) The subalgebra \(\mathfrak{l}_{0}\) is reductive, and \(\mathfrak{h}\) is a Cartan subalgebra of \(\mathfrak{l}_{0}.\) Let \(\Delta_{0}=\Delta(\mathfrak{l}_{0},\mathfrak{h}).\) Then \(P_{0}=\Delta_{0}\cap P_{\mathfrak{t}}\) is a positive root system of \(\Delta_{0},\) and \(\Phi_{0}=\Phi\setminus\{\nu\}\) is the set of all simple roots in \(P_{0}.\) Let \(W_{\mathfrak{l}_{0}}\) be the Weyl group of \(\mathfrak{l}_{0}\) relative to the Cartan subalgebra \(\mathfrak{h},\) and \(w_{\mathfrak{l}_{0}}^{0}\in W_{\mathfrak{l}_{0}}\) be the longest element with respect to the positive root system \(P_{0}.\) Also the adjoint representation of \(\mathfrak{l}_{0}\) on \(\mathfrak{l}_{i}\) is irreducible for all \(i\neq 0,\) and if \(K\) is semisimple, the adjoint representation of \(\mathfrak{k}\) on \(\mathfrak{p}\) is irreducible. Note that the highest root \(\delta\) of \(\mathfrak{g}\) is the highest weight of the irreducible \(\mathfrak{l}_{0}\)-module \(\mathfrak{l}_{2},\) and \(\nu\) is the lowest weight of the irreducible \(\mathfrak{l}_{0}\)-module \(\mathfrak{l}_{1}.\) Let \(\lambda=w_{\mathfrak{l}_{0}}^{0}(\nu),\) and \(\epsilon=w_{\mathfrak{l}_{0}}^{0}(\delta).\) Then \(\lambda\) is the highest weight of the \(\mathfrak{l}_{0}\)-module \(\mathfrak{l}_{1}\) as well as of the \(\mathfrak{k}\)-module \(\mathfrak{p},\) and \(\Phi_{\mathfrak{t}}=\Phi_{0}\cup\{\epsilon\}\) is the set of all simple roots in \(P_{\mathfrak{t}}.\) If \(\alpha\in\Delta,\) let \(n_{\phi}(\alpha)\) denote the coefficient of \(\phi\) in \(\alpha\) when expressed as the sum of elements in \(\Phi\) for all \(\phi\in\Phi.\) Now we are ready to state the main results of this article.
**Theorem 1.1**.: _Assume that \(\mathfrak{k}\) is semisimple._
_If \(P^{\prime}\) is a Borel-de Siebenthal positive root system of \(\Delta\) containing \(P_{\mathfrak{k}},\) then either \(P^{\prime}=P,\) or the set of all simple roots of \(P^{\prime}\) is given by \((\Phi_{\mathfrak{k}}\setminus\{\phi\})\cup\{\nu^{\prime}\},\) where \(\phi\in\Phi_{0}\) is such that \(w_{\mathfrak{l}_{0}}^{0}(\phi^{\prime})=-\phi,\) with \(n_{\phi^{\prime}}(\delta)=1;\)\(\mathfrak{l}_{0}^{\prime}\) is the reductive subalgebra of \(\mathfrak{k}\) containing \(\mathfrak{h}\) and the Dynkin diagram of \([\mathfrak{l}_{0}^{\prime},\mathfrak{l}_{0}^{\prime}]\) is the subdiagram of the Dynkin diagram of \(\mathfrak{k}\) with vertices \(\Phi_{\mathfrak{k}}\setminus\{\phi\};\) and \(\nu^{\prime}\) is the lowest weight of the irreducible \(\mathfrak{l}_{0}^{\prime}\)-submodule of \(\mathfrak{p}\) with highest weight \(\lambda.\)_
_Conversely, let \(\phi\in\Phi_{0}\) be such that \(w_{\mathfrak{l}_{0}}^{0}(\phi^{\prime})=-\phi,\) with \(n_{\phi^{\prime}}(\delta)=1;\)\(\mathfrak{l}_{0}^{\prime}\) be the reductive subalgebra of \(\mathfrak{k}\) containing \(\mathfrak{h}\) and the Dynkin diagram of \([\mathfrak{l}_{0}^{\prime},\mathfrak{l}_{0}^{\prime}]\) be the subdiagram of the Dynkin diagram of \(\mathfrak{k}\) with vertices \(\Phi_{\mathfrak{k}}\setminus\{\phi\};\) and \(\nu^{\prime}\) be the lowest weight of the irreducible \(\mathfrak{l}_{0}^{\prime}\)-submodule of \(\mathfrak{p}\) with highest weight \(\lambda.\) Then \((\Phi_{\mathfrak{k}}\setminus\{\phi\})\cup\{\nu^{\prime}\}\) is the set of all simple roots of a Borel-de Siebenthal positive root system of \(\Delta\) containing \(P_{\mathfrak{k}}.\)_
**Corollary 1.1.1**.: _Let \(\mathfrak{g}\) be a complex simple Lie algebra, \(\mathfrak{g}_{0}\) be a non-compact real form of \(\mathfrak{g}\) with \(\mathfrak{k}_{0},\) a maximal compactly imbedded subalgebra of \(\mathfrak{g}_{0},\) \(\text{rank}(\mathfrak{g}_{0})=\)rank\((\mathfrak{k}_{0}),\) and \(\mathfrak{k}_{0}\) be semisimple. Let \(\mathfrak{k}=\mathfrak{k}_{0}^{\mathbb{C}},\mathfrak{h}\) be a Cartan subalgebra of \(\mathfrak{g},\) with \(\mathfrak{h}=\mathfrak{t}_{0}^{\mathbb{C}},\) where \(\mathfrak{t}_{0}\) is a maximal abelian subalgebra of \(\mathfrak{k}_{0},\) and \(P_{\mathfrak{k}}\) be a fixed positive root system of \(\Delta_{\mathfrak{k}}=\Delta(\mathfrak{k},\mathfrak{h}).\) Then the number of Borel-de Siebenthal positive root systems of \(\Delta=\Delta(\mathfrak{g},\mathfrak{h})\) containing \(P_{\mathfrak{k}}\) is the covering index of the Lie group \(Int(\mathfrak{g}),\) the connected component of \(Aut(\mathfrak{g}).\)_
**Theorem 1.2**.: _The number of unitary equivalence classes of discrete series representations with trivial infinitesimal character is \(|W_{\mathfrak{g}}|/|W_{\mathfrak{k}}|.\) If \(G/K\) is a Hermitian symmetric space,
_then the number of unitary equivalence classes of holomorphic discrete series representations with trivial infinitesimal character is \(2.\) If \(G/K\) is not Hermitian symmetric, then the number of unitary equivalence classes of Borel-de Siebenthal discrete series representations with trivial infinitesimal character_
\[=\begin{cases}1&\text{if }\mathfrak{g}=\mathfrak{e}_{8},\mathfrak{f}_{4},\mathfrak{g}_{2},\\ 2&\text{if }\mathfrak{g}=\mathfrak{b}_{l}(l\geq 2),\mathfrak{c}_{l}(l\geq 2),\mathfrak{e}_{7},\\ 3&\text{if }\mathfrak{g}=\mathfrak{e}_{6},\\ 4&\text{if }\mathfrak{g}=\mathfrak{d}_{l}(l\geq 4).\end{cases}\]
Th. 1.1 is proved assuming the existence of a Borel-de Siebenthal positive root system of \(\Delta.\) Here we have determined all Borel-de Siebenthal positive root systems of \(\Delta\) using the positive root system \(P,\) by Th. 1.1. These are precisely all Borel-de Siebenthal positive root systems described in Th. 1.1 and their \(W_{\mathfrak{t}}\)-conjugates. Th. 1.2 is a direct application of Lemma 2.6, Remark 2.5(ii), and Th. 1.1.
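As a small sanity check on the count \(|W_{\mathfrak{g}}|/|W_{\mathfrak{k}}|\) appearing in Th. 1.2, one can compare the orders of the relevant Weyl groups for a few standard equi-rank pairs. The sketch below is an editorial Python illustration; the pairs chosen (\(\mathfrak{su}(2,1)\) and the split real forms of \(\mathfrak{g}_{2}\) and \(\mathfrak{f}_{4}\)) and their maximal compact subalgebras are standard examples assumed here, not taken from the text.

```python
from math import factorial

# Orders of the Weyl groups of the complex simple Lie algebras involved.
WEYL_ORDER = {
    "A1": 2,
    "A2": factorial(3),           # 6
    "C3": 2**3 * factorial(3),    # 48
    "G2": 12,
    "F4": 1152,
}

# Equi-rank pairs: g together with the simple factors of k
# (abelian centre factors of k contribute trivially to the Weyl group).
pairs = {
    "su(2,1), k = u(2)":                   ("A2", ["A1"]),
    "split g_2, k = su(2) + su(2)":        ("G2", ["A1", "A1"]),
    "split f_4, k = sp(3) + su(2)":        ("F4", ["C3", "A1"]),
}

for name, (g_type, k_factors) in pairs.items():
    w_k = 1
    for f in k_factors:
        w_k *= WEYL_ORDER[f]
    print(f"{name}: |W_g|/|W_k| = {WEYL_ORDER[g_type] // w_k}")
# Expected output: 3, 3, and 12 classes of discrete series
# with trivial infinitesimal character, respectively.
```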
## 2. Borel-de Siebenthal positive root system
Let \(\mathfrak{g}_{0}\) be a real simple Lie algebra, \(\mathfrak{g}_{0}=\mathfrak{k}_{0}\oplus\mathfrak{p}_{0}\) be a Cartan decomposition with \(\theta,\) the corresponding Cartan involution, and \(\text{rank}(\mathfrak{g}_{0})=\)rank\((\mathfrak{k}_{0}).\) Let \(\mathfrak{t}_{0}\) be a maximal abelian subalgebra of \(\mathfrak{k}_{0}.\) Then \(\mathfrak{t}_{0}\) is a fundamental Cartan subalgebra of \(\mathfrak{g}_{0}.\) Let \(\mathfrak{g}=\mathfrak{g}_{0}^{\mathbb{C}},\mathfrak{k}=\mathfrak{k}_{0}^{\mathbb{C}},\mathfrak{p}=\mathfrak{p}_{0}^{\mathbb{C}},\) and \(\mathfrak{h}=\mathfrak{t}_{0}^{\mathbb{C}}.\) Then \(\mathfrak{h}\) is a Cartan subalgebra of \(\mathfrak{k}\) as well as of \(\mathfrak{g}.\) Let \(\Delta=\Delta(\mathfrak{g},\mathfrak{h})\) be the set of all non-zero roots of \(\mathfrak{g}\) relative to the Cartan subalgebra \(\mathfrak{h},\) and \(\mathfrak{g}^{\alpha}\) be the root space corresponding to a root \(\alpha\in\Delta.\) Since \(\mathfrak{h}\subset\mathfrak{k},\mathfrak{g}^{\alpha}\) is one-dimensional, and \([\mathfrak{k},\mathfrak{k}]\subset\mathfrak{k},[\mathfrak{k},\mathfrak{p}]\subset\mathfrak{p},\) we have either \(\mathfrak{g}^{\alpha}\subset\mathfrak{k},\) or \(\mathfrak{g}^{\alpha}\subset\mathfrak{p}.\) A root \(\alpha\in\Delta\) is said to be a compact root if \(\mathfrak{g}^{\alpha}\subset\mathfrak{k},\) and \(\alpha\in\Delta\) is said to be a non-compact root if \(\mathfrak{g}^{\alpha}\subset\mathfrak{p}.\) Let \(\Delta_{\mathfrak{t}}\) denote the set of all compact roots in \(\Delta,\) and \(\Delta_{n}\) denote the set of all non-compact roots in \(\Delta.\) Clearly \(\mathfrak{k}=\mathfrak{h}+\sum_{\alpha\in\Delta_{\mathfrak{t}}}\mathfrak{g}^{\alpha},\mathfrak{p}=\sum_{\beta\in\Delta_{n}}\mathfrak{g}^{\beta},\Delta_{\mathfrak{t}}=\Delta(\mathfrak{k},\mathfrak{h}),\) and \(\Delta_{n}=\Delta\setminus\Delta_{\mathfrak{t}}.\) We have the following theorem due to Borel-de Siebenthal.
**Theorem 2.1**.: _[_1_]_ _There is a positive root system \(P\) of \(\Delta\) such that the corresponding set of simple roots \(\Phi\) contains exactly one non-compact root, say \(\nu,\) and \(n_{\nu}(\delta),\) the coefficient of \(\nu\) in the highest root \(\delta\) when expressed as the sum of elements in \(\Phi,\) is \(1,\) if \(\mathfrak{k}\) has non-zero centre; and \(n_{\nu}(\delta)\) is \(2,\) if \(\mathfrak{k}\) is semisimple._
A positive root system \(P\) of \(\Delta\) as in Th. 2.1 is said to be a Borel-de Siebenthal positive root system. If \(\delta\) is the highest root of \(\mathfrak{g}\) with respect to a Borel-de Siebenthal positive root system, then since \([\mathfrak{k},\mathfrak{p}]\subset\mathfrak{p},[\mathfrak{p},\mathfrak{p}]\subset\mathfrak{k},\) we have \(\mathfrak{g}^{\delta}\subset\mathfrak{p},\) if \(\mathfrak{k}\) has non-zero centre; and \(\mathfrak{g}^{\delta}\subset\mathfrak{k},\) if \(\mathfrak{k}\) is semisimple. Fix a positive root system \(P_{\mathfrak{t}}\) of \(\Delta_{\mathfrak{t}}.\) Let \(W_{\mathfrak{g}}\) be the Weyl group of \(\mathfrak{g}\) relative to the Cartan subalgebra \(\mathfrak{h},\) \(W_{\mathfrak{t}}\) be the Weyl group of \(\mathfrak{k}\) relative to the Cartan subalgebra \(\mathfrak{h},\) and \(w_{\mathfrak{t}}^{0}\in W_{\mathfrak{t}}\) be the longest element with respect to the positive root system \(P_{\mathfrak{t}}.\) If \(P\) is a Borel-de Siebenthal positive root system of \(\Delta,\) then so is \(wP\) for all \(w\in W_{\mathfrak{t}}.\) So we may assume that \(P\) is a Borel-de Siebenthal positive root system of \(\Delta\) containing \(P_{\mathfrak{t}}.\) Let \(\Phi\) be the set of all simple roots in \(P,\) \(\nu\in\Phi\) be the unique
non-compact root, \(\delta\) be the highest root of \(\mathfrak{g}\) with respect to \(P,\) and \(\{\omega_{\phi}:\phi\in\Phi\}\) be the set of all fundamental weights corresponding to the set of simple roots \(\Phi.\) For \(i\in\mathbb{Z},\) define \(\mathfrak{l}_{i}=\{X\in\mathfrak{g}:[H_{\omega_{\nu}},X]=iX\},\) where \(H_{\omega_{\nu}}\in i\mathfrak{t}_{0}\) is such that \(B(H,H_{\omega_{\nu}})=\omega_{\nu}(H)\) for all \(H\in i\mathfrak{t}_{0},\) and \(B\) denotes the Killing form of \(\mathfrak{g}.\) Then \([\mathfrak{l}_{0},\mathfrak{l}_{i}]\subset\mathfrak{l}_{i}\) for all \(i\in\mathbb{Z},\) \(\mathfrak{p}=\mathfrak{l}_{-1}\oplus\mathfrak{l}_{1},\) and
\[\mathfrak{k}=\begin{cases}\mathfrak{l}_{0}&\text{if $\mathfrak{k}$ has non-zero centre},\\ \mathfrak{l}_{-2}\oplus\mathfrak{l}_{0}\oplus\mathfrak{l}_{2}&\text{if $ \mathfrak{k}$ is semisimple}.\end{cases}\]
Note that \(\mathfrak{l}_{0}\) is a reductive Lie subalgebra of \(\mathfrak{k},\) and \(\mathfrak{h}\) is a Cartan subalgebra of \(\mathfrak{l}_{0}.\) Let \(\Delta_{0}=\Delta(\mathfrak{l}_{0},\mathfrak{h}).\) Then \(P_{0}=\Delta_{0}\cap P_{\mathfrak{k}}\) is a positive root system of \(\Delta_{0},\) and \(\Phi_{0}=\Phi\setminus\{\nu\}\) is the set of all simple roots in \(P_{0}.\) Let \(W_{\mathfrak{l}_{0}}\) be the Weyl group of \(\mathfrak{l}_{0}\) relative to the Cartan subalgebra \(\mathfrak{h},\) and \(w_{\mathfrak{l}_{0}}^{0}\in W_{\mathfrak{l}_{0}}\) be the longest element with respect to the positive root system \(P_{0}.\) Choose \(X_{\alpha}(\neq 0)\in\mathfrak{g}^{\alpha},\) for all \(\alpha\in\Delta.\)
**The adjoint representations of \(\mathfrak{l}_{0}\) on \(\mathfrak{l}_{1}\) and \(\mathfrak{l}_{-1}\) are irreducible:** Let \(\beta\in\Delta\) be such that \(\mathfrak{g}^{\beta}\subset\mathfrak{l}_{1}.\) Then \(\beta\) can be written in the form \(\beta=\nu+\phi_{i_{1}}+\phi_{i_{2}}+\cdots+\phi_{i_{n}}\) such that each partial sum from the left lies in \(\Delta,\) where \(\phi_{i_{1}},\phi_{i_{2}},\ldots,\phi_{i_{n}}\in\Phi_{0},\) so that \(ad(X_{\phi_{i_{n}}})\circ\cdots\circ ad(X_{\phi_{i_{1}}})(\mathfrak{g}^{\nu}) =\mathfrak{g}^{\beta}.\) Thus \(\mathfrak{l}_{1}\) is an irreducible \(\mathfrak{l}_{0}\)-module with lowest weight \(\nu\) with respect to the positive root system \(P_{0}.\) Similarly \(\mathfrak{l}_{-1}\) is an irreducible \(\mathfrak{l}_{0}\)-module with highest weight \(-\nu.\)
**If \(\mathfrak{k}\) is semisimple, the adjoint representations of \(\mathfrak{l}_{0}\) on \(\mathfrak{l}_{2}\) and \(\mathfrak{l}_{-2}\) are irreducible:** Since \(\mathfrak{g}\) is a simple complex Lie algebra with highest weight \(\delta,\) any root \(\alpha\in P,\) can be written as \(\alpha=\delta-\phi_{j_{1}}-\phi_{j_{2}}-\cdots-\phi_{j_{m}}\) such that each partial sum from the left lies in \(\Delta,\) where \(\phi_{j_{1}},\phi_{j_{2}},\ldots,\phi_{j_{m}}\in\Phi,\) so that \(ad(X_{-\phi_{j_{m}}})\circ\cdots\circ ad(X_{-\phi_{j_{1}}})(\mathfrak{g}^{ \delta})=\mathfrak{g}^{\alpha}.\) Now \(\mathfrak{g}^{\alpha}\subset\mathfrak{l}_{2}\)_iff_\(\phi_{j_{1}},\phi_{j_{2}},\ldots,\phi_{j_{m}}\in\Phi_{0}.\) Thus \(\mathfrak{l}_{2}\) is an irreducible \(\mathfrak{l}_{0}\)-module with highest weight \(\delta\) with respect to the positive root system \(P_{0}.\) Similarly \(\mathfrak{l}_{-2}\) is an irreducible \(\mathfrak{l}_{0}\)-module with lowest weight \(-\delta.\) Let \(\epsilon\in P\) be the lowest root such that \(\mathfrak{g}^{\epsilon}\subset\mathfrak{l}_{2}.\) Then \(\Phi_{\mathfrak{k}}=\Phi_{0}\cup\{\epsilon\}\) is the set of all simple roots in \(P_{\mathfrak{k}}.\)
**If \(\mathfrak{k}\) is semisimple, the adjoint representation of \(\mathfrak{k}\) on \(\mathfrak{p}\) is irreducible: \(\mathfrak{p}=\mathfrak{l}_{-1}\oplus\mathfrak{l}_{1},\)** and if \(\mathfrak{k}\) is semisimple, \(\mathfrak{k}=\mathfrak{l}_{-2}\oplus\mathfrak{l}_{0}\oplus\mathfrak{l}_{2}.\) Since \(\mathfrak{k}\) is semisimple, and \(\mathfrak{p}\) is a finite dimensional \(\mathfrak{k}\)-representation, \(\mathfrak{p}\) is completely reducible. Also \(\mathfrak{p}=\mathfrak{l}_{-1}\oplus\mathfrak{l}_{1}\) is the irreducible decomposition as an \(\mathfrak{l}_{0}\)-module, \(\mathfrak{l}_{0}\) is a reductive subalgebra of \(\mathfrak{k}.\) Thus \(\mathfrak{p}\) is irreducible as \(\mathfrak{k}\)-module _iff_\(\mathfrak{l}_{1}\) is not invariant under \(\mathfrak{k}.\) Let \(\alpha\in\Delta\) be such that \(\mathfrak{g}^{\alpha}\subset\mathfrak{l}_{2}.\) Then \(\alpha=\phi_{k_{1}}+\cdots+\phi_{k_{r}}+\nu+\phi_{k_{r+1}}+\cdots+\phi_{k_{s}}+ \nu+\phi_{k_{s+1}}+\cdots+\phi_{k_{t}}\) such that each partial sum from the left lies in \(\Delta,\) where \(\phi_{k_{1}},\ldots,\phi_{k_{t}}\in\Phi_{0}.\) Let \(\beta=\phi_{k_{1}}+\cdots+\phi_{k_{r}}+\nu+\phi_{k_{r+1}}+\cdots+\phi_{k_{s}}.\) Then \(\beta,\beta+\nu\in\Delta;\mathfrak{g}^{-\beta}\subset\mathfrak{l}_{-1}, \mathfrak{g}^{\nu}\subset\mathfrak{l}_{1},\mathfrak{g}^{-(\beta+\nu)}\subset \mathfrak{l}_{-2}.\) Since \(-\beta=-(\beta+\nu)+\nu,\) so \([\mathfrak{l}_{-2},\mathfrak{l}_{1}]\neq 0,\) and clearly \([\mathfrak{l}_{-2},\mathfrak{l}_{1}]\subset\mathfrak{l}_{-1}\) which proves that \(\mathfrak{l}_{1}\) is not \(\mathfrak{k}\)-invariant and consequently \(\mathfrak{p}\) is an irreducible \(\mathfrak{k}\)-module. Let \(\lambda\in P\) be the highest root such that \(\mathfrak{g}^{\lambda}\subset\mathfrak{l}_{1}.\) Then \(\lambda\) is the highest weight of the \(\mathfrak{k}\)-module \(\mathfrak{p}\) (with respect to the positive root system \(P_{\mathfrak{k}}\)) as well as of the \(\mathfrak{l}_{0}\)-module \(\mathfrak{l}_{1}\) (with respect to the positive root system \(P_{0}\)).
If \(\alpha\in\Delta\) (respectively, \(\alpha\in\Delta_{\mathfrak{k}}\)), let \(n_{\phi}(\alpha)\) (respectively, \(c_{\phi}(\alpha)\)) denote the coefficient of \(\phi\) in \(\alpha\) when expressed as the sum of elements in \(\Phi\) (respectively, \(\Phi_{\mathfrak{k}}\)) for all \(\phi\in\Phi\) (respectively, for all \(\phi\in\Phi_{\mathfrak{k}}\)). If \(\mathfrak{k}\) is semisimple, let \(C\) denote the component of the
Dynkin diagram of \(\mathfrak{k}\) containing \(\epsilon,\) and \(\mathfrak{k}_{1}\) denote the simple ideal of \(\mathfrak{k}\) whose Dynkin diagram is \(C.\) Then \(\delta\) is the highest root of \(\mathfrak{k}_{1}.\)
**Lemma 2.2**.: _Assume that \(\mathfrak{k}\) is semisimple. Let \(\phi\in\Phi_{0}\) be such that \(w_{i_{0}}^{0}(\phi^{\prime})=-\phi,\) with \(n_{\phi^{\prime}}(\delta)=1.\) Then \(n_{\phi}(\lambda)=1,n_{\phi}(\epsilon)=1,\) and_
\[n_{\phi}(\delta)=\begin{cases}1&\text{if }\phi\notin C,\\ 2&\text{if }\phi\in C.\end{cases}\]
Proof.: We have \(\delta=2\nu+\sum_{\psi\in\Phi_{0}}n_{\psi}(\delta)\psi,\) with \(n_{\phi^{\prime}}(\delta)=1.\) Let \(w_{i_{0}}^{0}(\psi)=-\psi^{\prime}\) for all \(\psi\in\Phi_{0}.\) Then \(w_{i_{0}}^{0}(\delta)=2w_{i_{0}}^{0}(\nu)-\sum_{\psi\in\Phi_{0}}n_{\psi}( \delta)\psi^{\prime},\) that is \(\epsilon=2\lambda-\sum_{\psi\in\Phi_{0}}n_{\psi^{\prime}}(\delta)\psi.\)
So \(2\lambda=\epsilon+\sum_{\psi\in\Phi_{0}}n_{\psi^{\prime}}(\delta)\psi.\) Since \(n_{\psi^{\prime}}(\delta)>0\) for all \(\psi\in\Phi_{0},\) we have \(n_{\psi}(\lambda)>0\) for all \(\psi\in\Phi_{0}.\) Again \(\lambda=\nu+\sum_{\psi\in\Phi_{0}}n_{\psi}(\lambda)\psi\implies w_{i_{0}}^{0}( \lambda)=w_{i_{0}}^{0}(\nu)-\sum_{\psi\in\Phi_{0}}n_{\psi}(\lambda)\psi^{ \prime}\implies\nu=\lambda-\sum_{\psi\in\Phi_{0}}n_{\psi}(\lambda)\psi^{ \prime}.\) So \(\lambda=\nu+\sum_{\psi\in\Phi_{0}}n_{\psi}(\lambda)\psi^{\prime}.\) Hence \(n_{\psi}(\lambda)=n_{\psi^{\prime}}(\lambda)\) for all \(\psi\in\Phi_{0}.\) Since \(0\leq n_{\phi^{\prime}}(\lambda)\leq n_{\phi^{\prime}}(\delta)=1,\) and \(n_{\phi^{\prime}}(\lambda)>0,\) we have \(n_{\phi}(\lambda)=n_{\phi^{\prime}}(\lambda)=1.\)
Now \(\epsilon=2\lambda-\sum_{\psi\in\Phi_{0}}n_{\psi^{\prime}}(\delta)\psi,\) and \(n_{\phi^{\prime}}(\delta)=1,n_{\phi}(\lambda)=1.\) So \(n_{\phi}(\epsilon)=1,\) and \(n_{\phi^{\prime}}(\epsilon)=2-n_{\phi}(\delta).\) Since \(n_{\phi^{\prime}}(\epsilon)\geq 0,\) and \(n_{\phi}(\delta)>0,\) we have \(n_{\phi}(\delta)=1\) or \(2.\)
We have \(\delta=\epsilon+\sum_{\psi\in\Phi_{0}}c_{\psi}(\delta)\psi,\) with \(c_{\phi}(\delta)>0\)_iff_\(\phi\in C.\) Now \(n_{\phi}(\delta)=1\) or \(2,\) and \(n_{\phi}(\epsilon)=1.\) Thus \(c_{\phi}(\delta)=0\)_iff_\(\phi\notin C,\) and \(c_{\phi}(\delta)=1\)_iff_\(\phi\in C.\) Consequently, \(n_{\phi}(\delta)=1\)_iff_\(\phi\notin C,\) and \(n_{\phi}(\delta)=2\)_iff_\(\phi\in C.\)
**Remark 2.3**.: _If \(\mathfrak{k}\) is semisimple, \(\phi\in\Phi_{0}\) is such that \(w_{i_{0}}^{0}(\phi^{\prime})=-\phi,\) with \(n_{\phi^{\prime}}(\delta)=1,\) and \(\delta_{2}\) is the highest weight of the simple ideal of \(\mathfrak{k}\) whose Dynkin diagram is the component of the Dynkin diagram of \(\mathfrak{k}\) containing \(\phi,\) then \(c_{\phi}(\delta_{2})=1.\)_
**Lemma 2.4**.: _Let \(\mathfrak{l}\) be a complex simple Lie algebra, \(\mathfrak{t}\) be a Cartan subalgebra, \(\Delta^{+}\) be a positive root system of \(\Delta(\mathfrak{l},\mathfrak{t})\), and \(V\) be a non-trivial irreducible \(\mathfrak{l}\)-module with highest weight \(\xi.\) Then the lowest weight of \(V\) is given by \(\xi-\sum_{1\leq i\leq n}m_{i}\alpha_{i},m_{i}\in\mathbb{N};\) where \(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\}\) is the set of all simple roots in \(\Delta^{+}.\)_
Proof.: Let \(\langle.,.\rangle\) be an \(Aut(\Delta(\mathfrak{l},\mathfrak{t}))\)-invariant inner product on \(\mathfrak{t}_{\mathbb{R}}^{*}=\mathbb{R}\alpha_{1}+\mathbb{R}\alpha_{2}+\cdots+\mathbb{R}\alpha_{n},\) and \(\eta\) be the lowest weight of \(V.\) Clearly \(\eta=\xi-\sum_{1\leq i\leq n}m_{i}\alpha_{i},\) where \(m_{i}\in\mathbb{N}\cup\{0\}\) for all \(i.\) Let \(S_{1}\) consist of all those simple roots \(\alpha_{i}\) for which \(m_{i}>0,\) and \(S_{2}\) consist of all those simple roots \(\alpha_{j}\) for which \(m_{j}=0.\) Then the set of all simple roots \(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\}\) is the disjoint union of \(S_{1}\) and \(S_{2}.\) Since \(\xi\) is a dominant weight, and \(\xi\neq 0,\langle\xi,\alpha_{i_{0}}\rangle>0,\) and so \(\xi-\alpha_{i_{0}}\) is a weight of \(V\) for some \(1\leq i_{0}\leq n.\) Consequently \(m_{i_{0}}>0,\) and \(\alpha_{i_{0}}\in S_{1}.\) So \(S_{1}\) is non-empty. If \(\alpha_{j}\in S_{2},\) then \(\langle\eta,\alpha_{j}\rangle=\langle\xi,\alpha_{j}\rangle-\sum_{1\leq i\leq n,m_{i}>0}m_{i}\langle\alpha_{i},\alpha_{j}\rangle\geq 0,\) as \(\langle\xi,\alpha_{j}\rangle\geq 0,\langle\alpha_{i},\alpha_{j}\rangle\leq 0\) for all \(\alpha_{i}\in S_{1}.\) Since \(\eta\) is a negative dominant weight, we have \(\langle\eta,\alpha_{j}\rangle=0,\) and so \(\langle\alpha_{i},\alpha_{j}\rangle=0\) for all \(\alpha_{i}\in S_{1}.\) Hence \(S_{1}\) is orthogonal to \(S_{2},\) and \(S_{1}\) is non-empty. Since \(\mathfrak{l}\) is simple, we have \(S_{1}=\{\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\}.\) Consequently \(S_{2}\) is empty and the lemma is proved.
Now we are ready to prove Th. 1.1.
**Proof of Th. 1.1:** Let \(P^{\prime}\) be a Borel-de Siebenthal positive root system of \(\Delta\) containing \(P_{\mathfrak{t}}\), \(\Phi^{\prime}\) be the set of all simple roots of \(P^{\prime}\), and \(\nu^{\prime}\in\Phi^{\prime}\) be the unique non-compact root. Then we have a gradation \(\mathfrak{g}=\mathfrak{l}^{\prime}_{-2}\oplus\mathfrak{l}^{\prime}_{-1}\oplus\mathfrak{l}^{\prime}_{0}\oplus\mathfrak{l}^{\prime}_{1}\oplus\mathfrak{l}^{\prime}_{2}\) with \(\mathfrak{k}=\mathfrak{l}^{\prime}_{-2}\oplus\mathfrak{l}^{\prime}_{0}\oplus\mathfrak{l}^{\prime}_{2},\mathfrak{p}=\mathfrak{l}^{\prime}_{-1}\oplus\mathfrak{l}^{\prime}_{1}\), and \([\mathfrak{l}^{\prime}_{0},\mathfrak{l}^{\prime}_{i}]\subset\mathfrak{l}^{\prime}_{i}\) for all \(i\). The subalgebra \(\mathfrak{l}^{\prime}_{0}\) is reductive, and \(\mathfrak{h}\) is a Cartan subalgebra of \(\mathfrak{l}^{\prime}_{0}\). Let \(\Delta^{\prime}_{0}=\Delta(\mathfrak{l}^{\prime}_{0},\mathfrak{h})\). Then \(P^{\prime}_{0}=\Delta^{\prime}_{0}\cap P_{\mathfrak{t}}\) is a positive root system of \(\Delta^{\prime}_{0}\), and \(\Phi^{\prime}_{0}=\Phi^{\prime}\setminus\{\nu^{\prime}\}\) is the set of all simple roots in \(P^{\prime}_{0}\). Let \(W_{\mathfrak{l}^{\prime}_{0}}\) be the Weyl group of \(\mathfrak{l}^{\prime}_{0}\) relative to the Cartan subalgebra \(\mathfrak{h}\), and \(w^{0}_{\mathfrak{l}^{\prime}_{0}}\in W_{\mathfrak{l}^{\prime}_{0}}\) be the longest element with respect to the positive root system \(P^{\prime}_{0}\). Also the adjoint representation of \(\mathfrak{l}^{\prime}_{0}\) on \(\mathfrak{l}^{\prime}_{1}\) is irreducible, and \(\mathfrak{l}^{\prime}_{1}\) is the \(\mathfrak{l}^{\prime}_{0}\)-submodule of \(\mathfrak{p}\) with highest weight \(\lambda\) and lowest weight \(\nu^{\prime}\) with respect to the positive root system \(P^{\prime}_{0}\). Let \(\epsilon^{\prime}\in P^{\prime}\) be the lowest root such that \(\mathfrak{g}^{\epsilon^{\prime}}\subset\mathfrak{l}^{\prime}_{2}\). Then \(\Phi^{\prime}_{0}\cup\{\epsilon^{\prime}\}\) is the set of all simple roots in \(P^{\prime}\cap\Delta_{\mathfrak{t}}=P_{\mathfrak{t}}\). So \(\Phi^{\prime}_{0}\cup\{\epsilon^{\prime}\}=\Phi_{0}\cup\{\epsilon\}\), which implies either \(\Phi^{\prime}_{0}=\Phi_{0}\), or \(\Phi^{\prime}_{0}=\Phi_{\mathfrak{t}}\setminus\{\phi\}\) for some \(\phi\in\Phi_{0}\). Now \(\Phi^{\prime}_{0}=\Phi_{0}\implies\mathfrak{l}^{\prime}_{0}=\mathfrak{l}_{0},\mathfrak{l}^{\prime}_{1}=\mathfrak{l}_{1},\nu^{\prime}=\nu\); so that \(\Phi^{\prime}=\Phi\), and hence \(P^{\prime}=P\). Assume that \(P^{\prime}\neq P\) so that \(\Phi^{\prime}_{0}=\Phi_{\mathfrak{t}}\setminus\{\phi\}\) for some \(\phi\in\Phi_{0}\), and \(\epsilon^{\prime}=\phi\). It only remains to prove that if \(w^{0}_{\mathfrak{l}_{0}}(\phi^{\prime})=-\phi\), then \(n_{\phi^{\prime}}(\delta)=1\). Since \(\phi\in P^{\prime}\) with \(\mathfrak{g}^{\phi}\subset\mathfrak{l}^{\prime}_{2}\), we have \(\phi=2\nu^{\prime}+m_{\epsilon}\epsilon+\sum_{\psi\in(\Phi_{0}\setminus\{\phi\})}m_{\psi}\psi\); \(m_{\epsilon},m_{\psi}\in\mathbb{N}\cup\{0\}\) for all \(\psi\in\Phi_{0}\setminus\{\phi\}\). Also \(w^{0}_{\mathfrak{l}^{\prime}_{0}}(\phi)\) is the highest weight of the \(\mathfrak{l}^{\prime}_{0}\)-module \(\mathfrak{l}^{\prime}_{2}\) with respect to the positive root system \(P^{\prime}_{0}\).
So \(w^{0}_{\mathfrak{l}^{\prime}_{0}}(\phi)=\phi+k_{\epsilon}\epsilon+\sum_{\psi \in(\Phi_{0}\setminus\{\phi\})}k_{\psi}\psi\); \(k_{\epsilon},k_{\psi}\in\mathbb{N}\cup\{0\}\) for all \(\psi\in\Phi_{0}\setminus\{\phi\}\). This implies \(w^{0}_{\mathfrak{l}^{\prime}_{0}}(\phi)=2\nu^{\prime}+(m_{\epsilon}+k_{ \epsilon})\epsilon+\sum_{\psi\in(\Phi_{0}\setminus\{\phi\})}(m_{\psi}+k_{ \psi})\psi=2\nu^{\prime}+d_{\epsilon}\epsilon+\sum_{\psi\in(\Phi\setminus\{\phi \})}d_{\psi}\psi\), where \(d_{\epsilon}=m_{\epsilon}+k_{\epsilon},d_{\psi}=m_{\psi}+k_{\psi}\) for all \(\psi\in\Phi_{0}\setminus\{\phi\}\). So \(-w^{0}_{\mathfrak{l}^{\prime}_{0}}(\phi)=-2\nu^{\prime}-d_{\epsilon}\epsilon- \sum_{\psi\in(\Phi_{0}\setminus\{\phi\})}d_{\psi}\psi=-2\nu^{\prime}+d_{ \epsilon}^{\prime}w^{0}_{\mathfrak{l}^{\prime}_{0}}(\epsilon)+\sum_{\psi\in( \Phi_{0}\setminus\{\phi\})}d^{\prime}_{\psi}w^{0}_{\mathfrak{l}^{\prime}_{0}}(\psi)\) for some \(d^{\prime}_{\epsilon},d^{\prime}_{\psi}\in\mathbb{N}\cup\{0\}\) for all \(\psi\in\Phi_{0}\setminus\{\phi\}\). Thus \(d^{\prime}_{\epsilon}w^{0}_{\mathfrak{l}^{\prime}_{0}}(\epsilon)=2\nu^{\prime}- \sum_{\psi\in\Phi_{0}}d^{\prime}_{\psi}w^{0}_{\mathfrak{l}^{\prime}_{0}}(\psi)\), where \(d^{\prime}_{\phi}=1\). This implies \(d^{\prime}_{\epsilon}\epsilon=2\lambda-\sum_{\psi\in\Phi_{0}}d^{\prime}_{\psi}\psi\). Since \(n_{\nu}(\epsilon)=2=2n_{\nu}(\lambda)\), we have \(d^{\prime}_{\epsilon}=1\). Hence \(\epsilon=2\lambda-\sum_{\psi\in\Phi_{0}}d^{\prime}_{\psi}\psi\implies w^{0}_{ \mathfrak{l}_{0}}(\epsilon)=2w^{0}_{\mathfrak{l}^{\prime}_{0}}(\lambda)+\sum_{ \psi\in\Phi_{0}}d^{\prime}_{\psi}\psi^{\prime}\), where \(w^{0}_{\mathfrak{l}_{0}}(\psi)=-\psi^{\prime}\) for all \(\psi\in\Phi_{0}\). That is, \(\delta=2\nu+\sum_{\psi\in\Phi_{0}}d^{\prime}_{\psi}\psi^{\prime}\), so that \(n_{\phi^{\prime}}(\delta)=d^{\prime}_{\phi}=1\).
Conversely, assume that \(\phi\in\Phi_{0}\) is such that \(w^{0}_{\mathfrak{l}_{0}}(\phi^{\prime})=-\phi\), with \(n_{\phi^{\prime}}(\delta)=1\); let \(\mathfrak{l}^{\prime}_{0}\) be the reductive subalgebra of \(\mathfrak{k}\) containing \(\mathfrak{h}\) such that the Dynkin diagram of \([\mathfrak{l}^{\prime}_{0},\mathfrak{l}^{\prime}_{0}]\) is the subdiagram of the Dynkin diagram of \(\mathfrak{k}\) with vertices \(\Phi^{\prime}_{0}=\Phi_{\mathfrak{t}}\setminus\{\phi\}\); and let \(\nu^{\prime}\) be the lowest weight of the irreducible \(\mathfrak{l}^{\prime}_{0}\)-submodule \(W\) of \(\mathfrak{p}\) with highest weight \(\lambda\). Let \(P^{\prime}_{0}\) be the positive root system associated with the simple system \(\Phi^{\prime}_{0}\). Then \(\Delta(\mathfrak{l}^{\prime}_{0},\mathfrak{h})=\Delta([\mathfrak{l}^{\prime}_{0},\mathfrak{l}^{\prime}_{0}],[\mathfrak{l}^{\prime}_{0},\mathfrak{l}^{\prime}_{0}]\cap\mathfrak{h})\) is given by \(\Delta^{\prime}_{0}=P^{\prime}_{0}\cup(-P^{\prime}_{0})\). Since the extended Dynkin diagram of \(\mathfrak{g}\) is connected, the non-compact simple root \(\nu\) is connected to each component of the Dynkin diagram of \(\mathfrak{k}\). Thus the \(\mathfrak{l}^{\prime}_{0}\)-module \(W\) is non-trivial. So \(\nu^{\prime}=\lambda-\sum_{\psi\in\Phi^{\prime}_{0}}c_{\psi}\psi\), with \(c_{\epsilon}>0\), by Lemma 2.4. Since \(P\) is Borel-de Siebenthal, \(n_{\nu}(\epsilon)=2,n_{\nu}(\lambda)=1\) and \(\nu^{\prime}\in\Delta\), we have \(c_{\epsilon}=1\) and \(c_{\psi}\in\mathbb{N}\cup\{0\}\)
for all \(\psi\in\Phi^{\prime}_{0}\setminus\{\epsilon\}.\) Thus, if \(\beta\in\Delta_{n},\) then \(m_{\psi}(\beta)\in\mathbb{Z}\) for all \(\psi\in\Phi^{\prime}_{0}\cup\{\nu^{\prime}\},\) and these coefficients are all of the same sign. Also \(m_{\nu^{\prime}}(\beta)=\pm 1.\) If \(\alpha\in P_{\mathfrak{t}},\) then \(c_{\phi}(\alpha)=0\) or \(1,\) by Remark 2.3. If \(c_{\phi}(\alpha)=0,\) then clearly \(m_{\nu^{\prime}}(\alpha)=0,\) and \(m_{\psi}(\alpha)\in\mathbb{N}\cup\{0\}\) for all \(\psi\in\Phi^{\prime}_{0}.\) Since \(n_{\phi}(\nu^{\prime})=0,n_{\phi}(\epsilon)=1;\phi=2\nu^{\prime}+\epsilon+\sum_{\psi\in(\Phi^{\prime}_{0}\setminus\{\epsilon\})}m_{\psi}(\phi)\psi.\) If \(m_{\psi}(\phi)\in\mathbb{N}\cup\{0\}\) for all \(\psi\in\Phi^{\prime}_{0}\setminus\{\epsilon\},\) then, whenever \(c_{\phi}(\alpha)=1,\) we have \(m_{\psi}(\alpha)\in\mathbb{N}\cup\{0\}\) for all \(\psi\in\Phi^{\prime}_{0}\) and \(m_{\nu^{\prime}}(\alpha)=2.\) Consequently the positive root system \(P^{\prime}\) of \(\Delta\) with simple system \(\Phi^{\prime}_{0}\cup\{\nu^{\prime}\}\) is a Borel-de Siebenthal positive root system containing \(P_{\mathfrak{t}},\) and the proof is complete.
Thus it remains to prove that \(m_{\psi}(\phi)\in\mathbb{N}\cup\{0\}\) for all \(\psi\in\Phi^{\prime}_{0}\setminus\{\epsilon\},\) which is done below by a case-by-case consideration:
1. \(\mathfrak{g}=\mathfrak{b}_{l}(l\geq 2),\mathfrak{g}_{0}=\mathfrak{so}(2p,2l-2p+1),2\leq p\leq l:\Phi=\{\phi_{1},\phi_{2},\ldots,\phi_{l}\}\) with \(\nu=\phi_{p},\) and \(\delta=\phi_{1}+2\phi_{2}+\cdots+2\phi_{l}.\)
Here \(P_{\mathfrak{t}}=\{\phi_{i}+\cdots+\phi_{j-1}:1\leq i<j\leq p\}\cup\{\phi_{i}+ \cdots+\phi_{j-1},\phi_{i}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l}:p+1\leq i <j\leq l\}\cup\{\phi_{i}+\cdots+\phi_{l}:p+1\leq i\leq l\}\cup\{\phi_{i}+ \cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l}:1\leq i<j\leq p\},\) and \(P\cap\Delta_{n}=\{\phi_{i}+\cdots+\phi_{l}:1\leq i\leq p\}\cup\{\phi_{i}+ \cdots+\phi_{j-1},\phi_{i}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l}:1\leq i \leq p<j\leq l\}.\) So \(\epsilon=\phi_{p-1}+2\phi_{p}+\cdots+2\phi_{l},\lambda=\phi_{1}+\cdots+\phi_{p} +2\phi_{p+1}+\cdots+2\phi_{l}.\) The only possibility of \(\phi^{\prime}\in\Phi_{0}\) with \(n_{\phi^{\prime}}(\delta)=1\) is \(\phi^{\prime}=\phi_{1}.\)
(i) Let \(\phi^{\prime}=\phi_{1}.\) Then \(\phi=-w^{0}_{\mathfrak{l}_{0}}(\phi^{\prime})=\phi_{p-1},\) and \(\nu^{\prime}=-(\phi_{p}+2\phi_{p+1}+\cdots+2\phi_{l}).\) So \(\phi_{p-1}=\epsilon+2\nu^{\prime}+2\phi_{p+1}+\cdots+2\phi_{l}.\)
2. \(\mathfrak{g}=\mathfrak{c}_{l}(l\geq 3),\mathfrak{g}_{0}=\mathfrak{sp}(p,l-p),1 \leq p\leq l-1:\Phi=\{\phi_{1},\phi_{2},\ldots,\phi_{l}\}\) with \(\nu=\phi_{p},\) and \(\delta=2\phi_{1}+\cdots+2\phi_{l-1}+\phi_{l}.\)
Here \(P_{\mathfrak{t}}=\{\phi_{i}+\cdots+\phi_{j-1}:1\leq i<j\leq p\}\cup\{\phi_{i}+ \cdots+\phi_{j-1},\phi_{i}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l-1}+\phi_{ l}:p+1\leq i<j\leq l\}\cup\{2\phi_{i}+\cdots+2\phi_{l-1}+\phi_{l}:1\leq i \leq l\}\cup\{\phi_{i}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l-1}+\phi_{l}:1 \leq i<j\leq p\},\) and \(P\cap\Delta_{n}=\{\phi_{i}+\cdots+\phi_{j-1},\phi_{i}+\cdots+\phi_{j-1}+2\phi_{j }+\cdots+2\phi_{l-1}+\phi_{l}:1\leq i\leq p<j\leq l\}.\) So \(\epsilon=2\phi_{p}+\cdots+2\phi_{l-1}+\phi_{l},\lambda=\phi_{1}+\cdots+\phi_{p} +2\phi_{p+1}+\cdots+2\phi_{l-1}+\phi_{l}.\)
The only possibility of \(\phi^{\prime}\in\Phi_{0}\) with \(n_{\phi^{\prime}}(\delta)=1\) is \(\phi^{\prime}=\phi_{l}.\)
(i) Let \(\phi^{\prime}=\phi_{l}.\) Then \(\phi=-w^{0}_{\mathfrak{l}_{0}}(\phi^{\prime})=\phi_{l},\) and \(\nu^{\prime}=-(\phi_{1}+\cdots+\phi_{p}+\phi_{p+1}+\cdots+\phi_{l-1}).\) So \(\phi_{l}=\epsilon+2\nu^{\prime}+2\phi_{1}+\cdots+2\phi_{p-1}.\)
3. \(\mathfrak{g}=\mathfrak{d}_{l}(l\geq 4),\mathfrak{g}_{0}=\mathfrak{so}(2p,2l-2p),2\leq p\leq l-2:\Phi=\{\phi_{1},\phi_{2},\ldots,\phi_{l}\}\) with \(\nu=\phi_{p},\delta=\phi_{1}+2\phi_{2}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}.\)
roots in \(\Delta^{+}\) whose coefficient in \(\delta,\) when expressed as the sum of simple roots, is \(1.\) Thus the number \(A\) of Borel-de Siebenthal positive root systems of \(\Delta\) containing \(P_{\mathfrak{t}}\) is given by
\[A=\begin{cases}1&\text{if }\mathfrak{g}=\mathfrak{e}_{8},\mathfrak{f}_{4},\mathfrak{g}_{2},\\ 2&\text{if }\mathfrak{g}=\mathfrak{b}_{l}(l\geq 2),\mathfrak{c}_{l}(l\geq 2),\mathfrak{e}_{7},\\ 3&\text{if }\mathfrak{g}=\mathfrak{e}_{6},\\ 4&\text{if }\mathfrak{g}=\mathfrak{d}_{l}(l\geq 4);\end{cases}\]
which is equal to the covering index of \(Int(\mathfrak{u}),\) where \(\mathfrak{u}\) is the compact real form of \(\mathfrak{g}\) [3, Th. 3.32, Ch. X]. Since the covering index of \(Int(\mathfrak{g})\) is equal to the covering index of \(Int(\mathfrak{u}),\) the proof is complete.
**Remark 2.5**.: _(i) Let \(\mathfrak{k}\) be semisimple, \(\phi\in\Phi_{0}\) be such that \(w^{0}_{\mathfrak{l}_{0}}(\phi^{\prime})=-\phi,\) with \(n_{\phi^{\prime}}(\delta)=1,\) and \(P^{\prime}\) be the Borel-de Siebenthal positive root system of \(\Delta\) whose simple roots are given by \((\Phi_{\mathfrak{t}}\setminus\{\phi\})\cup\{\nu^{\prime}\},\) where \(\nu^{\prime}\) is the lowest weight of the irreducible \(\mathfrak{l}_{0}^{\prime}\)-submodule of \(\mathfrak{p}\) with highest weight \(\lambda,\) and \(\mathfrak{l}_{0}^{\prime}\) is the reductive subalgebra of \(\mathfrak{k}\) containing \(\mathfrak{h}\) such that the Dynkin diagram of \([\mathfrak{l}_{0}^{\prime},\mathfrak{l}_{0}^{\prime}]\) is the subdiagram of the Dynkin diagram of \(\mathfrak{k}\) with vertices \(\Phi_{\mathfrak{t}}\setminus\{\phi\}.\) Then \(P^{\prime}\cap\Delta_{n}=\{\beta\in\Delta_{n}:n_{\phi}(\beta)=1\}\cup\{\beta\in(-P)\cap\Delta_{n}:n_{\phi}(\beta)=0\}.\)_
_(ii) If \(\mathfrak{k}\) has non-zero centre, then obviously there are exactly two Borel-de Siebenthal positive root systems of \(\Delta\) containing \(P_{\mathfrak{t}}=P_{0}.\) The corresponding sets of simple roots are given by \(\Phi_{0}\cup\{\nu\},\) and \(\Phi_{0}\cup\{-\delta\}.\)_
The number of all positive root systems of \(\Delta\) containing \(P_{\mathfrak{t}}\) is \(|W_{\mathfrak{g}}|/|W_{\mathfrak{t}}|,\) by the following lemma.
**Lemma 2.6**.: _Let \(V\) be a finite dimensional real vector space, \(R\) be a root system in \(V,R^{\prime}\) be a subsystem of \(R,\) and \(W(R),W(R^{\prime})\) be Weyl groups of \(R,\) and \(R^{\prime}\) respectively. Let \(P(R^{\prime})\) be a fixed positive root system of \(R^{\prime}.\) Then the number of positive root systems of \(R\) containing \(P(R^{\prime})\) is \(|W(R)|/|W(R^{\prime})|.\)_
Proof.: The Weyl group \(W(R)\) is finite, \(W(R^{\prime})\) is a subgroup of \(W(R),\) and the group \(W(R)\) (respectively, \(W(R^{\prime})\)) acts simply transitively on the set of all positive root systems of \(R\) (respectively, \(R^{\prime}\)). Let \(W(R^{\prime})\backslash W(R)=\{W(R^{\prime})v_{1},W(R^{\prime})v_{2},\ldots,W( R^{\prime})v_{k}\}\) be the set of all right cosets of \(W(R^{\prime})\) in \(W(R).\) Here we have assumed that \(v_{1}\) is the identity element of \(W(R),\) and \(W(R^{\prime})v_{i}\neq W(R^{\prime})v_{j}\) for \(i\neq j.\) Fix a positive root system \(P(R)\) of \(R.\) Then the set of all positive root systems of \(R\) is \(\{wv_{1}P(R),wv_{2}P(R),\ldots wv_{k}P(R):w\in W(R^{\prime})\}.\) Here the positive root systems in this set are all distinct. Since \(W(R^{\prime})\) acts simply transitively on the set of all positive root systems of \(R^{\prime},\) there exists exactly one positive root system of \(R\) containing \(P(R^{\prime})\) in the set \(\{wv_{i}P(R):w\in W(R^{\prime})\}\) for each \(1\leq i\leq k.\) So the number of positive root systems of \(R\) containing \(P(R^{\prime})\) is \(k=|W(R^{\prime})\backslash W(R)|=|W(R)|/|W(R^{\prime})|.\)
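For instance, if \(R\) is of type \(A_{2}\) with simple roots \(\alpha,\beta,\) and \(R^{\prime}=\{\pm\alpha\}\) with \(P(R^{\prime})=\{\alpha\},\) then \(|W(R)|/|W(R^{\prime})|=6/2=3;\) indeed, among the six positive root systems of \(R\) (one for each Weyl chamber), exactly three contain \(\alpha,\) namely \(\{\alpha,\beta,\alpha+\beta\},\{\alpha,-\beta,\alpha+\beta\},\) and \(\{\alpha,-\beta,-\alpha-\beta\}.\)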
## 3. Discrete series representations
We will continue with the notations from the previous section. Also assume that \(G\) is a connected simple Lie group with finite centre and the Lie algebra of \(G\) is \(\mathfrak{g}_{0},\) and let \(K\)
be the connected Lie subgroup of \(G\) with Lie algebra \(\mathfrak{k}_{0}.\) Since \(\operatorname{rank}(G)=\operatorname{rank}(K),G\) admits discrete series representations. A non-singular linear function \(\gamma\) on \(i\mathfrak{t}_{0}\) relative to \(\Delta\) defines uniquely a positive root system \(P_{\gamma}\) of \(\Delta.\) Define \(\rho_{\mathfrak{g}}=\frac{1}{2}\sum_{\alpha\in P_{\gamma}}\alpha,\rho_{ \mathfrak{t}}=\frac{1}{2}\sum_{\alpha\in P_{\gamma}\cap\Delta_{\mathfrak{t}}}\alpha.\) If \(\gamma+\rho_{\mathfrak{g}}\) is analytically integral(that is, \(\gamma+\rho_{\mathfrak{g}}\) is the differential of a Lie group homomorphism on the Cartan subgroup of \(G\) corresponding to \(\mathfrak{t}_{0}\)), then there exists a discrete series representation \(\pi_{\gamma}\) with infinitesimal character \(\chi_{\gamma};\) the associated \((\mathfrak{g},K)\)-module \(\pi_{\gamma,K}\) contains an irreducible \(K\)-submodule with highest weight \(\Gamma=\gamma+\rho_{\mathfrak{g}}-2\rho_{\mathfrak{t}};\) it occurs with multiplicity one in \(\pi_{\gamma,K}.\) Any other irreducible \(K\)-module that occurs in \(\pi_{\gamma,K}\) has highest weight of the form \(\Gamma+\sum_{\alpha\in P_{\gamma}}n_{\alpha}\alpha,\) with \(n_{\alpha}\) a non-negative integer; and two such representations \(\pi_{\gamma},\) and \(\pi_{\gamma^{\prime}}\) are unitarily equivalent _iff_\(\gamma=w\gamma^{\prime}\) for some \(w\in W_{\mathfrak{t}}\)[5, Th. 9.20, Ch. IX]. This \(\gamma\) is called the _Harish-Chandra parameter_, \(\Gamma\) is called the _Blattner parameter_ of the discrete series representation \(\pi_{\gamma},\) and the positive root system \(P_{\gamma}\) is called the _Harish-Chandra root order_ corresponding to \(\gamma.\) Upto unitary equivalence, these are all discrete series representations of \(G\)[5, Th. 12.21, Ch. XII]. Hence to get non-equivalent discrete series representations, we may assume that the Harish-Chandra root order \(P_{\gamma}\) corresponding to \(\gamma\) contains \(P_{\mathfrak{t}}\) so that \(P_{\gamma}\cap\Delta_{\mathfrak{t}}=P_{\mathfrak{t}}.\) If \(G/K\) is Hermitian symmetric that is, if \(\mathfrak{k}_{0}\) has non-zero centre, then \(\pi_{\gamma}\) is a holomorphic discrete series representation _iff_ the Harish-Chandra root order corresponding to \(\gamma\) is a Borel-de Siebenthal positive root system [2]. If \(G/K\) is not Hermitian symmetric that is, if \(\mathfrak{k}_{0}\) is semisimple, then \(\pi_{\gamma}\) is a Borel-de Siebenthal discrete series representation (defined in [6] analogous to holomorphic discrete series representations) _iff_ the Harish-Chandra root order \(P_{\gamma}\) is a Borel-de Siebenthal positive root system [6]. Note that the infinitesimal character \(\chi_{\gamma}\) is the character of the Verma module of \(\mathfrak{g}\) with highest weight \(\gamma-\rho_{\mathfrak{g}}.\) Thus \(\chi_{\gamma}\) is trivial _iff_\(\gamma=\rho_{\mathfrak{g}}.\) Now the proof of Th. 1.2 is given below.
**Proof of Th. 1.2:** Since the unitary equivalence classes of discrete series representations with trivial infinitesimal character are in bijective correspondence with the set of all positive root systems of \(\Delta\) containing \(P_{\mathfrak{t}},\) the first part follows from Lemma 2.6. If \(G/K\) is Hermitian symmetric, then since there are exactly two Borel-de Siebenthal positive root systems containing \(P_{\mathfrak{t}}\) [Remark 2.5(ii)], the second part follows. If \(G/K\) is not Hermitian symmetric, then since the number of Borel-de Siebenthal positive root systems containing \(P_{\mathfrak{t}}\) is the covering index of the Lie group \(Int(\mathfrak{g})\) by Cor 1.1.1, the third part also follows.
## Acknowledgement
The author acknowledges the financial support from the Department of Science and Technology (DST), Govt. of India under the Scheme "Fund for Improvement of S&T Infrastructure (FIST)" [File No. SR/FST/MS-I/2019/41].
# OceanBench: The Sea Surface Height Edition

J. Emmanuel Johnson, Quentin Febvre, Anastasia Gorbunova, Sammy Metref, Maxime Ballarotta, Julien Le Sommer, Ronan Fablet

arXiv:2309.15599v1 (2023-09-27): http://arxiv.org/abs/2309.15599v1
###### Abstract
The ocean is a crucial component of the Earth's system. It profoundly influences human activities and plays a critical role in climate regulation. Our understanding has significantly improved over the last decades with the advent of satellite remote sensing data, allowing us to capture essential sea surface quantities over the globe, e.g., sea surface height (SSH). Despite their ever-increasing abundance, ocean satellite data presents challenges for information extraction due to their sparsity and irregular sampling, signal complexity, and noise. Machine learning (ML) techniques have demonstrated their capabilities in dealing with large-scale, complex signals. Therefore we see an opportunity for these ML models to harness the full extent of the information contained in ocean satellite data. However, data representation and relevant evaluation metrics can be _the_ defining factors when determining the success of applied ML. The processing steps from the raw observation data to a ML-ready state and from model outputs to interpretable quantities require domain expertise, which can be a significant barrier to entry for ML researchers. In addition, imposing fixed processing steps, like committing to specific variables, regions, and geometries, will narrow the scope of ML models and their potential impact on real-world applications. **OceanBench** is a unifying framework that provides standardized processing steps that comply with domain-expert standards. It is designed with a flexible and pedagogical abstraction: it a) provides plug-and-play data and pre-configured pipelines for ML researchers to benchmark their models w.r.t. ML and domain-related baselines and b) provides a transparent and configurable framework for researchers to customize and extend the pipeline for their tasks. In this work, we demonstrate the OceanBench framework through a first edition dedicated to SSH interpolation challenges. We provide datasets and ML-ready benchmarking pipelines for the long-standing problem of interpolating observations from simulated ocean satellite data, multi-modal and multi-sensor fusion issues, and transfer-learning to real ocean satellite observations. The OceanBench framework is available at github.com/jejohnson/oceanbench and the dataset registry is available at github.com/quentinf00/oceanbench-data-registry.
## 1 Motivation
The ocean is vital to the Earth's system [28]. It plays a significant role in climate regulation regarding carbon [40] and heat uptake [87]. It is also a primary driver of human activities (e.g., maritime traffic and world trade, marine resources and services) [105; 92]. However, monitoring the ocean is a critical challenge: the ocean state can only partially be determined because most of the ocean consists of subsurface quantities that we cannot directly observe. Thus, to quantify even a fraction of the physical or biochemical ocean state, we must often rely only on surface quantities that we can monitor from space, drifting buoys, or autonomous devices. Satellite remote sensing, in particular, is one of the most effective ways of measuring essential sea surface quantities [2] such as sea surface height (SSH) [94], sea surface temperature (SST) [77], and ocean color (OC) [53]. While these variables characterize only a tiny portion of the ocean ecosystem, they present a gateway to many other derived physical quantities [92].
Although we can access observable sea surface quantities, they are generally irregularly and extremely sparsely sampled. For instance, satellite-derived SSH data has less than 5% coverage of the globe daily [94]. These sampling gaps make the characterization of ocean processes highly challenging for operational products and downstream tasks that depend on relevant gap-free variables. This has motivated a rich literature in geoscience over the last decades, mainly using geostatistical kriging methods [94; 101] and model-driven data assimilation schemes [55; 60]. Despite significant progress, these schemes often fail to fully leverage the potential of the available observation datasets. This has naturally advocated for exploring data-driven approaches like shallow ML schemes [7; 6; 96; 71]. Very recently, deep learning schemes [115; 74; 9] have become appealing solutions to benefit from existing large-scale observation and simulation datasets and reach significant breakthroughs in the monitoring of upper ocean dynamics from scarcely and irregularly sampled observations. However, the heterogeneity and characteristics of the observation data present major challenges for effectively applying these methods beyond idealized case studies. A data source could have different variables, geometries, and noise levels, resulting in many domain-specific preprocessing procedures that can vastly change the solution outcome. Furthermore, the evaluation procedure of the methods and their effectiveness can be regionally-dependent as the physical phenomena vary in space and time, which adds another layer of complexity in convincing domain scientists of their trustworthiness. So the entire ML pipeline now requires a unified framework for dealing with heterogeneous data sources, different pre- and post-processing methodologies, and regionally-dependent evaluation procedures.
To address these challenges, we introduce **OceanBench**, a framework for co-designing machine-learning-driven high-level experiments from ocean observations. It consists of an end-to-end framework for piping data from its raw form to an ML-ready state and from model outputs to interpretable quantities. We regard OceanBench as a key facilitator for the uptake of MLOps tools and research [66; 93] for ocean-related datasets and case studies. This first edition provides datasets and ML-ready benchmarking pipelines for SSH interpolation problems, an essential topic for the space oceanography community, related to ML communities dealing with issues like in-painting [110], denoising [98; 97], and super-resolution [106]. We expect OceanBench to facilitate new challenges to the applied machine learning community and contribute to meaningful ocean-relevant breakthroughs. The remainder of the paper is organized as follows: in §2, we outline some related work that was inspirational for this work; in §3, we formally outline OceanBench by highlighting the target audience, code structure, and problem scope; in §4, we outline the problem formulation of SSH interpolation and provide some insight into different tasks related to SSH interpolation where OceanBench could provide some helpful utility; and in §5 we give some concluding remarks while also informally inviting other researchers to help fill in the gaps.
## 2 Related Work
Machine learning applied to geosciences is becoming increasingly popular, but there are few examples of transparent pipelines involving observation data. After a thorough literature review, we have divided the field into three camps of ML applications that pertain to this work: 1) toy simulation datasets, 2) reanalysis datasets, and 3) observation datasets. We outline the literature for each of the three categories below.
**Toy Simulation Data**. One set of benchmarks focuses on learning surrogate models for well-defined but chaotic dynamical systems in the form of ordinary differential equations (ODEs) and partial differential equations (PDEs) and there are freely available code bases which implement different ODEs/PDEs [52; 95; 3; 64; 8; 102; 56; 85]. This is a great testing ground for simple toy problems that better mimic the structures we see in real-world observations. Working with simulated data is excellent because it is logistically simple and allows users to test their ideas on toy problems without increasing the complexity when dealing with real-world data. However, these are ultimately simple physical models that often do not reflect the authentic structures we see in real-world, observed data.
**Reanalysis Data**. This is assimilated data of real observations and model simulations. There are a few major platforms that host ocean reanalysis data like the Copernicus Marine Data Store [36; 33; 34; 37], the Climate Data Store [25], the BRAN2020 Model [26], and the NOAA platform [15]. However, to our knowledge, there are no standard ML-specific, ocean-related tasks to accompany the data. On the atmospheric side, platforms like WeatherBench[86], ClimateBench[107], ENS10[10] were designed to assess short-term and medium-term forecasting using ML techniques, with recent successes of ML [69; 84]. The clarity of the challenges set by the benchmark suites has inspired the idea of OceanBench, where we directly focus on problems dealing with ocean observation data.
**Observation Data**. These observation datasets (typically sparse) stem from satellite observations that measure surface variables or in-situ measurements that measure quantities within the water column. Some major platforms to host data include the Marine Data Store [32; 31], the Climate Data Store [23; 24; 22], ARGO [109], and the SOCAT platform [11]. However, it is more difficult to assess the efficacy of operational ML methods that have been trained only on observation data and, to our knowledge, there is no coherent ML benchmarking system for ocean state estimation. There has been significant effort by the _Ocean-Data-Challenge_ Group1 which provides an extensive suite of datasets and metrics for SSH interpolation. Their efforts heavily inspired our work, and we hope that OceanBench can build upon their work by adding cohesion, facilitating ease of use for ML research, and providing a high-level framework for ML-related data products.
Footnote 1: Ocean Data Challenge group: Freely associated scientist for oceanographic algorithm and product improvements (ocean-data-challenges.github.io)
## 3 OceanBench
### 3.1 Why OceanBench?
There is a high barrier to entry in working with ocean observations for researchers in applied machine learning as there are many processing steps for both the observation data and the domain-specific evaluation procedures. OceanBench aims to lower the barrier to entry cost for ML researchers to make meaningful progress in the field of state prediction. We distribute a standardized, transparent, and flexible procedure for defining data and evaluation pipelines for data-intensive geoscience applications. Proposed examples and case studies provide a plug-and-play framework to benchmark novel ML schemes w.r.t. state-of-the-art, domain-specific ML baselines. In addition, we adopt a pedagogical abstraction that allows users to customize and extend the pipelines for their specific tasks. To our knowledge, no framework embeds processing steps for earth observation data in a manner compatible with MLOps abstractions and standards regarding reproducibility and evaluation procedures. Ultimately, we aim to facilitate the uptake of ML schemes to address ocean observation challenges and to bring new challenges to the ML community to extend additional ML tools and methods for irregularly-sampled and partially-observed high-dimensional space-time dynamics. The abstractions proposed here apply beyond ocean sciences and SSH interpolation to other geosciences with similar tasks that intersect with machine learning.
### 3.2 Code Structure
OceanBench is lightweight in terms of the core functionality. We keep the code base simple and focus more on how the user can combine each piece. We adopt a strict functional style because it is easier to maintain and combine sequential transformations. There are five features we would like to highlight about OceanBench: 1) Data availability and version control, 2) an agnostic suite of geoprocessing tools for xarray datasets that were aggregated from different sources, 3) Hydra integration to pipe sequential transformations, 4) a flexible multi-dimensional array generator from xarray datasets that
are compatible with common deep learning (DL) frameworks, and 5) a JupyterBook [38] that offers library tutorials and demonstrates use-cases. In the following section, we highlight these components in more detail.
**Data Availability**. The most important aspect is the public availability of the datasets. We aggregate all pre-curated datasets from other sources, e.g. the _Ocean-Data-Challenge_[13; 12], and organize them to be publicly available from a single source 2. We also offer a few derived datasets which can be used for demonstrations and evaluation. Data is never static in a pipeline setting, as one can have many derived datasets which stem from numerous preprocessing choices. In fact, in research, we often work with derived datasets that have already been through some preliminary preprocessing methods. To facilitate the ever-changing nature of data, we use the Data Version Control (DVC) tool [67], which offers a git-like version control of the datasets.
Footnote 2: Available at: oceanbench-data-registry.github.com
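As a minimal illustration of how a DVC-versioned dataset can be accessed programmatically, the sketch below uses DVC's Python API (`dvc.api`) against the data registry mentioned above. The file path is a hypothetical placeholder, not an actual entry of the registry, and the snippet is only meant to convey the workflow.

```python
import dvc.api

# Data registry discussed above; PATH is a hypothetical placeholder used for illustration only.
REPO = "https://github.com/quentinf00/oceanbench-data-registry"
PATH = "data/osse_gulfstream_ssh.nc"

# Resolve where the DVC-tracked file lives in remote storage...
url = dvc.api.get_url(PATH, repo=REPO)

# ...and stream its raw bytes without cloning the repository.
with dvc.api.open(PATH, repo=REPO, mode="rb") as f:
    payload = f.read()

print(url, len(payload))
```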
**Geoprocessing Tools**. The core OceanBench library offers a suite of functions specific to processing geo-centric data. While a few particular functionalities vary from domain to domain, many operations are standard, e.g., data variable selections, filtering/smoothing, regridding, coordinate transformations, and standardization. We work almost exclusively with the xarray[58] framework because it is a coordinate-aware, flexible data structure. In addition, the geoscience community has an extensive suite of specialized packages that operate in the xarray framework to accomplish many different tasks. Almost all OceanBench toolsets are exclusively within the xarray framework to maintain compatibility with a large suite of tools already available from the community.
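To make this concrete, the following sketch chains a few typical xarray-based geoprocessing steps (regional selection using the Gulfstream bounds of Table 1, standardization, and spatial coarsening) on a synthetic gridded SSH field. The function names are ours for illustration and do not correspond to the actual OceanBench API.

```python
import numpy as np
import xarray as xr


def select_gulfstream(da: xr.DataArray) -> xr.DataArray:
    # Subset to the 10x10 degree Gulfstream domain used in the data challenges.
    return da.sel(lon=slice(-65.0, -55.0), lat=slice(33.0, 43.0))


def standardize(da: xr.DataArray) -> xr.DataArray:
    # Remove the mean and scale to unit variance (over all dimensions).
    return (da - da.mean()) / da.std()


def coarsen(da: xr.DataArray, factor: int = 2) -> xr.DataArray:
    # Block-average the field onto a coarser grid.
    return da.coarsen(lon=factor, lat=factor, boundary="trim").mean()


if __name__ == "__main__":
    # Synthetic stand-in for a gridded SSH product (the real data comes from the registry).
    lon = np.arange(-70.0, -50.0, 0.05)
    lat = np.arange(30.0, 45.0, 0.05)
    time = np.arange(np.datetime64("2012-10-01"), np.datetime64("2012-10-06"))
    ssh = xr.DataArray(
        np.random.randn(time.size, lat.size, lon.size),
        coords={"time": time, "lat": lat, "lon": lon},
        dims=("time", "lat", "lon"),
        name="ssh",
    )
    out = coarsen(standardize(select_gulfstream(ssh)))
    print(out.sizes)
```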
**Hydra Integration**. As discussed above, many specific packages accomplish many different tasks. However, what needs to be added is the flexibility to mix and match these operations as the users see fit. Hydra[111] provides a configurable way to aggregate and _pipe_ many sequential operations together. It also maintains readability, robustness, and flexibility through the use of .yaml files which explicitly highlight the function used, the function parameters chosen, and the sequence of operations performed. In the ML software stack, Hydra is often used to manage the model, optimizer, and loss configurations which helps the user experiment with different options. We apply this same concept to the preprocessing, geoprocessing, and evaluation steps, which are often more important than the model configuration in geoscience-related tasks.
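The sketch below illustrates the general idea: a sequence of transformations is declared in a Hydra/OmegaConf configuration and instantiated as partial functions which are then composed. The `_target_` entries point to illustrative functions defined in the same file, not to OceanBench modules, and in practice the configuration would live in a .yaml file rather than a string.

```python
from functools import reduce

import numpy as np
import xarray as xr
from hydra.utils import instantiate
from omegaconf import OmegaConf


def select_region(ds, lon_min, lon_max, lat_min, lat_max):
    return ds.sel(lon=slice(lon_min, lon_max), lat=slice(lat_min, lat_max))


def standardize(ds):
    return (ds - ds.mean()) / ds.std()


# Stand-in for a .yaml config: each step names a callable and its arguments.
PIPELINE = """
steps:
  - _target_: __main__.select_region
    _partial_: true
    lon_min: -65.0
    lon_max: -55.0
    lat_min: 33.0
    lat_max: 43.0
  - _target_: __main__.standardize
    _partial_: true
"""

if __name__ == "__main__":
    cfg = OmegaConf.create(PIPELINE)
    steps = [instantiate(step) for step in cfg.steps]   # each step becomes a partial function
    lon, lat = np.arange(-70, -50, 0.5), np.arange(30, 45, 0.5)
    ds = xr.Dataset(
        {"ssh": (("lat", "lon"), np.random.randn(lat.size, lon.size))},
        coords={"lat": lat, "lon": lon},
    )
    out = reduce(lambda d, f: f(d), steps, ds)           # compose the steps left-to-right
    print(out["ssh"].shape)
```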
**XRPatcher**3. Every machine learning pipeline will inevitably require moving data from the geospecific data structure to a multi-dimensional array easily digestible for ML models. A rather underrated, yet critical, feature of ML frameworks such as PyTorch[83] (Lightning[45]) and TensorFlow[1] (Keras[30]) is the abstraction of the dataset, dataloader, datamodules, and data pipelines. In applied ML in geosciences, the data pipelines are often more important than the actual model [89]. With XRPatcher, the user can control the _patch_-size and the _stride_-step, which can generate arbitrary coordinate-aware items directly from the xarray data structure. In addition, XRPatcher provides a way to reconstruct the fields from an arbitrary patch configuration. This robust reconstruction step is convenient to extend the ML inference step where one can reconstruct entire fields of arbitrary dimensions beyond the training configuration, e.g., to account for the border effects within the field (see appendix E) or to reconstruct quantities in specific regions or globally.
Footnote 3: Available at: github.com/jejjohnson/xrpatcher
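To give a flavour of the underlying patching logic, the sketch below is an illustrative re-implementation (not the actual XRPatcher interface) that slices a gridded SSH field into coordinate-aware patches given per-dimension patch sizes and strides; the reconstruction step described above is omitted for brevity.

```python
import itertools

import numpy as np
import xarray as xr


def iter_patches(da: xr.DataArray, patches: dict, strides: dict):
    """Yield coordinate-aware patches of `da`, e.g. patches={"time": 5, "lat": 64, "lon": 64}."""
    starts = [
        range(0, da.sizes[dim] - patches[dim] + 1, strides[dim]) for dim in patches
    ]
    for corner in itertools.product(*starts):
        indexer = {dim: slice(s, s + patches[dim]) for dim, s in zip(patches, corner)}
        yield da.isel(indexer)   # each item keeps its time/lat/lon coordinates


if __name__ == "__main__":
    da = xr.DataArray(
        np.random.randn(30, 200, 200),
        coords={
            "time": np.arange(30),
            "lat": np.linspace(33, 43, 200),
            "lon": np.linspace(-65, -55, 200),
        },
        dims=("time", "lat", "lon"),
    )
    spec = {"time": 5, "lat": 64, "lon": 64}
    items = list(iter_patches(da, patches=spec, strides=spec))
    print(len(items), items[0].dims, items[0].shape)
```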
**JupyterBook**. Building a set of tools is relatively straightforward; however, ensuring that it sees a broader adoption across a multi-disciplinary community is much more challenging. We invested heavily in showing use cases that appeal to different users with the JupyterBook platform [38]. Code with context is imperative for domain and ML experts as we need to explain and justify each component and give many examples of how they can be used in other situations. Thus, we have paid special attention to providing an extensive suite of tutorials, and we also highlight use cases for how one can effectively use the tools.
### 3.3 Problem Scope
There are many problems that are of great interest to the ocean community [29], but we limit the scope to state estimation problems [21]. Under this scope, there are research questions that are relevant to operational centers which are responsible for generating the vast majority of global ocean state maps [36; 34; 33; 37] that are subsequently used for many downstream tasks [92]. For example: how can we effectively use heterogeneous observations to predict the ocean state on the sea
surface [55; 62; 101; 44; 14; 177]; how can we incorporate prior physics knowledge into our predictions of ocean state trajectories [55; 29; 92]; and how can we use the current ocean state at time \(T\) to predict the future ocean state at time \(T+\tau\) [42; 86; 16]. In the same vein, there are more research questions that are of interest to the academic modeling community. For example: is simulated or reanalysis data more effective for learning ML emulators that replace expensive ocean models [49; 113]; what metrics are more effective for assessing our ability to mimic ocean dynamics [75; 48]; and how much model error can we characterize when learning from observations [18; 68].
We have cited many potential applications of how ML can be applied to tackle the state estimation problem. However, to our knowledge there is no publicly available, standardized benchmark system that caters to ML-research standards. We believe that, irrespective of the questions posed above and the data we access, there are many logistical similarities for each of the problem formulations where we can start to set standards for a subset of tasks like interpolation or forecasting. On the front-end, we need a way to select regions, periods, variables, and a valid train-test split (see sec. D.1). On the back-end, we need a way to transform the predictions into more meaningful variables with appropriate metrics for validation (see sec. D.2 and D.3). OceanBench was designed to be an agnostic tool that is extensible to the types of datasets, processing techniques and metrics needed for working with a specific class of Ocean-related datasets. We strongly feel that a suite like this is the first step in designing task-specific benchmarks within the ocean community that are compatible with ML standards. In the remainder of the paper, we will demonstrate how OceanBench can be configured to facilitate a ML-ready data challenge for our first edition: sea surface height interpolation.
## 4 _Sea Surface Height Edition_
Sea surface height (SSH) is one of the most critical, observable quantities when determining the ocean state. It is widely used to study ocean dynamics and the adverse impact on global climate and human activities [78]. SSH enables us to track phenomena such as currents and eddies [78; 27; 82], which leads to a better quantification of the transport of energy, heat, and salt. In addition, SSH helps us quantify sea level rise at regional and global scales [4; 39], which is used for operational monitoring of the marine environment [105]. Furthermore, SSH characterization provides a plethora of data products that downstream tasks can use for many other applications [79; 20]. Due to the irregular sampling delivered by satellite altimeter, state-of-the-art operational methods using optimal interpolation schemes [94; 101] or model-driven data assimilation [7; 6; 71; 96] fail to fully retrieve SSH dynamics at fine scales below 100-200km on a global or regional scale, so improving the space-time resolution of SSH fields has been a critical challenge in ocean science. Beyond some technological developments [51], recent studies support the critical role of ML-based schemes in overcoming the current limitations of the operational systems [14; 55; 115]. The rest of this section gives an overview of the general problem definition for SSH interpolation, followed by a brief ontology for ML approaches to address the problem. We also give an overview of some experimental designs and datasets with a demonstration of metrics and plots generated by the OceanBench platform.
### 4.1 Problem Definition
We are dealing with satellite observations, so we are interested in the domain across the Earth's surface. Let us define the Earth's domain by some spatial coordinates, \(\mathbf{x}=[\text{Longitude},\text{Latitude}]^{\top}\in\mathbb{R}^{D_{s}}\), and temporal coordinates, \(t=[\text{Time}]\in\mathbb{R}^{+}\), where \(D_{s}\) is the dimensionality of the coordinate vector. We can define some spatial (sub-)domain, \(\Omega\subseteq\mathbb{R}^{D_{s}}\), and a temporal (sub-)domain, \(\mathcal{T}\subseteq\mathbb{R}^{+}\). This domain could be the entire globe for 10 years or a small region within the North Atlantic for 1 year.
\[\text{Spatial Coordinates}:\quad\mathbf{x}\in\Omega\subseteq\mathbb{R}^{D_{s}} \tag{1}\]
\[\text{Temporal Coordinates}:\quad t\in\mathcal{T}\subseteq\mathbb{R}^{+}. \tag{2}\]
In this case \(D_{s}=2\) because we only have two coordinates; however, we can apply coordinate transformations like spherical to Cartesian. Likewise, we can transform the temporal coordinates, e.g., with cyclic transformations or sinusoidal embeddings [104]. We have two fields of interest from these spatiotemporal coordinates: the state and the observations.
\[\text{State}: \mathbf{u}(\mathbf{x},t):\Omega\times\mathcal{T}\rightarrow\mathbb{R} ^{D_{u}} \tag{3}\] \[\text{Observations}: \mathbf{y}_{obs}(\mathbf{x},t):\Omega\times\mathcal{T}\rightarrow \mathbb{R}^{D_{obs}} \tag{4}\]
The state domain, \(u\in\mathcal{U}\), is a scalar or vector-valued field of size \(D_{u}\) which is typically the quantity of interest and the observation domain, \(y_{obs}\in\mathcal{Y}_{obs}\), is the observable quantity which is also a scalar or vector-valued field of size \(D_{obs}\). Now, we make the assumption that we have an operator \(\mathcal{H}\) that transforms the field from the state space, \(\mathbf{u}\), to the observation space, \(\mathbf{y}_{obs}\).
\[\mathbf{y}_{obs}(\mathbf{x},t)=\mathcal{H}\left(\mathbf{u}(\mathbf{x},t),t,\mathbf{ \varepsilon},\mathbf{\mu}\right) \tag{5}\]
This equation is the continuous function defined over the entire spatiotemporal domain. The operator, \(\mathcal{H}(\cdot)\), is flexible and problem dependent. For example, in some discretized settings there are 0's wherever there are no observations and 1's wherever there are observations, and in other discretized settings it takes a weighted average of the neighboring pixels. We also include a generic noise function, \(\mathbf{\varepsilon}(\mathbf{x},t)\). This could stem from a distribution, it could be a stationary noise operator, \(\mathbf{\varepsilon}(\mathbf{x})\), or it could be constant in space but vary in time, \(\mathbf{\varepsilon}(t)\). We also include a control parameter, \(\mathbf{\mu}\), representing any external factors or latent variables that could connect the state vector to the observation vector, e.g., sea surface temperature.
\[\mathbf{\eta}_{obs}(\mathbf{x},t)=\mathcal{H}\left(\mathbf{\eta}(\mathbf{x},t),t,\mathbf{ \varepsilon},\mathbf{\mu}\right) \tag{6}\]
In practice, the satellite providers have a reasonable estimate of the structured noise level we can expect from satellite altimetry data; however, unresolved noise could still be present. Finally, we are interested in finding some model, \(\mathcal{M}\), that maps the SSH we observe to the true SSH given by
\[\mathcal{M}:\mathbf{\eta}_{obs}(\mathbf{x},t,\mathbf{\mu})\rightarrow\mathbf{\eta}( \mathbf{x},t), \tag{7}\]
which is essentially an inverse problem that maps the observations to the state. One could think of it as trying to find the inverse operator, \(\mathcal{M}=\mathcal{H}^{-1}\), but this could be some other arbitrary operator.
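For illustration, the sketch below implements a masking-style observation operator in the spirit of Eq. (6): the "state" SSH is kept only where a toy altimeter track samples it and is optionally corrupted with Gaussian noise. The diagonal-band mask is a crude stand-in for a real NADIR track, and all names are ours rather than part of OceanBench.

```python
import numpy as np
import xarray as xr


def observe(ssh: xr.DataArray, obs_mask: xr.DataArray, noise_std: float = 0.0) -> xr.DataArray:
    # Keep SSH only where the satellite sampled it; add optional Gaussian noise.
    noisy = ssh + noise_std * np.random.randn(*ssh.shape)
    return noisy.where(obs_mask)


if __name__ == "__main__":
    lat = np.linspace(33.0, 43.0, 200)
    lon = np.linspace(-65.0, -55.0, 200)
    ssh = xr.DataArray(
        np.random.randn(lat.size, lon.size), coords={"lat": lat, "lon": lon}, dims=("lat", "lon")
    )
    # Toy "track": a thin diagonal band of observed pixels across the domain.
    ii, jj = np.meshgrid(np.arange(lat.size), np.arange(lon.size), indexing="ij")
    track = xr.DataArray(np.abs(ii - jj) < 2, coords=ssh.coords, dims=ssh.dims)
    ssh_obs = observe(ssh, track, noise_std=0.01)
    print(f"observed fraction: {float(track.mean()):.3f}")
```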
### 4.2 Machine Learning Model Ontology
In general, we are interested in finding some parameterized operator, \(\mathcal{M}_{\mathbf{\theta}}\), that maps the incomplete SSH field to the complete SSH field
\[\mathcal{M}_{\mathbf{\theta}}:\mathbf{\eta}_{obs}(\mathbf{x},t,\mathbf{\mu})\rightarrow\bm {\eta}(\mathbf{x},t), \tag{8}\]
whereby we learn the parameters from data. The two main tasks we can define from this problem setup are 1) interpolation and 2) extrapolation. We define _interpolation_ as the case when the boundaries of the inferred state domain lie within a predefined shape for the boundaries of the spatiotemporal observation domain. For example, the shape of the spatial domain could be a line, box, or sphere, and the shape of the temporal domain could be a positive real number line. We define _extrapolation_ as the case where the boundaries of the inferred state domain are outside the boundaries of the spatiotemporal observation domain. In this case, the inferred state domain could be outside of either domain or both. A prevalent specific case of extrapolation is _hindcasting_ or _forecasting_, where the inferred state domain lies within the spatial observation domain's boundaries but outside of the temporal observation domain's. In the rest of this paper, we will look exclusively at the interpolation problem. However, we refer the reader to appendix F for a more detailed look at other subtasks that can arise.
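The distinction can be made concrete with the toy splits below: for interpolation the evaluation days are drawn from inside the observed time window, whereas for the forecasting flavour of extrapolation they are taken after it. The helper names and the seven-day/ten-day choices are arbitrary and purely illustrative.

```python
import numpy as np
import xarray as xr


def interpolation_split(ds: xr.Dataset, every: int = 7):
    # Hold out every `every`-th day: evaluation times lie *inside* the observed window.
    n = ds.sizes["time"]
    test_idx = np.arange(0, n, every)
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    return ds.isel(time=train_idx), ds.isel(time=test_idx)


def forecast_split(ds: xr.Dataset, horizon: int = 10):
    # Hold out the last `horizon` days: evaluation times lie *after* the observed window.
    return ds.isel(time=slice(None, -horizon)), ds.isel(time=slice(-horizon, None))


if __name__ == "__main__":
    time = np.arange(np.datetime64("2012-10-01"), np.datetime64("2013-10-01"))
    ds = xr.Dataset({"ssh": ("time", np.random.randn(time.size))}, coords={"time": time})
    train, test = interpolation_split(ds)
    past, future = forecast_split(ds)
    print(train.sizes["time"], test.sizes["time"], past.sizes["time"], future.sizes["time"])
```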
From a ML point of view, we can explore various ways to define the operator in equation (7). We may distinguish three main categories: (i) coordinate-based methods that learn a parameterized continuous function to map the domain coordinates to the scalar values, (ii) methods that learn an explicit mapping from the observations to the state, and (iii) implicit methods defined as the solution of an optimization problem. The first category comprises kriging approaches, which have been used operationally with historical success [112; 94]. Beyond such covariance-based approaches, recent contributions explore more complex trainable functional models [72], basis functions [101], and neural networks [62]. The second category of schemes bypasses the physical modeling aspect and amortizes the prediction directly using state-of-the-art neural architectures such as UNets and ConvLSTMs [115; 74; 9]. This category may straightforwardly benefit from available auxiliary observations [23; 24; 22] to state the interpolation problem as a super-resolution [106] or image-to-image translation problem [81; 59]. The third category relates to inverse problem formulations and associated deep learning schemes, for example
deep unfolding methods and plug-and-play priors [114]. Interestingly, recent contributions explore novel neural schemes which combine data assimilation formulations [21] and learned optimizer strategies [14; 44]. We provide a more detailed ontology of methods used for interpolation problems in appendix G. We consider at least one baseline approach from each category for each data challenge described in section 4.4. While all these methods have pros and cons, we expect the OceanBench platform to showcase new experimental evidence and understanding regarding their applicability to SSH interpolation problems.
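As a minimal illustration of the first (coordinate-based) category, the sketch below fits a small PyTorch multilayer perceptron that maps \((\text{lon},\text{lat},t)\) coordinates directly to SSH values on a toy target; it is a stand-in for the idea only, not one of the cited baselines.

```python
import torch
import torch.nn as nn


class CoordinateMLP(nn.Module):
    """Map normalized (lon, lat, t) coordinates directly to an SSH value."""

    def __init__(self, hidden: int = 128, depth: int = 4):
        super().__init__()
        layers, dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.GELU()]
            dim = hidden
        layers.append(nn.Linear(dim, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)


if __name__ == "__main__":
    model = CoordinateMLP()
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    coords = torch.rand(1024, 3)                              # (lon, lat, t) rescaled to [0, 1]
    ssh_obs = torch.sin(4.0 * coords).sum(-1, keepdim=True)   # toy target standing in for observed SSH
    for _ in range(100):
        optim.zero_grad()
        loss = torch.mean((model(coords) - ssh_obs) ** 2)
        loss.backward()
        optim.step()
    print(float(loss))
```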
### 4.3 Experimental Design
The availability of multi-year simulation and observation datasets naturally advocates for the design of synthetic (or twin) experiments, referred to as observing system simulation experiments (OSSE), and of real-world experiments, referred to as observing system experiments (OSE). We outline these two experimental setups below.
**Observing System Simulation Experiments (OSSE)**. A staple and groundtruthed experimental setup uses a reference simulation dataset to simulate the conditions we can expect from actual satellite observations. This setup allows researchers and operational centers to create a fully-fledged pipeline that mirrors the real-world experimental setting. An ocean model simulation is deployed over a specified spatial domain and period, and a satellite observation simulator is deployed to simulate satellite observations over the same domain and period. This OSSE setup has primarily been considered for performance evaluation, as one can assess a reconstruction performance over the entire space-time domain. It also provides the basis for the implementation of classic supervised learning strategies [9; 74; 115]. The domain expert can vary the experimental conditions depending on the research question. For example, one could specify a region based on the expected dynamical regime [12] or add a certain noise level to the observation tracks based on the satellite specifications. The biggest downside to OSSE experiments is that we train models exclusively with ocean simulations which could produce models that fail to generalize to the actual ocean state. Furthermore, the simulations are often quite expensive, which prevents the community from having high spatial resolution over very long periods, which would be essential to capture as many dynamical regimes as possible.
**Observing System Experiments (OSE)**. As more observations have become available over the past few decades, we can also design experiments using real data. This involves aggregating as many observations from real ocean altimetry satellites as possible with some specific independent subset left out for evaluation purposes. A major downside to OSE experiments is that the sparsity and spatial coverage of the observations narrow the possible scope of performance metrics and make it very challenging to learn directly from observation datasets. The current standard altimetry data are high resolution but cover a tiny area. As such, they can only inform fine-scale SSH patterns in the along-track satellite direction and cannot explicitly reveal two-dimensional patterns. Despite these drawbacks, the OSE setup provides a quantitative evaluation of the generalizability of the ML methods concerning the true ocean state.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & OSSE & OSSE NADIR + SWOT & OSSE SST & OSE NADIR \\ \hline Data Type & Simulations & Pseudo-Observations & Simulations & Observations \\ Source & NEMO [5] & NEMO [5] & NEMO [5] & Altimetry [32] \\ Region & GulfStream & GulfStream & GulfStream & GulfStream \\ Domain Size & \(10\times 10^{\circ}\) & \(10\times 10^{\circ}\) & \(10\times 10^{\circ}\) & \(10\times 10^{\circ}\) \\ Longitude Extent & \([-65^{\circ},-55^{\circ}]\) & \([-65^{\circ},-55^{\circ}]\) & \([-65^{\circ},-55^{\circ}]\) & \([-65^{\circ},-55^{\circ}]\) \\ Latitude Extent & \([33^{\circ},43^{\circ}]\) & \([33^{\circ},43^{\circ}]\) & \([33^{\circ},43^{\circ}]\) & \([33^{\circ},43^{\circ}]\) \\ Resolution & \(0.05^{\circ}\times 0.05^{\circ}\) & \(0.05^{\circ}\times 0.05^{\circ}\) & \(0.05^{\circ}\times 0.05^{\circ}\) & \(7\) km \\ Grid Size & \(200\times 200\) & \(200\times 200\) & \(200\times 200\) & N/A \\ Num Datapoints & \(\sim\)14.6M & \(\sim\)14.6M & \(\sim\)1.6M \\ Period Start & 2012-10-01 & 2012-10-01 & 2012-10-01 & 2016-12-01 \\ Period End & 2013-09-30 & 2013-09-30 & 2013-09-30 & 2018-01-31 \\ Frequency & Daily & Daily & Daily & 1 Hz \\ \hline \hline \end{tabular}
\end{table}
Table 1: This table gives a brief overview of the datasets provided to complete the data challenges listed in 4.4 and A. Note that the OSSE datasets are all gridded products whereas the OSE NADIR is an alongtrack product. See figure 1 for an example of the OSSE NEMO Simulations for SSH and SST and pseudo-observations for NADIR & SWOT.
### 4.4 Data Challenges
We rely on existing OSSE and OSE experiments for SSH interpolation designed by domain experts [13; 12] and recast them into the OceanBench framework to deliver ML-ready benchmarking suites. The selected data challenges for this first edition address SSH interpolation for a 1000km\(\times\)1000km Gulfstream region. We briefly outline them below.
Figure 1: A snapshot at \(27^{th}\) October, 2012 of the sea level anomaly (SLA) from the NEMO simulation for the OSSE experiment outlined in section 4.3. The top row showcases the aggregated NADIR altimetry tracks and the aggregated SWOT altimetry tracks (12 hours before and 12 hours after) as well as the SST from the NEMO simulation. Each subsequent row showcases the following physical variables found in appendix B: (a) Sea Level Anomaly, (b) Kinetic Energy, (c) Relative Vorticity, and (d) Strain. Each column in the subsequent rows showcase the following reconstructed field from the NEMO simulation found in column (a): (b) MOST [101], (c) BFN-QG [55], and (d) 4DVarNet [14].
**Experiment I (_OSSE NADIR_)** addresses SSH interpolation using NADIR altimetry tracks which are very fine, thin ocean satellite observations (see Figure 1). It relies on an OSSE using high-resolution (\(1/60^{\circ}\) resolution) ocean simulations generated by the NEMO model over one year with a whole field every day.
**Experiment II (_OSSE SWOT_)** addresses SSH interpolation using NADIR and SWOT altimetry data jointly, where we complement the **OSSE NADIR** configuration with simulated SWOT observations. SWOT is a new satellite altimetry mission with a much higher spatial coverage but a much lower temporal resolution as illustrated in Figure 1. The higher spatial resolution allows us to see smaller-scale structures, but at the cost of a massive influx of observations (over \(\times 100\)).
**Experiment III (_OSSE SST_)** addresses SSH interpolation using altimetry and SST satellite data jointly. We complement the **OSSE SWOT** challenge with simulated SST observations. Satellite-derived SST observations are more abundantly available, and at a finer resolution, than SSH in real operational settings, and the two fields show visibly similar structures [51; 55]. This challenge therefore allows methods to take advantage of multi-modal learning [44; 115].
**Experiment IV (_OSE NADIR_)** addresses SSH interpolation for real NADIR altimetry data. In contrast to the three OSSE data challenges, it relies only on real observations aggregated from currently available satellite altimetry data. It involves a similar space-time sampling as Experiment I (**OSSE NADIR**) to evaluate the generalization of ML methods trained in Experiment I to real altimetry data. The training problem's complexity increases significantly due to the reference dataset's sparsity compared with the **OSSE NADIR** dataset. One may also explore transfer learning or fine-tuning strategies from the available OSSE dataset.
### 4.5 OceanBench Pipelines
For the four data challenges presented in the previous section, we used OceanBench pipelines to deliver a ML-ready benchmarking framework. We used the Hydra integration and the geoprocessing tools outlined in section 3.2 with specialized routines for regridding the ocean satellite data to a uniformly gridded product and vice versa when necessary. Appendix D showcases an example of the Hydra integration for the preprocessing pipeline. A key feature is the creation of a custom patcher for the appropriate geophysical variables using our XRPatcher tool, which is later integrated into custom datasets and dataloaders for the appropriate model architecture, e.g., coordinate-based or grid-based. We provide an example snippet of how this can be done easily in section E. OceanBench also features some tools specific to the analysis of SSH. For example, physically-interpretable variables like geostrophic currents and relative vorticity, which can be derived from first-order and second-order derivatives of the SSH, are essential for assessing the quality of the reconstructions generated by the models. Figure 1 showcases some fields of the most common physical variables used in the oceanography literature for the SSH-based analysis of sea surface dynamics. For more details regarding the nature of the physical variables, see appendix B.

Figure 2: This figure showcases some statistics for evaluation of the SSH field reconstructions for the OSSE NADIR experiment outlined in section 4. Subfigure (a) showcases the normalized root mean squared error (nRMSE), (b) showcases the isotropic power spectrum decomposition (PSD), (c) showcases isotropic PSD scores. The bottom row showcases the space-time PSD for the NEMO simulation (subfigure (d)) and the PSD scores for three reconstruction models: (e) the MIOST model [101], (f) the BFN-QG model [55], and (g) the 4DVarNet model [14].
Regarding the evaluation framework, we include domain-relevant performance metrics beyond the standard ML loss and accuracy functions. They account for the sampling patterns of the evaluation data. Spectral analytics are widely used in geoscience [55], and here we consider spectral scores, computed as the minimum spatial and temporal scales resolved by the reconstruction, as proposed in [55]. For example, figure 2 showcases how OceanBench generated the isotropic power spectrum and score and the space-time power spectrum decomposition and score. Table 2 outlines some standard and domain-specific scores for the experiments outlined in section 4.3. In terms of baselines, we report for each data challenge the performance of at least one approach for each of the categories outlined in Section 4.2.
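For concreteness, the sketch below computes a normalized RMSE score and a crude one-dimensional proxy of the spectral resolved scale: we assume the common \(1-\mathrm{RMSE}/\mathrm{RMS}(\text{reference})\) convention for the former, and define the latter as the finest wavelength at which the error-to-reference PSD ratio stays below one half. The exact definitions used in the data challenges are those of appendix C; this is only an illustrative simplification.

```python
import numpy as np


def nrmse_score(rec: np.ndarray, ref: np.ndarray) -> float:
    # Assumed convention: 1 - RMSE / RMS(reference); higher is better.
    rmse = np.sqrt(np.mean((rec - ref) ** 2))
    return float(1.0 - rmse / np.sqrt(np.mean(ref ** 2)))


def resolved_scale_km(rec: np.ndarray, ref: np.ndarray, dx_km: float) -> float:
    # Finest wavelength (km) at which PSD(error)/PSD(reference) < 0.5 (1D proxy of the spectral score).
    err = rec - ref
    freqs = np.fft.rfftfreq(ref.size, d=dx_km)[1:]
    ratio = np.abs(np.fft.rfft(err))[1:] ** 2 / np.abs(np.fft.rfft(ref))[1:] ** 2
    resolved = ratio < 0.5
    return float(1.0 / freqs[resolved][-1]) if resolved.any() else np.inf


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dx = 2.0                                        # km between along-track samples
    ref = np.cumsum(rng.standard_normal(512))       # red-noise stand-in for a true SSH profile
    spec = np.fft.rfft(ref)
    spec[np.fft.rfftfreq(ref.size, d=dx) > 1 / 100] = 0.0
    rec = np.fft.irfft(spec, n=ref.size)            # "reconstruction" missing scales below ~100 km
    print(nrmse_score(rec, ref), resolved_scale_km(rec, ref, dx))
```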
## 5 Conclusions
The ocean community faces technological and algorithmic challenges to make the most of available observation and simulation datasets. In this context, recent studies evidence the critical role of ML schemes in reaching breakthroughs in our ability to monitor ocean dynamics for various space-time scales and processes. Nevertheless, domain-specific preprocessing steps and evaluation procedures slow down the uptake of ML toward real-world applications. Our application of choice was SSH mapping, which facilitates the production of many crucial derived products that are used in many downstream tasks like subsequent modeling [92], ocean health monitoring [100; 73; 47] and maritime risk assessment [105].
Through the OceanBench framework, we embed domain-level requirements into MLOps considerations by building a flexible framework that folds these requirements into the hyperparameter considerations for ML models. We proposed four challenges towards a ML-ready benchmarking suite for ocean observation challenges. We outlined the inner workings of OceanBench and demonstrated its usefulness by recreating some preprocessing and analysis pipelines from a few data challenges involving SSH interpolation. We firmly believe that the OceanBench platform is a crucial step to lowering the barrier of entry for new ML researchers interested in applying and developing their methods to relevant problems in the ocean sciences.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Experiment & Algorithm & Algorithm Class & nRMSE Score & \(\lambda_{\text{x}}\) [km] & \(\lambda_{t}\) [days] \\ \hline \hline OSSE NADIR & OI [94] & Coordinate-Based & 0.92 \(\pm\) 0.01 & 175 & 10.8 \\ OSSE NADIR & MIOST [101] & Coordinate-Based & 0.93 \(\pm\) 0.01 & 157 & 10.1 \\ OSSE NADIR & BFNQG [55] & Hybrid Model & 0.93 \(\pm\) 0.01 & 139 & 10.6 \\ OSSE NADIR & 4DVarNet [14] & Bi-Level Opt. & 0.95 \(\pm\) 0.01 & 117 & 7.7 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for the **OSSE NADIR** experiment outlined in section 4.4 and appendix A. We report the performance in real and in spectral space: the normalized RMSE score in real space, and the minimum spatial and temporal scales resolved in the spectral domain. For more information about the classes of models and metrics displayed, see appendix G and appendix C, respectively. We only showcase the model performance on the available alongtrack NADIR data. For the extended table covering each of the challenges, see Table 3.
## Acknowledgments and Disclosure of Funding
This work was supported by the French National Research Agency (ANR) through projects number ANR-17-CE01-0009-01, ANR-19-CE46-0011, and ANR-19-CHIA-0016; by the French National Space Agency (CNES) through the SWOT Science Team program (projects MIDAS and DIEGO) and the OSTST program (project DUACS-HR); and by the French National Centre for Scientific Research (CNRS) through the LEFE-MANU program (project IA-OAC). This project also received funding from the European Union's Horizon Europe research and innovation programme under grant No 101093293 (EDITO-Model Lab project). This project benefited from HPC and GPU computing resources from GENCI-IDRIS (Grant 2021-101030).
## Checklist
1. For all authors... 1. Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] All the contributions listed in the abstract are elaborated in sections 3.2, 4.4 and 5 2. Did you describe the limitations of your work? [Yes] See the last paragraph of section 5 and the appendix as well. 3. Did you discuss any potential negative societal impacts of your work? [Yes] We do not believe that our work has any potential negative societal impacts directly, as we do not deal with any confidential or private data. However, we do outline in the appendix how there may be some adverse effects related to downstream uses which could have some negative societal impacts. 4. Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes] We do not include any confidential or private data. We only include numerical values which stem from general physical systems or machine learning models. We do not believe they hold any ethical issues. However, we do acknowledge that there would be environmental damage should users go forward and explore methods which require obscenely high computing hours. This discussion is outlined in the appendix.
2. If you are including theoretical results... 1. Did you state the full set of assumptions of all theoretical results? [N/A] We do not include any theoretical results. 2. Did you include complete proofs of all theoretical results? [N/A] We do not include any theoretical results.
3. If you ran experiments (e.g. for benchmarks)... 1. Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We include the parameters used to reproduce the dataset preprocessing and evaluation procedure in Appendix A, and instructions are given to download the data via [https://github.com/quentinf00/oceanbench-data-registry](https://github.com/quentinf00/oceanbench-data-registry) and to rerun the evaluation procedure in our code repository which is available at [https://github.com/jejohnson/oceanbench](https://github.com/jejohnson/oceanbench). 2. Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] We showcase all preprocessing steps necessary to reproduce the experimental configurations in Appendix A and the configuration files are available in our code repository at [https://github.com/jejohnson/oceanbench](https://github.com/jejohnson/oceanbench). 3. Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] This is not applicable for this instantiation because we do not include any randomness within the experiment procedure or the results. 4. Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] We do not do any model training and leave it up to the user and their local or cloud machine. However, we do provide the cloud provider for the data found in the data registry, which can be found at [https://github.com/quentinf00/oceanbench-data-registry](https://github.com/quentinf00/oceanbench-data-registry)
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
1. If your work uses existing assets, did you cite the creators? [Yes] We adopted the implementation of the preprocessing procedures and evaluation steps with some modifications. We give proper citation and credit to the authors as well as all other existing software packages included in this work.
2. Did you mention the license of the assets? [Yes] The appropriate license notices are included in the source code files.
3. Did you include any new assets either in the supplemental material or as a URL? [Yes] All the processing and evaluation scripts are included in the GitHub repository.
4. Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] We only include data that is already publicly available. We also discussed with the original generators of the datasets and keep the appropriate licenses.
5. Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] We do not include any personal information or offensive content in our datasets.
6. If you used crowdsourcing or conducted research with human subjects... 1. Did you include the full text of instructions given to participants and screenshots, if applicable? [NA] We do not use crowdsourcing and we do not conduct research with human subjects. 2. Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [NA] See the previous point. 3. Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [NA] See the previous point.
|
2309.14687 | On The Effects of The Variations In Network Characteristics In Cyber
Physical Systems | The popular robotic simulator, Gazebo, lacks the feature of simulating the
effects of control latency that would make it a fully-fledged cyber-physical
system (CPS) simulator. The CPS that we address to measure is a robotic arm
(UR5) controlled remotely with velocity commands. The main goal is to measure
Quality of Control (QoC) related KPIs during various network conditions in a
simulated environment. We propose a Gazebo plugin to make the above measurement
feasible by making Gazebo capable to delay internal control and status messages
and also to interface with external network simulators to derive even more
advanced network effects. Our preliminary evaluation shows that there is
certainly an effect on the behavior of the robotic arm with the introduced
network latency in line with our expectations, but a more detailed further
study is needed. | Géza Szabó, Sándor Rácz, József Pető, Rafael Roque Aschoff | 2023-09-26T05:50:27Z | http://arxiv.org/abs/2309.14687v1 | # On the effects of the variations in network characteristics in cyber physical systems
###### Abstract
The popular robotic simulator, Gazebo, lacks the feature of simulating the effects of control latency that would make it a fully-fledged cyber-physical system (CPS) simulator. The CPS that we aim to measure is a robotic arm (UR5) controlled remotely with velocity commands. The main goal is to measure Quality of Control (QoC) related KPIs during various network conditions in a simulated environment. We propose a Gazebo plugin (OurPlugin 2017) to make the above measurement feasible by making Gazebo capable of delaying internal control and status messages and also of interfacing with external network simulators to derive even more advanced network effects. Our preliminary evaluation shows that there is certainly an effect on the behavior of the robotic arm with the introduced network latency, in line with our expectations, but a more detailed further study is needed.
network characteristics, cyber-physics, Gazebo
## Introduction
A cyber-physical system (CPS) is a mechanism controlled or monitored by computer-based algorithms, tightly integrated with the internet and its users. Unlike more traditional embedded systems, a full-fledged CPS is typically designed as a network of interacting elements with physical input and output instead of as standalone devices. For tasks that require more resources than are locally available, one common mechanism is that nodes utilize the network connectivity to link the sensor or actuator part of the CPS with either a server or a cloud environment, enabling complex processing tasks that are impossible under local resource constraints. Among the wide diversity of tasks to which CPS are applied, we focus on robot control in this paper.
Currently, one of the main focuses of cloud-based robotics is to speed up the processing of input data collected from many sensors with big data computation. Another approach is to collect various knowledge bases in centralized locations, e.g., possible grasping poses of various 3D objects.
Another aspect of cloud robotics is the way in which the robot control related functionality is moved into the cloud. The simplest way is to run the original robot specific task in a cloud without significant changes in it. For example, in a Virtual Machine (VM), in a container, or in a virtualized Programmable Logic Controller (PLC). Another way is to update, modify or rewrite the code of robot related tasks to utilize existing services or APIs of the cloud. The third way is to extend the cloud platform itself with new features that make robot control more efficient. These new robot-aware cloud features can be explicitly used by robot related tasks (i.e. new robot-aware services or APIs offered by cloud) or can be transparent solutions (e.g., improving the service provided by the cloud to meet the requirement of the robot control).
Designing cyber-physical systems is challenging because of a) the vast network and information technology environment connected with physical elements involves multiple domains such as controls, communication, analog and digital physics, and logic and b) the interaction with the physical world varies widely based on time and situation. To ease the design of CPS, robot simulators have been used by robotics experts. A well-designed simulator makes it possible to rapidly test algorithms, design robots, perform regression testing, and train AI system using realistic scenarios.
There are various alternative sets of tools that make it possible to put together a CPS simulation environment, but doing so is difficult and impractical, as it needs a lot of interfacing between the various tools. The requirements for a widely applicable CPS simulator are the following:
* Should be modular in terms of interfacing with the CPS
* Should be modular in terms of interfacing with the network simulator and the realization environment
* Should be able to cooperate with widely applied environments
We chose Gazebo as our target robot simulation environment, which we intend to extend with new functionalities to make it capable of being applied as a CPS simulator. Gazebo [1] offers the ability to accurately and efficiently simulate populations of robots in complex indoor and outdoor environments. It has a robust physics engine, high-quality graphics, and convenient programmatic and graphical interfaces. Gazebo is free and widely used among robotics experts.
The main challenge with the design principle of Gazebo is that the control of the actuators is deployed and runs practically local to the actuators. In this case, there is no need to consider the effects of a non-ideal link between the actuator and the controller. Considering the CPS context, as controllers are moved away from actuators, it becomes natural and even necessary to analyze the effects of the network link between them.
Gazebo has a plugin system that we intend to use to provide an interface to our modular network simulation environment. The goal of this paper is to show the design principles of the network plugin and provide the research community with a tool for further research in CPS.
## The Measurement Setup that we go for
The CPS that we aim to measure is a robotic arm (UR5 [1]) controlled remotely with velocity commands. The main goal is to measure Quality of Control (QoC) related KPIs (e.g., the cumulated PID error during trajectory execution, or the cumulated difference in joint space between the executed and calculated trajectories) under various network conditions in this setup.
Figure 1 shows the use case with real hardware that we target to simulate in Gazebo. The left side of the figure (Hardware) shows the same data elements described in [1], whereas the right side of the picture (Realization) uses the same colors for the boxes to describe a specific realization. In this specific case, the UR5 can be accessed via TCP/IP port 50001 to send command messages and port 50003 to read the robot status messages. The trajectories are computed by MoveIt [1]. MoveIt sends trajectories to the controller manager, which starts a velocity controller (yellow), a specific type of ros_control. The ur_modern_driver [1] implements the hardware resource interface layer by simply copying the velocity control packets to the proper TCP sockets. A middle node can be deployed between the robot driver and the robot (green) that can alter the network characteristics.
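As an illustration, such a middle node can be approximated by a TCP relay that forwards each stream with an added fixed delay. The following is only a minimal sketch under stated assumptions: the port numbers reuse the ones quoted above, the robot address and the 5 ms delay are illustrative, and a realistic node would also model jitter, loss, and reconnects.

```python
import socket
import threading
import time

LISTEN_PORT = 50001                 # where the robot driver connects (as quoted above)
ROBOT_ADDR = ("10.0.0.2", 50001)    # address of the real robot (illustrative)
DELAY_S = 0.005                     # artificial one-way latency added to every chunk

def pipe(src, dst, delay):
    """Forward bytes from src to dst, delaying each chunk by `delay` seconds."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        time.sleep(delay)           # naive per-chunk delay; it also serializes traffic
        dst.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", LISTEN_PORT))
server.listen(1)
driver_side, _ = server.accept()                    # connection from the robot driver
robot_side = socket.create_connection(ROBOT_ADDR)   # connection to the robot
threading.Thread(target=pipe, args=(driver_side, robot_side, DELAY_S), daemon=True).start()
pipe(robot_side, driver_side, DELAY_S)              # status messages flow back, also delayed
```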
A trivial approach to set up the above architecture in a simulation environment is provided by Universal Robots. The Universal Robots simulator software [1] is a Java software package that makes it possible to create and run programs on a simulated robot, with some limitations. The main limitation of this solution is that it is capable of simulating only one robot. There is no way to integrate the robot into complex environments as one can configure with Gazebo, e.g., interacting with other mechanical elements in the workspace, checking collisions with the environment, etc.
## Motivation and related work

### Competitions
A frontier method to push research groups to their limits is to organize competitions. DARPA, a research group in the U.S. Department of Defense, announced the DARPA Robotics Challenge with a US $2 million prize for the team that could produce a first responder robot performing a set of tasks required in an emergency situation. During the DARPA Trials of December 2013, a restrictive device was inserted between the control computers of each competing team and the computer that formed the 'brain' of the robot. The intent of the network degradation was to roughly simulate the kind of less than perfect communications that might exist during those kinds of emergency or disaster situations in which these robots would be deployed. The restrictive device, a Mini Maxwell network emulator from InterWorking Labs, alternated between a 'good' mode and a 'bad' mode of network communication every sixty seconds. 'Good' minutes permitted communications at a rate of 1 Mbps (in either direction) and a base delay of 50 ms (in each direction). 'Bad' minutes permitted communications at a rate of 100 Kbps (in either direction) and a base delay of 500 ms (in each direction). At the end of each minute, a transition occurred from bad-to
Figure 1: Target architecture to be realized with simulator
good or good-to-bad. A side effect of these transitions was packet-reordering.
The impact of network degradation on the teams was larger than expected. Informal feedback suggested that several teams did not realize that rate limitation induces network congestion, nor the ramifications of that congestion. Some teams appeared to have been surprised by the behavior of the network protocol stacks, particularly TCP stacks, in the operating systems underneath their code [1]. The above experiences would probably have been less striking to the teams if they had been able to test the changes in network characteristics in a simulation environment.
A recent competition, the Agile Robotics for Industrial Automation Competition (ARIAC) (ARIAC 2017), targets industry-related applications. ARIAC is a simulation-based competition designed to promote agility in industrial robot systems by utilizing the latest advances in artificial intelligence and robot planning. There is no tricky network environment in the ARIAC competition; the industry relies on robust low-delay protocols. That is why it is an interesting aspect to see what happens when those links and protocols are exchanged. For instance, what are the possible performance improvements or degradations when the control or sensor data processing in an industrial scenario is moved further away from the actuators, and how would different protocols fare under various network characteristics?
## Why Gazebo?
In both of the above competitions, Gazebo provided the simulation infrastructure. A more structured study of how widespread the various simulator tools are was done in [2]. It showed that Gazebo emerges as the best choice among the open-source projects.
The authors of [1] describe some early experiments in linking the OMNeT++ simulation framework with the ROS middleware for interacting with robot simulators, in order to obtain within the OMNeT++ simulation a robot's position which is accurately simulated by an external simulator based on ROS. The motivation is to use well-tested and realistic robot simulators for handling all the robot navigation tasks (obstacle avoidance, navigation towards goals, velocity, etc.) and to only get the robot's position in OMNeT++ for interacting with the deployed sensors. Our goal is the other way around: to introduce the effects of the network simulator into the robot simulator.
The roadmap of Gazebo development shows that version 9.0, arriving on 2018-01-25, will have support for integrating network simulation (ns-3 or EMANE). Further information regarding whether this feature will be like [1] or like the one we propose in this paper is not yet available.
## Proposed method to simulate the effects of network characteristics
In ROS, topics [1] are named buses over which nodes exchange messages. ROS currently supports TCP/IP-based and UDP-based message transport. ROS nodes are standalone executables running with individual process IDs in the operating system. One practical way to introduce latency in a current ROS deployment is by defining network namespaces among nodes. For a certain namespace, custom delay, jitter, and drop characteristics can be defined with tc as in [1]. The main issue is that while MoveIt runs as an individual node in its own process, the whole joint controller-actuator control loop is realized within Gazebo as one other process. The only topic-based communication happens between MoveIt and the monolithic Gazebo process. So this kind of solution cannot be applied to our problem.
Figure 2: Gazebo architecture
We have to dig deeper into the architecture of Gazebo and realize the CPS system within it. To keep the architecture modular, we decided to implement the proposed method as a Gazebo plugin. While the setup of most of these plugins fits well into the current Gazebo architecture and can be done via configuration files, some patches still need to be applied to core functional elements of the Gazebo code.
Figure 2 shows the architecture of the proposed method. The coloring of the figure follows the scheme in (roscontrol 2017). Green represents newly added plugins, modules, and functionalities. The system works as follows. As a first step, a launch file that triggers the whole simulation to run sets up a parameter on the ROS parameter server. This parameter defines the specific latency plugin that will be loaded.
The launch file initiates the Gazebo simulation. Gazebo loads the gazebo_ros_control plugin (leftmost blue box), whose main purpose is to interface with the ROS controller manager. This module needed a small tweak. The original code passed the address of the messages from the controller manager to the simulation and performed the actions (read status, calculate commands) triggered by the update() function in a sequential manner; these input variables were not modified during the calculations. In our system, the messages are copied and stored to make it possible to perform further actions on them.
Gazebo loads configuration files from the common.gazebo.xacro file, in which it is specified that our custom RobotHWSimLatency plugin should be loaded instead of the DefaultRobotHWSim plugin. Our RobotHWSimLatency plugin is an extension of the DefaultRobotHWSim plugin with modified read and write functions and with the task of loading a custom latency plugin. The latency plugin to be loaded is the one that was set up via the parameter server. The current options include a) the default latency plugin, which practically returns the messages with no introduced latency, and b) the simple queue latency plugin. The latter has a configurable queue size for storing the messages. In each simulation tick (100 Hz), the messages are shifted one position forward in the queue, and when they reach the end of the queue, they are provided to Gazebo as the currently valid message. In the same way, an interface plugin to cooperate with external network simulators like ns-3 (ns3 2017) can also be implemented here. The detailed call sequence of the plugin system is described in (OurPlugin 2017).
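The behavior of the simple queue latency plugin can be modelled as a fixed-length FIFO that is shifted once per simulation tick, so a queue of depth n delays every message by n ticks (10 ms per tick at 100 Hz). The Python sketch below mirrors that mechanism only conceptually; the actual plugin is C++ code running inside Gazebo, and the message contents shown are illustrative.

```python
from collections import deque

class SimpleQueueLatency:
    """Delays each message by `depth` simulation ticks (10 ms per tick at 100 Hz)."""

    def __init__(self, depth, initial_msg=None):
        self.queue = deque([initial_msg] * depth, maxlen=depth)

    def tick(self, new_msg):
        """Called once per simulation update: push the newest message and
        return the one that has reached the end of the queue."""
        delayed = self.queue[0] if self.queue else new_msg
        self.queue.append(new_msg)       # the oldest entry drops out automatically
        return delayed

# A depth-2 queue at 100 Hz corresponds to roughly 20 ms of added latency per direction.
cmd_delay = SimpleQueueLatency(depth=2, initial_msg=[0.0] * 6)
for k, cmd in enumerate([[0.1] * 6, [0.2] * 6, [0.3] * 6, [0.4] * 6]):
    print(k, cmd_delay.tick(cmd))        # the applied command lags the issued one by 2 ticks
```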
## Evaluation
We evaluated our proposed method on various Key Performance Indicators (KPIs). The most straightforward evaluation is the visual inspection of the robotic arm movement. For this purpose, we loaded the robot model into rviz and used a ROS package to visualize markers along the path the robotic arm passed through. Figure 3 is a screenshot from rviz which shows the visualized trajectories. The bottom left corner of the picture is the starting point of the robotic arm. It passes through the waypoints one-by-one from number 1 to 5. The black lines are the trajectories, while the lines with various colors show the effect of introducing latency into the system. The cyan color shows the reference scenario with 0 latency. In all other cases, we introduced latency into the system in both the command writing and status reading directions and reran the trajectory planning and execution scenario. The upper right corner of the picture shows a magnified part around the trajectories. The trajectories were planned with the RRTConnectkConfigDefault planner.
The visualized trajectories show the expected behavior of the system. Increasing the latency increases the deviation from the original trajectories. It should be noted that the planned trajectories are straight in Cartesian space. To move along these trajectories the robotic arm needs complex movements in joint space, thus even the movement along a straight line causes deviation from the reference trajectory. Conversely, if the planned trajectories were straight in joint space, we would see the robotic arm move in circles, but the effect of the latency would be more negligible.
Figure 4 shows the velocity commands sent to the robot as a function of time. Analyzing the velocity commands in such detail reveals that comparing the different scenarios is not straightforward for several reasons. One is that the planning is non-deterministic, and a slight difference during the initialization of the Gazebo environment ends up with a somewhat different planned trajectory. The execution of the trajectories depends on the environment status as well, and it is never the same. Joint 4 shows the expected effect on the velocity command level as well: the induced latency causes increased velocity command deviation compared to the reference scenario. It is also clearly observable that around 10 ms of latency, the system starts to become unstable.
Figure 3: The visualized trajectories
This is likely due to the various update frequency parameters that Gazebo employs to run the simulation. It definitely needs further work to make it clear how the introduced latency affects other characteristics or behaviors, such as the robot commanding frequency, the physical simulation steps, and the internal message timings.
Figure 5 shows the cumulated difference of the velocity commands compared to the reference scenario. The 2 ms latency scenario is the closest to the reference, as expected. In the first 3 s of the trajectory execution the 5 ms scenario is closer to the reference than the 7 ms scenario, but around 6 s, the 5 ms scenario has collected so much error that it shows a bigger deviation than the 7 ms scenario. The 10 ms scenario has errors of another magnitude and is therefore cut off in the diagram after the first second.
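For reference, the cumulated command deviation used here can be computed, in essence, by resampling the reference and test command logs onto a common time base and integrating the absolute differences. The sketch below assumes each log is given as an array of timestamps plus a per-joint velocity command matrix; the names and the resampling resolution are illustrative.

```python
import numpy as np

def cumulative_command_deviation(t_ref, v_ref, t_test, v_test, samples=500):
    """Cumulated absolute difference between two velocity-command logs.

    t_ref, t_test : 1D arrays of timestamps [s]
    v_ref, v_test : 2D arrays with one row per timestamp and one column per joint
    Returns the common time base and the running deviation summed over all joints.
    """
    t = np.linspace(max(t_ref[0], t_test[0]), min(t_ref[-1], t_test[-1]), samples)
    dev = np.zeros_like(t)
    for j in range(v_ref.shape[1]):                   # interpolate joint by joint
        ref_j = np.interp(t, t_ref, v_ref[:, j])
        test_j = np.interp(t, t_test, v_test[:, j])
        dev += np.abs(test_j - ref_j)
    return t, np.cumsum(dev) * (t[1] - t[0])          # approximate time integral
```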
## Conclusion and Further Work
In this paper, we proposed a plugin [10] to extend the capabilities of the current Gazebo robotic simulator and turn it into a CPS simulator. The realization of the proposed method is a Gazebo plugin that fits into the modular design of Gazebo. Once the interface is available, it becomes easy to test various network effects on the robot control. Based on our preliminary evaluations, the introduced latency does affect the QoC KPIs of the robot control.
The evaluation showed behavior which is expected and reasonable, but also cases which show that the whole system needs fine-tuning. We plan to evaluate the working mechanism of the system with the help of the ROS, Gazebo, and research communities, and to do more extensive measurements with the tool. We plan to interface it with various radio network simulators and see the effects of the radio on the QoC KPIs. In a similar way, we plan to investigate how the system behaves when taking into account not only the network link characteristics but also the protocols used for message exchange. We also plan to compare the level of similarity of the simulation to real robot hardware controlled in a real radio network. We are taking part in the ARIAC competition, and we plan to evaluate whether the tool can provide any advantage for us in any of the use cases of the competition.
|
2309.05952 | ChatMPC: Natural Language based MPC Personalization | We address the personalization of control systems, which is an attempt to
adjust inherent safety and other essential control performance based on each
user's personal preferences. A typical approach to personalization requires a
substantial amount of user feedback and data collection, which may result in a
burden on users. Moreover, it might be challenging to collect data in
real-time. To overcome this drawback, we propose a natural language-based
personalization, which places a comparatively lighter burden on users and
enables the personalization system to collect data in real-time. In particular,
we consider model predictive control (MPC) and introduce an approach that
updates the control specification using chat within the MPC framework, namely
ChatMPC. In the numerical experiment, we simulated an autonomous robot equipped
with ChatMPC. The result shows that the specification in robot control is
updated by providing natural language-based chats, which generate different
behaviors. | Yuya Miyaoka, Masaki Inoue, Tomotaka Nii | 2023-09-12T04:18:10Z | http://arxiv.org/abs/2309.05952v2 | # ChatMPC: Natural Language based MPC Personalization
###### Abstract
We address the personalization of control systems, which is an attempt to adjust inherent safety and other essential control performance based on each user's personal preferences. A typical approach to personalization requires a substantial amount of user feedback and data collection, which may result in a burden on users. Moreover, it might be challenging to collect data in real-time. To overcome this drawback, we propose a natural language-based personalization, which places a comparatively lighter burden on users and enables the personalization system to collect data in real-time. In particular, we consider model predictive control (MPC) and introduce an approach that updates the control specification using chat within the MPC framework, namely ChatMPC. In the numerical experiment, we simulated an autonomous robot equipped with ChatMPC. The result shows that the specification in robot control is updated by providing natural language-based chats, which generate different behaviors.
## I Introduction
Defining the control specification is the primary concern in the design of control systems. Specifications such as the control objective and safety constraints are determined by a few people with domain knowledge and skills, aiming at the control system's robustness. Usually, once the specification is decided, it is fixed. All users then use the control system with the same specification, although environments and user preferences vary.
To enhance the performance in each environment, it is necessary to incorporate the capability of adaptation into the control system. For example, [1, 2], and [3] propose methods to estimate the state transition equation or transfer function of the plant from the input and output data of the plant, and to reflect this in the plant model of model predictive control (MPC). Also, [4] and [5] propose methods to update the control barrier function (CBF) based on collected data. In [4], the CBF is expressed in affine form and the optimal CBF is generated by learning its coefficients depending on the user's safety report. In [5], an agent observes the movements of other agents to calculate the degree of danger to itself, and then the CBF is updated according to this degree of danger.
Adapting specifications is not limited to environmental factors. Recently, there has also been work on updating specifications according to user preferences, which is called "personalization". In the context of personalization, the specifications of the control system are updated so that the system suits each user's preference. For example, [6, 7], and [8] propose methods to personalize MPC. In the methods proposed in [6, 7], and [8], two different specifications are applied to the MPC controller and both behaviors are presented to the user. The user chooses the preferred one. By repeating this, the optimal specification for the user is obtained. In addition, [9] proposes a method to optimize the specification of an optimal control system. The specification is updated based on ratings provided by the user. Moreover, [10] proposes an algorithm to generate a path that both satisfies the constraints and maximizes the user's satisfaction.
Since [6, 7], and [8] require repeatedly collecting user feedback to optimize the specification, they impose a burden on the user, who has to answer the surveys one by one. Another drawback can be seen in [9]: it expects the user feedback in a questionnaire format. The survey answers are collected only every few days, so the low update frequency might be a problem.
In this paper, we attempt to overcome the drawbacks existing in the previous studies on personalization. To reduce the burden on the user and collect data at high frequency, we use chat in natural language to communicate with the system. We introduce the novel framework "ChatMPC" for personalizing MPC by collecting natural language chat from the user.
Fig. 1 illustrates a typical use case of ChatMPC. In this figure, we consider an autonomous traveling robot, running toward the goal point while avoiding obstacles, a vase and a toy. This robot adjusts its behavior according to instructions in natural language by the user. For instance, if the user instructs the robot to keep the vase at a sufficient distance, the robot runs on a path away from the vase. Next, if the user instructs that the robot does not have to keep away from the toy, the robot generates a path that passes closer to the toy and prioritizes arrival at the goal point. In this way, the control system is updated by user instructions, and the behavior of the robot adapts to the user's preferences.
There have been several studies on cooperation between natural languages and robots. For example, [11, 12], and [13]
Fig. 1: One example of the use case of ChatMPC
propose methods to generate desired robot actions based on sentences in natural language. A language model that translates natural language sentences into Python code is used in [11], and reinforcement learning is used in [12] and [13]. Other works [14] and [15] propose methods to translate natural language sentences into constraints that enable a control system to achieve its control objective. Additionally, [16] and [17] utilize a large language model (LLM) to manipulate a robot. In [16], an LLM is used to translate sentences into reward functions that enable a robot to achieve its control objective, and in [17], an LLM is used to generate desired foot contact patterns for quadrupedal locomotion. One can see that these methods generate a control objective or a trajectory according to natural language instructions.
ChatMPC, introduced in this paper, is instead intended to determine the specification of the controller. While control objectives and trajectories expire once a single control trial is finished, the specifications of the controller function as persistently effective characteristics even after a control trial is finished. By personalizing the controller's specifications, their effect is consistently present regardless of the operating environment.
## II ChatMPC
### _Overview_
Fig. 2 shows the overall structure of ChatMPC. In this figure, \(\mathcal{P}\) is a plant, \(\mathcal{C}_{\theta}\) is an MPC controller, \(\mathcal{H}\) is a user, and \(\mathcal{A}\) is called _interpreter_. Symbol \(\theta\) represents a parameter associated with the MPC controller and is externally adjustable. In Fig. 2, the gray region enclosed by the black line represents the ChatMPC.
The black line in Fig. 2 represents a typical loop of feedback control, which is called the _control loop_. In the control loop, the MPC controller determines the control input \(u\) based on the state of the plant \(x\). The red line is called the _personalization loop_, which is an original in this paper. In the personalization loop, the user \(\mathcal{H}\) receives the evaluation output \(z\) from the plant \(\mathcal{P}\), and the user provides natural language strings to the interpreter \(\mathcal{A}\) as opinions and advice regarding the evaluation output \(z\). The natural language string is called the prompt \(p\) in this paper. Then, the interpreter \(\mathcal{A}\) updates the parameter \(\theta\) of the MPC controller \(\mathcal{C}_{\theta}\). Through the personalization loop, the specification of the MPC controller changes, enabling the behavior of the MPC controller to match the user's preferences.
Note that the time scale differs between the two loops, the control loop (black line) and the personalization loop (red line). For instance, assuming an autonomous electric wheelchair, the control loop runs once every few milliseconds, while the personalization loop runs once every few seconds or minutes. In this paper, the time steps of the control loop and the personalization loop are denoted by \(k\) and \(\tau\), respectively.
In the following subsections, we provide detailed explanations of each component of ChatMPC. We describe the MPC controller in II-B, we present an example of the MPC controller and the interpreter in II-C and II-D, respectively.
### _MPC Controller_
First, we assume a control system that consists of the plant \(\mathcal{P}\) and the MPC controller \(\mathcal{C}_{\theta}\). The plant is described by the discrete-time state-space representation:
\[\mathcal{P}:\left\{\begin{aligned} x(k+1)=f(x(k),u(k)),\\ z(k)=g(x(k),u(k)),\end{aligned}\right. \tag{1}\]
where \(x\in\mathbb{R}^{n}\) and \(u\in\mathbb{R}^{m}\) are the plant state and the plant input, respectively, and \(z\in\mathbb{R}^{l}\) is the evaluation output. Symbol \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) and \(g:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{l}\) represent the plant state and the evaluation output mapping, respectively.
Suppose the MPC controller has a time tick of \(\Delta t\), a prediction horizon of \(N_{p}\), and a control horizon of \(1\). At each time step \(k\), the controller calculates the optimal input \(u(k|k)\), where \(u(k+i|k)\) represents the control input of the future time step \(k+i\) calculated at time step \(k\). For simplicity, we introduce the state sequence \(X(k)\in\mathbb{R}^{N_{p}n}\) and the input sequence \(U(k)\in\mathbb{R}^{N_{p}m}\), defined as follows:
\[U(k) =[u(k|k)^{\top}~{}\cdots~{}u(k+N_{p}-1|k)^{\top}]^{\top}\] \[X(k) =[x(k+1|k)^{\top}~{}\cdots~{}x(k+N_{p}|k)^{\top}]^{\top}.\]
In this case, the optimal input \(u(k|k)\) is obtained by solving the following optimization problem:
\[\mathcal{C}_{\theta}:\left\{\begin{aligned} \min_{U(k)}& J_{ \theta}(X(k),U(k))\\ &\text{s.t.}& x(k|k)=x(k),\\ & x(k+i+1|k)=f(x(k+i|k),u(k+i|k)),\\ &\qquad\qquad\forall i\in\{0,\ldots,N_{p}-1\}\\ & X(k)\in\mathcal{X}_{\theta},\\ & U(k)\in\mathcal{U}_{\theta},\end{aligned}\right. \tag{2}\]
where \(\theta\in\mathbb{R}^{q}\) is the adjustable parameter, \(J_{\theta}(X,U):\mathbb{R}^{N_{p}n}\times\mathbb{R}^{N_{p}m}\rightarrow\mathbb{R}\) is the cost function, \(\mathcal{X}_{\theta}\subseteq\mathbb{R}^{N_{p}n}\), \(\mathcal{U}_{\theta}\subseteq\mathbb{R}^{N_{p}m}\) are sets of the allowable state sequence and the input sequence, respectively.
It should be emphasized here that symbols \(J_{\theta}\), \(\mathcal{X}_{\theta}\), and \(\mathcal{U}_{\theta}\) are parameterized by \(\theta\), and \(\theta\) is updated through the personalization loop.
Fig. 2: Overall structure of ChatMPC
### _Safety Constraint in MPC_
In the previous subsection, we described the basic structure of an MPC controller. In this subsection, we present a specialization of the MPC controller, named "MPC-CBF". MPC-CBF was proposed in [18] and incorporates control barrier functions (CBF) into the optimization problem of the MPC controller (2).
CBF is a function \(h\) that satisfies the following inequality [19]:
\[\exists u\text{ s.t. }\dot{h}(x)=\frac{\partial h(x)}{\partial x}f(x,u)\geq- \gamma(h(x)), \tag{3}\]
where \(f\) is a function that characterizes the plant dynamics, i.e., \(f\) satisfies \(\dot{x}=f(x,u)\), and \(\gamma(h)\) is a continuously differentiable strictly increasing function with \(\gamma(0)=0\) and \(\gamma(h)\rightarrow\infty,h\rightarrow\infty\).
Next, we convert (3) to a discrete-time, simplified form that aligns with the notation of MPC. For simplicity, we introduce \(h_{i}=h(x(k+i|k))\) and \(\Delta h_{i}=h_{i+1}-h_{i}\). Then, we have (3) as:
\[\Delta h_{i}(x)\geq-\gamma h_{i}(x), \tag{4}\]
where \(\gamma\) is a positive constant. Now, the CBF and the state \(x\) satisfy the following property: if the initial state at time \(0\) satisfies \(h(x(0))\geq 0\), and the inequality (4) holds for all times \(k=0,1,\ldots\), then \(x\) continues to satisfy \(h(x(k))\geq 0\) for all times \(k\in\{0,1,\ldots\}\).
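This property can be seen from a short induction, assuming additionally that \(0<\gamma\leq 1\) (which holds for the values used later): rearranging (4) gives

\[h_{i+1}\geq(1-\gamma)\,h_{i},\]

and applying this repeatedly from a non-negative initial value yields

\[h_{i}\geq(1-\gamma)^{i}\,h_{0}\geq 0\quad\text{for all }i\in\{0,1,\ldots\}.\]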
MPC-CBF utilizes the property and incorporates the CBF inequality (4) as a constraint on the state sequence \(X(k)\). Specifically, we describe the set of allowable state sequence \(\mathcal{X}_{\theta}\) as follows:
\[\mathcal{X}_{\theta}=\left\{X(k)\ |\ \Delta h_{i}\geq-\gamma h_{i},\ \forall i\in\{1,\ldots,N_{p}\}\right\}. \tag{5}\]
### _Internal Structure of Interpreter_
The interpreter \(\mathcal{A}\) is an important component of the personalization loop in ChatMPC. The interpreter has the role of adjusting the parameter \(\theta\) according to the content of the provided prompt \(p\).
Fig. 3 shows the structure of the interpreter. Denoting the iteration count in the personalization loop as \(\tau\in\{1,2,\ldots\}\), we assume that the interpreter \(\mathcal{A}\) receives the \(\tau\)-th prompt \(p_{\tau}\) from the user \(\mathcal{H}\). The interpreter \(\mathcal{A}\) consists of an intent extractor \(f_{\mathrm{int}}\) and a parameter updater \(f_{\mathrm{up}}\).
First, the intent extractor \(f_{\mathrm{int}}\) analyzes the content of the prompt \(p_{\tau}\) and outputs \(s_{\tau}\in\mathbb{R}^{q}\) based on the intent of the prompt. Symbol \(s_{\tau}\) is called the update marker, which provides the information about which element of parameter \(\theta_{\tau-1}\) should be updated. The update marker can be expressed as follows:
\[s_{\tau}=f_{\mathrm{int}}(p_{\tau})\in\{-1,0,+1\}^{q}\subset\mathbb{R}^{q} \tag{6}\]
and a non-zero element indicates that the corresponding element of the parameter should be updated.
Next, the parameter updater \(f_{\mathrm{up}}\) calculates the updated parameter \(\theta_{\tau}\) based on the update marker \(s_{\tau}\) and the previous parameter \(\theta_{\tau-1}\). It can be expressed as follows:
\[\theta_{\tau}=f_{\mathrm{up}}(s_{\tau},\theta_{\tau-1})=\mathrm{pow}(d,s_{ \tau})\odot\theta_{\tau-1}, \tag{7}\]
where \(\mathrm{pow}(a,b)\) represents the element-wise power, i.e., \(\mathrm{pow}(a,b)=[a_{1}^{b_{1}}\ \cdots\ a_{n}^{b_{n}}]^{\top}\), and \(d\in\mathbb{R}^{q}\) is a constant vector that specifies the increase/decrease of the parameter, which is called the update constant. In this way, the parameter is updated according to the intent of the prompt.
For the implementation of \(f_{\mathrm{int}}\), it can be effective to incorporate a natural language model such as the Sentence BERT model [20]. Details of the implementation are given in Subsection III-D. In addition, the formulation of \(f_{\mathrm{up}}\) can differ from (7), e.g., \(f_{\mathrm{up}}(s_{\tau},\theta_{\tau-1})=\theta_{\tau-1}+d\odot s_{\tau}\).
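To make the interpreter concrete, the sketch below embeds the prompt, picks the most similar example prompt by cosine similarity, and applies the associated update marker through the element-wise power rule (7). It is only a minimal sketch under stated assumptions: it uses the sentence-transformers package with an arbitrary embedding model (the experiment in Section III uses deepset/sentence_bert instead), the example prompts and markers are a reduced stand-in for Table I, and we assume that decreasing \(\gamma\) makes the corresponding CBF constraint more conservative.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model

# Example prompts with the update marker s of their class, for theta = [gamma_vase, gamma_toy].
# We assume a smaller gamma enforces a larger clearance, so "separate" maps to -1.
examples = [
    ("Please separate from the vase.",           np.array([-1, 0])),
    ("You do not need to care about the vase.",  np.array([+1, 0])),
    ("Please separate from the toy.",            np.array([0, -1])),
    ("You do not need to care about the toy.",   np.array([0, +1])),
]
example_emb = model.encode([p for p, _ in examples], normalize_embeddings=True)

def interpreter(prompt, theta, d=np.array([2.0, 2.0])):
    """Map a natural language prompt to an updated parameter vector (eqs. (6)-(7))."""
    e = model.encode([prompt], normalize_embeddings=True)[0]
    s = examples[int(np.argmax(example_emb @ e))][1]   # nearest example gives the marker
    return d**s * theta                                # element-wise power update rule (7)

theta = np.array([0.4, 0.4])
theta = interpreter("It is too close to the vase.", theta)   # halves gamma_vase
```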
## III Numerical Experiment
In this numerical experiment, we build a simulation of an autonomous cleaning robot equipped with ChatMPC. In the simulation environment, there are multiple obstacles, and the robot is required to navigate to the goal point while avoiding these obstacles. There are two types of obstacles: vase and toy. In this simulation, we are able to personalize the avoidance behavior for each type of obstacle. 1
Footnote 1: The code for the numerical experiment is available on [https://github.com/Mya-Mya/ChatMPC](https://github.com/Mya-Mya/ChatMPC).
### _Obstacle_
We assume that we have \(N\) obstacles in the simulation environment and let \(j\in\{1,\ldots,N\}\) denote the obstacle index. We use \(m(j)\in\{\texttt{vase},\texttt{toy}\}\), \((x_{1}^{(j)},x_{2}^{(j)})\), and \(R^{(j)}\) to represent the type, position, and safety margin of obstacle \(j\), respectively.
### _Plant Model_
Let \((x_{1},x_{2})\) denote the position of the robot, and let \(v_{1},v_{2}\) denote the velocities in the \(x_{1}\) and \(x_{2}\) directions, respectively. We define the state of the robot as \(x=[x_{1}\ x_{2}\ v_{1}\ v_{2}]^{\top}\). Then, the state equation of the robot is given as follows:
\[x(k+1)=Ax(k)+Bu(k), \tag{8}\]
where \(u\) represents the input applied to the robot and the robot operates with a time interval \(\Delta t=0.2\) s. The matrices \(A\) and \(B\) are defined as:
\[A=\begin{bmatrix}1&0&\Delta t&0\\ 0&1&0&\Delta t\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix},B=\begin{bmatrix}0&0\\ 0&0\\ \Delta t&0\\ 0&\Delta t\end{bmatrix}. \tag{9}\]
Fig. 3: Structure of interpreter
### _Control Problem and Parameter_
The control objective in this numerical experiment is to navigate the robot, starting from the initial state \(x_{0}\), to the goal position \((0,0)\) while avoiding obstacles. In order to personalize the avoidance behavior for each type of obstacle, we select the constants used within CBF as the adjustable parameter \(\theta\).
We use MPC-CBF presented in Subsection II-C as the MPC controller \(\mathcal{C}_{\theta}\), with the prediction horizon of \(N_{p}=8\), and the time tick of \(\Delta t=0.2\) s.
The cost function \(J_{\theta}(X(k),U(k))\) is defined as follows:
\[J(X(k),U(k))=\sum_{i=1}^{N_{p}}l(x(k+i|k),u(k+i|k))+\phi(x(k+N_{p}|k)), \tag{10}\]
where
\[l(x,u)=x^{\top}Qx+u^{\top}Ru, \tag{11}\] \[\phi(x)=x^{\top}Px \tag{12}\]
with \(Q=\mathrm{diag}\{1,1,1,1\}\), \(R=\mathrm{diag}\{1,1\}\), and \(P=\mathrm{diag}\{100,100,100,100\}\). The set of admissible input sequences \(U(k)\) is defined as follows:
\[\mathcal{U}=\{U(k)\mid u(k+i|k)\in[-1,1]^{2}, \tag{13}\] \[\forall i\in\{0,\ldots,N_{p}-1\}\},\]
and the constraint on the input sequence can be expressed as \(U(k)\in\mathcal{U}\). Note that the cost function \(J\) and input constraint \(\mathcal{U}\) are independent of the adjustable parameter \(\theta\).
Next, we use CBF to construct the constraint on the state sequence. First, we define a function \(h^{(j)}(x)\) for each obstacle \(j\in\{1,\ldots,N\}\) on the simulation environment as follows:
\[h^{(j)}(x)=(x_{1}-x_{1}^{(j)})^{2}+(x_{2}-x_{2}^{(j)})^{2}-(R^{(j)})^{2}. \tag{14}\]
This equation only allows the robot to navigate outside the safety margin \(R^{(j)}\) of obstacle \(j\). Next, using this function \(h^{(j)}(x)\), we define a barrier inequality for the states \(x(k+i|k)\) and \(x(k+i+1|k),\ i\in\{0,\ldots,N_{p}-1\}\).
\[\Delta h^{(j)}_{i}\geq-\gamma_{m(j)}h^{(j)}_{i}, \tag{15}\]
where \(h^{(j)}(x(k+i|k))=h^{(j)}_{i}\), \(\Delta h^{(j)}_{i}=h^{(j)}_{i+1}-h^{(j)}_{i}\). Note that the inequality (15) depends on the adjustable parameter \(\gamma_{m(j)}\), whose value depends on the type of obstacle \(j\): if the type of the obstacle is vase, \(\gamma_{m(j)}=\gamma_{\texttt{vase}}\), and if the type of the obstacle is toy, \(\gamma_{m(j)}=\gamma_{\texttt{toy}}\). Both \(\gamma_{\texttt{vase}}\) and \(\gamma_{\texttt{toy}}\) are values of the parameter and can be updated in the personalization loop. Finally, by collecting (15) for all obstacles \(j,\ j\in\{1,\ldots,N\}\) and all time steps \(i,\ i\in\{0,\ldots,N_{p}-1\}\), we obtain the set of allowable state sequences \(\mathcal{X}_{\theta}\) as follows:
\[\mathcal{X}_{\theta}=\{X(k)\mid \Delta h^{(j)}_{i}\geq-\gamma_{m(j)}h^{(j)}_{i}\] \[\forall j\in\{1,\ldots,N\},\forall i\in\{0,\ldots,N_{p}-1\}\}.\]
Therefore, the constraint on \(X(k)\) is expressed as \(X(k)\in\mathcal{X}_{\theta}\).
In this numerical experiment, the adjustable parameter \(\theta\) consists of two values:

\[\theta=[\gamma_{\texttt{vase}}\ \gamma_{\texttt{toy}}]^{\top}. \tag{17}\]
These values parameterize the constraint on the state sequence \(\mathcal{X}_{\theta}\).
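For illustration, the discrete CBF constraints that make up \(\mathcal{X}_{\theta}\) in this experiment can be evaluated for a candidate state sequence as in the following minimal sketch. The obstacle list reuses Environment A, but the function names, the candidate-sequence layout, and the final comment about plugging the values into a generic solver are illustrative assumptions rather than the implementation used in the paper.

```python
import numpy as np

obstacles = [              # (type, x1, x2, R), here the layout of Environment A
    ("vase", -1.0, -3.0, 0.5),
    ("toy",  -3.0, -1.0, 0.5),
]

def h(state, obs):
    """Squared distance to the obstacle centre minus the squared safety margin, eq. (14)."""
    _, ox1, ox2, R = obs
    return (state[0] - ox1) ** 2 + (state[1] - ox2) ** 2 - R**2

def cbf_constraints(X, gamma):
    """Values of Delta h_i + gamma_m(j) * h_i for every obstacle and horizon step.

    X     : array of shape (N_p + 1, 4) holding the current and predicted states
    gamma : dict mapping obstacle type to its CBF rate, e.g. {"vase": 0.4, "toy": 0.4}
    Every returned value must be non-negative for X to lie in the feasible set.
    """
    values = []
    for obs in obstacles:
        g = gamma[obs[0]]
        for i in range(X.shape[0] - 1):
            h_i, h_ip1 = h(X[i], obs), h(X[i + 1], obs)
            values.append((h_ip1 - h_i) + g * h_i)     # discrete CBF inequality (15)
    return np.array(values)

# In an MPC solver these values would enter as inequality constraints, e.g. via
# scipy.optimize.minimize(..., constraints={"type": "ineq", "fun": ...}), where the
# candidate X is obtained from U through the dynamics (8) by a (hypothetical) rollout helper.
```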
### _Interpreter_
The internal structure of the intent extractor \(f_{\mathrm{int}}\) is shown in Fig. 4. The intent extractor consists of two parts: Sentence BERT and an embedding classifier. Sentence BERT is a language model that takes a sentence and outputs an embedding \(e\in\mathbb{R}^{M}\) that represents the contextual information of the sentence [20]. In this numerical experiment, we used deepset/sentence_bert by deepset as the pre-trained weights of the Sentence BERT model, available at [21]. Then, the embedding \(e\) is classified into multiple classes by the embedding classifier. An update marker \(s\) is defined for each class and serves as the final output of \(f_{\mathrm{int}}\).
In this numerical experiment, the intent extractor \(f_{\mathrm{int}}\) classifies the prompt \(p\) into four classes. We prepare some example prompts and update markers as the training data for the embedding classifier. The example prompts are shown in Table I.
The constant \(d\) used in the parameter updater \(f_{\mathrm{up}}\) is set to \(d=[2\ 2]^{\top}\).
### _Procedure_
The initial values of the parameter are set to \(\theta_{0}=[0.4\ 0.4]^{\top}\), and two environments, called A and B, are prepared. In Environment A, the vase is placed at \((-1,-3)\), the toy is placed at \((-3,-1)\), and the initial state of the robot is \(x_{0}=[-5\ -5\ 0\ 0]^{\top}\). In Environment B, vases are placed at \((-1,-4)\) and \((-1,-2)\), the toy is placed at \((1.5,-3)\), and the initial state of the robot is \(x_{0}=[0\ -10\ 0\ 0]^{\top}\). The safety margin for all obstacles is set to \(R^{(j)}=R=0.5\).
In each environment, the robot runs from the start point to the goal point three times, Trial 1, Trial 2, and Trial 3 respectively. Trial 1 is performed without providing any prompt. In Trial 2 and Trial 3, we provide the prompts as shown in Table II.
### _Result_
The robot's trajectories in Environment A are shown in Fig. 5, and the robot's trajectories in Environment B are shown in Fig. 6. With each trial, the robot's trajectory gradually moves away from the vase and towards the toy. The specification of the MPC controller is updated based on the prompts provided, resulting in more preferable behavior. Additionally, the fact that the behavior changes are consistent across both environments implies that the personalization of the control system is applied regardless of the operating environment.
## IV Conclusion
We proposed ChatMPC, a personalization framework. ChatMPC updates the specification of the MPC controller based on natural language prompts provided by the user and aims to produce behavior that matches the user's preferences. By personalizing the specifications of the controller, their efficacy is consistently demonstrated regardless of the surrounding environment.
In the numerical experiment, we equipped a robot with ChatMPC and simulated it in multiple environments while providing prompts. The result shows that the robot effectively adapted its behavior according to the intentions of the prompts.
We emphasize that the interpreter, which is the core component in ChatMPC and is given in Subsection II-D, is applicable to any control system based on an optimization problem, not only MPC. In addition, we will address the analysis of the personalization loop in ChatMPC, including its convergence analysis under some assumptions on user models.
## Acknowledgement
The authors would like to thank Prof. J. M. Maestre for his valuable comments on this work.
This work was supported by Grant-in-Aid for Scientific Research (B), No. 20H02173 from JSPS.
\begin{table}
\begin{tabular}{c|l} \hline Trial number & Provided prompt \\ \hline Trial 1 & _no prompt_ \\ Trial 2 & \(p_{1}\) =“Separate from the vase.” \\ Trial 3 & \(p_{2}\) =“You don’t have to be so careful about the toy.” \\ \hline \end{tabular}
\end{table} TABLE II: Prompt provided in each trial
Fig. 4: Structure of the intent extractor: Sentence BERT consists of Tokenizer, BERT model, and MEAN Pooling. First, the Tokenizer tokenizes the provided prompt \(p\) into \(N_{\mathrm{tok}}\) tokens. Next, the BERT model processes the tokens and outputs the token embeddings \(t_{j}\in\mathbb{R}^{M},\ j\in\{1,\dots,N_{\mathrm{tok}}\}\). Finally, the Mean Pooling calculates the average of the token embedding and outputs the embedding \(e\in\mathbb{R}^{M}\). The embedding classifier classifies the provided embedding \(e\) into the update marker \(s\) using the train data.
\begin{table}
\begin{tabular}{c|l|l} \hline Example prompts & Update marker \(s\) \\ \hline “Can you separate from the vase?” & & \\ “Please separate from the vase.” & & \\ “It is too close to the vase.” & & \\ “Too close to the vase.” & & \\ “You are too closing to the vase” & & \\ \hline “Can you approach to the vase?” & & \\ “Please approach to the vase.” & & \\ “You do not need to care about the vase.” & & \\ “You do not need to be careful about the vase.” & & \\ “You do not have to care about the vase so much.” & & \\ \hline “Can you separate from the toy?” & & \\ “It is too close to the toy.” & & \\ “Too close to the toy.” & & \\ “You are too closing to the toy?” & & \\ \hline “Can you approach to the toy?” & & \\ “Please approach to the toy.” & & \\ “You do not need to care about the toy.” & & \\ “You do not need to be careful about the toy.” & & \\ “You do not have to care about the toy so much.” & & \\ \hline \end{tabular}
\end{table} TABLE I: Prompts and update markers for the train data |
2306.00077 | Axion Helioscopes as Solar Thermometers | Axions, if discovered, could serve as a powerful new messenger for studying
astrophysical objects. In this study we show how the Sun's spatial and spectral
"axion image" can be inverted to infer the radial dependence of solar
properties in a model-independent way. In particular, the future helioscope
IAXO may allow us to accurately reconstruct the Sun's temperature profile
$T(r)$ in the region up to about 80% (40%) of the solar radius for an
axion-photon coupling $g_{a\gamma\gamma}$ of $6 \times 10^{-11}$ GeV$^{-1}$
($10^{-11}$ GeV$^{-1}$). The statistical fluctuations in the photon data lead
to a median precision of better than 10% (16%) in this region, and the
corresponding median accuracy was better than 4% (7%). While our approach can
simultaneously infer the radial profile of the Debye scale
$\kappa_\text{s}(r)$, its weaker connection to the axion production rate leads
to median accuracy and precision of worse than 30% and 50%, respectively. We
discuss possible challenges and improvements for realistic setups, as well as
extensions to more general axion models. We also highlight advantages of
helioscopes over neutrino detectors. | Sebastian Hoof, Joerg Jaeckel, Lennert J. Thormaehlen | 2023-05-31T18:00:25Z | http://arxiv.org/abs/2306.00077v2 | # Axion Helioscopes as Solar Thermometers
###### Abstract
Axions, if discovered, could serve as a powerful new messenger for studying astrophysical objects. In this study we show how the Sun's spatial and spectral "axion image" can be inverted to infer the radial dependence of solar properties in a model-independent way. In particular, the future helioscope IAXO may allow us to accurately reconstruct the Sun's temperature profile \(T(r)\) in the region up to about 80% (40%) of the solar radius for an axion-photon coupling \(g_{a\gamma\gamma}\) of \(6\times 10^{-11}\,\mathrm{GeV}^{-1}\) (\(10^{-11}\,\mathrm{GeV}^{-1}\)). The statistical fluctuations in the photon data lead to a median precision of better than 10% (16%) in this region, and the corresponding median accuracy was better than 4% (7%). While our approach can simultaneously infer the radial profile of the Debye scale \(\kappa_{\mathrm{s}}(r)\), its weaker connection to the axion production rate leads to median accuracy and precision of worse than 30% and 50%, respectively. We discuss possible challenges and improvements for realistic setups, as well as extensions to more general axion models. We also highlight advantages of helioscopes over neutrino detectors.
###### Contents
* 1 Introduction
* 2 Reconstructing solar properties from helioscope data
* 2.1 The expected helioscope signal from the Primakoff process
* 2.2 Extraction of solar properties from data fitting
* 3 Case study for the IAXO helioscope
* 3.1 Experimental setup
* 3.2 Solar model
* 3.3 Fitting procedure
* 3.4 Monte Carlo simulations of IAXO pseudodata
* 3.5 Expected reconstruction abilities for solar properties
* 4 Discussion of limitations and related methods
* 4.1 Limitations and extensions
* 4.2 Connections to helioseismology and neutrinos
* 5 Summary and concluding remarks
* A Reconstructing the averaged Primakoff rates
* A.1 Piecewise-constant interpolation
* A.2 Standard cubic spline interpolation
* A.3 Shape-preserving cubic spline interpolation
## 1 Introduction
Numerous experimental searches [1] are currently underway or planned to look for QCD axions [2; 3; 4; 5] and axion-like particles (ALPs) [6; 7]. Axions are hypothetical particles, which could solve many of the open questions in physics. These include the Strong CP problem [2; 3], dark matter [8; 9; 10; 11; 12; 13], and anomalous observations in a variety of astrophysical environments [e.g. 14; 15; 16; 17].
An important class of axion experiments are helioscopes [18; 19; 20], which track the position of the Sun and use strong magnetic fields to convert solar axions into X-ray photons. Previous helioscope campaigns [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32] placed limits on ALP-photon interactions for sub-meV axions, with the strongest limit of \(g_{a\gamma\gamma}<0.66\times 10^{-10}\,\mathrm{GeV}^{-1}\) (at 95% confidence level) coming from the CAST experiment [32]. The upcoming helioscope IAXO [33; 34; 35] is expected to improve on the CAST sensitivity by more than one order of magnitude.
In addition to the potential discovery of axions, IAXO may even determine the axion's mass and couplings [36; 37]. Axions could also be used to study solar properties such as metal abundances [38], macroscopic magnetic fields inside the Sun [39], or to distinguish different QCD axion or solar models [40].
In this study, we continue to explore the capabilities of helioscopes as a tool for studying solar properties. Specifically, we demonstrate how helioscopes can leverage pixel-based detectors and the excellent angular resolution of X-ray optics to determine the solar temperature and Debye screening scale in different layers of the Sun's interior.
In section 2 and appendix A, we outline the methodology for inferring solar properties as a function of distance from the solar core using the axion's energy-averaged interaction rates. In section 3, we present a case study that showcases the feasibility of our approach using simulated IAXO data. Section 4 provides a comprehensive discussion of our assumptions, potential practical challenges, and draws connections to neutrino experiments and inversion methods previously employed in solar physics. Finally, in section 5, we summarise our key findings and close with a few concluding remarks.
## 2 Reconstructing solar properties from helioscope data
Depending on their interactions, QCD axions and ALPs can be created inside the Sun through various processes, which we extensively reviewed in ref. [40]. For concreteness we focus on massless axions coupled to two photons via the axion-photon coupling \(g_{a\gamma\gamma}\). This coupling allows axions to be produced through the Primakoff process [41], where plasmons are converted into axions in the presence of electromagnetic fields generated by electrons or ions.
Primakoff production is the dominant axion production process in the Sun for \(m_{a}\lesssim\text{meV}\) as long as the axion-electron coupling \(g_{aee}\) is sufficiently weak, \(g_{aee}\ll 0.01\,g_{a\gamma\gamma}/\text{GeV}^{-1}\)[40]. However, once \(g_{aee}\sim 0.01\,g_{a\gamma\gamma}/\text{GeV}^{-1}\), its additional contributions to the axion flux cannot be disregarded. While this requires an extension of the methodology presented here, we do not foresee any fundamental obstacles to this (see section 4 for further comments). It is important to note that an axion detection in a helioscope implies \(g_{a\gamma\gamma}\neq 0\).
### The expected helioscope signal from the Primakoff process
The Primakoff production rate of axions in the Sun, including the effects of charge screening via the Debye screening scale \(\kappa_{\text{s}}\)[42, 43], is given by
\[\Gamma^{\text{P}}(E_{a})=\frac{g_{a\gamma\gamma}^{2}\,\kappa_{\text{s}}^{2}\, T}{32\pi}\left[\left(1+\frac{\kappa_{\text{s}}^{2}}{4E_{a}^{2}}\right)\,\log \left(1+\frac{4E_{a}^{2}}{\kappa_{\text{s}}^{2}}\right)-1\right]\frac{2}{ \text{e}^{E_{a}/T}-1}\,, \tag{1}\]
where \(T\) is the temperature inside the Sun and \(E_{a}\) is the axion's energy. The quantities \(\kappa_{\text{s}}\) and \(T\) are not entirely independent since \(\kappa_{\text{s}}\) can be expressed as [42]
\[\kappa_{\text{s}}^{2}=\frac{4\pi\alpha_{\text{\tiny EM}}}{T}\left(n_{e}+\sum_ {z}Q_{z}^{2}\,n_{z}\right)\,. \tag{2}\]
In the expression above, \(\alpha_{\text{\tiny EM}}\approx 1/137\) is the fine-structure constant, while \(n_{z}\) and \(Q_{z}\) are the number density and electric charge (in units of the elementary charge) of each ion species (labelled by \(z\)) in the plasma.
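The rate in eq. (1) is straightforward to evaluate numerically. The following minimal Python sketch (illustrative only and not part of the SolarAxionFlux library; the function names are placeholders and all inputs are assumed to be given in consistent natural units, e.g. energies in keV) transcribes eqs. (1) and (2):

```python
import numpy as np

def debye_scale_sq(T, n_e, n_ions, Q_ions, alpha_em=1.0 / 137.036):
    """Squared Debye screening scale kappa_s^2, eq. (2).
    T, n_e and the ion number densities n_ions must be given in consistent
    natural units; Q_ions are the ion charges in units of the elementary charge."""
    return 4.0 * np.pi * alpha_em / T * (n_e + np.sum(np.asarray(Q_ions)**2 * np.asarray(n_ions)))

def primakoff_rate(E_a, T, kappa_s, g_agg):
    """Primakoff production rate, eq. (1), for relativistic axions (E_a >> omega_pl).
    E_a, T and kappa_s share the same energy unit; g_agg is given in the inverse unit."""
    x = 4.0 * E_a**2 / kappa_s**2
    bracket = (1.0 + 1.0 / x) * np.log1p(x) - 1.0
    bose = 2.0 / np.expm1(E_a / T)
    return g_agg**2 * kappa_s**2 * T / (32.0 * np.pi) * bracket * bose
```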
Equation (1) holds for relativistic axions, i.e. \(E_{a}\gg\omega_{\text{pl}}\). Here, \(\omega_{\text{pl}}\) is the plasma frequency, which is less than about \(0.3\,\text{keV}\) inside the Sun. Moreover, eq. (1) neglects electron degeneracy effects, which reduce the Primakoff axion flux by about \(3\%\) at most [40]. Incorporating them would complicate our analysis, and we disregard them for simplicity. One might expect that such small systematic shifts only mildly affect our fitting procedure and results. However, as we will see in section 3.5, once a sufficiently high number of axions are detected, estimating \(T\) and \(\kappa_{\text{s}}\) from the approximate eq. (1) while calculating the event rates with the full result can lead to problems with the fitting procedure and systematic shifts in the inferred solar properties. While this is only a small issue for the reconstruction of \(T\), the weaker dependence of the Primakoff rate on \(\kappa_{\text{s}}\) leads to much larger deviations. In a more
refined setup this could be addressed by allowing for a more realistic fit function that takes the electron degeneracy into account, cf. ref. [40].
Furthermore, we can ignore parallaxes due to the vast difference in scale between the solar radius and the distance between Earth and the Sun, \(\mathrm{R}_{\odot}\approx 0.005\,d_{\mathrm{E}}\), as discussed in e.g. ref. [40, sec. 2.6]. Consequently, we can project the axion flux (in units of \(\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{keV}^{-1}\)) from the solar disc onto the helioscope detector surface on Earth. The spatial and spectral differential axion flux at Earth is given by [e.g. 40, sec. 2.6]
\[\frac{\mathrm{d}^{2}\Phi^{\mathrm{P}}}{\mathrm{d}E_{a}\,\mathrm{d}\rho}=\frac{ \mathrm{R}_{\odot}^{3}E_{a}^{2}}{2\pi^{2}d_{\mathrm{E}}^{2}}\,\int_{\rho}^{1} \!\mathrm{d}r\ \frac{\rho\,r}{\sqrt{r^{2}-\rho^{2}}}\ \Gamma^{\mathrm{P}}(E_{a},\,r)\,, \tag{3}\]
where \(\rho\) is the distance from the centre of the solar disc in units of \(\mathrm{R}_{\odot}\). Figure 1 shows the resulting differential axion flux on the solar disc, which we compute numerically utilising the publicly available SolarAxionFlux library [40, 44].
The differential axion flux in eq. (3) is then converted into a differential photon flux inside the magnetic field of a helioscope. The converted photons will have the same energy as the incoming axions, and we can set \(\omega=E_{a}\). The conversion probability after traversing a magnetic field of effective strength \(B\) and length \(L\) is given by
\[P_{a\gamma}=\left(\frac{2\,g_{a\gamma\gamma}BE_{a}}{m_{a}^{2}}\right)^{2} \mathrm{sinc}^{2}\left(\frac{m_{a}^{2}L}{4E_{a}}\right)\rightarrow\frac{1}{4} \,g_{a\gamma\gamma}^{2}B^{2}L^{2}\quad(m_{a}\to 0)\,, \tag{4}\]
where \(\mathrm{sinc}(x)\equiv\sin(x)/x\). Conveniently, for light axions with masses \(m_{a}\lesssim 0.01\,\mathrm{eV}\), the conversion probability \(P_{a\gamma}\) is essentially independent of \(E_{a}\).
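For completeness, the conversion probability of eq. (4) can be sketched in the same spirit (again with placeholder names, and with all inputs assumed to be pre-converted to consistent natural units so that \(g_{a\gamma\gamma}BL\) is dimensionless):

```python
import numpy as np

def conversion_probability(E_a, m_a, B, L, g_agg):
    """Axion-photon conversion probability, eq. (4); the massless limit is returned for m_a = 0."""
    if m_a == 0.0:
        return 0.25 * (g_agg * B * L)**2
    arg = m_a**2 * L / (4.0 * E_a)
    # np.sinc(x) = sin(pi x)/(pi x), hence the division by pi below
    return (2.0 * g_agg * B * E_a / m_a**2)**2 * np.sinc(arg / np.pi)**2
```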
To obtain the number of detected photons, we need to multiply eq. (3) by eq. (4), the data-taking time \(\Delta t\), and the effective exposure \(A_{\mathrm{eff}}=\epsilon A\), which is the product of the physical detector cross section \(A\) and the total efficiency \(\epsilon\) of the X-ray optics and detector.
Figure 1: Distribution of the differential B16-AGSS09 solar axion flux \(\frac{1}{\rho}\,\frac{\mathrm{d}^{2}\Phi^{\mathrm{P}}}{\mathrm{d}E_{a}\,d\rho}\), normalised to its maximum value. The central panel shows the full distribution, while the adjacent panels show the relative marginal fluxes, i.e. where either \(\rho\) or \(E_{a}\) has been integrated out.
We also need to average the Primakoff rate over the relevant energy range. For energies in the interval \(\omega\in[\omega_{j},\,\omega_{j+1}]\), we define the corresponding averaged rate \(\bar{\Gamma}^{\rm P}_{j}\) as
\[\bar{\Gamma}^{\rm P}_{j}(r)\equiv\int_{\omega_{j}}^{\omega_{j+1}}\!\!\mathrm{d} \omega\ \frac{\omega^{2}}{2\pi^{2}}\,\Gamma^{\rm P}(\omega,\,r)\,. \tag{5}\]
With this definition, the expected number of photons in the \(i\)th radial and \(j\)th spectral bin, \(\rho\in[\rho_{i},\,\rho_{i+1}]\) and \(\omega\in[\omega_{j},\,\omega_{j+1}]\), can be computed as
\[\bar{n}_{i,j}=\frac{P_{a\gamma}\,A_{\rm eff}\,\Delta t\,{\rm R}_{\odot}^{3}}{d _{\rm E}^{2}}\,\int_{\rho_{i}}^{\rho_{i+1}}\!\!\mathrm{d}\rho\ \int_{\rho}^{1}\!\!\mathrm{d}r\ \frac{r\,\rho}{\sqrt{r^{2}-\rho^{2}}}\ \bar{\Gamma}^{\rm P}_{j}(r)\,. \tag{6}\]
Note that the \(\bar{n}_{i,j}\) depend on the two functions of interest, \(T(r)\) and \(\kappa_{\rm s}(r)\). However, due to the finite amount of data available, we can only hope to infer these functions at a finite number of points, \(r_{i}\). These values correspond to points \(\rho_{i}\) on the solar disc, which we will identify with one another throughout our analysis. In particular, this also implies that \(n_{r}=n_{\rho}\).
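As an illustration of eq. (6), the double integral for one radial and spectral bin can be evaluated with standard quadrature once the integrable singularity at \(r=\rho\) is removed by the substitution \(u=\sqrt{r^{2}-\rho^{2}}\). The sketch below assumes a user-supplied callable for the energy-averaged rate of eq. (5) and collects all dimensionful prefactors into a single constant:

```python
import numpy as np
from scipy.integrate import quad

def expected_counts(rho_lo, rho_hi, gamma_bar_j, prefactor=1.0):
    """Expected counts in one radial bin [rho_lo, rho_hi] for one energy bin, eq. (6).
    gamma_bar_j(r): energy-averaged Primakoff rate of eq. (5) for that energy bin;
    prefactor collects P_agamma * A_eff * Delta_t * R_sun^3 / d_E^2."""
    def line_of_sight(rho):
        # substitution u = sqrt(r^2 - rho^2) turns dr r / sqrt(r^2 - rho^2) into du
        u_max = np.sqrt(1.0 - rho**2)
        integral, _ = quad(lambda u: gamma_bar_j(np.sqrt(u**2 + rho**2)), 0.0, u_max)
        return rho * integral
    outer, _ = quad(line_of_sight, rho_lo, rho_hi)
    return prefactor * outer
```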
### Extraction of solar properties from data fitting
In a helioscope, the photons resulting from axion conversion within the magnetic field have the same energy and follow the same direction as the incoming axions. Using an energy-resolving, pixel-based detector with sufficiently well characterised X-ray optics, a helioscope thus provides an "axion image" of the Sun i.e. the integrated axion flux emitted from the solar disc on the sky (cf. ref. [25, fig. 3]). Moreover, an axion false-colour image can be created based on the detected photon energies.
We now show that the axion image contains information about the different layers of the Sun and not just its bulk properties. As can be seen from eq. (3), the axion flux at radius \(\rho\) on the detector is obtained from integrating over the solar radius \(r\) along the direction perpendicular to the solar disc. To reconstruct its original \(r\) dependence from the dependence on the disc parameter \(\rho\), we need to "invert" the integral relation eq. (3). This inversion enables the reconstruction of the underlying solar parameters, \(T(r)\) and \(\kappa_{\rm s}(r)\), at various points \(r_{i}\) within the Sun.
One possible approach involves interpolating the functions \(T(r)\) and \(\kappa_{s}(r)\) based on a chosen set of points \(r_{i}\). However, only \(\bar{\Gamma}^{\rm P}_{j}(r)\) appears in eq. (6). It is thus technically more straightforward and closer to the measured quantities to interpolate \(\bar{\Gamma}^{\rm P}_{j}(r)\) at the different \(r_{i}\). The values for \(\bar{\Gamma}^{\rm P}_{j}(r_{i})\) can be computed from \(T_{i}\equiv T(r_{i})\) and \(\kappa_{i}\equiv\kappa_{\rm s}(r_{i})\) using eq. (5).
As described in more detail in the appendix, in particular appendix A.3, we assume that \(\bar{\Gamma}^{\rm P}_{j}(r)\) is given by a spline interpolation. Using the Python routine PchipInterpolator from the scipy library [45], we can compute the corresponding spline coefficients for piecewise-cubic Hermite interpolating polynomials (PCHIPs) [46] from values for \(g_{a\gamma\gamma}\), \(\kappa_{i}\), and \(T_{i}\).
Note that PCHIPs preserve monotonicity of the function in each interval, which, however, does not imply that the \(\bar{\Gamma}^{\rm P}\)s need to be monotone for all radii. The use of PCHIPs will guarantee that the nonnegative photon count data will result in nonnegative \(\bar{\Gamma}^{\rm P}\)s. This condition is a fundamental, physical property of the \(\bar{\Gamma}^{\rm P}\)s and is thus not in conflict with the model-independence of our method. Still, it would be more rigorous to use a different algorithm that only guarantees positivity instead of monotonicity, but unfortunately software codes implementing such methods do not seem to be widely available in C++ or Python.
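A minimal sketch of this step, using PchipInterpolator as mentioned above and reusing the primakoff_rate placeholder from section 2.1, could look as follows (the node values and the coupling are free inputs; this is not the fitting code itself):

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import PchipInterpolator

def gamma_bar_at_nodes(T_nodes, kappa_nodes, g_agg, omega_lo, omega_hi):
    """Energy-averaged Primakoff rate, eq. (5), evaluated at the radial nodes
    from the local values T_i and kappa_i."""
    values = []
    for T_i, kappa_i in zip(T_nodes, kappa_nodes):
        integrand = lambda w: w**2 / (2.0 * np.pi**2) * primakoff_rate(w, T_i, kappa_i, g_agg)
        values.append(quad(integrand, omega_lo, omega_hi)[0])
    return np.array(values)

def gamma_bar_spline(r_nodes, gamma_values):
    """Monotonicity-preserving PCHIP interpolant of the averaged rate over r."""
    return PchipInterpolator(r_nodes, gamma_values)
```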
Estimates for the true \(g_{a\gamma\gamma}\), \(\kappa_{i}\), and \(T_{i}\) from the helioscope data can then be inferred by fitting. To this end, we use a Poisson-inspired fitting metric,
\[\Delta\chi^{2}\equiv-2\log L(g_{a\gamma\gamma},\,\{\kappa_{i},\,T_{i}\})=2\sum_{i,j}\left[\bar{n}_{i,j}-\hat{n}_{i,j}\,\log(\bar{n}_{i,j})-\hat{n}_{i,j}+\hat{n}_{i,j}\,\log(\hat{n}_{i,j})\right]\,, \tag{7}\]
where \(\hat{n}_{i,j}\) is an estimate for the number of photons \(n_{i,j}\) observed in the \(i\)th spatial annulus and the \(j\)th spectral bin.1 The expected number of counts \(\bar{n}_{i,j}\) is given by eq. (6).
Footnote 1: In general, the \(\hat{n}_{i,j}\) are non-integer because we reconstruct a circle from a square detector. As a consequence, the likelihood in eq. (7) could be based on, e.g., a mixed Gamma-Poisson process, whose parameters can be estimated from our error analysis shown in the right panel of figure 2. While this would provide a more accurate description, especially in the regime of smaller counts, we argue that eq. (7) is sufficient for a proof of principle for our method.
Since we need to determine \(2n_{r}+1=2n_{\rho}+1\) parameters in total, we expect that \(n_{\omega}\geq 3\) energy bins are needed, lest the fit be underdetermined. We verified that choosing \(n_{\omega}=1\) leads to the expected degeneracies between the \(\kappa_{i}\) and \(T_{i}\) coefficients, which start to disappear for \(n_{\omega}\geq 2\).
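In code, the metric of eq. (7) amounts to a Cash-like sum over all spatial and spectral bins; a minimal sketch (with the convention \(\hat{n}\log\hat{n}\to 0\) for empty bins, and assuming all expected counts are positive) reads:

```python
import numpy as np
from scipy.special import xlogy

def delta_chi2(n_exp, n_est):
    """Poisson-inspired fitting metric of eq. (7).
    n_exp: expected counts nbar_{i,j} (must be > 0); n_est: estimated counts nhat_{i,j}."""
    n_exp = np.asarray(n_exp, dtype=float)
    n_est = np.asarray(n_est, dtype=float)
    # xlogy(x, x) returns x*log(x) with the limit 0 for x -> 0
    return 2.0 * np.sum(n_exp - n_est * np.log(n_exp) - n_est + xlogy(n_est, n_est))
```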
## 3 Case study for the IAXO helioscope
To illustrate our method, we investigate the expected reconstruction abilities in the upcoming solar helioscope IAXO [33; 34; 35]. We provide the associated experimental parameters in table 1, noting that the radial bins in the range \(r\in[0,\,1]\,\mathrm{R}_{\odot}\) are equally spaced while the spectral bins \(\omega\in[0.3,\,15.0]\,\mathrm{keV}\) are chosen to equally distribute the observed number of counts between them. The two benchmark models that we consider are very light axions (\(m_{a}=0\)) with couplings of \(g_{10}=0.6\) (Case A) and \(g_{10}=0.1\) (Case B), where \(g_{10}\equiv g_{a\gamma\gamma}/10^{-10}\,\mathrm{GeV}^{-1}\). Case A corresponds to a detection below the CAST limit of \(g_{10}<0.66\)[32], while the number of counts in Case B would typically still lead to an axion detection in excess of \(5\sigma\), even when including the background rates assumed for the "IAXO baseline" setup [34, table 5]. The total numbers of expected photons are \(\bar{n}=\sum_{i,j}\bar{n}_{i,j}\approx 330\,000\) and \(\bar{n}\approx 250\) photons for cases A and B, respectively.
### Experimental setup
The helioscope X-ray optics project the axion-induced photon signal onto an \(n_{\mathrm{px}}\times n_{\mathrm{px}}\) detector grid, as shown schematically in the left panel of figure 2. The rings (dashed red lines) delimit the different annuli from which we infer the photon counts. For illustrative purposes, we only
\begin{table}
\begin{tabular}{l r} \hline IAXO parameter & Value \\ \hline Magnetic field \(B\) & 2.5 T \\ Length \(L\) & 20.0 m \\ Cross section \(A\) & 2.26 m\({}^{2}\) \\ Total effective efficiency \(\epsilon\) & 0.56 \\ Data-taking time \(\Delta t\) & 3.0 yr \\ \hline \end{tabular}
\begin{tabular}{l r} \hline Parameter & Value \\ \hline Grid bins per side \(n_{\mathrm{px}}\) & 128 \\ Energy bins \(n_{\omega}\) & 4 \\ Radial bins \(n_{\rho}=n_{r}\) & 20 \\ MC simulations \(N_{\mathrm{MC}}\) & 1000 \\ \hline \end{tabular}
\end{table}
Table 1: Parameter values used in our case study. _Left:_ Parameters for the “IAXO baseline” setup [34, table 5], except that we assume a runtime for IAXO of 6 yr, i.e. 3 yr of data. _Right:_ Detector grid and simulation-related parameters.
show a grid of \(16\times 16\) photon-counting detector pixels. However, in the subsequent analysis, we assume a grid of \(128\times 128\) pixels. The proportions depicted in figure 2 correspond to our case study.
It is important to choose a sufficiently large value for \(n_{\rm px}\) to accurately estimate the observed photon counts \(n_{i,j}\). Since the photon signal exhibits rotational symmetry, it makes sense to bin it radially in annuli. However, it is mathematically impossible to exactly cover such radial bins with quadratic detector pixels (see section 4 for comments on ring-shaped detectors). While the arising geometrical errors thus never vanish, they can be reduced by making the pixel size sufficiently small or, equivalently, \(n_{\rm px}\) sufficiently large.
Furthermore, estimating the \(n_{i,j}\) in each annulus requires an assumption about the spatial photon distribution in each pixel. In our analysis we assume a uniform distribution, which we expect to be a good approximation as long as \(n_{\rm px}\) is sufficiently large. Still, as the signal increases towards the centre (cf. figure 1), a more quantitative investigation is warranted. To address this, we examine the total error of the estimated \(\hat{n}_{i,j}\) relative to the true \(n_{i,j}\) as a function of \(n_{\rm px}\). We employ the fitting metric in eq. (7) with the roles of \(\hat{n}_{i,j}\) and \(\bar{n}_{i,j}=n_{i,j}\) reversed, \(g_{10}=0.6\), and \(n_{\omega}=1\), while keeping the relative proportions of the signal region shown in the left panel of figure 2. The right panel of figure 2 shows the total estimation error relative to its maximum. As \(n_{\rm px}\) increases, the error decreases while fluctuating, until it reaches the lowest possible values for \(n_{\rm px}\gtrsim 64\). For larger values, the error appears to asymptotically settle at the lowest possible value. This behaviour confirms our expectation that sufficiently large \(n_{\rm px}\) should allow for an excellent estimate \(\hat{n}_{i,j}\) of the photon counts contained in the annuli. Although the exact shape of the estimation error may depend slightly on the relative location of the signal region, the size of the signal region's radius scales simply with the values of \(n_{\rm px}\). Therefore, we can focus on the asymptotic region
Figure 2: _Left_: Axion-induced photon flux on the detector grid. The centre of signal region (blue cross) is at pixel coordinates (8.75, 9.25), where \({\rm R}_{\odot}\) corresponds to 6 px. The solid red circle represents the boundary of the signal region, which is divided into five annuli (areas between the dashed red circles). The grey shading scales with the square root of the photon counts \(n_{i,j}\), reflecting the magnitude of fluctuations. _Right:_ Total estimation error as a function of number of grid pixels per side, \(n_{\rm px}\).
of sufficiently large \(n_{\rm px}\), where a relatively low error can be reliably guaranteed.
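The estimation step itself can be sketched as follows: each pixel's counts are spread uniformly over a grid of sub-pixels, which are then assigned to the annuli by their distance from the centre of the signal region (the sub-division factor n_sub is an illustrative choice and all lengths are in pixel units; this is a schematic helper, not the analysis code):

```python
import numpy as np

def annulus_counts(grid, centre, r_edges, n_sub=8):
    """Estimate the photon counts in each annulus from a square count grid,
    assuming a uniform photon distribution within every pixel."""
    counts = np.zeros(len(r_edges) - 1)
    offsets = (np.arange(n_sub) + 0.5) / n_sub          # sub-pixel centres
    for ix, iy in zip(*np.nonzero(grid)):
        weight = grid[ix, iy] / n_sub**2
        dx = ix + offsets[:, None] - centre[0]
        dy = iy + offsets[None, :] - centre[1]
        radii = np.sqrt(dx**2 + dy**2).ravel()
        hist, _ = np.histogram(radii, bins=r_edges)
        counts += weight * hist
    return counts
```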
In addition to the considerations above, our setup relies on three additional assumptions. First, that the X-ray optics do not distort or spread the axion image. Second, that we have perfect control of the location and size of the axion image on the detector grid. Third, that the background levels in IAXO are sufficiently low to justify ignoring detector dark counts or other backgrounds. A detailed discussion of these assumptions is provided in section 4.
### Solar model
We choose the B16-AGSS09 solar model [47; 48] for our case study. Figure 1 shows the associated two-dimensional differential solar axion flux, cf. eq. (3), calculated using the publicly available SolarAxionFlux library [40; 44]. It is important to point out that the various standard solar models predict very similar \(T(r)\) and \(\kappa_{\rm s}(r)\). For example, comparing the B16-AGSS09 and B16-GS98 [48; 49] solar models, the respective \(T(r)\) and \(\kappa_{\rm s}(r)\) deviate by less than 3% across the whole Sun. The most significant difference between the solar models appears in their metallicities, which we do not explicitly consider in this work. We refer the reader to ref. [40], where axions are used to distinguish low- and high-metallicity solar models. In fact, the most recent generation of the solar models might resolve the metallicity problem altogether [50].
### Fitting procedure
Before describing the generation of IAXO pseudodata, let us comment on difficulties that can arise when optimising the fitting metric \(\Delta\chi^{2}\) in eq. (7).
One issue is that the flux is concentrated around the centre of the signal region on the detector grid, requiring a sufficiently large \(n_{\rm px}\) (cf. section 3.1). We also need to choose radial bins to be somewhat evenly distributed across the entire range for a faithful reconstruction (cf. appendix A.3). Since 99% of all axions are produced within \(r\lesssim 0.5\,{\rm R}_{\odot}\), almost no data is available from beyond this radius. The corresponding values of \(T(r)\) and \(\kappa_{\rm s}(r)\) can thus only be inferred indirectly from interpolating between the inner region and the edge of the Sun. In this sense, the choice of interpolating function can have a noticeable impact on the results. For example, choosing PCHIPs with \(n_{r}=5\) will introduce a 10% error on \(\bar{n}\), which only drops below 1% for \(n_{r}\gtrsim 20\).
Another issue is that, in the limits of \(\kappa_{\rm s}\to 0\) and \(\kappa_{\rm s}\to\infty\), the Primakoff production rate in eq. (1) tends to 0 and a constant value, respectively. In practice, this behaviour of \(\kappa_{\rm s}\) results in rather shallow minima of \(\Delta\chi^{2}\) around \(\kappa_{i}\), which presents a challenge for numerical optimisation. This is in contrast to the strong dependence on the temperature \(T\), which typically leads to steeper minima. We use a combination of the adaptive Nelder-Mead algorithm [51; 52] from scipy[45] and subsequently the MINUIT algorithm [53], as implemented in iminuit[54], to ensure convergence.2
Footnote 2: We checked that the optimisation algorithms can infer correctly the input parameters when the data corresponds to the expectation value of eq. (6), computed with the interpolated \(\bar{\Gamma}_{j}^{p}(r_{i})\). We also cross-checked our procedure with the Bayesian emcee sampler [55] and the scipy implementation of the heuristic global optimisation strategy of differential evolution [45; 56].
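Schematically, the two-stage minimisation can be written as follows, where `delta_chi2_of_params` stands for eq. (7) evaluated as a function of the full parameter vector \((g_{a\gamma\gamma},\,\kappa_{1},\ldots,\kappa_{n_{r}},\,T_{1},\ldots,T_{n_{r}})\); the sketch assumes iminuit \(\geq 2\), which accepts objective functions of a single array-valued argument, and is not the actual analysis code:

```python
import numpy as np
from scipy.optimize import minimize
from iminuit import Minuit

def fit_parameters(delta_chi2_of_params, x0):
    """Two-stage minimisation: adaptive Nelder-Mead followed by MIGRAD."""
    stage1 = minimize(delta_chi2_of_params, x0, method="Nelder-Mead",
                      options={"adaptive": True, "maxiter": 20000})
    delta_chi2_of_params.errordef = 1.0   # Delta chi^2 = 1 defines the 1 sigma contour
    stage2 = Minuit(delta_chi2_of_params, stage1.x)
    stage2.migrad()
    return np.array(stage2.values)
```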
One may also consider expanding our fitting approach by incorporating additional likelihoods or incorporating physical prior information. One option is to utilise more information from solar modelling (see ref. [57] for a review), allowing for a Bayesian analysis with priors informed by physical considerations. For instance, we could leverage the knowledge that the ratio \(\xi_{i}^{2}\equiv(\kappa_{i}/T_{i})^{2}\approx 48\) remains approximately constant within the relevant regions of the
Sun, with an accuracy of about 15% [58, p. 169]. Additionally, data from helioseismology or neutrino experiments could be used. For example, the Borexino experiment can determine \(T(r\approx 0)\) to sub-percent precision [59], thanks to the strong scaling of the \({}^{8}\)B decay flux with temperature [e.g. 60]. Surface temperature measurements at \(T(r\approx 1)\) could provide further insights.
We do not explore these possibilities in detail in this study, as our main focus is on presenting a model-independent reconstruction of \(T\) and \(\kappa_{\mathrm{s}}\). However, it is worth noting that the established practice of "global fits" in solar modeling indicates the potential for complementary information to enhance our understanding after detecting axions.
### Monte Carlo simulations of IAXO pseudodata
The Monte Carlo (MC) procedure is used to generate photon pseudodata sets for IAXO. This procedure involves the following steps, which are repeated \(N_{\mathrm{MC}}\) times:
1. Draw a random number of observed photons \(n\) from a Poisson distribution, \(n\sim\mathcal{P}(\bar{n})\), where the expected number of photon counts \(\bar{n}\) is given by eq. (6).
2. Determine the \(n_{\omega}\) energy bin boundaries such that each bin contains approximately the same number of photon counts.
3. Distribute the \(n\) photons on \(n_{\omega}\) spectral detector grids according to the spectral and radial differential flux (cf. figure 1) and rotational symmetry.
4. Estimate the photon counts \(\hat{n}_{i,j}\) for each of the \(n_{\rho}\) annuli on the \(n_{\omega}\) grids.
5. Fit \(g_{a\gamma\gamma}\) and the coefficients \(\kappa_{i}\) and \(T_{i}\), as described in section 3.3.
The 20 radial bins are distributed equally in the interval \(r\in[0,\,1]\). All other quantities are set to the values given in table 1.
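A condensed sketch of steps (i)-(iv) for a single energy bin is given below; it reuses the annulus_counts placeholder from section 3.1, samples disc radii by inverse-transform sampling of the tabulated marginal flux, and omits the energy binning of step (ii) and the fit of step (v):

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_pseudodata(nbar_total, rho_grid, rho_pdf, n_px, r_sun_px, centre, n_annuli=20):
    """Generate one pseudodata set of estimated annulus counts for a single energy bin.
    rho_pdf: tabulated marginal flux dPhi/drho on rho_grid (arbitrary normalisation);
    r_sun_px: radius of the solar disc in pixels; centre: disc centre in pixel coordinates."""
    n = rng.poisson(nbar_total)                            # step (i)
    cdf = np.cumsum(rho_pdf)
    cdf = cdf / cdf[-1]
    rho = np.interp(rng.random(n), cdf, rho_grid)          # step (iii): radii ...
    phi = rng.uniform(0.0, 2.0 * np.pi, n)                 # ... and azimuths (rotational symmetry)
    x = centre[0] + r_sun_px * rho * np.cos(phi)
    y = centre[1] + r_sun_px * rho * np.sin(phi)
    grid, _, _ = np.histogram2d(x, y, bins=n_px, range=[[0, n_px], [0, n_px]])
    r_edges = np.linspace(0.0, r_sun_px, n_annuli + 1)
    return annulus_counts(grid, centre, r_edges)           # step (iv)
```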
### Expected reconstruction abilities for solar properties
Figure 3 shows the results of our case studies. Comparing to the reference solar model (black lines), we immediately see that the temperature profiles can be determined fairly accurately up to about \(0.8\,\mathrm{R}_{\odot}\) in Case A and \(0.4\,\mathrm{R}_{\odot}\) in Case B. The median accuracy in these regions, as computed by the absolute deviations from the solar model values, is about 3% and 7% for the two cases, respectively. Of course, we will only obtain one IAXO data set over the course of our measurement period, and we thus need to also consider the precision with which we expect to typically determine the \(T_{i}\) in the relevant regions. We measure it by determining the median precision within the relevant region, where the individual precision of the \(T_{i}\) are computed as half the size of the central 68% interval of all best-fitting points compared to the respective median values. Doing so, we find a precision of better than 10% in Case A and 16% in Case B.
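In terms of the Monte Carlo ensemble, these two figures of merit can be computed per radial node as follows (illustrative helper; `T_fits` is the array of best-fitting values over all pseudodata sets and `T_true` the solar-model values at the same nodes):

```python
import numpy as np

def accuracy_and_precision(T_fits, T_true):
    """Median relative accuracy and precision of the reconstructed T_i.
    T_fits has shape (N_MC, n_r); T_true has shape (n_r,)."""
    median = np.median(T_fits, axis=0)
    accuracy = np.abs(median - T_true) / T_true          # deviation from the solar model
    lo, hi = np.percentile(T_fits, [16.0, 84.0], axis=0)
    precision = 0.5 * (hi - lo) / median                 # half the central 68% interval
    return accuracy, precision
```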
These findings are not unexpected, given that 99% of the photon counts can be found within \(\rho\lesssim 0.5\,\mathrm{R}_{\odot}\), as mentioned previously. While the continuity of the \(\bar{\Gamma}^{\mathrm{P}}\)s and assumptions about the interpolating function helps to infer information for larger \(\rho\), and thus \(r\), there is still a fundamental limit given by the number of counts. This is also why the best-fitting points for larger radii in figure 4 can be found at values close to zero.
Turning to the observed systematic deviations, these mostly stem from the simplified Primakoff rate in eq. (1), but also from the residual estimation and interpolation errors from having to integrate over the square detector grid and our choice
of \(n_{\rho}=20\). By generating data based on the interpolated, rather than the solar model values of the \(\bar{\Gamma}^{\rm P}\)s, we can verify that the systematics mostly disappear. This also allows us to exclude major issues in the minimisation procedure, which however still adds a small contribution due to the relatively high number of parameters to be optimised. In this sense, we expect there to be a trade-off between reducing systematic errors by further increasing \(n_{\rho}\), and still maintaining a reliable fitting procedure given a finite amount of computational resources.
Regarding the observed precision, it is clear that the indirect reconstruction procedure with a nontrivial algorithm behind the PCHIPs does not allow for a straightforward error propagation for the \(\bar{\Gamma}^{\rm P}\)s compared to the alternative interpolations discussed in appendix A. However, the errors on \(T\) and \(\kappa_{\rm s}\) do not seem to follow a simple scaling relation based on the ratio of \(g_{a\gamma\gamma}\) in Case A and Case B either. In particular, we also observe a balancing of opposing effects for achievable precision: more photon counts towards the central region of the detector (smaller uncertainty) balance against accumulating errors towards the centre, cf. eq. (12), and a volume effect for our equally spaced \(r_{i}\) (larger uncertainties).
Considering the radial profile of \(\kappa_{\rm s}\), we find a median accuracy of about 30% for both cases in the regions where we can determine \(T(r)\) with good accuracy. Conversely, the median precision is about 50% for Case A and 90% for Case B. This means that a model-independent determination within our method becomes challenging for \(\kappa_{\rm s}\).
As explained before, the shallow minima of \(\Delta\chi^{2}\) (maxima of \(L\)) with respect to the \(\kappa_{i}\) render the minimisation more difficult. This is to be compared to the Primakoff energy loss rate in the Sun, which scales with \(T^{7}\), i.e. the number of produced relativistic axions scales with \(T^{6}\). Conversely, the Primakoff rate only weakly depends on \(\kappa_{\rm s}\) through the nearly constant ratio \(\xi=\kappa_{\rm s}/T\) [e.g. 58, sec. 5.2.1].
To guide the eye, we connect the median best-fitting \(T_{i}\) and \(\kappa_{i}\) values with PCHIPs i.e. the interpolating splines that we assumed for the \(\bar{\Gamma}^{\rm P}\)s (see appendix A.3). However, we did not make any assumptions on the shape of \(T(r)\) and \(\kappa_{\rm s}(r)\), and our analysis can thus
Figure 3: Results for the inferred radial profile of \(T\) (_left_) and \(\kappa_{\rm s}\) (_right_). Black lines indicate true values from the B16-AGSS09 solar model. Blue and red points and error bars show the median and 68% central intervals of the best-fitting parameter values for Case A and Case B, respectively. The dotted lines between the median values are only to guide the eye as we do not assume any interpolating function for \(T\) and \(\kappa_{\rm s}\).
only determine the \(T_{i}\) and \(\kappa_{i}\) at the pre-selected values of \(r_{i}\). Even for a piecewise-constant interpolation of \(\bar{\Gamma}^{\rm P}\), discussed in appendix A.1, the underlying \(T(r)\) and \(\kappa_{\rm s}(r)\) could in principle be very complicated functions, as long as the resulting \(\bar{\Gamma}^{\rm P}\)s are piecewise constant.
Overall, our method can yield a fairly precise estimate for \(T(r)\) in at least the inner half of the Sun, even with a moderate number of detected photon counts as in Case B. The small inaccuracies in the reconstruction could be reduced by using a more accurate Primakoff rate for the modelling and more detector pixels.
## 4 Discussion of limitations and related methods
In this section, we summarise our simplifying assumptions and discuss problems and their possible solutions in more realistic settings. We also comment on the applicability of our results to neutrino experiments.
### Limitations and extensions
More general axion models.We assumed effectively massless axion models that predominantly couple to photons. While helioscopes typically cannot distinguish axion models with \(m_{a}\lesssim\) meV from the truly massless case, it is known that IAXO can determine an axion mass of \(m_{a}\gtrsim\) meV at the \(3\sigma\) level [37]. Similar to \(g_{a\gamma\gamma}\), \(m_{a}\) is a global parameter, which thus affects all contributions to \(\Delta\chi^{2}\). For this reason, we do not expect that the inclusion of \(m_{a}\) presents a major difficulty for the fitting procedure. However, the computation will become more involved since the conversion probability in eq. (4) now depends on \(E_{a}\), whose dependence cannot be factored out anymore.
When considering additional axion couplings such as \(g_{aee}\), more computational complications arise. This is because the associated rates now depend on many more solar parameters - in principle (combinations of) all solar abundances! In theory, this can be accounted for by additional spectral information, i.e. by increasing \(n_{\omega}\). This will necessarily dilute the total number of photons per energy bin, which consequently introduces a larger level of uncertainty for all fitted parameters. Due to this and the large number of fitting parameters, we expect an extension to \(g_{aee}\) to be possible but challenging in practice.
Helioscope detectors.As demonstrated in figure 2, the accuracy of our inference of the \(\bar{\Gamma}^{\rm P}_{j}\)s relies on having sufficiently large \(n_{\rm px}\). This compensates for the systematic error introduced by our assumption, used in step (iv), that the fractional photon counts are equally distributed in each pixel. Helioscope detectors with enough pixels exist, such as the grid-based pn-CCD detector used in CAST with \(200\times 64\) pixels [61]. Gridpix detectors with \(n_{\rm px}=256\) have also been used successfully [62].
We also ignore intrinsic and extrinsic backgrounds, as their rates in the context of IAXO are typically assumed to be very low [e.g. 34]. For instance, CAST achieved integrated background rates of \(4.44\times 10^{-5}\,\rm s^{-1}\) in the 1 keV to 7 keV energy range in 2004 [61]. This corresponds to a total of about 1400 photons for \(\Delta t=3\,\rm yr\), which is larger than the number of signal photons expected in Case B. However, we anticipate improved detector capabilities for the next generation of axion helioscopes. Additionally, the background rate can be determined extremely accurately in helioscopes before sunrise and after sundown. This calibration allows for background subtraction or modelling in the statistical inference step.
Other types of detectors, such as metallic magnetic calorimeters (MMCs), which are currently being actively developed, could also be well-suited for IAXO (see refs [63, 64, 65]
for reviews). MMCs have different systematics than gaseous detectors and offer excellent energy resolution, making them an appealing additional technology [e.g. 66]. As previously mentioned this could enable IAXO, after the discovery of axions, to test axion and solar properties [36, 37, 38, 39, 40].
Indeed, if we naively scale up an available MMC prototype [66], it seems feasible to achieve \(n_{\mathrm{px}}=48\) with background rates of \(0.002\,\mathrm{s}^{-1}\) in the \(1\,\mathrm{keV}\) to \(10\,\mathrm{keV}\) energy range. It is worth noting that this background rate does not include potential reductions through active muon vetos [67], which have been demonstrated to yield reduction by up to a factor of 2 [68], and further improvements could allow for more pixels. Additionally, it seems feasible to produce MMCs with "ring-like" pixels [67], which could potentially avoid the reconstruction step for estimating the \(n_{i,j}\). While this would remove the associated systematic uncertainties, the location and shape of the signal region must be well-calibrated, stable, and accurately projected during operation to benefit from the technological advantages. While promising, MMCs will require further development for improved detector properties and careful consideration of the challenges associated with maintaining experimental stability when utilising a detector with rotational symmetry.
X-ray optics and calibration.Accurate tracking of the Sun and faithful projection of the axion image are critical to the success of the helioscope experiment as well as our methodology. The CAST experiment has already demonstrated a pointing accuracy of "well below 10% of the solar radius" [32]. Compared to the size of our radial bins, uncertainties of order \(0.1\,\mathrm{R}_{\odot}\) would have to be included via a point-spread function (PSF), so a more precise determination of the pointing accuracy is needed. After an axion detection, the centre and accuracy of the signal region can be cross-checked by averaging the position of all detected photon counts on the detector grid and computing their standard deviation, respectively. The expected size of the "axion image" can be calibrated using previously established techniques [25, fig. 3].
Another issue concerns imperfect optics, which can lead to an overall reduction in the number of detected photons and spectral distortions, which violate the simplistic assumptions made in this work. Nevertheless, if the effects of imperfect optics can be calibrated and described mathematically, they present a technical rather than a fundamental problem. In fact, our setup already includes signal reduction due to imperfect optics.
The finite energy resolution of the detectors can be taken into account by convolving the expected signal with a smoothing kernel for a given energy resolution. The energy resolution for the Gridpix detectors has been measured to be around 10-20% [69]. Given that the size of the energy bins used in this experiment is at least \(1.2\,\mathrm{keV}\), the effect of finite energy resolution is expected to be small.
It is also important to note that detectors have a finite energy threshold. The lower end of the energy range used in this experiment, \(\omega\geq 0.3\,\mathrm{keV}\), is reasonable but realistically achievable thresholds might be slightly higher [67].
Regarding distortion and shear from X-ray optics, it is possible to include its effects in the analysis as long as the shape of the PSF can be determined. However, the computations become more challenging if the PSF is highly asymmetric or involves shear, as suggested by e.g. ref. [32, fig. 3]. In this case, fitting the expected counts in each grid bin directly may be more appropriate instead of approximating the counts in the annuli. However, this approach is computationally very challenging and requires further investigation to improve its feasibility.
### Connections to helioseismology and neutrinos
Inversion techniques in helioseismology.Inversion techniques, such as the one developed in this work, have been studied in many different contexts in the past. In solar physics, we should highlight techniques for inferring solar properties from the observation of helioseismic activity on the solar disk (see e.g. refs [70, 71] for early works). Starting from ref. [72], using observations from ref. [73], the sound speed profile \(c^{2}(r)\) throughout the Sun could be reconstructed in a model-independent way i.e. allowing to compare to the predictions of solar models. This is also true for the solar density profile \(\rho(r)\) (see ref. [74] for an early review of inversion techniques).
Making further assumptions about the energy transport in the solar core, helioseismic data can also be used to constrain the solar temperature using a polynomial ansatz [75], which is similar to our approach. This becomes possible thanks to a relationship of the sound speed and the ratio \(T(r)/\mu(r)\), where \(\mu\) is the mean molecular weight. The uncertainty of the inferred temperature profile is about 3%. Thus, at the cost of arguably mild, additional assumptions and some model dependence, helioseismology allows for an accurate determination of \(T(r)\) in at least the innermost region of the Sun.
What about neutrinos?Given the parallels between axion and neutrino phenomenology, it is natural to ask if we could achieve the same results with the already available neutrino data. Indeed, it has long been recognised that neutrinos provide insights into the _central_ temperature of the Sun [e.g. 76, 77], which led to them being referred to as "solar thermometers" as well [78]. Through the observed neutrino flux, we can measure an effective temperature \(T_{\rm eff}\) of the Sun, which involves an integration over the whole solar volume. As for axions, \(T_{\rm eff}\) is close to the core temperature of the Sun due to the strong temperature dependence of neutrino interactions. Still, this is not the same as determining the radial dependence of \(T(r)\), for which we need to use a spatially-resolved "neutrino image" of the Sun.
While neutrino observatories can generate a solar neutrino image,3 the effective PSF of the experiments blurs them significantly. The angular resolution of these experiments is indeed quite poor as, e.g., shown in refs. [79, figs 17, 19, 20], [80, fig. 40], or [81, fig. 26].4 This is unfortunate given that, over its entire lifetime, the Super-K experiment has observed more than \(10^{5}\) neutrinos [82], which is comparable to the number of axions considered in Case A. Recent results from the Borexino experiment, who report about \(10^{4}\) detected solar neutrinos, only manage to correlate the origin of the neutrinos with the general position of the Sun, rather than the specific location of neutrinos on the solar disc [83, 84]. However, the future Hyper-K experiment may have better resolution for solar neutrinos [85].
Footnote 3: See the famous “neutrino image” from the Super-Kamiokande (Super-K) detector, e.g. available at [https://apod.nasa.gov/apod/ap980605.html](https://apod.nasa.gov/apod/ap980605.html), or an updated version on their website [https://www-sk.icrr.u-tokyo.ac.jp/en/sk/about/research/](https://www-sk.icrr.u-tokyo.ac.jp/en/sk/about/research/)).
Footnote 4: Note that the Sun is in the direction of \(\cos(\theta)=1\) in these figures.
While combining a future axion detection with neutrino measurements is an exciting thought, it seems that axions would currently be superior to neutrinos in terms of mapping out the temperature dependence inside the Sun thanks to the available X-ray technology.
## 5 Summary and concluding remarks
We propose a new method to infer radial profiles of the solar temperature \(T(r)\) and Debye scale \(\kappa_{\rm s}(r)\) following the detection of axions in a helioscope. The strong dependence of the
axion production rate on \(T\) enables a precise reconstruction of this quantity in IAXO after an axion detection. The reconstruction of \(\kappa_{\mathrm{s}}(r)\) is more challenging, and care should be taken to use accurate and consistent signal modelling when a large number of axions is detected.
Our method complements other probes of the solar interior such as solar sound profile measurements in helioseismology or neutrino experiments. Axions and neutrinos provide immediate information since they reach us within a relatively short time scale, namely less than about 8.5 minutes after their creation. They leave the solar core mostly unimpeded due to their small interaction rates, and their time resolution is only limited by the need to collect enough statistics. In contrast, photons produced in the Sun's core take at least 100 000 years to reach the surface [86] and are subject to a huge number of scatterings during that time.
The detection of photons resulting from axion conversion benefits from the availability of high-quality X-ray optics, currently offering superior spatial resolution compared to neutrino probes. Our methodology allows for a model-independent reconstruction of solar temperature at various locations within the Sun, providing an edge over helioseismology, which relies on additional assumptions about the solar composition.
In addition to the forward modelling approach presented in the main text, we also derived an analytical reconstruction algorithm for piecewise-constant and regular spline interpolations of the energy-averaged Primakoff rates (see appendix A). These algorithms could be valuable for studies aiming to solve similar inverse problems. Alternatively, simulation-based inference techniques could be explored as an alternative [e.g. 87].
Our procedure can be further enhanced by integrating complementary information from helioseismology, neutrino physics, or by applying physically-informed priors in a Bayesian framework, as discussed in section 3.3. This would be particularly useful for investigating axion models with additional couplings or for addressing technical challenges in realistic experimental setups.
To facilitate future modifications and utilisation of our method, we provide the computational routines used in this study as Python scripts in an updated version of our SolarAxionFlux library on Github [44].
We acknowledge helpful discussions with Loredana Gastaldo, Sebastian Schmidt, and Julia Vogel about the capabilities of IAXO detectors and/or optics, and with Thomas Schwetz about neutrino experiments. We are also indebted to Alex Geringer-Sameth for his brilliant idea to simplify the integration of radially symmetric functions on a square grid. SH has received funding from the European Union's Horizon Europe research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101065579. JJ would like to acknowledge support by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN. LT was funded by the _Graduiertenkolleg_ "Particle physics beyond the Standard Model" (GRK 1940). This work was performed using the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility ([http://www.dirac.ac.uk/](http://www.dirac.ac.uk/)). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. DiRAC is part of the National e-Infrastructure. We acknowledge the CINECA award under the ISCRA initiative, for the availability of high-performance computing resources and support. We made use of the BibCom tool [88].
## Appendix A Reconstructing the averaged Primakoff rates
Here we discuss how to explicitly reconstruct the energy-averaged Primakoff rates \(\bar{\Gamma}^{\rm P}_{j}\) using (the estimated) helioscope photon count data \(\hat{n}_{i,j}\). For the reasons given in appendix A.3, we only infer the \(\bar{\Gamma}^{\rm P}\)s indirectly in our fitting approach in the main text. However, we deem it instructive to present two pedagogical examples (piece-wise constant and regular spline interpolation schemes) for an explicit reconstruction of the \(\bar{\Gamma}^{\rm P}\)s in appendices A.1 and A.2. This also provides the background for the procedure used in the main text, which we describe in appendix A.3.
Assume that the photon count data is binned in \(n_{\omega}\) energy bins, labelled \(\omega_{j}\), and in \(n_{\rho}\) radial bins on the detector, labelled \(\rho_{i}\) and in units of the solar radius. In particular, the radial bins are sorted in increasing order, such that \(\rho_{1}=0\) and \(\rho_{n_{\rho}+1}=1\). The computation of the predicted number of photon counts \(\bar{n}_{i,j}\) is given by eq. (6).
### A.1 Piecewise-constant interpolation
As a first approximation, consider a piecewise-constant interpolation for the \(\bar{\Gamma}^{\rm P}_{j}\):
\[\bar{\Gamma}^{\rm P}_{j}(r) =\sum_{i}\gamma_{i,j}\,\Theta(r-r_{i})\,\Theta(r_{i+1}-r) \tag{10}\] \[\text{with}\quad\gamma_{i,j} \equiv\int_{\omega_{j}}^{\omega_{j+1}}\!\mathrm{d}\omega\;\frac{ \omega^{2}}{2\pi^{2}}\,\Gamma^{\rm P}_{i}(\omega)\] (11) \[\text{and}\quad\Gamma^{\rm P}_{i}(\omega) \equiv\frac{g_{a\gamma\gamma}^{2}\,\kappa_{i}^{2}\,T_{i}}{32\pi} \left[\left(1+\frac{\kappa_{i}^{2}}{4\omega^{2}}\right)\ln\left(1+\frac{4 \omega^{2}}{\kappa_{i}^{2}}\right)-1\right]\frac{2}{\mathrm{e}^{\omega/T_{i}} -1}\,. \tag{12}\]
These equations depend on \(n_{\omega}\cdot n_{\rho}\) constants \(\gamma_{i,j}\), which we need to reconstruct in order to infer values for \(g_{a\gamma\gamma}\), \(\kappa_{i}\), and \(T_{i}\).
#### Matrix formalism
We can use the ansatz in eq. (10) to evaluate the integral in eq. (6), finding that
\[\bar{n}_{i,j} \propto\int_{r_{i}}^{r_{i+1}}\!\mathrm{d}\rho\,\rho\,\sum_{k=1}^{ n_{\rho}}\int_{\rho}^{1}\!\mathrm{d}r\,\frac{r}{\sqrt{r^{2}-\rho^{2}}}\, \gamma_{k,j}\;\Theta(r-r_{k})\,\Theta(r_{k+1}-r) \tag{13}\] \[=\int_{r_{i}}^{r_{i+1}}\!\mathrm{d}\rho\,\rho\,\left[\gamma_{i,j }\,\sqrt{r_{i+1}^{2}-\rho^{2}}+\sum_{k=i+1}^{n_{\rho}}\gamma_{k,j}\,\left( \sqrt{r_{i+1}^{2}-\rho^{2}}-\sqrt{r_{k}^{2}-\rho^{2}}\right)\right]\] (14) \[=\frac{1}{3}\left[\gamma_{i,j}\,\Delta_{i+1;i}^{3}+\sum_{k=i+1}^ {n_{\rho}}\gamma_{k,j}\left(\Delta_{k+1;i}^{3}-\Delta_{k+1;i+1}^{3}+\Delta_{k ;i+1}^{3}-\Delta_{k;i}^{3}\right)\right]\,, \tag{15}\]
where we defined \(\Delta_{m;n}^{3}\equiv(r_{m}^{2}-r_{n}^{2})^{3/2}\). Note that eq. (15) can be written as a matrix equation of the form
\[\bar{n}_{i,j}=\sum_{k=1}^{n_{\rho}}\mathcal{M}_{ik}\,\gamma_{k,j}\,, \tag{16}\]
where the matrix \(\mathcal{M}\) is defined as
\[\mathcal{M}_{ik}=\frac{P_{a\gamma}\,A_{\rm eff}\,\Delta t\,\mathrm{R}_{\odot}^ {3}}{3\,d_{\rm E}^{2}}\begin{cases}\Delta_{i+1;i}^{3}&\text{for $i=k$},\\ \Delta_{k+1;i}^{3}-\Delta_{k+1;i+1}^{3}+\Delta_{k;i+1}^{3}-\Delta_{k;i}^{3}& \text{for $k>i$},\\ 0&\text{otherwise}.\end{cases} \tag{17}\]
#### Analytical solution of the matrix equation and error propagation
Since \(\mathcal{M}\) is an upper triangular matrix, it is straightforward to invert it, solving the underlying system of linear equations in the process. We set \(\gamma_{i,j}=0\) for \(i=n_{\rho}+1\), as it is at the edge of the Sun. Then, the coefficients \(\gamma_{i,j}\) for \(i<n_{\rho}+1\) can be obtained in an iterative fashion by equating expected and observed counts, \(\bar{n}_{i,j}=n_{i,j}\). Note that this is a nontrivial approximation, which is why we need to simulate the behaviour of our method when applied to data. This is further indicated by the fact that we can only estimate the true observed \(n_{i,j}\), with non-integer estimators \(\hat{n}_{i,j}\). Keeping this in mind, we find that
\[n_{i,j} =\sum_{k=i}^{n_{\rho}}\mathcal{M}_{ik}\,\gamma_{k,j}=\mathcal{M}_ {ii}\gamma_{i,j}+\sum_{k=i+1}^{n_{\rho}}\mathcal{M}_{ik}\,\gamma_{k,j} \tag{10}\] \[\Rightarrow \gamma_{i,j} =\frac{1}{\mathcal{M}_{ii}}\left(\hat{n}_{i,j}-\sum_{k=i+1}^{n_{ \rho}}\mathcal{M}_{ik}\,\gamma_{k,j}\right)\,. \tag{11}\]
This formula allows us to propagate the (independent) statistical errors from each radial bin for the reconstructed \(\gamma_{i,j}\). Assuming Gaussian error propagation, and approximate Poissonian errors for the photon counts, we find that
\[\sigma_{i,j}^{2}\equiv(\Delta\gamma_{i,j})^{2} =\frac{1}{\mathcal{M}_{ii}^{2}}\left[(\Delta\hat{n}_{i,j})^{2}+ \sum_{k=i+1}^{n_{\rho}}\mathcal{M}_{ik}^{2}\,\sigma_{k,j}^{2}\right]\] \[=\frac{1}{\mathcal{M}_{ii}^{2}}\left[\hat{n}_{i,j}+\sum_{k=i+1}^{ n_{\rho}}\mathcal{M}_{ik}^{2}\,\sigma_{k,j}^{2}\right]\,, \tag{12}\]
keeping in mind that \(\sigma_{n_{\rho}+1,j}^{2}=0\).
The analytical expression in eq. (12) reveals that, due to the reconstruction procedure, the uncertainty accumulates towards the centre of the Sun, which is the primary region of interest. This effect becomes particularly relevant when the number of observed photons is low. Also note that, if we use roughly equally-spaced radial bins, the number of photons in the innermost bin is somewhat smaller due to the smaller volume. This, too, increases the error for this bin.
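The explicit reconstruction can be condensed into a few lines; the sketch below builds the upper-triangular matrix of eq. (17) and performs the back-substitution and Gaussian error propagation for a single energy bin (the dimensionful prefactor, including the factor 1/3, is collected into one constant; the helper names are placeholders):

```python
import numpy as np

def build_matrix(r_edges, prefactor=1.0):
    """Upper-triangular matrix M_ik of eq. (17); r_edges are the radial bin
    boundaries r_1, ..., r_{n_rho + 1} in units of R_sun, in increasing order."""
    def D3(m, k):                                   # Delta^3_{m;k} = (r_m^2 - r_k^2)^(3/2)
        return (r_edges[m]**2 - r_edges[k]**2)**1.5
    n = len(r_edges) - 1
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = D3(i + 1, i)
        for k in range(i + 1, n):
            M[i, k] = D3(k + 1, i) - D3(k + 1, i + 1) + D3(k, i + 1) - D3(k, i)
    return prefactor * M

def reconstruct_gammas(n_hat, M):
    """Back-substitution for the gamma_{i,j} of one energy bin, together with the
    propagated variances assuming Poisson errors (Delta n_hat)^2 = n_hat."""
    n = len(n_hat)
    gamma = np.zeros(n)
    var = np.zeros(n)
    for i in range(n - 1, -1, -1):
        gamma[i] = (n_hat[i] - M[i, i + 1:] @ gamma[i + 1:]) / M[i, i]
        var[i] = (n_hat[i] + (M[i, i + 1:]**2) @ var[i + 1:]) / M[i, i]**2
    return gamma, var
```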
The propagated uncertainties in eq. (12), which we show in the left panel of figure 4, can be taken into account when fitting for the coefficients \(g_{a\gamma\gamma}\), \(\kappa_{i}\), and \(T_{i}\). Given the reconstructed coefficients \(\gamma_{i,j}\), we can do so by optimising the following fitting metric:
\[\Delta\chi^{2}\equiv-2\,\log L(g_{a\gamma\gamma},\,\{\kappa_{i},\,T_{i}\})= \sum_{j}\mathbf{x}_{j}^{\rm T}\,\Sigma_{j}^{-1}\,\mathbf{x}_{j}\,, \tag{13}\]
where the entries of the vector \(\mathbf{x}_{j}\) are given by \(x_{i,j}\equiv\bar{\Gamma}_{j}^{\rm P}(r_{i}\,|\,g_{a\gamma\gamma},\,\kappa_{i },\,T_{i})-\gamma_{i,j}\), and the covariance matrices \(\Sigma_{j}=\text{diag}(\sigma_{1,j}^{2},\,\ldots,\,\sigma_{n_{\rho},j}^{2})\) are given by eq. (12).
### A.2 Standard cubic spline interpolation
The piecewise-constant ansatz in eq. (10) is only one possible option. Cubic spline interpolation could be a more realistic choice which, however, introduces additional coefficients and requires the choice of boundary conditions. Taking into account that the function values, and their first and second derivatives, need to be continuous across radial bin boundaries, the most general cubic spline ansatz can be written as [89, sec. 3.3]
\[\bar{\Gamma}_{j}^{\rm P}(r)=\sum_{i} \bigg{[}\frac{\mu_{i,j}}{6h_{i}}\;(r_{i+1}-r)^{3}+\frac{\mu_{i+1, j}}{6h_{i}}\;(r-r_{i})^{3}+\left(\frac{\gamma_{i+1,j}-\gamma_{i,j}}{h_{i}}- \frac{h_{i}\left(\mu_{i+1,j}-\mu_{i,j}\right)}{6}\right)\,(r-r_{i})\] \[+\gamma_{i,j}-\frac{h_{i}^{2}\mu_{i,j}}{6}\bigg{]}\,\Theta(r-r_{i} )\,\Theta(r_{i+1}-r)\,, \tag{102}\]
where \(\mu_{i,j}\) are referred to as moments of the cubic splines, \(\gamma_{i,j}\) are again given by eq. (11), and \(h_{i}\equiv r_{i+1}-r_{i}\).
#### Computation of the matrix equation
We again evaluate the integral eq. (6) and write the resulting equations in matrix form,
\[\begin{pmatrix}\boldsymbol{\hat{n}}_{j}\\ 0\end{pmatrix}=\mathcal{M}\,\begin{pmatrix}\boldsymbol{\gamma}_{j}\\ \boldsymbol{\mu}_{j}\end{pmatrix}\,, \tag{103}\]
Figure 4: Reconstruction for the energy-averaged Primakoff rate \(\bar{\Gamma}^{\rm P}\) (black line) for the lowest and highest of \(n_{\omega}=4\) energy bins using piecewise-constant (_left_), spline (_middle_), and PCHIP (_right_) interpolations. Error bars/shaded regions correspond to the standard deviation (68% central region; _right_). We use \(g_{10}=0.6\), the B16-AGSS09 solar model, and \(n_{\rho}=6\) (\(n_{\rho}=20\); _right_) radial bins.
where \(\boldsymbol{\gamma}_{j}\equiv(\gamma_{1,j},\,\ldots,\,\gamma_{n_{\rho},j},\,0)\), \(\boldsymbol{\mu}_{j}\equiv(\mu_{1,j},\ldots,\,\mu_{n_{\rho}+1,j})\), \(\boldsymbol{\hat{n}}_{j}\equiv(\hat{n}_{1,j},\,\ldots,\,\hat{n}_{n_{\rho},j},\,0)\), and where the matrix \(\mathcal{M}_{ik}\) takes the following schematic form,
\[\mathcal{M}=\begin{pmatrix}\mathcal{M}_{\gamma\gamma}&\mathcal{M}_{\gamma\mu}\\ \mathcal{M}_{\mu\gamma}&\mathcal{M}_{\mu\mu}\end{pmatrix}\,. \tag{14}\]
Here, the matrices \(\mathcal{M}_{\gamma\gamma}\) and \(\mathcal{M}_{\gamma\mu}\) in the upper block row come from the integral equations, while the matrices \(\mathcal{M}_{\mu\gamma}\) and \(\mathcal{M}_{\mu\mu}\) in the lower block row relate to the continuity condition of the cubic spline. We provide the concrete expressions for the \(\mathcal{M}_{ik}\), and a code to carry out the related computations, on Github in the SolarAxionFlux repository [44].
The only conditions needed in addition to the ones that arise from eq. (6) are the boundary conditions for the cubic splines. For example, for Hermite splines, the first derivatives are specified at the boundaries. By choosing all of them to vanish, one obtains "clamped" polynomials, whose boundary conditions read,
\[\mathcal{M}_{n_{\rho},1} =1/h_{1}\,, \mathcal{M}_{n_{\rho},2} =-1/h_{1}\,,\] \[\mathcal{M}_{n_{\rho},n_{\rho}} =h_{1}/3\,, \mathcal{M}_{n_{\rho},n_{\rho}+1} =h_{1}/6\,,\] \[\mathcal{M}_{2n_{\rho}-1,n_{\rho}-2} =-1/h_{n_{\rho}-1}\,, \mathcal{M}_{2n_{\rho}-1,n_{\rho}-1} =1/h_{n_{\rho}-1}\,,\] \[\mathcal{M}_{2n_{\rho}-1,2n_{\rho}-2} =h_{n_{\rho}-1}/6\,, \mathcal{M}_{2n_{\rho}-1,2n_{\rho}-1} =h_{n_{\rho}-1}/3\,. \tag{15}\]
Other possible options include "natural" boundary conditions, where the second derivatives at the boundaries are assumed to vanish. Another widely used choice includes "not-a-knot" conditions, where the third derivatives of the first and last two polynomials are matched. However, in this case the problem cannot be formulated in a straightforward matrix equation anymore and would require additional computational steps. This also applies to other generalisations, which we discuss in appendix A.3.
#### Numerical solution of the matrix equation and error propagation
We can solve eq. (103) by numerically inverting \(\mathcal{M}\) to obtain the \(\gamma_{i,j}\) values needed for the fitting. Even though the errors of the estimated observed photon counts \(\hat{n}_{i,j}\) are uncorrelated,
i.e. their covariance matrix is diagonal, the errors on the \(\gamma_{i,j}\) are correlated. If we denote the upper left block of \(\mathcal{M}^{-1}\) by \(M_{\gamma\gamma}^{-1}\), we find that the covariance matrix of \(\boldsymbol{\gamma}_{j}\) is given by
\[\Sigma_{j}=M_{\gamma\gamma,j}^{-1}\,\text{diag}(\hat{n}_{1,j},\,\ldots,\,\hat{n }_{n_{\rho},j},\,0)\,(M_{\gamma\gamma,j}^{-1})^{\text{T}}\,. \tag{111}\]
where we again approximated the Poisson errors via \((\Delta\hat{n}_{i,j})^{2}=\hat{n}_{i,j}\). The resulting uncertainties are shown for the example in the central panel of figure 4 and can be used in eq. (13) for the fitting procedure.
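The propagation of eq. (111) is a one-liner once the upper-left block of the inverse matrix is available (illustrative helper names):

```python
import numpy as np

def gamma_covariance(Minv_gamma_gamma, n_hat):
    """Covariance matrix of the reconstructed gamma coefficients, eq. (111);
    n_hat includes the trailing zero of the boundary entry."""
    return Minv_gamma_gamma @ np.diag(n_hat) @ Minv_gamma_gamma.T
```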
### A.3 Shape-preserving cubic spline interpolation
While instructional, there are a number of issues with the direct reconstruction via matrix inversion. Most notably, the matrix needs to be non-singular. In practical terms this means that the radial bins must be chosen such that they contain at least one (fractional) photon. Since most of the axion flux is produced at \(r\lesssim 0.25\,\text{R}_{\odot}\), cf. ref. [40], radial bins will be concentrated in the inner region of the Sun, possibly leading to "ringing" and unphysical fluctuations below zero of the interpolating splines in the outermost layers.
There exist "shape-preserving" spline interpolations that can guarantee properties such as positivity, convexity, or monotonicity of the resulting spline [e.g. 46, 90, 91] (see ref. [92] for the included review on monotone splines). However, the underlying algorithms cannot be written as a simple matrix equation anymore.
To make use of shape-preserving splines, we need to extend our ansatz for the \(\bar{\Gamma}^{\text{P}}\)s to arbitrary cubic polynomials. The corresponding coefficients must then be computed by an algorithm for a given set of \(\gamma_{i,j}\). The values for \(\gamma_{i,j}\) must, in turn, be computed from \(\bar{\Gamma}^{\text{P}}_{j}(r_{i}\,|\,g_{a\gamma\gamma},\,\kappa_{i},\,T_{i})\). Reconstruction of the \(\bar{\Gamma}^{\text{P}}\)s is thus only possible indirectly after fitting the underlying solar model parameters to the \(n_{i,j}\).
More concretely, each step in the fitting procedure needs to propose values for the \(\gamma_{i,j}\) coefficients, for which the adopted spline interpolation algorithm then computes the coefficients \(c_{k;i,j}\) for the polynomials
\[\bar{\Gamma}^{\text{P}}_{j}(r)=\sum_{i}\left[\gamma_{i,j}+\sum_{k=1}^{3}c_{k; i,j}(r-r_{i})^{k}\right]\Theta(r-r_{i})\,\Theta(r_{i+1}-r)\,. \tag{112}\]
Similar to the computation in appendix A.2, we can compute eq. (6) starting from eq. (112) to obtain matrix coefficients for \(\mathcal{M}\) similar to eq. (108). However, \(\mathcal{M}\) is now a \(4n_{\rho}\times n_{\rho}\) matrix, which means that it cannot be inverted to explicitly reconstruct the \(\bar{\Gamma}^{\text{P}}\)s. Instead, the \(\bar{\Gamma}^{\text{P}}\)s can be determined from the best-fitting points obtained by optimising eq. (7). For example, in the right panel of figure 4, we show the \(\bar{\Gamma}^{\text{P}}_{j}\) and their uncertainties, assuming the same setup as in the main text (cf. table 1).
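As a concrete (illustrative) example of the indirect route described above, scipy's PCHIP interpolator is one shape-preserving choice: given proposed knot values \(\gamma_{i}\), it returns piecewise-cubic coefficients in the local variable \(r-r_{i}\), analogous to eq. (112). The radial grid and values below are placeholders, not solar-model output:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

r = np.linspace(0.0, 0.3, 8)                                   # radial knots (placeholder grid)
gamma = np.array([5.0, 4.2, 3.1, 2.0, 1.1, 0.5, 0.15, 0.0])    # proposed knot values gamma_i

# PCHIP does not overshoot the data, so non-negative knot values give a
# non-negative interpolant here (the data are monotone).
spline = PchipInterpolator(r, gamma)

coeffs = spline.c                             # shape (4, n-1): cubic coefficients per interval, highest order first
print(np.allclose(coeffs[-1], gamma[:-1]))    # True: the constant terms are the knot values gamma_i
print(spline(0.12))                           # evaluate the shape-preserving interpolant
```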
|
2309.11927 | Beyond the Drake Equation: A Time-Dependent Inventory of Habitable
Planets and Life-Bearing Worlds in the Solar Neighborhood | We introduce a mathematical framework for statistical exoplanet population
and astrobiology studies that may help directing future observational efforts
and experiments. The approach is based on a set of differential equations and
provides a time-dependent mapping between star formation, metal enrichment, and
the occurrence of exoplanets and potentially life-harboring worlds over the
chemo-population history of the solar neighborhood. Our results are summarized
as follows: 1) the formation of exoplanets in the solar vicinity was episodic,
starting with the emergence of the thick disk about 11 Gyr ago; 2) within 100
pc from the Sun, there are as many as 11,000 (eta/0.24) Earth-size planets in
the habitable zone ("temperate terrestrial planets" or TTPs) of K-type stars.
The solar system is younger than the median TTP, and was created in a star
formation surge that peaked 5.5 Gyr ago and was triggered by an external agent;
3) the metallicity modulation of the giant planet occurrence rate results in a
later typical formation time, with TTPs outnumbering giant planets at early
times; 4) the closest, life-harboring Earth-like planet would be < 20 pc away
if microbial life arose as soon as it did on Earth in > 1 % of the TTPs around
K stars. If simple life is abundant (fast abiogenesis), it is also old, as it
would have emerged more than 8 Gyr ago in about one third of all life-bearing
planets today. Older Earth analogs are more likely to have developed
sufficiently complex life capable of altering the environment and producing
detectable oxygenic biosignatures. | Piero Madau | 2023-09-21T09:42:19Z | http://arxiv.org/abs/2309.11927v3 | Beyond the Drake Equation: A Time-Dependent Inventory of Habitable Planets and Life-Bearing Worlds in the Solar Neighborhood
###### Abstract
We introduce a mathematical framework for statistical exoplanet population and astrobiology studies that may help direct future observational efforts and experiments. The approach is based on a set of differential equations and provides a time-dependent mapping between star formation, metal enrichment, and the occurrence of exoplanets and potentially life-harboring worlds over the chemo-population history of the solar neighborhood. Our results are summarized as follows: (1) the formation of exoplanets in the solar vicinity was episodic, starting with the emergence of the thick disk about 11 Gyr ago; (2) within 100 pc from the Sun, there are as many as \(11,000\,(\eta_{\oplus}/0.24)\) Earth-size planets in the habitable zone ("temperate terrestrial planets" or TTPs) of K-type stars. The solar system is younger than the median TTP, and was created in a star formation surge that peaked 5.5 Gyr ago and was triggered by an external agent; (3) the metallicity modulation of the giant planet occurrence rate results in a later typical formation time, with TTPs outnumbering giant planets at early times; and (4) the closest, life-harboring Earth-like planet would be \(\lesssim 20\) pc away if microbial life arose as soon as it did on Earth in \(\gtrsim 1\%\) of the TTPs around K stars. If simple life is abundant (fast abiogenesis), it is also old, as it would have emerged more than 8 Gyr ago in about one-third of all life-bearing planets today. Older Earth analogs are more likely to have developed sufficiently complex life capable of altering their environment and producing detectable oxygenic biosignatures.
Astrobiology - Exoplanets - Habitable Planets - Metallicity - Solar Neighborhood - Star Formation +
Footnote †: journal: To appear in The Astrophysical Journal
## 1 Introduction
The search for habitable exoplanets and extraterrestrial life beyond the solar system is a topic of central interest for modern science, and one of the most compelling and consequential endeavors for humankind. Simple life emerged on Earth within the first billion years of its habitable window, and the high frequency of terrestrial planets in the habitable zones (HZs) around GK dwarf stars inferred from NASA's Kepler observations (Bryson et al., 2021) invites the question of how often (if at all) life may have arisen on other worlds in the past. This pursuit will ultimately require statistical analyses of the population of habitable systems, in-depth studies of the climates of individual planets, and searches for chemical biomarkers (Schwieterman et al., 2018), and has motivated the development of the next generation of large ground-based facilities and instrumentation. The yield and characterization of Earth-like planets will be a primary science metric for future space-based flagship missions, but the optimal observational strategy for addressing the origin and properties of planetary systems and the prevalence of habitable exoplanets and life beyond the solar system remains unclear (e.g., Bean et al., 2017; Tasker et al., 2017; Sandora and Silk, 2020; Truitt et al., 2020; Checlair et al., 2021; Sarkar, 2022; Batalha et al., 2023). The gathering of comprehensive data for each individual system is impractical if not impossible, so a statistical perspective is necessary to prioritize targets for follow-up observations. In particular, one would like to identify - given a model of habitability and biosignature genesis - how potential biosignature yields change during the evolution of a stellar system as a function of stellar properties like age, mass, and metallicity.
This paper aims to present a theoretical framework for exoplanet population and astrobiology studies that may provide a better statistical understanding of the formation history, frequency, age, and metallicity distributions of different planet types around stars of different
properties. Our approach may also establish a useful basis for testing hypotheses about habitable environments and life beyond the solar system, for gaining a sense of biosignature yields, and for informing future observational efforts and experiments. A well-known parameterization of the present-day abundance of life-bearing worlds in the Galaxy, \(N_{\ell}\), is represented by the first four terms in the probabilistic Drake equation (Drake, 1965), which can be rewritten as
\[N_{\ell}=N_{\rm MS}(t_{0})\,f_{p}\,n_{e}\,f_{\ell}. \tag{1}\]
Here, \(N_{\rm MS}(t_{0})\) is the total number of stars that are on the main sequence today and can provide their planets a stable HZ, \(f_{p}\) is the fraction of these stars that have planetary systems, \(n_{e}\) is the average number per planetary system of Earth-size planets that are in the HZ, and \(f_{\ell}\) is the subset of these rocky exoplanets that are "Earth-like" in a more detailed biochemical and geophysical sense and where simple life eventually arises. The first three terms (\(N_{\rm MS}\), \(f_{p}\), and \(n_{e}\)) in Equation (1) already have experimental measurements, and the fourth (\(f_{\ell}\)) is a conditional probability that may potentially be observable in the coming decades via spectroscopic searches for biosignature gases in exoplanet atmospheres (Schwieterman et al., 2018; Seager, 2018). In Drake's famous formulation, in order to estimate the number of active, communicative extraterrestrial civilizations around us today, the right-hand side of Equation (1) gets multiplied by the fraction \(f_{i}\) of life-bearing planets on which intelligent life emerges, times the percentage \(f_{c}\) of such civilizations that produce a detectable signal, times the fractional longevity \(f_{L}\) of a technological species.
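As a purely numerical illustration of Equation (1) (not a result of the paper), the product structure can be evaluated with placeholder inputs; the stellar count and \(f_{p}n_{e}=\eta_{\oplus}=0.24\) anticipate the values adopted in Sections 2 and 3, while \(f_{\ell}\) is hypothetical:

```python
# First four factors of the Drake equation, eq. (1), with illustrative inputs.
N_MS = 331_312 * 0.14     # K dwarfs within 100 pc (Gaia count times the K-type IMF fraction)
f_p_n_e = 0.24            # f_p * n_e = eta_earth adopted later in the paper
f_ell = 0.01              # hypothetical fraction of TTPs where simple life arises

N_life = N_MS * f_p_n_e * f_ell
print(round(N_life))      # ~110 life-bearing worlds within 100 pc for these inputs
```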
The Drake equation and its 'biosignature' version (Seager, 2018) amount to a pedagogical and organizational summary of the factors that may affect the likelihood of detecting technologically advanced civilizations or just simple microbial life evolving on a habitable planet. They are only meant to guide the observational inputs needed to make an educated guess, rather than provide a time-dependent mapping between star formation, environment, exoplanets, and life-harboring worlds. Their inherent limitations include both the lack of temporal structure (e.g., Cirkovic, 2004; Forgan, 2009; Cai et al., 2021; Kipping, 2021) - an assumption of uniformity with time that precludes the inclusion of evolutionary effects associated with, e.g., the star formation and chemical enrichment history of the local Galactic disk and the time line of life emergence - as well as the difficulty of casting in a probabilistic argument the variety of phenomena and associated timescales that may influence anything quantified by probability \(f\)-factors and multiplicity \(n_{e}\). Recent rapid developments in astrophysics and planetary sciences warrant a more informative and modern evolutionary framework, a _rate-equation approach_ based on a system of first-order differential equations. These describe the changing rates of star, metal, planet, and habitable world formation over the history of a given stellar system, and can easily be adapted to incorporate the hierarchy of astrophysical and biological processes that regulate the age-dependent inventory of any key planet population.
The field of Galactic habitability and the formation history of Earth-like and giant planets in the Milky Way and the Universe as a whole have been research topics for more than two decades (e.g., Gonzalez et al., 2001; Lineweaver et al., 2004; Gowanlock et al., 2011; Behroozi and Peeples, 2015; Gobat and Hong, 2016; Zackrisson et al., 2016; Forgan et al., 2017; Balbi et al., 2020). Our approach expands upon some of the ideas employed in these early papers and develops new ones, focusing instead on the time-varying incidence of exoplanets and potentially habitable worlds over the chemo-population history of the _solar neighborhood_, the target of current and next-generation stellar and planetary surveys. This is the locale where more detailed calculations are justified by an avalanche of new data and actually needed in order to estimate, given a model of habitability and biosignature genesis, the relative biosignature yields among potential target stars.
The plan of this paper is as follows. In Section 2 we present the basic rationale and main ingredients of our modeling: the star formation history (SFH) and metallicity distribution function (MDF) in the solar vicinity, the planet occurrence rate around GK stars, and various metallicity-dependent effects. In Section 3 we cast and integrate our rate equations for the time evolution of the local abundance of dwarf stars and the giant planets and rocky planets in the HZ around them. We track giant planets to gauge the impact of their enhanced occurrence rate at higher host star metallicities relative to the weaker frequency-metallicity correlation of terrestrial planets. In Section 4 we extend our formalism to speculate about the formation history of life-harboring environments in the local volume under the hypothesis of a rapid abiogenesis process on Earth-like planets, and estimate the prevalence of nearby biospheres in terms of the exoplanet census as a whole. Finally, we summarize our findings and conclusions in Section 5.
## 2 Basic Stellar and Planetary Astrophysics
In order to provide absolute number counts in the solar vicinity we shall use the recent tally of main-sequence stars, giants, and white dwarfs within 100 pc of the Sun
from the Gaia Early Data Release 3 (EDR3). The Gaia Catalog of Nearby Stars contains
\[N_{\star}(t_{0})=331,312 \tag{2}\]
objects and is estimated to be \(>92\%\) complete down to faint stellar type M9 (Gaia Collaboration et al., 2021). Apart from a minor correction (a fraction of a percent for the initial mass function (IMF) in Equation (4)) associated with the contribution of remnant neutron stars and black holes, \(N_{\star}(t_{0})\) represents the total number of stars ever formed in the solar neighborhood. Below, we shall use this normalization together with an SFH and an IMF to compute the number of main-sequence stars as a function of time.
### Star Formation History
Let \(\phi(m)\) and \(\psi(t)\) be the (universal) IMF and SFH by number, respectively. The IMF and SFH are normalized so that \(\int_{0}^{t_{0}}\psi(t)dt=1=\int_{m_{l}}^{m_{u}}\phi(m)dm\). The number of stars that are on the main sequence at time \(t\), \(N_{\rm MS}(t)\), evolves at the rate
\[\dot{N}_{\rm MS}(t)=N_{\star}(t_{0})\int\phi(m)[\psi(t)-\psi(t-t_{\rm MS})]dm, \tag{3}\]
where the dot denotes the time derivative, \(t_{\rm MS}(m)<t\) is the main-sequence lifetime, and the second term in the square brackets corrects the rate of newly formed main-sequence stars for the number of stars that have evolved off the main sequence. In the following, we shall assume a Kroupa (2001) IMF
\[\phi(m)=\begin{cases}0.2530\,m^{-1.3}&(0.08\leq m<0.5)\\ 0.1265\,m^{-2.3}&(0.5\leq m<100),\end{cases} \tag{4}\]
where \(m\) is measured in solar masses. The main-sequence lifetime \(t_{\rm MS}\) can be computed using the analytical fitting formulae of Hurley et al. (2000) (based on the evolutionary tracks of Pols et al., 1998) as a function of \(m\) and metallicity \(Z\) (see Fig. 1).1 A G-type \(m=1\) main-sequence star, for example, has a lifetime of \(t_{\rm MS}=11\,\)Gyr at solar metallicity, and \(t_{\rm MS}=6.5\,\)Gyr at \(Z=0.1\,Z_{\odot}\).
Footnote 1: Other stellar evolution models could be adopted (Valle et al., 2014; Truitt et al., 2015; Stancliffe et al., 2016), but the resulting changes would not be significant in this context, as uncertainties in the input physics and solar composition lead to errors that are small compared to those associated with, e.g., uncertainties in the local SFH and exoplanet occurrence rates.
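A quick numerical check (not from the paper) confirms that the Kroupa IMF of Equation (4) is normalized to unity over \(0.08\leq m<100\) and continuous at the break mass:

```python
from scipy.integrate import quad

def kroupa_imf(m):
    """Kroupa (2001) IMF by number, eq. (4); m in solar masses (scalar)."""
    return 0.2530 * m**-1.3 if m < 0.5 else 0.1265 * m**-2.3

norm, _ = quad(kroupa_imf, 0.08, 100.0, points=[0.5])
print(norm)                                    # ~1.00: the IMF integrates to unity
print(kroupa_imf(0.4999), kroupa_imf(0.5001))  # ~0.62 on both sides: continuous at m = 0.5
```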
The SFH of the solar neighborhood has been recently reconstructed by Alzate et al. (2021) using all 120,452 stars brighter than \(G=15\) mag within 100 pc of the Sun in the Gaia DR2 catalog. In broad agreement with previous determinations based on different techniques and data sets (e.g., Snaith et al., 2015; Mor et al., 2019; Ruiz-Lara et al., 2020), their results show two main early episodes of star formation: (1) a peak of activity occurring 10 Gyr ago that produced a significant number of stars with sub-solar metallicities, followed by a star formation minimum (quenching) around 8 Gyr ago; and (2) a more recent burst about 5.5 Gyr ago. Since then, star formation has been declining until recent times, making stars with supersolar metallicities in short-lived bursts of activity.
Most low-metallicity 10 Gyr old stars belong to the thick Galactic disk population (e.g. Robin et al., 2014; Haywood et al., 2019), as opposed to the thin disk for the rest of them. Clear evidence of a thick-disk peak at age \(9.8\pm 0.3\) Gyr is also seen in the local white dwarf luminosity function by Fantin et al. (2019). An intense phase of star formation between 9 and 13 Gyr ago during the emergence of the thick disk, producing about as much mass in stars as that manufactured in the next 8 Gyr, and followed by a minimum in the star formation rate at an age of \(\sim 8\) Gyr, was also apparent in the SFH reconstruction of Snaith et al. (2015, 2014). In a 2 kpc bubble around the Sun, Ruiz-Lara et al. (2020) inferred three recent episodes of enhanced star formation dated \(\sim 5.7\), 1.9 and 1 Gyr, in synchrony with the estimated Sagittarius dwarf galaxy pericenter passages.
Figure 2 shows the marginalized posterior SFH of solar neighborhood stars from Alzate et al. (2021) together with a reasonable reconstruction involving four Gaussians centered at ages of \(10,5.5,2.0\) and 0.7 Gyr, and
Figure 1: Main-sequence lifetimes \(t_{\rm ms}\) for stars with masses of 0.7, 0.8, 0.9, and 1.0 \(\,M_{\odot}\) (from top to bottom) at different metallicities, \(-2.2<\log_{10}(Z/Z_{\odot})<0.4\)(Hurley et al., 2000, “old” solar composition). The points show the results of the more recent stellar evolution calculations by Truitt et al. (2015, the “enhanced oxygen abundance” model).
having widths of 1.2, 0.9, 0.35, and 0.3 Gyr, respectively. In this reconstruction, which we use in the rest of this paper, about one-third of all stars belong to the old \(>9\) Gyr population, and only 17% to the youngest, \(<3\) Gyr component. There are periods of very little star formation around \(7-8\) Gyr ago and then again 3 Gyr ago. Note that, although the analysis by Alzate et al. (2021) uses a local sample, radial migration predicts that stars in the close solar vicinity may represent a mixture of stars born at various Galactocentric distances over the disk (see, e.g., Lian et al., 2022, and references therein).
### Metallicity PDF and Age-Metallicity Relation
To study the influence of stellar metallicity on the occurrence rates of planets and planetary systems, we shall adopt here the MDF from the GALAH+TGAS spectroscopic survey of dwarf stars in the solar galactic zone (Buder et al., 2019). Figure 3 shows the data histogram and the corresponding best-fit skewed distribution, \(G(M)\), with moments \(\langle M\rangle=-0.07\), \(\sigma_{M}=0.23\), and Skew \(=-0.51\), that accounts for the asymmetry of the extended metal-poor tail as well as the sharper truncation of the MDF on the metal-rich side. Here and below, we use the symbol \(M\) interchangeably with \(\log_{10}(Z/Z_{\odot})\) for compactness, and the metallicity distribution \(g(Z)\) is related to the MDF in bins of M as \(g(Z)=G(M)/(10^{M}\ln 10)\).
Solar neighborhood stars exhibit an age-metallicity relation, such that young age correlates with high metallicity, a temporal sequence that is the fossil record of the enrichment history of the Galactic disk (e.g., Haywood et al., 2019; Hayden et al., 2015; Haywood et al., 2013). Observations have shown that this relation has a significant scatter, attributed to the effects of radial migration and chemical mixing. Since our aim here is to characterize only the prevalent metallicity in each star formation episode correctly, we shall impose an age-metallicity relationship ignoring the dispersion around the mean - this is found to increase steadily with stellar age from 0.17 dex at age 2 Gyr to 0.35 dex at 13 Gyr (Buder et al., 2019). Within this framework, the fraction of stars that formed between time \(t\) and \(t+dt\), \(\psi(t)dt\), is then equal to the fraction of stars with metallicity between \(Z(t)\) and \(Z(t)+dZ\), \(g(Z)dZ\). The typical stellar metallicity therefore evolves with time as
\[\dot{Z}(t)=\psi(t)/g(Z). \tag{5}\]
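Equation (5) can be integrated numerically by matching the cumulative star-formation fraction to the cumulative MDF. The sketch below (illustrative, not the authors' code) uses the four-Gaussian SFH of Figure 2, the best-fit MDF of Figure 3, and an assumed disk age of \(t_{0}=13\) Gyr:

```python
import numpy as np

t0 = 13.0                                   # assumed present age of the disk (Gyr)
t = np.linspace(0.0, t0, 2001)
dt = t[1] - t[0]
age = t0 - t
amp, cen, wid = [1.0, 1.36, 0.71, 0.86], [10.0, 5.5, 2.0, 0.7], [1.2, 0.9, 0.35, 0.3]
psi = sum(a * np.exp(-0.5 * ((age - c) / w) ** 2) for a, c, w in zip(amp, cen, wid))
psi /= psi.sum() * dt                       # SFH normalised to unit integral

# MDF per unit Z (Z in solar units): g(Z) = G(M)/(10^M ln 10) = 128 Z^3.1 exp(-4.25 Z)/ln 10
Z = np.linspace(1e-4, 4.0, 4000)
dZ = Z[1] - Z[0]
g = 128.0 * Z ** 3.1 * np.exp(-4.25 * Z) / np.log(10.0)
g /= g.sum() * dZ

# Integrated form of eq. (5): cumulative SF fraction at t equals cumulative MDF fraction at Z(t)
Z_of_t = np.interp(np.cumsum(psi) * dt, np.cumsum(g) * dZ, Z)
print(Z_of_t[np.searchsorted(t, t0 - 4.6)])  # prevalent metallicity (in Z_sun) of Sun-age stars
```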
The derived age-metallicity relation \(Z(t)\) is depicted in Figure 4. In broad agreement with previous results, it exhibits a rapid increase in metallicity at early epochs with an enrichment timescale of about 4 Gyr, followed by a slow evolution around solar values at ages between 4 and 10 Gyr and an upward trend toward supersolar
Figure 3: MDF of the GALAH+TGAS sample (red histogram; Buder et al., 2019). The distribution is skewed toward the metal-poor tail, with 59% of the stars having metallicities below solar. Note that, for plotting purposes, the histogram is expressed in densities and not in frequencies. The blue line shows a best-fit distribution in bins of \(M\) of the form \(G(M)=a(10^{M})^{b}\exp(-c10^{M})\), where \(M\equiv\log_{10}Z/Z_{\odot}\), \(a=128\), \(b=4.1\), and \(c=4.25\). The mean metallicity, standard deviation, and skewness of the distribution are provided in the text.
Figure 2: The SFH of the solar neighborhood. The blue pentagons with error bars show the marginalized age distribution (the fraction of stars formed per unit time at age \(t_{0}-t\)) inferred by Alzate et al. (2021, grid C, extinction corrected) for stars brighter than \(G=15\) mag in Gaia DR2. The solid curve displays a reasonable reconstruction of the local SFH involving four Gaussians centered at ages of \(10,5.5,2.0\) and 0.7 Gyr, having widths of 1.2, 0.9, 0.35, and 0.3 Gyr, respectively, and relative peaks 1:1.36:0.71:0.86. The dotted line indicates the three late peaks in the SFH (with a different normalization for illustration purposes) of the (kinematically defined) thin stellar disk derived by Ruiz-Lara et al. (2020).
metallicities at more recent times (e.g., Haywood et al., 2013; Snaith et al., 2015; Sharma et al., 2021).
We note in passing that the age-metallicity distribution may actually consist of two distinct populations, an old and a younger sequence corresponding to the formation of, respectively, the thick and the thin disk (Nissen et al., 2020). An analysis of the implications of this scenario is postponed to future work.
### Planet Occurrence Frequency Around FGK Stars
Exoplanet statistics in the inner regions of planetary systems around FGK dwarf stars have been investigated using the large and homogeneous sample from the Kepler mission (Thompson et al., 2018). Kepler planets commonly reside in multiplanet systems, and the integrated occurrence rate (the average number of planets per star) for exoplanets with radii in the range \(1-20\,R_{\oplus}\) and orbital periods up to \(P=400\,\)days is (Zhu and Dong, 2021)
\[\eta_{P}=1.23\pm 0.06. \tag{6}\]
Earth-size exoplanets in Earth-like orbits are not well probed by Kepler, and estimates of their frequencies are more uncertain. A recent analysis by Bryson et al. (2020, see also Burke et al., 2015) yields
\[\eta_{1}=0.015^{+0.011}_{-0.0007} \tag{7}\]
for the occurrence rate around GK dwarf stars of terrestrial planets within 20% of Earth's orbital period and radius.
The planet radius-orbital period parameter space defining \(\eta_{1}\) is a subset of the larger parameter space for \(\eta_{\oplus}\), the occurrence rate of Earth-size rocky planets in the HZ (hereafter "temperate terrestrial planets" or TTPs for short), roughly defined as the region around a Sun-like star in which a rocky planet with an Earth-like atmospheric composition can sustain liquid water on its surface (Kasting et al., 1993). The basic requirement for surface liquid water is predicated on a subset of the minimum conditions needed for a simple, microbial biosphere. Defining \(\eta_{\oplus}\) as the occurrence rate of TTPs with radii between 0.5 and 1.5 \(R_{\oplus}\) and orbiting stars with effective temperatures between 4800 and 6300 K, Bryson et al. (2021) recently derived
\[0.37^{+0.48}_{-0.21}<\eta_{\oplus}<0.60^{+0.90}_{-0.36}, \tag{8}\]
where the errors reflect 68% confidence intervals and the lower and upper bounds correspond to different completeness corrections. This occurrence rate uses the conservative HZ estimates from Kopparapu et al. (2014).
Below, we shall adopt a fiducial present-day occurrence rate of \(\eta_{\oplus}=0.24\). This is the standard value used in forecasting TTP yields from direct-imaging future flagship missions like HabEx and LUVOIR, and is based on the NASA ExoPAG SAG13 meta-analysis of Kepler data (Kopparapu et al., 2018). Note that, in the language of Drake's equation (Equation (1)), \(\eta_{\oplus}\equiv f_{p}n_{e}\).
For comparison, the frequency of giant gaseous planets with radii \(>4\,R_{\oplus}\) and orbital periods \(P<400\,\)days is estimated by Zhu and Dong (2021) to be
\[\eta_{\rm GP}=0.16\pm 0.015. \tag{9}\]
### Dependence of Planet Frequency on Stellar Metallicity
In the context of core-accretion planet formation theory, metal-rich protoplanetary disks have enhanced surface densities of solids, leading to the more efficient formation of the rocky cores of gas giant planets (Pollack et al., 1996; Ida and Lin, 2004). It is well established observationally that metal-rich stars are more likely to host close-in giant planets (e.g., Fischer and Valenti, 2005; Petigura et al., 2018; Zhu, 2019), with an occurrence rate enhancement as a function of metallicity of the form
\[f(Z)=\mathcal{A}(Z/Z_{\odot})^{2}. \tag{10}\]
For a sample of stars with metallicity distribution \(g(Z)\), the normalization constant \(\mathcal{A}\) above is related to the integrated frequency of giant planets by (Zhu et al., 2016)
Figure 4: Stellar age-metallicity relation in the solar neighborhood. The solid curve shows the result of the integration of Eq. (5) for the assumed SFH \(\psi(t)\) (Figure 2) and MDF \(g(Z)\) (Figure 3). The inferred relationship is in broad agreement with the mean abundance trends vs. age recovered by Snaith et al. (2015) for the inner (dashed curve) and outer (dotted-dashed curve) Milky Way disk. Note that stars in the solar vicinity have none of the characteristics of inner disk stars, and are better described as outer disk objects (Haywood et al., 2019).
\[\eta_{\rm GP}=\int g(Z)f(Z)dZ. \tag{11}\]
With the adopted MDF (Fig. 3), one derives \(\mathcal{A}=0.1369\).
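The value of \(\mathcal{A}\) follows from a one-dimensional integral of the MDF weighted by \((Z/Z_{\odot})^{2}\); a minimal numerical check using the analytic fit of Figure 3 (rather than the binned data) gives a value close to the one quoted above:

```python
import numpy as np
from scipy.integrate import quad

eta_GP = 0.16
g = lambda Z: 128.0 * Z**3.1 * np.exp(-4.25 * Z) / np.log(10.0)   # MDF per unit Z (Z in Z_sun)

norm, _ = quad(g, 0.0, 20.0)
second_moment, _ = quad(lambda Z: Z**2 * g(Z), 0.0, 20.0)
A = eta_GP * norm / second_moment          # eq. (11) with f(Z) = A (Z/Z_sun)^2
print(A)                                   # ~0.138, close to the quoted A = 0.1369
```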
Small planets show a weaker frequency-metallicity correlation (Sousa et al., 2008; Buchhave et al., 2012; Petigura et al., 2018). Most recently, Lu et al. (2020) were unable to confirm or reject a relationship between planet occurrence and host star metallicity for rocky planets with radii \(\lesssim 2R_{\oplus}\). In line with these analyses, no frequency-metallicity correlation for terrestrial planets will be assumed in our treatment. We shall also ignore possible correlations between small planet occurrence rates and \(\alpha\)-element enhancement (Bashi et al., 2020).
### Critical Metallicity for Terrestrial Planet Formation
In the core-accretion model of planet formation heavy elements are necessary to form the dust grains and planetesimals that build planetary cores. Johnson and Li (2012) estimated a minimum metallicity \(Z_{c}\) for planet formation by comparing the timescale for dust grain growth and settling to that for protoplanetary disk photoevaporation. They found that, for an Earth-size planet to form, a disk of surface density \(\Sigma(r)\) must have a metallicity
\[Z\gtrsim 0.1\,Z_{\odot}\left[\frac{\Sigma(1\,{\rm AU})}{10^{3}\,{\rm g}\,{\rm cm }^{-2}}\right]^{-0.48}. \tag{12}\]
Given the observed MDF (Fig. 3), the existence of a fiducial metallicity floor \(Z_{c}=0.1\,Z_{\odot}\) for the formation of terrestrial planets will only impact a small fraction of the census population in the solar neighborhood. Nevertheless, in our equations below we shall formally multiply the integrated occurrence rate of TTPs by the Heaviside function \(\theta(Z-Z_{c})\).
### Effect of Metallicity on the HZ
Low-metallicity stars have higher luminosities \(L_{\rm ZAMS}\) and higher effective temperatures \(T_{\rm eff}\) than metal-rich stars of the same mass (e.g., Tout et al., 1996). An \(m=1\) zero-age main-sequence (ZAMS) star of metallicity \(Z=0.1\,Z_{\odot}\), for example, is 80% brighter, 13% hotter, and has a larger HZ than its solar-metallicity counterpart. Conversely, an F-type star with \(m=1.5\) and \(Z=0.1\,Z_{\odot}\) has a main-sequence lifetime of only 1.8 Gyr (vs. 2.7 Gyr at solar metallicity), and is unlikely to be a good candidate for harboring continuously habitable planets.
We assess the impact of stellar metallicity on the habitability of terrestrial exoplanets around GK stars (see also Danchi and Lopez, 2013; Valle et al., 2014; Truitt et al., 2015) adopting the conservative HZ estimates of Kopparapu et al. (2014), where the inner and outer edges of the continuously HZ are defined by the "runaway greenhouse" (atmosphere becomes opaque to outgoing thermal radiation owing to excess amounts of H\({}_{2}\)O) and "maximum greenhouse" (increased reflectivity of a thick CO\({}_{2}\) atmosphere wins out over greenhouse effect) limits. The 1D, radiative-convective, cloud-free climate models of Kopparapu et al. (2013) provide critical values for the effective flux \(S_{\rm eff}\) - the normalized value of solar constant required to maintain a given surface temperature - as a function of the effective temperature of the host star
\[S_{\rm eff}=S_{\rm eff,\odot}+aT_{\star}+bT_{\star}^{2}+cT_{\star}^{3}+dT_{ \star}^{4}, \tag{13}\]
where \(T_{\star}=T_{\rm eff}-5760\,{\rm K}\) and the coefficients (\(a,b,c\), and \(d\)) for the runaway greenhouse and maximum greenhouse limits are listed in Kopparapu et al. (2014). Stellar effective temperatures and luminosities on the ZAMS were computed as a function of metallicity following Tout et al. (1996). The corresponding HZ distances can be calculated using the relation
\[d_{\rm ZAMS}=\left(\frac{L_{\rm ZAMS}/L_{\odot}}{S_{\rm eff}}\right)^{0.5}\,{ \rm AU}, \tag{14}\]
Figure 5: ZAMS HZ limits for a \(1\,M_{\oplus}\) planet around \(m=1\) G-type host stars of different metallicities. The inner HZ (blue curve) is defined by the runaway greenhouse limit, while the red curve marks the outer HZ – the maximum greenhouse limit. Stellar effective temperatures and luminosities were computed following Tout et al. (1996), and the \(y\)-axis covers the range from \((L_{\rm ZAMS}/L_{\odot},T_{\rm eff})=(0.70,5635\,{\rm K})\) at \(Z=1.6\,Z_{\odot}\) to \((L_{\rm ZAMS}/L_{\odot},T_{\rm eff})=(1.34,6464\,{\rm K})\) at \(Z=0.1\,Z_{\odot}\). The effective incident fluxes determining the inner and outer HZ were estimated using the parametric formulae of Kopparapu et al. (2014).
where \(L_{\rm ZAMS}/L_{\odot}\) is the luminosity of the star in solar units.2 Figure 5 depicts the predicted variations in HZ boundaries as a function of stellar metallicity for an \(m=1\) star. The effective stellar fluxes at the inner and outer edges of the HZ increase at low \(Z\)s because the calculated planetary albedos become higher as the star's radiation is shifted toward the blue, a dependence that is stronger at the outer HZ boundary because of the importance of Rayleigh scattering in dense CO\({}_{2}\) atmospheres (Kasting et al., 2014). Note, e.g., how a \(1\,M_{\oplus}\) planet at 1 au from an \(m=1\) ZAMS G-class dwarf of metallicity \(0.1\,Z_{\odot}\) is just outside the inner edge of its host's HZ, as this is evaluated at \(1.05-1.83\) au.
Footnote 2: Throughout this paper, the \(\odot\) subscript denotes present-day values.
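Equations (13) and (14) translate directly into a small helper; in the sketch below (illustrative only) the polynomial coefficients are left as inputs because their numerical values are tabulated in Kopparapu et al. (2014) and are not reproduced here:

```python
import numpy as np

def hz_distance(L_over_Lsun, T_eff, coeffs):
    """HZ distance in au from eqs. (13)-(14).

    coeffs = (S_eff_sun, a, b, c, d) for the runaway- or maximum-greenhouse limit,
    to be taken from Kopparapu et al. (2014).
    """
    S0, a, b, c, d = coeffs
    Ts = T_eff - 5760.0
    S_eff = S0 + a * Ts + b * Ts**2 + c * Ts**3 + d * Ts**4
    return np.sqrt(L_over_Lsun / S_eff)

# Example (hypothetical call): the ZAMS values quoted in Fig. 5 for a metal-poor m = 1 star,
# with `runaway` standing for the runaway-greenhouse coefficient tuple:
# d_inner = hz_distance(1.34, 6464.0, runaway)
```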
Figure 5 shows how, at low metallicities, the HZ moves further away from the host star and is about 25% wider. These differences in HZ boundaries may be relevant for present and upcoming planet-finding surveys around low-metallicity stars (Dedrick et al., 2020). Because old age correlates with low stellar metallicity, one might expect \(\eta_{\oplus}\) to be larger at earlier times if other factors, like the efficiency of planetary formation and planetary spacing, were equal. Given the relatively flat period distribution, \(df/d\ln p\propto p^{0.2}\), observed for long-period Kepler planets (e.g., Bryson et al., 2020), however, the impact of such a shift in HZ boundaries on the predicted occurrence rates of TTPs around GK dwarfs in the young Galaxy should be rather minor.
### HZ Lifetime
The boundaries of a radiative HZ are not temporally or spatially static but "migrate" outward over the course of a star's main-sequence phase. The secular increase in stellar luminosity results in a runaway greenhouse event, which, in the case of Earth, will cause the cessation of habitable conditions and the likely extinction of our biosphere about 1.5-2 Gyr in the future (e.g., Goldblatt and Watson, 2012; De Sousa Mello and Santos Friaca, 2020), well before the Sun becomes a red giant. The transitory nature of the residence of a TTP within an HZ has strong astrobiological implications and can be described by the time a planet, located at a given distance from a star, spends within the HZ (Danchi and Lopez, 2013; Rushby et al., 2013; Waltham, 2017). Here, we have used the formulae and best-fitting coefficients of Hurley et al. (2000) for the time-dependent luminosities and radii of stars on the main sequence to derive estimates for the change in the HZ boundaries over time.
Figure 6 (top panel) shows the evolution of the inner edge of the HZ, computed as before from the prescriptions of Kopparapu et al. (2014), as a function of time \(\tau=t/t_{\rm MS}\) for GK dwarf stars of different masses and metallicities. The point on each curve marks the characteristic "HZ lifetime," \(t_{\rm HZ}(m,Z)\), when a hypothetical planet formed _in the center_ of the HZ at the ZAMS stage enters into the hot zone of the host star, undergoes a runaway greenhouse event, and becomes uninhabitable
Figure 6: Time-dependent HZ boundaries around GK dwarf stars. Top panel: evolution of the inner boundary of the HZ, \(d_{i}\) (in au), from the ZAMS (corresponding to \(\tau=0\)) to the end of the main sequence, as a function of \(\tau=t/t_{\rm MS}\) for stars of different mass and metallicities. Solid lines: \(m=0.7\) (blue), \(m=0.9\) (orange), and \(m=1.1\) (green), all calculated at \(Z=0.1\,Z_{\odot}\). Dashed lines: same for \(Z=Z_{\odot}\). The point on each curve denotes the time when a hypothetical planet, formed in the center of the HZ at the ZAMS stage, becomes uninhabitable. Bottom panel: HZ lifetime \(t_{\rm HZ}\) (in Gyr) as a function of stellar mass and metallicity. Solid blue line: \(Z=0.1\,Z_{\odot}\). Dashed red line: \(Z=Z_{\odot}\). The dotted lines show the corresponding main-sequence timescale.
(Rushby et al., 2013).3 Lower-mass stars, while characterized by longer main-sequence lifetimes, have proportionally smaller \(t_{\rm HZ}/t_{\rm MS}\) ratios than higher-mass stars, the result of their lower rates of stellar luminosity evolution (Rushby et al., 2013). At fixed mass, main-sequence lifetimes are shorter and the total luminosity change over the main sequence is larger for lower-metallicity stars (Truitt et al., 2015). In the bottom panel of Figure 6 we compare the "HZ lifetime" with the main-sequence timescale as a function of stellar mass and metallicity. Over the plotted mass interval, the ratio \(t_{\rm HZ}/t_{\rm MS}\) ranges from 0.75 to 0.90 at solar metallicities, and from 0.65 to 0.75 at \(0.1\,Z_{\odot}\). In absolute value terms, we find HZ lifetimes that are longer than the age of the Galaxy for \(m<0.9\) (\(Z=Z_{\odot}\)) and \(m<0.75\) (\(Z=0.1\,Z_{\odot}\)).
Footnote 3: Note that the habitable lifetime in fact changes with star-planet separation, gradually increasing between the inner and outer edges of the HZ, and that the full distribution of \(t_{\rm HZ}\) with distance should be taken into account in more advanced modeling (e.g. Waltham, 2017).
Below, we shall assume that the evolution of main-sequence stars - rather than biogeochemical processes - is the only factor controlling the collapse of the HZ zone and the reduction of the biosphere lifespan. As the HZ expands outward due to the effects of stellar evolution, any planets that were initially beyond the boundaries of the HZ - so-called "cold start" icy planets - could potentially become habitable at later times as the HZ reaches them. We shall neglect this possibility below as the delayed habitability of such globally glaciated exoplanets remains dubious (e.g., Yang et al., 2017).
## 3 Exoplanets Around K Dwarfs
We can now cast our model for the time evolution of the exoplanet population in the solar neighborhood into a set of rate equations that can then be integrated as a function of time. Let us focus on K-class dwarfs with masses between \(m_{1}=0.45\) and \(m_{2}=0.80\) and main-sequence and HZ lifetimes that are typically longer than the age of the Galaxy at the metallicities of interest here. K stars may be better candidates in the search for biosignatures than G dwarfs, as they are more abundant, evolve less quickly on the main sequence, and provide their planets a stable HZ (e.g., Tuchow and Wright, 2020). They also offer a longer photochemical lifetime of methane in the presence of oxygen compared to G dwarfs and, being dimmer, provide a better planet-star contrast ratio in direct-imaging observations (Arney, 2019).4 For the assumed Kroupa IMF, the fraction of stars that are classified as K-type is
Footnote 4: The photochemical lifetime of methane in oxygenated atmospheres is even longer around M dwarfs (Segura et al., 2005), but M dwarf planet habitability may be hindered by extreme stellar activity and a prolonged superluminous pre-main-sequence phase.
\[\mathcal{F}(m_{1},m_{2})\equiv\int_{0.45}^{0.80}\phi(m)dm=0.14. \tag{15}\]
Note that if the lower bound on habitability corresponded to the spectral type K5 instead (\(m=0.65\,\,M_{\odot}\)), the factor \(\mathcal{F}\) in Equation (15) would decrease by a factor of 3.5, while the inclusion of M dwarfs below 0.45 \(M_{\odot}\) would boost the same integral by a factor of 6. Of course, for the occurrence rate of TTPs what matters is the product \(\eta_{\oplus}\times\mathcal{F}\), so one could group some of the uncertainties related to the lower habitability bound into the factor \(\eta_{\oplus}\).
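The mass integrals quoted in this paragraph are easy to reproduce numerically with the IMF of Equation (4); the sketch below (not from the paper) recovers \(\mathcal{F}=0.14\) as well as the factors of 3.5 and 6 mentioned above:

```python
from scipy.integrate import quad

def kroupa_imf(m):
    """Kroupa (2001) IMF, eq. (4); scalar m in solar masses."""
    return 0.2530 * m**-1.3 if m < 0.5 else 0.1265 * m**-2.3

F_K, _ = quad(kroupa_imf, 0.45, 0.80, points=[0.5])
F_K5, _ = quad(kroupa_imf, 0.65, 0.80)                  # lower bound at spectral type K5
F_KM, _ = quad(kroupa_imf, 0.08, 0.80, points=[0.5])    # including M dwarfs down to 0.08 Msun

print(F_K)          # ~0.14, eq. (15)
print(F_K / F_K5)   # ~3.5
print(F_KM / F_K)   # ~6
```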
### Rate Equations
We can now track the evolution of the mean number of K-type stars in the solar neighborhood, \(N_{K}(t)\), and the abundance of giant planets and TTPs around them, \(N_{\rm GP}(t)\) and \(N_{\oplus}(t)\), by numerically integrating over time the corresponding rates
\[\dot{N}_{K}(t) =N_{\star}(t_{0})\psi(t)\mathcal{F}, \tag{16}\] \[\dot{N}_{\rm GP}(t) =f[Z(t)]\dot{N}_{K}(t),\] \[\dot{N}_{\oplus}(t) =\eta_{\oplus}\theta[Z(t)-Z_{c}]\dot{N}_{K}(t).\]
Here, in the equation for \(\dot{N}_{\oplus}(t)\), we have assumed that exoplanets enter the HZ immediately after formation, and interpreted the parameter \(\eta_{\oplus}\) as the occurrence rate of TTPs around K-class stars on the ZAMS. 5 The equations above must be supplemented by Equation (5) for the evolution of the stellar metallicity \(Z(t)\).
Footnote 5: For more massive FG-type stars with main-sequence and HZ lifetimes that are shorter than the age of the Galaxy, the rate equations (16) take the more complicated form
\[\dot{N}_{\rm FG}(t) =N_{\star}(t_{0})\phi(m)[\psi(t)-\psi(t-t_{\rm MS})], \tag{17}\] \[\dot{N}_{\rm GP}(t) =f[Z(t)]\,\dot{N}_{FG}(t),\] \[\dot{N}_{\oplus}(t) =\eta_{\oplus}N_{\star}(t_{0})\phi(m)\,\theta[Z(t)-Z_{c}]\psi(t)\] \[-\eta_{\oplus}N_{\star}(t_{0})\phi(m)\,\theta[Z(t-t_{\rm HZ})-Z_{c }]\psi(t-t_{\rm HZ}).\]
The second term in the last equation approximately corrects the rate of newly formed TTPs for the amount that has "migrated" out of the HZ over the course of the star's main-sequence lifetime. Note that, because of the mass dependence of \(t_{\rm HZ}\leq t_{\rm MS}\), the rate equations must now be integrated in bins of stellar mass.
Figure 7 shows the results of integrating the rate equations (16) for \(\eta_{\rm GP}=0.16\) and \(\eta_{\oplus}=0.24\). In our "solar vicinity" sphere of 100 pc radius, there are currently about \(11,000\,(\eta_{\oplus}/0.24)\) TTPs around K dwarfs. They have a median age of 6.2 Gyr and 77% of them are older than the solar system. The minimum metallicity threshold for Earth-size planet formation of Johnson and Li (2012) does not significantly affect these numbers as the vast majority of star formation has taken place at \(Z>0.1\,Z_{\odot}\). By contrast, the \(f(Z)\) modulation of the giant planet occurrence rate results in later typical formation times and shifts their median age to 3.9 Gyr, with terrestrial planets vastly outnumbering giant planets at early times.
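The numbers quoted above can be reproduced with a simple cumulative sum over the four-Gaussian SFH; the sketch below (illustrative, not the authors' code) assumes a disk age of 13 Gyr and ignores the \(Z_{c}\) threshold, since almost all local star formation occurs at \(Z>0.1\,Z_{\odot}\):

```python
import numpy as np

N_star, F_K, eta_earth = 331_312, 0.14, 0.24
t0 = 13.0                                   # assumed disk age (Gyr)
t = np.linspace(0.0, t0, 2001)
dt = t[1] - t[0]
age = t0 - t
amp, cen, wid = [1.0, 1.36, 0.71, 0.86], [10.0, 5.5, 2.0, 0.7], [1.2, 0.9, 0.35, 0.3]
psi = sum(a * np.exp(-0.5 * ((age - c) / w) ** 2) for a, c, w in zip(amp, cen, wid))
psi /= psi.sum() * dt

Ndot_earth = eta_earth * N_star * F_K * psi          # eq. (16), Z_c threshold neglected
N_earth = np.cumsum(Ndot_earth) * dt

print(N_earth[-1])                                          # ~11,000 TTPs around K dwarfs
print(age[np.searchsorted(N_earth, 0.5 * N_earth[-1])])     # median TTP age, ~6 Gyr
print(N_earth[np.searchsorted(t, t0 - 4.6)] / N_earth[-1])  # fraction older than the Sun, ~0.77
```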
The early formation of TTPs in the solar vicinity occurred largely in two major episodes of enhanced star formation, starting with the emergence of the thick disk about 11 Gyr ago and followed by a second event that lasted 3.5 Gyr, peaked 5.5 Gyr ago, and involved more than 40% of the total stellar counts today. The five-planet system Kepler-444, orbiting a metal-poor \(11.2\pm 1.0\) Gyr old star, shows that thick-disk stars were indeed the hosts of some of the oldest terrestrial planets (Campante et al., 2015). The duration and size of the second major star formation surge suggest an external agent, perhaps a merger with a gas-rich satellite galaxy (Mor et al., 2019) or the first passage of the Sagittarius dwarf galaxy through the Milky Way's disk (Ruiz-Lara et al., 2020). Consistent with the Principle of Mediocrity, the solar system formed near the peak of this second episode. Over the last 4 Gyr the abundance of TTPs around K stars has increased by \(+24\%\), while that of giant planets has actually doubled.
## 4 Simple Life in the Solar Neighborhood
In the coming decades, advanced space- and ground-based observatories will offer an unprecedented opportunity to probe the atmospheres and surfaces of TTPs in search of signs of life or habitability. The discovery of extraterrestrial life would be a landmark moment in the history of science, with implications that would reverberate throughout all of society. Much of the early history of life on Earth has been dominated by methanogenic microorganisms, and methane in anoxic, Archean-like atmospheres is one of the most promising exoplanet spectroscopic biosignatures (Schwieterman et al., 2018; Thompson et al., 2022). In spite of the recent swift developments in astrophysics and planetary sciences described in the previous sections, however, the probability of abiogenesis on Earth-like planets is currently unknown, as unknown are the characteristic timescales over which biochemical complexity actually evolves. The rapid emergence of life in the history of Earth has been used to argue for a high abiogenesis rate (e.g., Lineweaver and Davis, 2002; Kipping, 2020; Whitmire, 2022). A Bayesian framework may naturally account for the anthropic bias that, if the timescale for intelligence evolution is long, life's early start may simply be a prerequisite to our existence, rather than evidence for simple primitive life being common on Earth-like worlds (Spiegel and Turner, 2012). By means of this methodology, the recent analysis by Kipping (2020) shows that a fast abiogenesis scenario is at least three times more likely than a slow one.
In preparation for the next generation of space- and ground-based instruments, it seems interesting to conjecture on today's prevalence and time-varying incidence of microbial life-harboring worlds in the solar neighborhood under the hypothesis of a rapid abiogenesis pro
Figure 7: Exoplanet formation history in the solar neighborhood. Top panel: K-class dwarf (solid curve), giant planet (dashed curve), and TTP (dotted-dashed curve) formation rates in a “solar vicinity” sphere of 100 pc radius, as a function of age \(t_{0}-t\). These estimates assume \(\eta_{\rm GP}=0.16\) and \(\eta_{\oplus}=0.24\) (see text for details). Bottom panel: cumulative number counts resulting from the integration of Equation (16). Note how the solar system is younger than 77% of all TTPs, and has an age that is comparable to that of the median giant planet.
cess. We follow previous work (Spiegel and Turner, 2012; Scharf and Cronin, 2016; Chen and Kipping, 2018; Kipping, 2020) and describe abiogenesis as a stochastic Poisson process defined by a (uniform) rate parameter \(\lambda_{\ell}\) - the mean number of abiogenesis events occurring per Earth-like planet in a fixed time span, which we set equal to 1 Gyr. The probability of achieving at least one successful abiogenesis event over a time interval \(t-t^{\prime}\) since a given planet first became habitable at \(t^{\prime}\) is then given by
\[P_{\ell}(t-t^{\prime})=1-e^{-\lambda_{\ell}\,(t-t^{\prime})}. \tag{18}\]
The time-dependent probability that life emerges on a TTP can then be expressed as the product \(P_{\ell}(t-t^{\prime})P(\ell|\text{HZ})\), i.e. we distinguish here between the population of planets that are "temperate" (i.e. Earth-size rocky planets in the continuously HZ) and the subset that are actually "habitable," i.e. "Earth-like" in a more detailed biochemical and geophysical sense, and where simple life will eventually arise. The number of these "Earth analogs" that formed (and became habitable) between time \(t^{\prime}\) and \(t^{\prime}+dt^{\prime}\) and where life emerged by time \(t\) is \(dN_{\ell}(t)=P(\ell|\text{HZ})P_{\ell}(t-t^{\prime})\dot{N}_{\oplus}(t^{ \prime})dt^{\prime}\). Assuming a probability \(P(\ell|\text{HZ})\) that is independent of time, we can then write the mean number of life-hosting worlds present at time \(t\) as the convolution integral
\[N_{\ell}(t)=P(\ell|\text{HZ})\int_{0}^{t}\dot{N}_{\oplus}(t^{\prime})\,\left[1 -e^{-\lambda_{\ell}(t-t^{\prime})}\right]dt^{\prime}. \tag{19}\]
Their formation rate must be given by the time derivative of Equation (19), yielding
\[\dot{N}_{\ell}(t)=P(\ell|\text{HZ})\int_{0}^{t}\dot{N}_{\oplus}(t^{\prime})\, \lambda_{\ell}\,e^{-\lambda_{\ell}(t-t^{\prime})}dt^{\prime}. \tag{20}\]
Note that, in the formalism of Drake's Equation (1), \(P(\ell|\text{HZ})=f_{\ell}\) in the limit of fast abiogenesis.
The oldest, generally accepted evidence for life on Earth comes from observations of microbial sulfate reduction in the 3.48 Gyr Dresser Formation (e.g., Lepot, 2020). The Earth formed \(4.54\pm 0.05\) Gyr ago (Dalrymple, 2001), and mineralogical evidence from detrital zircons indicates that liquid oceans must have been present on Earth's surface \(4.404\pm 0.008\) Gyr ago (Wilde et al., 2001). The maximum plausible value for the time interval over which at least one successful abiogenesis event occurred on Earth is therefore \(\simeq 0.9-1\) Gyr. This is conservatively long compared to the maximum-likelihood timescale for life to first appear after conditions became habitable, \(\sim 190\) Myr, inferred by Kipping (2020), and much larger than the uncertainty as to when Earth became suitable for life, justifying our approximation of starting the "habitability clock" at formation.
We have integrated Equations (19) and (20) above assuming \(\lambda_{\ell}=1\,\text{Gyr}^{-1}\), and plotted in Figure 8 the resulting differential and cumulative number counts, \(\dot{N}_{\ell}\) and \(N_{\ell}\). For illustration, we have assumed in the figure a conversion factor \(P(\ell|\text{HZ})=1\) between TTPs and life-hosting Earths. Naturally, life would be abundant (again, we are concerned here with the appearance of the earliest life forms, not of intelligent life) in the solar vicinity if abiogenesis was fast and early Earth-like conditions existed and were relatively common on other worlds for 1 Gyr or more. The closest life-harboring exoplanet would be only 20 pc away if simple life arose as soon as it did on Earth in just 1% of TTPs around K stars. Conversely, Earth would be the only life-hosting planet in the solar neighborhood if abiogenesis was successful in about 1-in-10,000 TTPs. If microbial life is abundant, it is also old, as it would have emerged more than 8 Gyr ago in about one-third of all life-bearing planets today. Note how the convolution integral in Equation (20) tends to smooth out the oscillations in \(\dot{N}_{\ell}\) compared to the star and planet formation rates depicted in Figure 7, and that the assumed abiogenesis characteristic timescale of \(1/\lambda_{\ell}=1\,\text{Gyr}\) shifts the median age of the \(\sim 10,000\,P(\ell|\text{HZ})\) extrasolar biospheres predicted in the solar neighborhood to 5.7 Gyr.
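The convolution integral of Equation (19) can be evaluated on the same time grid; the sketch below (illustrative; disk age and SFH as in the previous sketches) reproduces the \(\sim 10,000\,P(\ell|\text{HZ})\) present-day biospheres quoted above for \(\lambda_{\ell}=1\,\text{Gyr}^{-1}\):

```python
import numpy as np

lam_l, P_lHZ = 1.0, 1.0                     # abiogenesis rate (Gyr^-1) and P(ell|HZ)
t0 = 13.0
t = np.linspace(0.0, t0, 2001)
dt = t[1] - t[0]
age = t0 - t
amp, cen, wid = [1.0, 1.36, 0.71, 0.86], [10.0, 5.5, 2.0, 0.7], [1.2, 0.9, 0.35, 0.3]
psi = sum(a * np.exp(-0.5 * ((age - c) / w) ** 2) for a, c, w in zip(amp, cen, wid))
psi /= psi.sum() * dt
Ndot_earth = 0.24 * 0.14 * 331_312 * psi    # TTP formation rate, as in the eq. (16) sketch

# Eq. (19): direct evaluation of the convolution on the time grid
N_ell = np.empty_like(t)
for i, ti in enumerate(t):
    N_ell[i] = P_lHZ * np.sum(Ndot_earth[: i + 1] * (1.0 - np.exp(-lam_l * (ti - t[: i + 1])))) * dt

print(N_ell[-1])                            # ~10,000 life-bearing worlds today
```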
### The Emergence of Oxygenic Atmospheres
A critical issue in the search for extraterrestrial life is whether Earth-like conditions lead to ecosystems
Figure 8: Abiogenesis in the solar neighborhood. Long-dashed green curve: formation rate, \(\dot{N}_{\ell}\), of life-bearing exoplanets as a function of age \(t_{0}-t\). The calculation (Equation (20)) assumes a rapid abiogenesis process with rate parameter \(\lambda_{\ell}=1\,\text{Gyr}^{-1}\), and simple life eventually arising in all TTPs, \(P(\ell|\text{HZ})=1\). Dot-dashed green curve: cumulative number counts of life-hosting exoplanets, \(N_{\ell}\), resulting from the integration of Eq. (19). Blue curves: same for life-hosting planets undergoing a “Neoproterozoic oxygenation event” (NOE) with rate parameter \(\lambda_{O}^{-1}=3.9\,\text{Gyr}\).
that progressively oxygenate their planet atmospheres roughly following Earth's oxygenation history. The first oxygenation of Earth's atmosphere due to the emergence of photosynthesizing cyanobacteria happened about halfway through Earth's history (Luo et al., 2016), but O\({}_{2}\) rose irreversibly to near present atmospheric levels only between about 800 and 550 Myr ago, during the Neoproterozoic oxygenation event (NOE) (Och and Shields-Zhou, 2012; Lyons et al., 2021), an event accompanied by major biological upgrades. While the precise timing of the NOE remains a subject of debate, our findings inevitably invite the question of whether and how often, given a habitable environment and following a successful abiogenesis event, the conditions for the beginnings of complex life may have arisen on exoplanets in the solar neighborhood. We can then consider a second stochastic process, labeled "\(O\)" for "oxygenation" and defined by a rate parameter \(\lambda_{O}\), which can proceed only once abiogenesis ("\(\ell\)") is successful. The inverse of \(\lambda_{O}\) is the characteristic timescale it takes for the earliest forms of life to evolve and produce an NOE-like juncture. Consider again an Earth-like planet that formed at time \(t^{\prime}\). The joint probability density that abiogenesis was first successful at time \(t^{\prime\prime}\) and was followed by an oxygenation event at time \(t\) is then given by
\[p_{O}(t^{\prime\prime}-t^{\prime},t-t^{\prime\prime})=\lambda_{\ell}\lambda_{ O}\,e^{-\lambda_{\ell}(t^{\prime\prime}-t^{\prime})-\lambda_{O}(t-t^{\prime \prime})}. \tag{21}\]
The formation rate of simple life-hosting planets undergoing an NOE can then be written as
\[\dot{N}_{O}(t)=P(\ell|\mathrm{HZ})\int_{0}^{t}dt^{\prime\prime}\int_{0}^{t^{ \prime\prime}}dt^{\prime}\dot{N}_{\oplus}(t^{\prime})p_{O}. \tag{22}\]
We have integrated this equation assuming \(\lambda_{O}^{-1}=3.9\,\mathrm{Gyr}\) and plotted the results in Figure 8. Because of the considerable delay between planet formation and NOE, the \(\sim 7500\,P(\ell|\mathrm{HZ})\) worlds with strong oxygenic atmospheric biosignatures predicted to exist in the solar neighborhood have a formation rate that peaked 5 Gyr ago and a median age comparable to that of the solar system.
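Equation (22) is just two exponential convolutions applied in sequence, which makes its numerical evaluation straightforward; the sketch below (illustrative, same assumed SFH and disk age as before) recovers the \(\sim 7500\,P(\ell|\mathrm{HZ})\) worlds quoted above:

```python
import numpy as np

lam_l, lam_O = 1.0, 1.0 / 3.9               # Gyr^-1, as assumed in the text
t0 = 13.0
t = np.linspace(0.0, t0, 2001)
dt = t[1] - t[0]
age = t0 - t
amp, cen, wid = [1.0, 1.36, 0.71, 0.86], [10.0, 5.5, 2.0, 0.7], [1.2, 0.9, 0.35, 0.3]
psi = sum(a * np.exp(-0.5 * ((age - c) / w) ** 2) for a, c, w in zip(amp, cen, wid))
psi /= psi.sum() * dt
Ndot_earth = 0.24 * 0.14 * 331_312 * psi    # eq. (16), with P(ell|HZ) = 1

def convolve_rate(rate, lam):
    """Convolve a formation rate with the waiting-time kernel lam * exp(-lam * tau)."""
    out = np.empty_like(rate)
    for i, ti in enumerate(t):
        out[i] = np.sum(rate[: i + 1] * lam * np.exp(-lam * (ti - t[: i + 1]))) * dt
    return out

Ndot_l = convolve_rate(Ndot_earth, lam_l)   # eq. (20): rate of life-hosting planets
Ndot_O = convolve_rate(Ndot_l, lam_O)       # eq. (22): the double integral as nested convolutions
print(Ndot_O.sum() * dt)                    # ~7,500 worlds with an NOE-like event today
```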
## 5 Summary and Discussion
The search for habitable exoplanets and extraterrestrial life beyond Earth is one of the greatest scientific endeavors of all time. The high frequency of terrestrial planets in the HZs around dwarf stars implied by Kepler observations makes it timely to develop and explore new tools - beyond the probabilistic Drake equation - for statistical exoplanet population and astrobiology studies that may help direct future mission designs and observational efforts. In particular, one would like to understand - given a model of habitability and biosignature genesis - the formation history of simple life-harboring environments in the local volume, and identify how potential atmospheric biosignature yields change as a function of stellar properties like age, mass, and metallicity.
The approach we have described in this work is based on a system of simple ordinary differential equations, rewritten below for the convenience of the reader
\[\dot{N}_{K}(t) =N_{\star}(t_{0})\psi(t)\mathcal{F},\] \[\dot{Z}(t) =\psi(t)/g(Z),\] \[\dot{N}_{\mathrm{GP}}(t) =f(Z)\dot{N}_{K},\] \[\dot{N}_{\oplus}(t) =\eta_{\oplus}\theta(Z-Z_{c})\dot{N}_{K}, \tag{23}\] \[\dot{N}_{\ell}(t) =P(\ell|\mathrm{HZ})\int_{0}^{t}\dot{N}_{\oplus}(t^{\prime})\, \lambda_{\ell}\,e^{-\lambda_{\ell}(t-t^{\prime})}dt^{\prime},\] \[\dot{N}_{O}(t) =P(\ell|\mathrm{HZ})\int_{0}^{t}dt^{\prime\prime}\int_{0}^{t^{ \prime\prime}}dt^{\prime}\dot{N}_{\oplus}(t^{\prime})p_{O},\]
which track the evolution of the mean number \(N_{K}\) of K-type stars in the solar neighborhood, their metallicity \(Z=Z(t)\), and the abundances \(N_{\mathrm{GP}}\), \(N_{\oplus}\), \(N_{\ell}\), and \(N_{O}\) of giant planets, TTPs, life-harboring worlds, and planets with oxygen-rich atmospheres, respectively. These rate equations provide a time-dependent mapping between star formation, metal enrichment, and the occurrence of potentially habitable exoplanets over the chemo-population history of the solar neighborhood, and present a useful basis for testing hypotheses about Earth-like environments and life beyond the solar system. The new framework can be easily adapted to incorporate the hierarchy of astrophysical and biological processes that regulate the age-dependent inventory of any key planet population.
We have numerically integrated the equations above adopting the recent tally of nearby stars (\(N_{\star}\)) and white dwarfs from Gaia EDR3 (Gaia Collaboration et al., 2021), the episodic SFH (\(\psi\)) of the solar neighborhood as reconstructed by Alzate et al. (2021) and Ruiz-Lara et al. (2020), the MDF (\(g\)) from the GALAH+TGAS spectroscopic survey of dwarf stars in the solar galactic zone (Buder et al., 2019), and assuming an age-metallicity relation. In our model, the function \(f(Z)\) describes the metallicity modulation of the occurrence rate of giant gaseous planets (with integrated frequency today \(\eta_{\mathrm{GP}}=0.16\), Zhu and Dong, 2021), and we take \(\eta_{\oplus}=0.24\) for the fiducial occurrence rate of TTPs (Kopparapu et al., 2018) around K-class stars on the ZAMS. There is a minimum metallicity threshold for Earth-size planet formation of \(Z_{c}=0.1\,Z_{\odot}\)(Johnson and Li, 2012). Following earlier work (e.g. Spiegel and Turner, 2012), we describe abiogenesis as a stochastic Poisson process defined by a (uniform) rate parameter \(\lambda_{\ell}\), and denote with
\(P(\ell|\mathrm{HZ})\) the (constant) probability that a seemingly potentially habitable planet in the HZ was at early times "Earth-like" in a more detailed biochemical and geophysical sense and eventually became inhabited by life. A second stochastic process, an oxygenation event defined by a rate parameter \(\lambda_{O}\), can proceed only once abiogenesis is successful.
Our main results can be summarized as follows.
1. The formation of exoplanets in the solar vicinity followed two major events of enhanced star formation, starting with the emergence of the thick disk about 11 Gyr ago and followed by a second event that peaked 5.5 Gyr ago, lasted 3.5 Gyr, and produced more than 40% of all stars today. The solar system formed in the second star formation surge, which was likely triggered by an external agent, perhaps a merger with a gas-rich satellite galaxy (Mor et al., 2019) or the first passage of the Sagittarius dwarf galaxy through the Milky Way's disk (Ruiz-Lara et al., 2020).
2. Within 100 pc from the Sun, there are as many as \(11,000\,(\eta_{\oplus}/0.24)\) TTPs around K-type stars. About 77% of all TTPs in the solar neighborhood are older than the solar system.
3. The metallicity modulation of the giant planet occurrence rate results in a later typical formation time, with TTPs vastly outnumbering giant planets at early times. Over the last 4 Gyr, the abundance of TTPs around K stars has increased by only \(+24\%\), while that of giant planets has doubled. The existence of a fiducial metallicity floor for the formation of terrestrial planets impacts only a small fraction of the census population in the solar neighborhood, as the vast majority of star formation has taken place at \(Z>0.1\,Z_{\odot}\).
4. The closest life-harboring Earth analog would be less than 20 pc away if microbial life arose as soon as it did on Earth in \(\gtrsim 1\%\) of the TTPs around K stars. Conversely, Earth would be the only life-hosting planet in the solar neighborhood if abiogenesis was successful in about 1-in-10,000 TTPs. If simple life is abundant (fast abiogenesis with characteristic timescales \(1/\lambda_{\ell}=1\,\mathrm{Gyr}\)), it is also old, as it would have emerged more than 8 Gyr ago in about one-third of all life-bearing planets today.
We finally note that errors in the number counts of exoplanets are likely dominated by planet occurrence rates (\(\eta_{\mathrm{GP}}\) and \(\eta_{\oplus}\)), which are uncertain at the \(\sim 0.1-0.5\) dex level due to incompleteness in long-period candidates. Comparable systematic errors may be associated with uncertainties in the IMF, SFH, and metallicity effects. Needless to say, in the case of life-hosting worlds, the precise values of \(P(\ell|\mathrm{HZ})\), \(\lambda_{\ell}\), and \(\lambda_{O}\) are currently unknown and remain a matter of speculation. Our work says nothing about how difficult or easy abiogenesis really is, a question that must ultimately be answered empirically. Still, given a model of habitability and biosignature genesis, our approach may provide a blueprint for assessing the prevalence of exoplanets and microbial life-harboring worlds over the chemo-population history of the solar neighborhood, gaining a sense of the atmospheric biosignature yields among potential target stars of different masses, ages, and metallicities, and guiding future observational efforts and experiments.
## Acknowledgements
Support for this work was provided by NASA through grant 80NSSC21K027. We acknowledge useful discussions on this project with J. Alzate, M. Bisazza, F. Haardt, D. Lin, and R. Murray-Clay, and the hospitality of New York University Abu Dhabi during the completion of this study. The author would also like to thank the referee for a number of constructive comments and suggestions that greatly improved the paper.
|
2308.16636 | Effects of the $α$-cluster structure and the intrinsic momentum
component of nuclei on the longitudinal asymmetry in relativistic heavy-ion
collisions | The longitudinal asymmetry in relativistic heavy ion collisions arises from
the fluctuation in the number of nucleons involved. This asymmetry causes a
rapidity shift in the center of mass of the participating zone. Both the
rapidity shift and the longitudinal asymmetry have been found to be significant
at the top CERN Large Hadron Collider (LHC) energy for collisions of identical
nuclei, and the longitudinal asymmetry is important for reconstructing the
colliding vertex and correcting the rapidity shift. However, much discussion of
the longitudinal asymmetry has treated the initial condition as a nonzero
momentum contributed only by the number of participants, i.e., the asymmetry
depends only on the number of participating nucleons. So we naturally raise a
physical problem, can other initial conditions, such as two typical initial
conditions for nuclei, geometric configuration, and momentum distribution,
provide effects on the longitudinal asymmetry? Therefore, in this work we
consider other effects on the longitudinal asymmetry other than the fluctuation
in the number of participants, e.g., the {\alpha} clustering structure as well
as the intrinsic momentum distribution in the target and projectile nuclei for
the collisions in the framework of a multiphase transport (AMPT) model. By
introducing systems with different {\alpha}-clustering structure and intrinsic
momentum distribution, we calculated the ratio of the rapidity distributions of
different systems and extracted expansion coefficients to analyze the
difference contributed by these factors. ... | Ru-XIn Cao, Song Zhang, Yu-Gang Ma | 2023-08-31T11:11:37Z | http://arxiv.org/abs/2308.16636v2 | Effects of the \(\alpha\)-cluster structure and the intrinsic momentum component of nuclei on the longitudinal asymmetry in relativistic heavy-ion collisions
###### Abstract
The longitudinal asymmetry in relativistic heavy-ion collisions arises from the fluctuation in the number of participating nucleons. This asymmetry causes a rapidity shift in the center of mass of the participant zone. Both the rapidity shift and the longitudinal asymmetry have been found to be significant at the top LHC energy for collisions of identical nuclei. However, much discussion of the longitudinal asymmetry has treated the initial condition as a non-zero momentum contributed only by the number of participants, i.e., the asymmetry depends only on the number of participating nucleons. In this work, we consider effects on the longitudinal asymmetry other than the fluctuation in the number of participants, namely the intrinsic momentum distribution and the \(\alpha\)-clustering structure of the target and projectile nuclei, in the framework of a multiphase transport (AMPT) model. By introducing systems with different \(\alpha\)-clustering structures and intrinsic momentum distributions, we calculate the ratio of the rapidity distributions of different systems and extract expansion coefficients to analyze the differences contributed by these factors. We also investigate the possible effect of a non-Gaussian shape of the rapidity distribution. These results may help to constrain the initial conditions in ultra-relativistic heavy-ion collisions, and suggest a quantitative correction to final-state measurements as well as a possible correlation between the initial conditions and final-state observables at LHC and RHIC energies.
## I Introduction
For decades, relativistic heavy-ion collision experiments have been an important approach to studying the properties of the strong interaction as well as the quark-gluon plasma (QGP), which is believed to have existed in the early universe [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. In heavy-ion collisions, the nucleons of the colliding nuclei are usually divided into two groups: participants, which take part in the collision, and spectators, which simply pass through the collision zone without interacting. For a collision of non-identical nuclei, the number of participating nucleons from each nucleus is naturally different. For a collision of identical nuclei, however, the number of participants fluctuates event by event, so the participant numbers from the two nuclei can still be unequal. If each nucleon is assumed to carry a fixed beam momentum, this inequality leads to a non-zero net momentum in the nucleon-nucleon center-of-mass frame. The center of mass of the participants is therefore shifted with respect to the center of mass of the collider system, which in turn produces a rapidity shift in the final state. This effect is usually called the longitudinal asymmetry [15; 16]. The longitudinal asymmetry reflects the nucleon fluctuations in the initial state and may manifest itself in several phenomena. For instance, the \(\Lambda\) polarization was investigated in Ref. [17], which applied the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model [18; 19; 20; 21] and obtained the global spin polarization of the \(\Lambda\) hyperon for \({}^{108}\)Ag + \({}^{108}\)Ag and \({}^{197}\)Au + \({}^{197}\)Au collisions at \(\sqrt{s_{NN}}\)=2.42-62.4 GeV. The results were compared with measurements from the HADES [22] and STAR [23] Collaborations and agreed well at lower energies. The authors concluded that the global polarization results from the global angular momentum of the system, so the longitudinal asymmetry, which involves the initial momentum and spatial asymmetries, may also be correlated with polarization phenomena.
Previous studies of the longitudinal asymmetry have usually focused on the effect of participant-number fluctuations between target and projectile. A natural question therefore arises: can other initial-state effects provide an additional significant contribution to the longitudinal asymmetry? Motivated by this question, we consider two important initial-state effects, the \(\alpha\)-clustering structure and the short-range correlation, which may enhance the longitudinal asymmetry.
The \(\alpha\)-clustered nucleus, proposed by Gamow [24], can be regarded as a special case of nuclear structure. In this picture, stable nuclei, especially \(4N\) nuclei, are likely to contain small groups (such as \(\alpha\) particles) made up of two protons and two neutrons. Inside the nucleus these groups are arranged in different shapes, such as a triangle in \({}^{12}\)C or a tetrahedron in \({}^{16}\)O. The clustering effect is important for the nuclear equation of state, nucleosynthesis, and many other problems [25; 26; 27; 28]. Various observables have therefore been proposed to study nuclear clustering in heavy-ion reactions, such as collective flow [29; 30; 31] and multiplicity correlation [32; 33]. A recent review can be found in [34; 35]. We therefore expect that such geometric configurations may affect the fluctuation of the participant numbers in the initial state and thus contribute to the longitudinal asymmetry.
Another effect taken into account is the short-range correlation (SRC), which partly arises from the nucleon-nucleon short-range central interaction [36; 37]. The intrinsic momentum distribution of nucleons is a direct reflection of the SRC; it gives the probability of finding a nucleon with a certain momentum inside a nucleus. At high values of nucleon momentum and removal energy, the nucleon spectral function can be written as a convolution integral involving the momentum distributions describing the relative and center-of-mass motion of a correlated nucleon-nucleon pair embedded in the medium [37]. The high-momentum tail (HMT), a direct consequence of the SRC, appears in the nucleon momentum distribution, and studies show that it is mainly provided by proton-neutron pairs [37; 38]. In Ref. [39] the related phenomenon was discussed in the Extended Quantum Molecular Dynamics (EQMD) model, where the effects on the emission-time distribution, momentum spectrum, and momentum correlation function of two protons emitted in the \({}^{12}\)C-\({}^{11}\)B reaction were investigated, demonstrating the importance of the SRC. The intrinsic momentum distribution of the nucleons may also affect the shift of the initial center of mass and hence the longitudinal asymmetry.
Within the AMPT framework, we simulate \({}^{12}\)C + \({}^{12}\)C collisions with and without \(\alpha\)-clustering at center-of-mass energies \(\sqrt{s_{NN}}=6.37\) TeV and 200 GeV, \({}^{12}\)C + \({}^{12}\)C collisions with and without an intrinsic momentum distribution at 200 GeV, and \({}^{197}\)Au + \({}^{197}\)Au collisions with the Woods-Saxon configuration at 200 GeV. For the same \(\sqrt{s_{NN}}\) and configuration (such as the default Woods-Saxon), the comparison between different systems, for example C + C and Au + Au, reveals the system-size dependence of the longitudinal asymmetry. For the same configuration, the comparison between 200 GeV and 6.37 TeV in C + C collisions shows the energy dependence of the longitudinal asymmetry. Similarly, at the same \(\sqrt{s_{NN}}\), the comparison between the Woods-Saxon and \(\alpha\)-clustered systems reveals the effect of the geometric configuration, while the comparison between the Free-Fermi-Gas and High-Momentum-Tail systems reveals the effect of the intrinsic momentum distribution; the High-Momentum-Tail case in particular shows how the short-range correlation in nucleon pairs changes the longitudinal asymmetry.
The paper is organized as follows. In Sec. II, we briefly introduce the models used in our simulation, namely the AMPT model, the \(\alpha\)-cluster structure, and the HMT effect, and then describe the methods used to calculate the longitudinal-asymmetry parameters, including the corrections for the \(\alpha\)-cluster and HMT effects. In Sec. III, we use AMPT to simulate C + C and Au + Au collisions with different initial conditions, extract the longitudinal-asymmetry parameters and expansion coefficients, and compare them across the various systems. In Sec. IV, we explain the effect of the initial conditions on the longitudinal asymmetry, which provides insight and guidance on how to constrain the collision conditions and connect final-state observables with different system effects in future experimental measurements. Finally, Sec. V gives the conclusions and an outlook.
## II Models and methods
### Introduction to AMPT
A multiphase transport (AMPT) model [40; 41; 42] simulates relativistic heavy-ion collisions in four stages. It has successfully described various phenomena at RHIC and LHC energies and has become a well-known event generator. AMPT has two versions: String Melting (SM) and Default. In the SM version, the Heavy Ion Jet Interaction Generator (HIJING) [43; 44] simulates the initial conditions; Zhang's Parton Cascade (ZPC) [45] then describes the interactions of the partons produced from all hadrons in HIJING except the spectators; a simple quark coalescence model describes the hadronization; and finally A Relativistic Transport (ART) model [46] simulates the hadron re-scattering. The Default version of AMPT only passes the mini-jet partons through the partonic scatterings in ZPC and uses Lund string fragmentation for hadronization.
The AMPT model [40; 42] can describe the \(p_{T}\) spectra and energy dependence of identified particles such as pions, kaons, \(\phi\), protons, and \(\Omega\) produced in heavy-ion collisions [41; 47; 48], as well as the collective flows and the temperature during the evolution [49; 50; 51; 52; 53]. Chiral- and magnetic-field-related anomalous phenomena can also be described by the AMPT model [54; 55; 56; 57; 58; 59]. More details of the model and of the parameter settings can be found in Refs. [40; 41; 42].
### \(\alpha\)-cluster structure
In recent decades, various theoretical models have been developed to study the \(\alpha\)-cluster structure, such as the Fermionic Molecular Dynamics (FMD) model [60; 61], the Antisymmetrized Molecular Dynamics (AMD) model [62; 63], and the Extended Quantum Molecular Dynamics (EQMD) model [64; 65; 66]. In our simulation, the initial nucleon distribution in the nuclei is configured in the HIJING model with either a Woods-Saxon distribution or an exotic nucleon distribution embedded to study the \(\alpha\)-clustered structure [26; 30]. The parameters for the triangle structure are inherited from the EQMD model [26], which extends the quantum molecular dynamics (QMD) model and gives reasonable \(\alpha\)-cluster configurations for 4N nuclei by taking the effective Pauli potential and dynamical wave packets into account. More details on the parameter settings can be found in Refs. [26; 30].
### High momentum component (HMT)
The high-momentum tail caused by the short-range correlation is also expected to contribute to the longitudinal asymmetry in heavy-ion collisions. By comparing model calculations with inclusive and exclusive experiments [37; 38; 67; 68], the momentum distribution can be described in two parts: \(n_{0}(k)\), the low-momentum part dominated by the single-particle features of the nucleon structure, and \(n_{1}(k)\), the high-momentum part dominated by the short-range properties of the nucleon structure. The momentum distribution can then be written as [68]:
\[\begin{cases}n(k)\approx n_{0}(k)=\dfrac{1}{4\pi A}\sum_{\alpha<\alpha_{F}}A_{\alpha}n_{\alpha}(k)&\text{for }k<\hat{k}\\ n(k)\approx n_{1}(k)=C^{A}n_{deut}(k)&\text{for }k>\hat{k}\end{cases}, \tag{1}\]
where the subscript \(F\) in \(\alpha_{F}\) denotes the Fermi level (Fermi momentum), and the other variables can be parameterized by fitting the momentum distributions of light nuclei [68]. Throughout this work, this distribution is compared with the Free-Fermi-Gas (FFG) distribution. More details on the parameterization can be found in Ref. [68]. We add this distribution to the initialization of the AMPT model. The default case is the Woods-Saxon distribution, which describes only the spatial configuration of the nucleons and assigns no intrinsic momentum. The FFG case refers to a free Fermi gas, in which all nucleon momenta lie below the Fermi momentum. In the HMT case, which is our focus, nucleon momenta can reach the high-momentum tail, corresponding to the \(n_{1}(k)\) component resulting from the SRC.
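To make the two-component structure of Eq. (1) concrete, the following sketch samples nucleon momenta from a simplified stand-in: a uniform Fermi sphere below the Fermi momentum and a \(1/k^{4}\) tail above it, with the tail fraction left as a free parameter. The functional forms, cutoff, and numerical values are illustrative placeholders and are not the parameterization of Ref. [68].

```python
import numpy as np

def sample_nucleon_momenta(n, k_fermi=0.27, k_max=1.0, tail_fraction=0.2, rng=None):
    """Sample |k| (GeV/c) from a schematic two-component distribution:
    a uniform Fermi sphere (n0-like) for k < k_fermi and a 1/k^4
    high-momentum tail (n1-like) for k_fermi < k < k_max.
    `tail_fraction` is the probability that a nucleon sits in the tail."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    in_tail = rng.random(n) < tail_fraction
    # Fermi sphere: P(<k) = (k/k_F)^3  ->  k = k_F * u^(1/3)
    k_low = k_fermi * u ** (1.0 / 3.0)
    # 1/k^4 tail on [k_F, k_max]: inverse of its cumulative distribution
    a, b = 1.0 / k_fermi**3, 1.0 / k_max**3
    k_high = (a - u * (a - b)) ** (-1.0 / 3.0)
    return np.where(in_tail, k_high, k_low)

k = sample_nucleon_momenta(100_000)
print(f"fraction above k_F: {np.mean(k > 0.27):.3f}")
```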
### Methodology
Generally, the longitudinal asymmetry can be characterized by a few parameters [15]. Here we use the rapidity shift \(y_{0}\), the participant asymmetry \(\alpha_{part}\), and the spectator asymmetry \(\alpha_{spec}\):
\[y_{0}=\frac{1}{2}\ln\frac{A}{B}, \tag{2}\]
\[\alpha_{part}=\frac{A-B}{A+B}, \tag{3}\]
\[\alpha_{spec}=\frac{(N-A)-(N-B)}{(N-A)+(N-B)}=\frac{B-A}{2N-(A+B)}, \tag{4}\]
where \(A\) and \(B\) are the numbers of participating nucleons from the two colliding nuclei (for identical nuclei they are equal on average but fluctuate event by event), and \(N\) is the total number of nucleons in each nucleus. Note that the approximation \(y_{0}\approx\frac{1}{2}\ln\frac{A}{B}\) is valid when \(m_{0}\ll p\), which holds at the LHC at the TeV scale (\(m_{0}/p<10^{-6}\)) and at RHIC at the GeV scale (\(m_{0}/p<10^{-4}\)). The rapidity shift can also be written as \(y_{0}=\frac{1}{2}\ln\frac{1+\alpha_{part}}{1-\alpha_{part}}\), and when \(\alpha_{part}\) is small enough, \(y_{0}\approx\alpha_{part}\).
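As a quick numerical illustration of Eqs. (2)-(4) and of the small-\(\alpha_{part}\) approximation, the sketch below evaluates the asymmetry parameters for a hypothetical C + C event with \(A=7\) and \(B=5\) participants; the numbers are invented for illustration only.

```python
import numpy as np

def asymmetry_parameters(A, B, N):
    """Rapidity shift and asymmetries for participant numbers A, B
    in a collision of identical nuclei with N nucleons each (Eqs. 2-4)."""
    y0 = 0.5 * np.log(A / B)
    alpha_part = (A - B) / (A + B)
    alpha_spec = (B - A) / (2 * N - (A + B))
    return y0, alpha_part, alpha_spec

# Hypothetical C + C event (N = 12): 7 participants from one nucleus, 5 from the other.
y0, a_part, a_spec = asymmetry_parameters(A=7, B=5, N=12)
print(f"y0 = {y0:.4f}, alpha_part = {a_part:.4f}, alpha_spec = {a_spec:.4f}")
# y0 ~ 0.168 and alpha_part ~ 0.167: y0 is close to alpha_part for small asymmetries.
```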
With these definitions we can classify events in terms of their \(y_{0}\), since each nucleus-nucleus collision has its own rapidity shift \(y_{0}\), determined only by the initial \(A\) and \(B\). Although \(A\) and \(B\) cannot be measured directly, experiments provide an indirect method: from the energy deposited in the zero-degree calorimeters on either side of the interaction vertex in collider experiments [69], \(\alpha_{spec}\) can be measured, and \(y_{0}\) can then be calculated through the transformed equation:
\[y_{0}=\frac{1}{2}\ln\frac{(A+B)(1+\alpha_{spec})-2N\alpha_{spec}}{(A+B)(1-\alpha_{spec})+2N\alpha_{spec}}. \tag{5}\]
Furthermore, to be consistent with the \(\alpha_{ZN}\) measurement in the ALICE experiment [69], the longitudinal asymmetry can also be defined in terms of the numbers of spectator neutrons, denoted \(A^{n}_{spec}\) and \(B^{n}_{spec}\), instead of \(\alpha_{spec}\):
\[\alpha_{ZN}=\frac{A^{n}_{spec}-B^{n}_{spec}}{A^{n}_{spec}+B^{n}_{spec}}. \tag{6}\]
In Fig. 1, for the different \(\alpha_{ZN}\) regions [69], we plot the \(y_{0}\) distributions in Au + Au (Woods-Saxon) and C + C (Woods-Saxon, FFG, and HMT) collisions at a center-of-mass energy of \(\sqrt{s_{NN}}=200\) GeV and in C + C (Woods-Saxon and Triangle) collisions at \(\sqrt{s_{NN}}=6.37\) TeV, obtained with the AMPT (String Melting) model; the distributions are consistent with other models [15; 16; 69].
For the \(y_{0}\) distributions shown in Fig. 1, note that if the intrinsic momentum distribution of the nucleons in the nuclei is taken into account, the definition of the rapidity shift \(y_{0}\) should be corrected as
\[\begin{split} y_{0}&=\frac{1}{2}\ln\frac{1+\alpha_{mom}}{1-\alpha_{mom}},\\ \alpha_{mom}&=\frac{|P^{A}_{z}|-|P^{B}_{z}|}{|P^{A}_{z}|+|P^{B}_{z}|},\end{split} \tag{7}\]
where \(P^{A}_{z}\) and \(P^{B}_{z}\) are the longitudinal momenta of the participants from the two colliding nuclei. Note that \(P^{A}_{z}\) and \(P^{B}_{z}\) are no longer equal to the beam momentum because of the intrinsic nucleon momentum distribution. For the FFG and HMT cases, the \(\alpha_{ZN}\) used to separate the positive and negative regions should also take the momentum distribution into account, i.e., \(A^{n}_{spec}\) and \(B^{n}_{spec}\) in Eq. (6) are replaced by \(P^{A_{spec}}_{z}\) and \(P^{B_{spec}}_{z}\).
Now that the \(y_{0}\) distribution has been classified by \(\alpha_{ZN}\), the longitudinal asymmetry of the different regions becomes apparent. For events in the \(\alpha_{ZN}<-0.1\) region (the negative \(\alpha_{ZN}\) region), the \(y_{0}\) distribution shows a positive shift; for events in the \(\alpha_{ZN}>0.1\) region (the positive \(\alpha_{ZN}\) region), it shows a negative shift; and in the \(|\alpha_{ZN}|<0.1\) region, \(y_{0}\) is distributed around the middle. This negative correlation between \(\alpha_{ZN}\) and \(y_{0}\) can be understood from Eq. (4), as the behaviour of \(y_{0}\) intuitively reflects the physical picture of the longitudinal asymmetry. For example, in an event with \(A>B\), we have \(y_{0}>0\) according to Eq. (2). The remaining spectator neutrons in the projectile (denoted \(A_{spec}^{n}\)) are then generally fewer than those in the target (denoted \(B_{spec}^{n}\)), so \(\alpha_{ZN}<0\) according to Eq. (6). A similar distribution can be seen in Ref. [69].
To further investigate the rapidity shift caused by the longitudinal asymmetry, one takes the ratio of the rapidity distribution of particles in events with positive asymmetry to that in events with negative asymmetry, \(\frac{\left(dN/dy\right)_{+asym}}{\left(dN/dy\right)_{-asym}}\) [16], in which '\(+asym\)' corresponds to the positive \(y_{0}\) region (\(\alpha_{ZN}<-0.1\)) and '\(-asym\)' to the negative \(y_{0}\) region (\(\alpha_{ZN}>0.1\)). The ratio can be expressed as a Taylor expansion,
\[\frac{\left(dN/dy\right)_{+asym}}{\left(dN/dy\right)_{-asym}}\propto\sum_{n=0}^{\infty}c_{n}y^{n}. \tag{8}\]
If the rapidity distribution of the particles is Gaussian, \(dN/dy\propto\exp\left(-\frac{\left(y-y_{0}\right)^{2}}{2\sigma^{2}}\right)\), Eq. (8) becomes
\[\frac{\left(dN/dy\right)_{+asym}}{\left(dN/dy\right)_{-asym}}\propto\exp\left(\frac{2yy_{0}}{\sigma^{2}}\right)\propto\sum_{n=0}^{\infty}c_{n}(y_{0},\sigma)y^{n}, \tag{9}\]
where the Taylor expansion coefficients are related to the Gaussian parameters \(y_{0}\) and \(\sigma\) by \(c_{n}(y_{0},\sigma)=\frac{\left(2y_{0}/\sigma^{2}\right)^{n}}{n!}\). However, the rapidity distribution of the particles does not always follow a Gaussian pattern, and this non-Gaussian effect will be discussed later.
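For a purely Gaussian \(dN/dy\), the relation \(c_{n}(y_{0},\sigma)=\left(2y_{0}/\sigma^{2}\right)^{n}/n!\) can be checked numerically. The sketch below builds two oppositely shifted Gaussians for a hypothetical \(y_{0}\) and \(\sigma\), forms the +asym/-asym ratio, fits a cubic, and compares the fitted coefficients with the analytic ones; the parameter values are placeholders of the same order as those discussed later in this work.

```python
import math
import numpy as np

y0, sigma = 0.07, 2.5          # placeholder shift and width
y = np.linspace(-2.0, 2.0, 201)

dndy_plus = np.exp(-(y - y0) ** 2 / (2 * sigma ** 2))   # +asym events
dndy_minus = np.exp(-(y + y0) ** 2 / (2 * sigma ** 2))  # -asym events
ratio = dndy_plus / dndy_minus                          # = exp(2 y y0 / sigma^2)

# Third-order polynomial fit of the ratio (coefficients in increasing order).
c_fit = np.polynomial.polynomial.polyfit(y, ratio, deg=3)
c_analytic = [(2 * y0 / sigma**2) ** n / math.factorial(n) for n in range(4)]

for n in range(4):
    print(f"c{n}: fit = {c_fit[n]:.3e}  analytic = {c_analytic[n]:.3e}")
```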
## III Results of longitudinal asymmetry from different systems
### Results for \(y_{0}\) and numbers of participants
Panels (a1)-(a4) and (b1)-(b3) in Fig. 1 show the initial-state \(y_{0}\) distributions in C + C and Au + Au collisions at \(\sqrt{s_{NN}}\) = 200 GeV and in C + C collisions at \(\sqrt{s_{NN}}\) = 6.37 TeV, respectively, for the different \(\alpha_{ZN}\) regions. The results are consistent with those reported for Au + Au and Pb + Pb collisions in other works [15; 16; 69]. In this calculation, the nucleon distribution in \({}^{12}\)C is configured either as the Woods-Saxon type or as the \(\alpha\)-clustered triangle shape. The \(y_{0}\) distributions in C + C collisions behave similarly for the different configurations of the nucleon distribution in the colliding nuclei, but show stronger fluctuations than for the larger collision
Figure 1: Distributions of the parameter \(y_{0}\) in the different \(\alpha_{ZN}\) regions for C + C at 200 GeV, C + C at 6.37 TeV, and Au + Au at 200 GeV in the AMPT (String Melting) framework.
systems shown in (a4), and also stronger fluctuations than at larger \(\sqrt{s_{NN}}\), shown in (a2) and (b2). For the \(y_{0}\) distributions in C + C collisions in which the colliding nuclei are configured with the HMT or FFG nucleon momentum distribution, shown in (a3) and (b3), it can be seen that the \(y_{0}\) distribution is affected by the intrinsic nucleon momentum distribution compared with the Woods-Saxon case in (a1): the former cases show a larger width of the \(y_{0}\) distribution, contributed by the momentum distribution.
Furthermore, in Fig. 1, comparing C + C (W-S, 200 GeV) with C + C (W-S, 6.37 TeV), or C + C (Tri., 200 GeV) with C + C (Tri., 6.37 TeV), the systems at higher \(\sqrt{s_{NN}}\) (6.37 TeV) show smaller \(y_{0}\) fluctuations than those at lower \(\sqrt{s_{NN}}\) (200 GeV). The large system (Au + Au) also shows smaller \(y_{0}\) fluctuations than the small system (C + C). These physical pictures are consistent with Refs. [15; 16]. If the initial intrinsic momentum distribution is considered, the unfixed momentum along the beam direction (in the FFG and HMT cases) enhances the \(y_{0}\) fluctuations. In Fig. 2, it can then be seen that the final-state rapidity distribution directly corresponds to the different \(y_{0}\) shifts in Fig. 1: the rapidity distribution with a positive shift in \(\alpha_{ZN}<-0.1\) reflects the positive \(y_{0}\) shift in that region, and vice versa.
### Results for the expansion coefficients
After plotting the initial distributions of the parameters, we can calculate \(c_{n}\) based on Eq. (8). The longitudinal asymmetry becomes harder to measure as the collision energy increases or as the regions approach the middle region [15], so the subsequent extraction of parameters such as the expansion coefficients also becomes harder to distinguish. We therefore choose positive and negative regions far from the mid-region, so that events on both sides of the symmetric events provide a distinct ratio for investigating further properties of the longitudinal asymmetry.
The rapidity distributions of charged particles are shown in Fig. 2 for events from the positive and negative rapidity-shift regions in C + C and Au + Au collisions, for the different initial-state configurations and collision energies. To illustrate the longitudinal asymmetry, the differences between the positive and negative shift regions are expressed by the ratio \(\frac{\left(dN/dy\right)_{+asym}}{\left(dN/dy\right)_{-asym}}\), as shown in Fig. 3. According to Eqs. (8) and (9), a third-order polynomial is fitted to the ratio and the coefficients \(c_{0},c_{1},c_{2}\), and \(c_{3}\) are extracted [15; 16]. The extracted coefficients \(c_{n}\) (\(n\) = 0, 1, 2, 3) are listed in Table 1 for the different collision systems with their specific initial configurations.
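A minimal version of this extraction, assuming the charged-particle rapidities of the +asym and -asym event classes are already available as arrays, is sketched below; the binning, fit range, and function name are illustrative choices rather than the actual analysis settings.

```python
import numpy as np

def extract_cn(y_plus, y_minus, y_range=(-5.0, 5.0), n_bins=50):
    """Histogram the two event classes, form the +asym/-asym ratio of
    normalized dN/dy, and fit a third-order polynomial to extract c_n."""
    bins = np.linspace(*y_range, n_bins + 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    h_plus, _ = np.histogram(y_plus, bins=bins, density=True)
    h_minus, _ = np.histogram(y_minus, bins=bins, density=True)
    mask = (h_plus > 0) & (h_minus > 0)          # avoid empty bins
    ratio = h_plus[mask] / h_minus[mask]
    return np.polynomial.polynomial.polyfit(centers[mask], ratio, deg=3)

# Toy input: Gaussian rapidity samples with opposite shifts.
rng = np.random.default_rng(0)
c = extract_cn(rng.normal(+0.07, 2.5, 200_000), rng.normal(-0.07, 2.5, 200_000))
print("c0..c3:", c)
```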
For the \(\alpha\)-cluster and Woods-Saxon cases in Table 1, at the same \(\sqrt{s_{NN}}\), there is no obvious difference between \(c_{n}\)(Tri.) and \(c_{n}\)(W-S) (for \(n\) = 1, 2, 3) within the uncertainties of the same order. Comparing central values, \(c_{1}\) in the triangle case is slightly smaller than in the Woods-Saxon case, \(c_{2}\) behaves similarly to \(c_{1}\), while \(c_{3}\) is larger in the triangle case. In summary, no clear difference is found between the standard and clustered configurations.
For the intrinsic momentum distribution, according to Table 1, the first-order coefficient \(c_{1}\) in the W-S case is smaller than in the FFG and HMT cases. However, \(c_{2}\) and \(c_{3}\) in the W-S case are larger than in the FFG and HMT cases, and the higher-order coefficients \(c_{2}\) and \(c_{3}\) in the HMT case are larger than in the FFG case, even considering their uncertainties.
## IV Explanation and further discussion
### Ideal Gaussian rapidity distribution and deformed rapidity distribution
Before discussing the results for \(c_{n}\), we first consider how the parameters of an ideal Gaussian distribution affect \(c_{n}\). According to Eq. (9), \(c_{n}\) is directly determined by the initial shift \(y_{0}\) and the final rapidity width \(\sigma\). However, in experiments or transport-model simulations the rapidity distribution is not always an ideal Gaussian, whereas Eq. (9) requires \(y_{0}^{+asym}=y_{0}^{-asym}\) and \(\sigma^{+asym}=\sigma^{-asym}\); this means that \(c_{n}\) is very sensitive to \(y_{0}\) and \(\sigma\), as explained in Ref. [69]. A simple estimate of the magnitude of this sensitivity can be made. Denoting \(\frac{\sigma^{+asym}}{\sigma^{-asym}}=m\) and \(\frac{y_{0}^{+asym}}{y_{0}^{-asym}}=n\), and choosing \(\sigma^{+asym}=\sigma\) and \(y_{0}^{+asym}=y_{0}\) for convenience, the widths and means in Fig. 2 give \((m-1)\sim 10^{-3}\) and \(n\sim 10^{-1}\). Ignoring small higher-order quantities such as \((1-m^{2})\) and \(y_{0}^{2}\), the difference between the simulated rapidity distribution and the standard Gaussian shape can be estimated as \(\frac{ratio_{sim}}{ratio_{gauss}}\sim\exp\frac{m(n+1)y_{0}}{\sigma^{2}}\). Both our simulation and Refs. [15; 69] give \(y_{0}\sim 10^{-1}\) and \(\sigma\in(2,4)\), so changes of \(y_{0}\) and \(\sigma\) at the level of \(10^{-3}\sim 10^{-1}\) can only make \(ratio_{sim}\) about 1.2 times larger than \(ratio_{gauss}\). Therefore, besides the sensitivity to \(y_{0}\) and \(\sigma\), we attribute most of the difference in \(c_{n}\) to the deformation of the rapidity distribution.
Since the \(c_{n}\) from the different initial momentum cases show the most significant differences, we further plot Q-Q plots to compare our W-S, FFG, and HMT cases with Gaussian distributions. In statistics, Q-Q plots are used to characterize the normality of a given distribution: every distribution has variable values corresponding to different percentiles, and by plotting the quantiles of our data sets on the y-axis against the quantiles of a Gaussian distribution on the x-axis, we can visually see how close our data sets are to a Gaussian distribution. An approximately linear relation, like our fitted lines in Fig. 4, means that the data distribution is close to a Gaussian shape; the intercept then gives \(y_{0}\) and the slope gives \(\sigma\). The points and fitted lines in Fig. 4 do
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**case** & \(\langle y_{0}\rangle\) & \(c_{1}\) & \(c_{2}\) & \(c_{3}\) \\ \hline
C+C(200GeV WS) & 0.0762392\(\pm\)0.00157402 & 0.017952\(\pm\)0.00295684 & 0.00112717\(\pm\)0.000582212 & 0.00073685\(\pm\)0.000230348 \\ \hline
C+C(200GeV Tri) & 0.0737411\(\pm\)0.000647043 & 0.0153949\(\pm\)0.00131006 & 0.000615051\(\pm\)0.000256761 & 0.000921691\(\pm\)0.000100582 \\ \hline
C+C(6.37TeV WS) & 0.0659717\(\pm\)0.000181782 & 0.00436256\(\pm\)0.000215336 & 0.000064434\(\pm\)0.0000366278 & 0.000110215\(\pm\)0.000014411 \\ \hline
C+C(6.37TeV Tri) & 0.0657316\(\pm\)0.000175794 & 0.00432838\(\pm\)0.000213905 & 0.0000426949\(\pm\)0.0000363649 & 0.000102568\(\pm\)0.0000143071 \\ \hline
C+C(200GeV FFG) & 0.0719352\(\pm\)0.0015531 & 0.0266402\(\pm\)0.00284862 & -0.000642742\(\pm\)0.000551982 & -0.000197884\(\pm\)0.000217183 \\ \hline
C+C(200GeV HMT) & 0.0672757\(\pm\)0.00167048 & 0.0198128\(\pm\)0.00296711 & 0.000742437\(\pm\)0.000582835 & 0.000440111\(\pm\)0.000226578 \\ \hline
Au+Au(200GeV WS) & 0.0220948\(\pm\)0.000126595 & 0.0051536\(\pm\)0.000266719 & 0.000010667\(\pm\)0.0000526751 & 0.000304233\(\pm\)0.0000207495 \\ \hline
\end{tabular}
\end{table}
Table 1: \(\langle y_{0}\rangle\) and \(c_{n}\) extracted from the different cases; \(c_{0}\) is close to 1 for all cases and can be neglected.
Figure 3: Ratio of \(dN/dy\) for our seven different systems, along with the fitted curves and standard polynomials for comparison.
Figure 2: Normalized \(dN/dy\) in the positive, middle, and negative \(y_{0}\) regions for our seven different systems, corresponding to \(\alpha_{ZN}<-0.1\), \(-0.1<\alpha_{ZN}<0.1\), and \(\alpha_{ZN}>0.1\).
not show a significant difference between the W-S, FFG, and HMT cases and a Gaussian distribution. We can still notice, however, that the rapidity distributions with an intrinsic momentum distribution (FFG and HMT) give a slope and intercept different from the W-S case, indicating the effect of the intrinsic momentum distribution on the rapidity deformation.
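The normality check described above can be reproduced with a standard quantile-quantile comparison, for example with scipy.stats.probplot as sketched below; the slope and intercept of the fitted line play the roles of \(\sigma\) and \(y_{0}\) discussed in the text, and the rapidity sample used here is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rapidity = rng.normal(loc=0.07, scale=2.5, size=50_000)  # toy final-state sample

# probplot returns the ordered data vs. theoretical normal quantiles
# together with a least-squares line: slope ~ sigma, intercept ~ y0.
(osm, osr), (slope, intercept, r) = stats.probplot(rapidity, dist="norm")
print(f"slope (sigma) = {slope:.3f}, intercept (y0) = {intercept:.3f}, r = {r:.5f}")
```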
### Effect on \(c_{n}\) from rapidity shift and rapidity deformation in longitudinal asymmetry
Beyond the explanation based on the analytic form of the Gaussian distribution, the practical meaning of the expansion coefficients can be better understood from the definition of the Taylor expansion, i.e., describing a function by a combination of polynomials. From this point of view, the expansion coefficients \(c_{n}\) represent the contributions of the different powers of rapidity. To give a more intuitive picture, we plot each rapidity ratio together with three standard polynomials, \(y,y^{2},y^{3}\), in Fig. 3(c), and we plot each component \(c_{n}y^{n}\) in Fig. 5 to show its contribution to the ratio, with the values of \(c_{n}\) taken from Table 1. It is clearly seen that in systems with higher \(\sqrt{s_{NN}}\) (C + C, 6.37 TeV) or larger size (Au + Au, 200 GeV), the effect of the longitudinal asymmetry is much smaller than in C + C (200 GeV). In Fig. 5(a), (b), and (c), the yellow (C + C, WS, 6.37 TeV), green (C + C, Tri, 6.37 TeV), and violet (Au + Au, WS, 200 GeV) lines are closer to 0 than the red (C + C, WS, 200 GeV), orange (C + C, Tri, 200 GeV), cyan (C + C, FFG, 200 GeV), and blue (C + C, HMT, 200 GeV) lines, and the longitudinal asymmetries of systems at the same \(\sqrt{s_{NN}}\) with different configurations (C + C, WS, 200 GeV in red and C + C, Tri, 200 GeV in orange; C + C, WS, 6.37 TeV in yellow and C + C, Tri, 6.37 TeV in green) are so close that they can hardly be distinguished. Our best choice for discussing how the deformation changes the longitudinal asymmetry is therefore to compare the C + C (WS, 200 GeV), C + C (FFG, 200 GeV), and C + C (HMT, 200 GeV) systems.
From the polynomials we can see that the contributions of \(y,y^{2},y^{3}\) differ in different rapidity regions. As the rapidity \(y\) increases from 0 to 1, and then into the region greater than 1, the deformation effect contributed by \(y^{2}\) and \(y^{3}\) becomes increasingly significant, so that \(c_{n}y^{n}\), especially \(c_{3}y^{3}\), can become comparable to \(c_{1}y\), as shown in Fig. 5(a) and (c).
In the region \(-1<y<1\), we have \(|y|>|y^{2}|>|y^{3}|\), which means that the direct rapidity shift \(y\), as the linear (leading-order) component, dominates the contribution to \(ratio_{+/-}\) in this region. According to Ref. [69], \(c_{1}\) depends linearly on \(\langle y_{0}\rangle\). For the cases in which \(y_{0}\) depends only on the participant fluctuations (all the W-S and Tri. cases), the dependence of \(c_{1}\) on \(\langle y_{0}\rangle\) is consistent with this expectation. For systems at the same \(\sqrt{s_{NN}}\) in the Woods-Saxon and Triangle cases, comparing \(\langle y_{0}\rangle\) with \(c_{1}\) in Table 1 shows a similar linear dependence of \(c_{1}\) on \(\langle y_{0}\rangle\); a similar dependence also appears in the errors (widths) of \(\langle y_{0}\rangle\) and \(c_{1}\) in Table 1. Thus, in \(|y|\in(0,1)\), \(c_{1}\) is mainly dominated by the rapidity shift.
However, in the region \(y\in(1,5)\), Fig. 5 shows that the deformation of the rapidity distribution also contributes to the ratio; moreover, for the FFG and HMT cases, the dependence of \(c_{1}\) on \(\langle y_{0}\rangle\) differs from the WS case. In Table 1, FFG (200 GeV) and HMT (200 GeV) have smaller \(\langle y_{0}\rangle\) than WS (200 GeV), but larger \(c_{1}\). In Fig. 2, the slight deformation of the rapidity distribution is difficult to see directly, but from Fig. 1, Fig. 3, Fig. 5, and Table 1 we can infer how the final-state rapidity distribution in Fig. 2 is deformed.
For convenience, we call the region \(|y-\langle y_{0}\rangle|<\langle y_{0}\rangle\) the peak, and the region \(2\langle y_{0}\rangle<|y|<(5-2\langle y_{0}\rangle)\) the ridge. In Fig. 3(b), around \(y=0\), both C + C (FFG, 200 GeV, green) and C + C (HMT, 200 GeV, blue) show larger ratios than C + C (WS, 200 GeV, red). This means that, in \(|y|\in(0,1)\), the rapidity distributions in the FFG and HMT cases give larger ratios of \(\frac{\langle dN/dy\rangle_{peak}}{\langle dN/dy\rangle_{ridge}}\) than the WS case (the normalized \(dN/dy\) is shown in Fig. 2). This is a consequence of the deformation of the peak and ridge in Fig. 2, and the origin of this deformation can be inferred from Fig. 1.
From Fig. 1(a1), (a3), and (b3), the \(y_{0}\) distributions in C + C (FFG, 200 GeV) and C + C (HMT, 200 GeV) show lower peaks and larger widths than C + C (WS, 200 GeV); for example, in \(\alpha_{ZN}<-0.1\), \(\sigma_{WS}=0.1011<\sigma_{FFG}=0.1016<\sigma_{HMT}=0.1057\). These larger widths are caused by the additional momentum distribution in the FFG and HMT cases, as defined in Eq. (7). Hence the intrinsic momentum distribution affects the longitudinal asymmetry in the final state.
However, the momentum distribution does not only affect \(c_{1}\) by causing deformation in \(y\in(-1,1)\). In Fig. 3(b), as \(y\) increases towards \(\pm 5\), the ratio for C + C (WS, 200 GeV, red) exceeds those for C + C (FFG, 200 GeV, green) and
Figure 4: Q-Q plots examining the normality of the systems with different initial momentum distributions and parameterizing the deformation of the final rapidity distribution.
C + C (HMT, 200 GeV, blue); in particular, in (-5,-4) and (4,5), after a small peak, the ratios in the FFG and HMT cases fall closer to 1.00 than in the WS case. This indicates that in the region close to \(\pm 5\) the rapidity distributions in the FFG and HMT cases are both clearly depressed, so that the ratios are closer to 1. This depression results from the deformation of the marginal rapidity distribution (\(y\rightarrow\pm 5\)). To discuss the origin of this deformation, we return to the asymmetry from the intrinsic momentum distribution in Fig. 1. Comparing the initial \(y_{0}\) distribution in Fig. 1 with the final rapidity ratio in Fig. 3 shows that the asymmetries in the initial and final states are consistent. In the \(y_{0}\) distribution, the FFG and HMT cases provide a larger width around \(y_{0}=0\) with fewer events around \(y_{0}=0.6\) than the WS case. Meanwhile, in Fig. 3, FFG and HMT show larger ratios in the peak and ridge regions and smaller ratios in the marginal region. The comparison of \(c_{n}\) between WS, FFG, and HMT shows that the initial-state asymmetry in the FFG and HMT cases is transformed into a different ratio in the final state. The intrinsic momentum in the FFG and HMT cases generates more events with larger \(y_{0}\) in the peak and ridge, corresponding to a larger width of \(y_{0}\), but it cannot push \(y_{0}\) out to the edge around \(y_{0}=0.6\). The asymmetry is then transformed into the rapidity asymmetry in Fig. 2 and Fig. 3: the intrinsic momentum in the FFG and HMT cases gives an enhanced ratio in the peak and a depressed ratio in the ridge and margin. This is why \(c_{1}\) is larger and \(c_{2},c_{3}\) are smaller in the FFG and HMT cases than in the WS case.
Lastly, we discuss the difference between the FFG and HMT cases. In Fig. 5, the fitted \(ratio_{+/-}\) line for FFG (green) is higher than that for HMT (blue) in most of the peak and ridge regions, as also seen in Fig. 2. Since \(c_{1}\) dominates \(ratio_{+/-}\), as shown in Fig. 5, the deformation effect in FFG mainly appears as more events in the peak of the rapidity distribution and fewer events at the edge close to \(\pm 5\). This is reasonable because FFG provides an additional momentum contribution to \(y_{0}\) without any interaction between nucleons, but it cannot emit more particles at larger rapidity (\(y\sim 5\)). To compensate for the over-increased \(c_{1}\), which dominates in the mid-rapidity region, \(c_{2}\) and \(c_{3}\) are close to 0 or even negative, as shown in Fig. 5 and Table 1. The SRC mechanism in HMT, on the other hand, provides a way to emit more particles with larger rapidity: according to Refs. [37; 38; 67], the HMT causes more high-energy nucleon emission in the final state, so more particles with larger rapidity along the beam direction can populate the region close to \(\pm 5\). This is why Table 1 and Fig. 5 show that the \(c_{2}\) and \(c_{3}\) of HMT give larger, positive contributions compared with FFG. In summary, the intrinsic momentum distributions are transformed into different deformations of the final rapidity distribution, and their effect on the longitudinal asymmetry can be characterized by \(c_{n}\).
### Prospects and possible improvements in experiments
For both the initial conditions and the longitudinal asymmetry introduced above, some experiments have already been carried out at ALICE and JLab, and others are planned at FRIB and FAIR [67; 69]. We therefore suggest joint measurements. For instance, electron-nucleus scattering experiments [70] can help to estimate the HMT component and the short-range-correlation effect in collisions of complex nuclei [37; 67]; collective flow \(v_{n}\), characteristic spectra of the giant dipole resonance (GDR), dihadron azimuthal correlations, and backward-forward multiplicity correlations can help to distinguish the \(\alpha\)-cluster structure [26; 30; 31; 32; 71; 33]; and, lastly, the energy deposition in the detector and the rapidity measurement reveal the longitudinal asymmetry [69]. By carrying out these experiments in collisions of symmetric nuclei, we can gain insight into and improve the physical picture of the longitudinal asymmetry, further constrain the collision conditions, and describe the final rapidity distribution more precisely.
Figure 5: The different components \(c_{n}y^{n}\) for our seven systems.
## V Summary
This paper presents a comparison of the longitudinal asymmetry for systems with different \(\alpha\)-cluster structures and intrinsic momentum distributions in the AMPT model. \(\alpha_{ZN}\) and \(y_{0}\) are calculated to characterize the rapidity shift, as done in the experimental measurements by ALICE [69]. To study the effect of different initial conditions on the longitudinal asymmetry, we introduce the \(\alpha\)-cluster structure and different intrinsic nucleon momentum distributions into the AMPT simulation, where the intrinsic momentum distribution modifies the parameter \(y_{0}\) as shown in Fig. 1(a3) and (b3). With these data, we use third-order polynomial fitting to extract the expansion coefficients \(c_{n}\) in Table 1. The comparison between different initial conditions shows the effects of the \(\alpha\)-clustering structure and the initial momentum component.
Based on our analysis, we propose that the longitudinal asymmetry results from the competition between the rapidity shift and the rapidity deformation. In the \(|y|<1\) region, \(c_{1}\) depends mainly linearly on the initial rapidity shift if the momentum distribution is not considered, while the momentum distribution leads to a rapidity deformation that appears as a larger ratio in the peak and ridge. In the large-rapidity region, \(c_{2}\) and \(c_{3}\) reflect the deformation of the final-state rapidity distribution. The HMT caused by the SRC yields a broader rapidity distribution for \(y\) close to \(\pm 5\), which increases the longitudinal asymmetry reflected in \(c_{2}\) and \(c_{3}\).
Finally, we discuss the practical application of our calculation in experiments, including joint measurements of the \(\alpha\)-clustering effect, the high-momentum-component effect, and the longitudinal asymmetry with deformation; related experiments have been performed with different detectors [31; 34; 67; 69]. To test the results of this work, we propose investigating collisions of symmetric nuclei in the C + C system, and we expect that such investigations will provide insights to constrain the initial conditions, the longitudinal asymmetry, and the correction for the deformation of the final rapidity distribution.
###### Acknowledgements.
This work was supported in part by the National Natural Science Foundation of China under contract Nos. 11890710, 11890714, 12147101, 12275054, 11875066, 11925502, 11961141003 and, the Strategic Priority Research Program of CAS under Grant No. XDB34000000, National Key R&D Program of China under Grant No. 2018YFE0104600 and 2016YFE0100900, and by Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008.
|
2309.11270 | Methodology for measuring photonuclear reaction cross sections with an
electron accelerator based on Bayesian analysis | Accurate measurements of photonuclear reaction cross sections are crucial for
a number of applications, including radiation shielding design, absorbed dose
calculations, reactor physics and engineering, nuclear safeguard and
inspection, astrophysics, and nuclear medicine. Primarily motivated by the
study of the production of selected radionuclides with high-energy photon beams
(mainly 225Ac, 47Sc, and 67Cu), we have established a methodology for the
measurement of photonuclear reaction cross sections with the microtron
accelerator available at the Swiss Federal Institute of Metrology (METAS). The
proposed methodology is based on the measurement of the produced activity with
a High Purity Germanium (HPGe) spectrometer and on the knowledge of the photon
fluence spectrum through Monte Carlo simulations. The data analysis is
performed by applying a Bayesian fitting procedure to the experimental data and
by assuming a functional trend of the cross section, in our case a Breit-Wigner
function. We validated the entire methodology by measuring a well-established
photonuclear cross section, namely the 197Au({\gamma},n)196Au reaction. The
results are consistent with those reported in the literature. | Saverio Braccini, Pierluigi Casolaro, Gaia Dellepiane, Christian Kottler, Matthias Lüthi, Lorenzo Mercolli, Peter Peier, Paola Scampoli, Andreas Türler | 2023-09-20T12:54:52Z | http://arxiv.org/abs/2309.11270v1 | # Methodology for measuring photonuclear reaction cross sections with an electron accelerator
###### Abstract
Accurate measurements of photonuclear reaction cross sections are crucial for a number of applications, including radiation shielding design, absorbed dose calculations, reactor physics and engineering, nuclear safeguard and inspection, astrophysics, and nuclear medicine. Primarily motivated by the study of the production of selected radionuclides with high-energy photon beams (mainly \({}^{225}\)Ac, \({}^{47}\)Sc, and \({}^{67}\)Cu), we have established a methodology for the measurement of photonuclear reaction cross sections with the microtron accelerator available at the Swiss Federal Institute of Metrology (METAS). The proposed methodology is based on the measurement of the produced activity with a High Purity Germanium (HPGe) spectrometer and on the knowledge of the photon fluence spectrum through Monte Carlo simulations. The data analysis is performed by applying a Bayesian fitting procedure to the experimental data and by assuming a functional trend of the cross section, in our case a Breit-Wigner function. We validated the entire methodology by measuring a well-established photonuclear
cross section, namely the \({}^{197}\)Au(\(\gamma\), n)\({}^{196}\)Au reaction. The results are consistent with those reported in the literature.
keywords: Photonuclear reactions, cross section, Bayesian analysis, electron accelerator
## 1 Introduction
Photonuclear reactions occur when megaelectronvolt photons undergo an inelastic interaction with a nucleus. At photon energies below 25-30 MeV, the excitation function of photonuclear reactions is characterized by a prominent peak, known as giant dipole resonance (GDR), that is a collective excitation of the atomic nucleus in which nucleons move together to create a large oscillation of the nucleus in the shape of a dipole. This energy range matches the upper limit of most electron accelerators, which produce X-rays by "Bremsstrahlung", i.e. by slowing down (or stopping completely, depending on the thickness of the target) the electrons in a target. The photon flux scales approximately quadratically with the target atomic number, thus high Z materials are typically chosen as converter targets. The most commonly used are gold, tantalum or tungsten, although the use of lighter materials such as niobium and copper is also reported [1]. Experimentally, measuring photonuclear cross sections at bremsstrahlung facilities is challenging. The reaction yield is a folding of the cross section and of the continuous X-ray energy spectrum, and the yield curve can be obtained experimentally by varying the electron energy. Thus, the cross section curve is typically evaluated by means of unfolding methods. Of course, this relies heavily on the knowledge of the energy spectrum (which is difficult to measure), reproducibility of the accelerator output, and high counting statistics. In order to circumvent the issues of a bremsstrahlung spectrum, the production of high-energy X-rays has been also achieved with other techniques including Laser-Compton Scattering (LCS). While bremsstrahlung generally produces a larger number of photons, LCS has the advantage of producing quasi-monochromatic gamma rays, which allow to avoid the use of unfolding methods [2]. In spite of these
difficulties, now there is a large amount of measured data available from photonuclear reactions. Along this line, the International Atomic Energy Agency (IAEA) issued a comprehensive review on photonuclear data, emphasizing the importance of the accurate knowledge of photonuclear reaction cross sections for several applications [3], including radiation shielding design and transport analyses, calculation of the absorbed dose in human body for radiotherapy, physics and technology of fission and fusion reactors, activation analyses, safeguards and inspection technologies, nuclear waste transmutation and astrophysical nucleosynthesis.
In the last decade, the possibility of using photonuclear reactions for the production of radionuclides for nuclear medicine has been established [4, 5, 6, 7]. The renewed interest in this topic was sparked by the commercial availability of compact high-power electron accelerators, such as the 35 MeV, 120 kW linac from MEVEX Corp (Stittville, ON, Canada) and the Rhodotron TT300-HE, an electron accelerator characterized by a maximum energy of 40 MeV and a beam power up to 125 kW, produced by IBA (Louvain-La-Neuve, Belgium). Of course, the precise knowledge of interaction cross sections is key for a scalable production of radionuclides for medical purposes. At the Bern University Hospital's medical cyclotron facility, cross sections of several proton-induced nuclear reactions were measured, in particular those involving the production of so-called theranostic pairs, such as \({}^{43,44}\)Sc/\({}^{47}\)Sc, \({}^{61,64}\)Cu/\({}^{67}\)Cu and \({}^{152,155}\)Tb/\({}^{149,161}\)Tb, as well as more recently of the Auger emitter \({}^{165}\)Er that can potentially be used in combination with other lanthanides [8, 9, 10, 11, 12, 13]. Currently, we are investigating the feasibility of the METAS electron microtron (maximum energy: 22 MeV, average current: 20 \(\mu\)A) for studying selected photonuclear reactions, in particular for the production of \({}^{225}\)Ac \(\left[\mathrm{t}_{1/2}=9.9\ \mathrm{d},\mathrm{E}_{\alpha}=5.8\ \mathrm{MeV}(100\%)\right]\), \({}^{47}\)Sc \(\left[\mathrm{t}_{1/2}=3.349\ \mathrm{d},\mathrm{E}_{\beta^{-}}^{\mathrm{max}}=440.9\ \mathrm{keV}(68.4\%);600.3\ \mathrm{keV}(31.6\%),\mathrm{E}_{\gamma}=159.4\ \mathrm{keV}(68.3\%)\right]\), and \({}^{67}\)Cu [\(E_{\beta-}^{\mathrm{max}}=377\ \mathrm{keV}(57\%);468\ \mathrm{keV}(22\%);562\ \mathrm{keV}(20\%)\)].
In particular, \({}^{225}\)Ac is one of the most promising radionuclides for Targeted Alpha Therapy (TAT). Recent findings have demonstrated the striking potential of \({}^{225}\)Ac-PSMA-617 for prostate cancer therapy [14]. To date, the availability
of \({}^{225}\)Ac is still insufficient with respect to the high demand for clinical applications. The main production routes are the radiochemical extraction from \({}^{229}\)Th, high-energy proton induced spallation of \({}^{232}\)Th and \({}^{238}\)U targets [15], and neutron irradiation of \({}^{232}\)Th and \({}^{226}\)Ra targets [16]. A viable, but not yet fully studied alternative route for the production of \({}^{225}\)Ac in large scale is the irradiation of \({}^{226}\)Ra targets with high-energy gamma rays [17]. In view of the assessment of the \({}^{226}\)Ra(\(\gamma\), n)\({}^{225}\)Ra cross section, we aim to establish a rigorous procedure for the measurement of photonuclear reactions at METAS. This paper reports on the validation of this procedure through the measurement of a well-established photonuclear monitor reaction, namely the \({}^{197}\)Au\((\gamma,n)^{196}\)Au reaction [18; 19; 20]. The Methods section describes the microtron accelerator at METAS, irradiation and measurements, Monte Carlo simulations, and data analysis. The results are presented and discussed in the following two sections, and eventually conclusion and outlook are drawn.
## 2 Materials and methods
After describing the electron accelerator at METAS in the first paragraph, the irradiation procedures and the measurements with gamma spectroscopy are discussed in the second paragraph. The third paragraph focuses on the assessment of the photon fluence spectrum through Monte Carlo simulation. Finally, the data analysis techniques are discussed.
### The accelerator at METAS
The irradiation experiments were conducted at the electron accelerator of the Swiss Federal Institute of Metrology (METAS). The accelerator is of microtron type, capable of producing electron beams with an endpoint energy from \(4\,\mathrm{MeV}\) to \(22\,\mathrm{MeV}\). The installed accelerator is based on the design described in Ref. [21].
The relevant parts of the accelerator facility are shown in Fig 1. The initial electron beam is formed in an electron gun and is accelerated inside a resonator (\(535\,\mathrm{keV}\) per revolution). The electron beam cycles through the resonator, by
means of a constant magnetic field, until its path is offset by the extraction tube. Subsequently, the electron beam enters the beamline. Here the beam is shaped by means of four quadrupole magnets (QM1 to 4) and four steering magnets (SM1 to 4), and transported via two bending magnets (BM1 and 2) to the treatment head. The beam is extracted from the vacuum tube to air through a 400 \(\mathrm{\SIUnitSymbolMicro m}\) thick aluminum window and directed onto a converter target. A gold plate (2 \(\mathrm{mm}\) thick with a diameter of 10 \(\mathrm{mm}\)) acts as a Bremsstrahlung converter. Thermal cooling of the converter is provided by a copper housing, through which water is circulated. Water and copper located behind the gold disc (in the beam direction) absorb residual emerging electrons and low energy photons, hence hardening the photon beam. A tungsten block with a conical opening acts as a collimator. Under normal operation conditions, the photon energy spectrum would be further shaped by a flattening filter located downstream of the collimator. The filters are interchangeable by a revolver assembly. For our irradiations the flattening filter was replaced by a custom target mount, described in Sec. 2.2.
A single electron beam pulse has a duration of 3 \(\mathrm{\SIUnitSymbolMicro s}\) and a current of 25 \(\mathrm{mA}\) to 100 \(\mathrm{mA}\) (depending on beam energy). The repetition rate can be varied stepwise in the range of 1 \(\mathrm{Hz}\) and 200 \(\mathrm{Hz}\). On the converter the beam is assumed to have a Gaussian shape with a full width at half maximum (FWHM) of 3 \(\mathrm{mm}\). Although every electron orbit in the accelerator can be accessed by the extraction tube, optimized magnet settings only exist for a subset of orbits. The exact energy corresponding to an orbit, as well as the energy spread within an orbit, was determined using a magnetic spectrometer in a separate beamline, dedicated for total absorption dosimetry [22]. The energy spread was found to be of the order of 25 \(\mathrm{keV}\) for all measured orbits.
### Irradiation and measurements
In order to validate the proposed experimental procedure with the measurement of a well-established photonuclear cross section, we selected the \({}^{197}\)Au(\(\gamma\), n)\({}^{196}\)Au reaction. Gold foils with a diameter of 25 \(\mathrm{mm}\) and a nominal thick
Figure 1: Scheme of the main elements of the Microtron accelerator facility at METAS. Quadrupole magnets are depicted in red, steering magnets in green and bending magnets in blue. Instrumentation in shown in yellow.
ness of 12.5 \(\mu m\) were purchased from Goodfellow, GmbH. We performed irradiation runs of 11 gold targets at different electron energies in the range 8.499-20.678 MeV. It should be noted that the energy threshold of the \({}^{197}\)Au(\(\gamma\), n)\({}^{196}\)Au nuclear reaction is \(8.070\pm 0.003\) MeV [23]. To evaluate the initial number of target nuclei, each gold foil was weighed prior to the irradiation using a precision scale1 with a typical uncertainty of 0.007 mg. Irradiation times were chosen based on the predicted activity and ranged from half an hour (for the irradiation at the highest beam energy) to 6 hours (for the irradiation at the lowest beam energy). The charge of each individual beam pulse was measured using an AC current transformer (ACCT)2 connected to a high bandwidth waveform digitizer3. For each irradiation, individual pulses were recorded and summed post-irradiation to obtain the total charge on the Bremsstrahlung converter. This current/charge measurement setup was calibrated against a Faraday cup. Comparing the charge simultaneously collected in the Faraday cup over a precision resistor (50 \(\Omega\)) to the area of the ACCT signal allowed a linear calibration curve to be obtained over the range of 25 nC to 225 nC, thus establishing an accurate charge measurement for individual pulses. The calibration curve is shown on the left in Fig. 2; on the right-hand side of the figure, a typical evolution of the beam current during an irradiation is shown. To investigate the stability of the calibration, the calibration factors were monitored for various beam repetition rates and vertical and horizontal displacements (using steering magnets SM X2 and Y2 until the beam was lost). Based on these investigations, the uncertainty of the beam charge was quantified to be within 1.2 %.
Footnote 1: Mettler Toledo XP205
Footnote 2: Bergoz ACCT-S-055
Footnote 3: Spectrum Instrumentation, M2p5962-x4
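The charge calibration and the pulse-by-pulse summation described above can be illustrated with a short sketch; the ACCT areas, Faraday-cup charges and per-pulse data below are placeholder values, not the actual calibration data.

```python
import numpy as np

# Placeholder calibration points: ACCT waveform area [V*s] versus charge
# collected in the Faraday cup [nC] (the real calibration spans 25-225 nC).
acct_area      = np.array([0.5, 1.0, 2.0, 3.0, 4.5]) * 1e-6
faraday_charge = np.array([25.0, 50.0, 100.0, 150.0, 225.0])

slope, intercept = np.polyfit(acct_area, faraday_charge, 1)   # linear calibration

# Apply the calibration pulse by pulse and sum post-irradiation to obtain
# the total charge delivered to the Bremsstrahlung converter.
rng = np.random.default_rng(seed=0)
pulse_areas = rng.uniform(1.8e-6, 2.2e-6, size=500_000)       # placeholder pulses
total_charge_mC = np.sum(slope * pulse_areas + intercept) / 1e6
print(f"total charge on converter: {total_charge_mC:.1f} mC")
```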
EBT3 Gafchromic films were used to evaluate the photon beam uniformity in the position of the gold foil [24]. The beam was found to be uniform within 1% on the foil surface. The activity of the gold targets at the end of the irradiation was measured with a High Purity Germanium (HPGe) detector in operation
at the cyclotron laboratory of the University of Bern. The detector's energy calibration and efficiency are periodically verified with a multi-peak radioactive source type EG 3X from EUROSTANDARD CZ, spol. s r.o., with an energy resolution of 0.24% (\({}^{137}\)Cs peak FWHM). The detector is used on a daily basis for cross section, activity and half-life measurements of radionuclides of medical interest [25; 26; 27]. The detector is a coaxial N-type HPGe (Canberra GR2009) with the sensitive volume shielded by 10 cm of lead. The pre-amplifier signal is fed into a Lynx digital analyzer. The gamma spectra were analyzed using the Interspec software [28]. As an example, Fig. 3 shows the gamma spectrum of a gold target after exposure to the Bremsstrahlung beam.
The energies of all the peaks related to the \({}^{196}\)Au decay are highlighted, whereas the inset zooms in on the peaks used in the analysis, namely the 355.73 keV (87.0 %) and the 333.03 keV (22.9 %) lines. The mass of the gold targets, the irradiation time and the \({}^{196}\)Au activity are reported in Tab. 1 for all the beam energies.
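For orientation, the activity determination from a single gamma line follows the standard relation \(A=N_{counts}/(\varepsilon\,I_{\gamma}\,t_{live})\); the sketch below illustrates it with hypothetical net counts, live time and detection efficiency, while the emission probabilities are the ones quoted above.

```python
# Activity of a gold target from the 355.73 keV line of 196Au.
# Net peak counts, live time and efficiency are hypothetical values.
net_counts = 1.2e5       # net counts in the 355.73 keV full-energy peak
live_time  = 3600.0      # counting live time [s]
efficiency = 0.012       # absolute full-energy peak efficiency at 356 keV
I_gamma    = 0.870       # emission probability of the 355.73 keV line

# Decay during the measurement is neglected (lambda * live_time << 1 for 196Au).
activity = net_counts / (efficiency * I_gamma * live_time)
print(f"A = {activity/1e3:.1f} kBq")
# The 333.03 keV line (I_gamma = 0.229) provides an independent cross-check.
```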
### Monte Carlo simulations
A key ingredient for measuring photonuclear cross sections is the precise characterization of the photon beam. Nowadays, the gold standard for assessing
Figure 2: The left hand side shows the calibration curve of the recorded ACCT area against the charge collected in the Faraday cup. The right hand side shows the beam charge evolution over the entire irradiation with a beam energy of 20.678 MeV.
\begin{table}
\begin{tabular}{c c c c c c} \hline \(E_{beam}\,[\mathrm{MeV}]\) & \(t_{irr}\,[\mathrm{s}]\) & \(Q\,[\mathrm{mC}]\) & \(t_{dec}\,[\mathrm{s}]\) & \(m_{Au}\,[\mathrm{mg}]\) & \(A\,[\mathrm{kBq}]\) \\ \hline
8.499 & 21660 & \(1130.8\pm 13.6\) & 25200 & 123.09 & \((76.0\pm 8.4)\cdot 10^{-3}\) \\
9.030 & 24180 & \(788.4\pm 9.4\) & 19920 & 122.88 & \(1.22\pm 0.12\) \\
10.101 & 16980 & \(544.2\pm 6.5\) & 148860 & 117.45 & \(4.35\pm 0.41\) \\
10.634 & 11220 & \(817.4\pm 9.8\) & 7800 & 128.41 & \(15.8\pm 1.5\) \\
12.228 & 63380 & \(639.7\pm 7.7\) & 9120 & 123.09 & \(48.1\pm 4.4\) \\
13.801 & 14700 & \(522.9\pm 6.2\) & 7860 & 119.59 & \(125.0\pm 11.0\) \\
15.383 & 3600 & \(308.8\pm 3.7\) & 15180 & 118.07 & \(163.0\pm 15.0\) \\
16.977 & 8760 & \(193.9\pm 2.3\) & 336120 & 116.81 & \(112.0\pm 10.0\) \\
18.562 & 3900 & \(96.0\pm 1.2\) & 24300 & 127.45 & \(128.0\pm 12.0\) \\
20.678 & 1860 & \(39.9\pm 0.5\) & 26940 & 124.49 & \(72.5\pm 6.6\) \\ \hline \end{tabular}
\end{table}
Table 1: Measurement data for the available beam energies at METAS. \(E_{beam}\) is the electron beam energy, \(t_{irr}\) the duration of the irradiation, \(Q\) is the total electron charge, \(t_{dec}\) is the time between the end of irradiation and the HPGe measurement, \(m_{Au}\) is the target foil’s weight and \(A\) is the measured \({}^{196}\)Au activity. The uncertainties of the beam energy and the mass are 25 keV and 0.01 mg respectively, whereas the uncertainty of the irradiation and decay times are negligible.
particle fluences in accelerator environments are Monte Carlo (MC) particle transport simulations of the relevant accelerator elements. In our case, this means simulating the accelerator head with the converter, collimators and target assembly. We implemented the accelerator head's geometry in FLUKA version 4.0 and Flair 3.1 [29; 30; 31; 32] and independently in Geant4 [33; 34; 35] based on the technical drawings of the treatment head. Fig. 4 shows Flair's rendering of the accelerator head with the vacuum window, converter with mount and cooling channels, collimators and target. The full beam line is not part of the simulation as it is not relevant for our purposes. The initial electron beam of the simulation starts in the vacuum pipe, just before going through the vacuum window. The electron beam shape is Gaussian in the two directions perpendicular to the beam axis with a FWHM of 3 mm. Also the beam energy profile is implemented as Gaussian with a FWHM of 25 keV.
Fig. 5 shows the differential photon fluence for different electron beam energies. Due to the accelerator setup, a standard Bremsstrahlung fluence spectrum is expected. The simulations run for a sufficient number of primaries in order to
Figure 3: Typical gamma energy spectrum of a gold target after irradiation.
keep the statistical error on the differential photon fluence well below 1% even for photon energies close to \(E_{beam}\).
A thorough assessment of the photon fluence's uncertainty is necessary. First, the left plot in Fig. 6 shows that the agreement between the Geant4 and FLUKA simulation is very good within the statistical uncertainty (the Geant4 simulation has slightly higher statistical noise). This means that differences in the material definitions and in the physics implementation of electromagnetic interactions in the two codes have a negligible impact on the differential photon fluence at the target location. Second, we checked that, in order to significantly alter the
Figure 4: FLUKA implementation of the accelerator head at METAS.
Figure 5: Photon fluence per primary at the irradiation point from FLUKA for various beam energies.
photon fluence above \(5\,\mathrm{MeV}\), rather large changes in the geometry of the accelerator head would be necessary. Adding a \(3\,\mathrm{cm}\) thick polyethylene neutron moderator in front of the target merely leads to an overall reduction of \(\approx 6\%\) in the photon fluence at the target for \(E_{beam}=21.74\,\mathrm{MeV}\), as can be seen in the right plot of Fig. 6. Also, replacing the water in the cooling cavity of the converter with air or placing a \(1\,\mathrm{mm}\) thick lead foil directly in the photon beam between the converter and the target does not significantly affect the photon fluence (see right plot of Fig. 6).
We also verified the simulation results experimentally with dosimetric measurements. Since the accelerator at METAS is mainly used for metrology, we performed measurements with two calibrated PTW 31014 ionization chambers inside a water phantom. The reference chamber was placed at \(100\,\mathrm{cm}\) from the converter directly on the central photon beam axis and the second chamber was located \(7.6\,\mathrm{cm}\) behind. This allowed us to verify the simulation results with two quantities: the absolute dose in the reference chamber, which provides a benchmark for the normalization of the simulation, and the ratio of the doses deposited in the two chambers, which is independent of the normalization or charge measurement and contains information about the lower energy part of the photon spectrum.
For an electron beam energy of \(15.383\,\mathrm{MeV}\), Tab. 2 reports the dose in the reference chamber and the ratio of the doses from the two chambers. The measurements were repeated five times, which yielded a statistical error of about
Figure 6: Comparison between the photon fluence from Geant4 and FLUKA (left) and impact of geometry alterations in the FLUKA simulations (right).
3%. Furthermore, we assumed an experimental uncertainty on the beam charge measurement of 1.2 %.
The FLUKA simulation of this dosimetric measurement involved a significant extension of the accelerator geometry. A flattening filter, additional collimators and the water tank with the ionization chambers, all of which are located after the target for the cross section measurements, had to be implemented. The doses are scored in regions of the size of the chambers according to the PTW specifications (using regional USRBIN scoring). We show the results with the statistical uncertainty of the simulation in Tab. 2. The simulation results agree very well with the measurements. Despite the fact that with this experimental setup we probe primarily the lower end of the photon energy spectrum, this test measurement gives us confidence that our characterization of the photon beam with FLUKA is accurate.
The MC simulations not only provide the differential photon fluence as the input for the cross section measurement, but also provide the yield of the activation products in the target material. This is a valuable benchmark and comparison for the experimental determination of the yield. The activation products are scored with the RESNUCLE card with radioactive decay set in semi-analogue mode. Of course, the photonuclear interactions need to be turned on in FLUKA with the PHOTONUC card set to ELECTNUC (we do not need any photonuclear interactions with muons) and with the COALESCE and EVAPORAT settings of the PHYSICS card. For the target material, the same biasing factor applies as in the case of the aforementioned converter biasing.
Note that FLUKA has its own implementation of photonuclear cross sections.
\begin{table}
\begin{tabular}{l l l} \hline \hline & \multicolumn{1}{c}{\(D_{ref}\,[\text{Gy/prim}]\)} & \multicolumn{1}{c}{\(D_{ref}/D_{back}\)} \\ \hline FLUKA & \((8.02\pm 0.29)\cdot 10^{-16}\) & \(1.45\pm 0.08\) \\ PTW chamber & \((8.68\pm 0.44)\cdot 10^{-16}\) & \(1.43\pm 0.10\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the absolute dose and the ratio of dose to water in two locations between FLUKA simulations and the measured doses in PTW ionization chambers.
As described in Refs. [36; 37], FLUKA has its own cross section library for photonuclear interactions for about 190 stable nuclides. In the energy range around the giant dipole resonance (GDR), which is relevant for our study, the photonuclear cross sections are based on an evaluated parametrization done by the FLUKA developers based on available experimental data and theoretical considerations [37]. An assessment of the systematic uncertainty on the yield of activation products in FLUKA, in particular \({}^{196}\)Au, would go beyond the scope of this work since it would require evaluating the accuracy of the implemented cross section. We therefore only report the statistical uncertainty on the yield from FLUKA.
### Data analysis
From the target irradiations at METAS, we obtained the yield of \({}^{196}\)Au in the target foil, the foil's weight and the time integrated electron beam charge. In order to keep all measured information separate from the modelled and/or simulation data, we normalized the measured yield of \({}^{196}\)Au to the number of primary particles and unit volume and denote it as \(y_{Au}(E_{beam})\). This involves the decay correction for the time between the irradiation of the target and its HPGe measurement. We assume a decay constant for \({}^{196}\)Au of \(\lambda=(1.3009\pm 1.27\cdot 10^{-4})\cdot 10^{-6}\,\mathrm{s}^{-1}\) according to Ref. [38].
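As an illustration of this normalization, the sketch below converts the quantities listed in Tab. 1 for the 20.678 MeV irradiation into a yield per primary electron and unit volume. It assumes that the tabulated activity refers to the end of irradiation (otherwise a factor \(\exp(\lambda t_{dec})\) must be applied first) and uses the bulk density of gold, an assumed value here, to convert the foil mass into a volume.

```python
# Decay-corrected yield normalization for the 20.678 MeV irradiation (Tab. 1).
lam    = 1.3009e-6        # 196Au decay constant [1/s]
A_eob  = 72.5e3           # activity, assumed to refer to the end of irradiation [Bq]
Q      = 39.9e-3          # integrated electron beam charge [C]
m_foil = 124.49e-3        # foil mass [g]
rho_Au = 19.32            # assumed bulk density of gold [g/cm^3]
e      = 1.602e-19        # elementary charge [C]

N_196  = A_eob / lam              # 196Au nuclei at the end of irradiation
N_prim = Q / e                    # number of primary electrons
V_foil = m_foil / rho_Au          # foil volume [cm^3]

# Decay during the irradiation itself is neglected (lam * t_irr is of order 1e-3 here).
y_Au = N_196 / (N_prim * V_foil)  # yield per primary electron and unit volume
print(f"y_Au = {y_Au:.2e} nuclei / (cm^3 * primary)")
```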
The relation of \(y_{Au}(E_{beam})\) to the production cross section and photon fluence \(\phi\) is
\[y_{Au}(E_{beam})\;=\;\rho_{Au}\,\int_{E_{th}}^{E_{beam}}dE^{\prime}\,\frac{d \phi(E_{beam},E^{\prime})}{dE^{\prime}}\cdot\sigma(E^{\prime})\;, \tag{1}\]
where \(E_{th}\) is the threshold energy, \(\rho_{Au}\) is the number density of target nuclei, and \(\sigma\) is the photonuclear cross section for \({}^{197}\)Au\((\gamma,n)^{196}\)Au. With the differential photon fluence determined through the FLUKA simulation, it is possible to extract the cross section \(\sigma\) from the measured yields \(y_{Au}(E_{beam})\).
The limited number of available electron beam energies implies several restrictions. On the one hand, we had to restrict the shape of \(\sigma\) to a truncated Breit-Wigner function
\[\sigma(E)\ =\ \frac{n\cdot k\cdot\Theta(E-E_{th})}{(E^{2}-m^{2})^{2}+m^{2}\, \Gamma^{2}}\,\qquad\mbox{with}\qquad k\ =\ \frac{2\sqrt{2}}{\pi}\,\frac{m\cdot\Gamma\,\sqrt{m^{2}(m^{2}+\Gamma^{2})}}{ \sqrt{m^{2}+\sqrt{m^{2}(m^{2}+\Gamma^{2})}}}. \tag{2}\]
The threshold energy \(E_{th}\) for the \({}^{197}\mbox{Au}(\gamma,n)^{196}\mbox{Au}\) reaction is fixed to \(8.070\pm 0.003\,\mbox{MeV}\) according to Ref. [23]. \(n\) is a normalization constant which is fitted to the measured data together with the mass \(m\) and width \(\Gamma\) of the GDR.
At energies around the GDR, this is an appropriate model for the cross section in the case of gold. Note that for non-spherical nuclei the cross section might be a combination of two Breit-Wigner functions and also the \(\sigma\propto\sqrt{E-E_{th}}\) behavior in the threshold region is not implemented in Eq. (2). Given the limited number of data points, fitting a more complex parametrization of \(\sigma\) is bound to fail.
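For concreteness, the truncated Breit-Wigner shape of Eq. (2) can be evaluated numerically as in the short sketch below; the parameter values in the example call are simply taken from the measurement column of Tab. 3, and the implementation is our own illustration rather than the analysis code.

```python
import numpy as np

def sigma_bw(E, n, m, Gamma, E_th=8.070):
    """Truncated Breit-Wigner cross section of Eq. (2).
    E, m, Gamma, E_th are in MeV; n is in cm^2; the result is in cm^2."""
    root = np.sqrt(m**2 * (m**2 + Gamma**2))
    k = (2 * np.sqrt(2) / np.pi) * m * Gamma * root / np.sqrt(m**2 + root)
    E = np.asarray(E, dtype=float)
    bw = n * k / ((E**2 - m**2)**2 + m**2 * Gamma**2)
    return np.where(E > E_th, bw, 0.0)   # Theta(E - E_th) truncation

# Example: value at the resonance energy, using the fitted measurement
# parameters of Tab. 3 (n = 2.58e-24 cm^2, m = 14.2 MeV, Gamma = 2.76 MeV).
print(sigma_bw(14.2, n=2.58e-24, m=14.2, Gamma=2.76))   # of order 1e-25 cm^2
```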
We performed a Bayesian fit of the parameters \(n\), \(m\), and \(\Gamma\) using the Turing.jl package [39, 40]. We believe that the integral in Eq. (1) and the limited number of data points make a Bayesian approach more appropriate. A Gaussian likelihood and the following conservative priors were used for the fitting procedure
\[n \sim {\cal N}(10^{-24}\,\mbox{cm}^{2},10^{-24}\,\mbox{cm}^{2})\,\] \[m \sim {\cal N}(14.0\,\mbox{MeV},3.0\,\mbox{MeV})\, \tag{3}\] \[\Gamma \sim {\cal N}(2.0\,\mbox{MeV},1.0\,\mbox{MeV})\.\]
In addition, the statistical noise has a normally distributed prior. All of the priors' normal distributions were truncated at 0. Since the calculation of the posterior distribution requires a large number of evaluations, the integral in Eq. (1) is performed using the trapezoidal rule. Given the small bin size of the photon fluence of \(0.1\,\mbox{MeV}\) and the smoothness of the integrand function, we assume that the uncertainty on the numerical evaluation of the integral is marginal.
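To make the structure of this forward model explicit, the sketch below evaluates Eq. (1) with the trapezoidal rule on a 0.1 MeV grid. It is only an illustration: the photon fluence used here is a toy 1/E Bremsstrahlung-like spectrum with an arbitrary normalization standing in for the FLUKA-scored spectrum, the \({}^{197}\)Au number density is an assumed value, and the Breit-Wigner parameters are example inputs; it is not the Turing.jl code used for the actual fit.

```python
import numpy as np

E_TH = 8.070            # 197Au(gamma,n)196Au threshold [MeV]
RHO_AU = 5.9e22         # assumed number density of 197Au nuclei [1/cm^3]

def sigma_bw(E, n, m, Gamma, E_th=E_TH):
    # Truncated Breit-Wigner of Eq. (2) (see the previous sketch).
    root = np.sqrt(m**2 * (m**2 + Gamma**2))
    k = (2 * np.sqrt(2) / np.pi) * m * Gamma * root / np.sqrt(m**2 + root)
    E = np.asarray(E, dtype=float)
    return np.where(E > E_th, n * k / ((E**2 - m**2)**2 + m**2 * Gamma**2), 0.0)

def dphi_dE(E, E_beam, norm=1e-4):
    # Toy 1/E Bremsstrahlung-like fluence per primary [1/(cm^2 MeV)];
    # the real analysis uses the FLUKA spectrum in 0.1 MeV bins instead.
    E = np.asarray(E, dtype=float)
    return np.where((E > 0.0) & (E < E_beam), norm / E, 0.0)

def yield_per_primary(E_beam, n, m, Gamma):
    """Eq. (1): rho_Au * integral of dphi/dE * sigma(E) dE (trapezoidal rule)."""
    E = np.arange(E_TH, E_beam, 0.1)
    if E.size < 2:
        return 0.0
    return RHO_AU * np.trapz(dphi_dE(E, E_beam) * sigma_bw(E, n, m, Gamma), E)

for E_beam in (10.0, 14.0, 18.0, 21.0):
    print(E_beam, yield_per_primary(E_beam, n=2.58e-24, m=14.2, Gamma=2.76))
```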
In our analysis we did not assume any uncertainty on the differential photon fluence from the FLUKA simulations. With the checks described in the previous section, the shape and the normalization of \(d\phi/dE\) are well under control. Furthermore, any normalization uncertainty would simply propagate into the parameter \(n\) when sampling the posterior distribution. Also, changes in the shape of the photon fluence spectrum, e.g. introducing a bin-wise error, are hardly noticeable due to the integration in Eq. (1).
From Eq. (1) and the shape of the Breit-Wigner function it is clear that the fit results are mostly dependent on the data points with \(E_{beam}\) below the peak of the GDR (see also Ref. [41]), i.e. in our case \(E_{GDR}=13.7\,\)MeV [38]. It was therefore our primary goal to obtain as many data points as possible in this energy range in order to improve the fit.
## 3 Results
In Tab. 1 we present the measured \({}^{196}\)Au yields together with the irradiation data. The data were taken with different target foils, and therefore the mass of each target is also listed. Due to the long irradiation times and the long half-life of \({}^{196}\)Au, it is safe to assume a negligible uncertainty on \(t_{irr}\) and \(t_{dec}\). The uncertainty on the target mass measurement is also negligible. The electron beam energy is restricted to a narrow energy range due to the stability criterion of the accelerator design. Additionally, beam energies for individual orbits were determined during commissioning of the accelerator using a magnetic spectrometer (see Fig. 1). Here, the energy spread of the electron beam was also measured to be 25 keV. A statistical uncertainty on the number of counts measured with the HPGe was considered for the \({}^{196}\)Au activity (Tab. 1).
Fig. 7 shows the decay-corrected and normalized yield from the measurements in comparison with the simulated yield. The uncertainty on the decay constant of \({}^{196}\)Au, retrieved from Ref. [42], is negligible. Only at lower energies, close to the \({}^{197}\)Au\((\gamma,n)^{196}\)Au reaction threshold, do the measured and the simulated yields not agree well. On the one hand, the measurements in this regime are plagued by long irradiation times and low activities. On the other
hand, FLUKA has its own evaluated cross section library and the implementation of the \({}^{197}\)Au\((\gamma,n)^{196}\)Au reaction threshold is not disclosed. Note that standard evaluated cross section libraries like TENDL [43] and IAEA [3] differ at threshold energies.
The error bars on the FLUKA yield in Fig. 7 are hardly visible. The statistical errors are 7.8 % and 3.8 % for the two lowest beam energies and well below 1 % for the higher beam energies. The measured yield's error is given only by the uncertainty of the HPGe measurement and is around 10 % (see also Tab. 1). The fit prediction from the measurement data is also shown in Fig. 7.
Tab. 3 shows the fit results for the measured and simulated yield, respectively. Clearly, the uncertainties on the parameters of the Breit-Wigner function are relatively small.
Finally, Fig. 8 shows the resulting \({}^{197}\)Au\((\gamma,n)^{196}\)Au cross section from the measured yield. The peak of the cross section from the measurement is slightly shifted towards higher energies in comparison with the fitted cross section from FLUKA (see also parameter \(m\) in Tab. 3). However, the evaluated cross sections from Refs. [43; 3] have the peak well within the error band of the measurement fit.
Figure 7: Comparison of the \({}^{196}\)Au yield per primary beam particle, i.e. electrons, between FLUKA and the measured data. The blue band shows the yield predicted by the measurement’s cross section fit.
## 4 Discussion
The results from fitting a Breit-Wigner curve to the measured \({}^{196}\)Au yield in Tab. 3 show that the methodology discussed in Sec. 2 allows the determination of photonuclear cross sections with the electron accelerator at METAS. For the reference process \({}^{197}\)Au\((\gamma,n)^{196}\)Au, the uncertainties on the predicted cross section in Fig. 8 are well within the range of the experimental and evaluated cross sections from the literature. Even with the restricted number of data points, i.e. available beam energies \(E_{beam}\) in Tab. 1, the fitting parameters are strongly constrained. Fig. 9 demonstrates how the fit strongly reduced the width of the
Figure 8: \({}^{197}\)Au\((\gamma,n)^{196}\)Au cross section from the measured data fit (see Tab. 3) in comparison with the cross sections TENDL 2019 [43], IAEA 2019 [3] and Ref. [20].
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \(n\left[10^{-24}\,\mathrm{cm}^{2}\right]\) & \(m\left[\mathrm{MeV}\right]\) & \(\Gamma\left[\mathrm{MeV}\right]\) & \(\varepsilon\left[\#iso/(cm^{3}prim)\right]\) \\ \hline FLUKA & \(2.43\pm 0.04\) & \(13.5\pm 0.1\) & \(2.12\pm 0.20\) & \((1.70\pm 0.60)\cdot 10^{-7}\) \\ Measurement & \(2.58\pm 0.05\) & \(14.2\pm 0.1\) & \(2.76\pm 0.27\) & \((2.28\pm 0.81)\cdot 10^{-7}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results from fitting the Breit-Wigner function of Eq. (2) to either the FLUKA or measured \({}^{196}\)Au yield. The parameters' pdfs are shown in Fig. 9.
posterior pdf compared to the priors. Therefore there is a significant gain in information over our prior knowledge.
Unsurprisingly, the fit is mostly sensitive to the yield values around the resonance peak. In this region the Breit-Wigner curve has the strongest gradient and the photon fluence is still large. Therefore, slight changes in the upper integration limit in Eq. (1) have a larger impact on the yield.
Given the multiple orders of magnitude spanned by the measured and simulated yields, it might be tempting to perform the fit in log space. The \(\log(y_{Au})\) has a high gradient at low energies and flattens towards \(\sim 20\,\mathrm{MeV}\). Fitting the log of the yield therefore gives a higher weight to the lower energy data points. The measured points in the threshold region, however, suffer from low count statistics and may therefore be affected by systematic uncertainties. Furthermore, the convergence of the fit worsens in log space and the relative errors on \(n\), \(m\) and \(\Gamma\) increase compared to the results in Tab. 3.
Comparing the fits to the measured and simulated yields, it is clear that there is an underestimation of the yield starting around \(11\,\mathrm{MeV}\). This is the reason why the FLUKA data lead to a higher peak of the Breit-Wigner curve (see Fig. 8). Interestingly, these lower values of the yield do not drive \(m\) to higher energies. This is due to the fact that close to the threshold energy, the simulated yield is lower compared to the measured one.
Fig. 10 presents the pair plot of the fitting parameters for the case of the measured data. Clearly, the parameters \(n\), \(m\) and \(\Gamma\) are strongly correlated
Figure 9: Comparison of the prior (blue) and posterior (green) pdfs for the three fitting parameters \(n\), \(m\) and \(\Gamma\). The measurement data constrain the parameters rather strongly.
among each other. This stems from the integration in Eq. 1 since it averages out the Breit-Wigner curve. Increasing the normalization \(n\) will drive \(m\) towards higher energies in order to decrease the overlap between the peak of the Breit-Wigner curve and the high fluence region. The same reasoning works for the other two correlations, i.e. higher values of \(m\) require a wider cross section that is able to still catch higher fluence contributions at lower energies and increasing \(n\) makes the Breit-Wigner function flatter.
The number of beam energies, of course, limits the goodness of the fit. As a cross check, we simulated more beam energies and performed a fit of the resulting yield. This means we effectively reverse engineered the cross section implemented in FLUKA. With a total of 18 beam energies, of which 12 lay below \(14\,\mathrm{MeV}\), the parameters are constrained much more strongly. Even for \(\Gamma\) the relative error drops below \(1\,\mathrm{\char 37}\). This shows that, despite the averaging of the integral in Eq. (1), the method can improve with more data points. The degeneracies of the parameters from Fig. 10 remain.
Figure 10: Pair plot of the three parameters for the fit to the measured yield showing a strong correlation of the posterior distributions.
Despite the good results for the \({}^{197}\)Au\((\gamma,n)^{196}\)Au cross section, our method and experimental setup face some challenges. On the one hand, there is a model dependence in the sense that we need to rely on the assumption of a Breit-Wigner shape of the cross section. This assumption should be questioned in particular when measuring cross sections with non-spherical target nuclei that require more complex fitting functions. Increasing the number of fitting parameters, for example by using the sum of two Breit-Wigner functions, would certainly require additional data points in order to keep the uncertainties at an acceptable level. On the other hand, our method requires input from MC simulations, which could be viewed as a limitation or, at least, as a source of systematic uncertainties. A good characterization of the experimental setup and verification of the MC simulations (see also Sec. 2) is therefore key for a successful determination of photonuclear reaction cross sections.
## 5 Conclusions and outlook
In this study we showed that it is possible to measure the photonuclear reaction cross section for the reaction \({}^{197}\)Au\((\gamma,n)^{196}\)Au using the Microtron at METAS. The \({}^{196}\)Au yield is determined by irradiating thin gold foils with photons and measuring the induced activity with a HPGe spectrometer. Assuming that the cross section follows a Breit-Wigner curve and with the modelling of the photon fluence using MC simulations, we were able to reproduce the reference values for the \({}^{197}\)Au\((\gamma,n)^{196}\)Au cross section through a Bayesian fitting procedure. Even with a limited number of beam energies, the Bayesian fitting procedure yields low uncertainties on the parameters of the Breit-Wigner shape of the cross section. Our results crucially rely on an accurate characterization of the photon fluence spectrum as well as on the precise determination of the induced activity and the electron beam current. The method presented in this study can be translated easily to other photonuclear reactions, for which the cross sections are poorly known. Depending on the process under investigation, more fitting parameters will be required (sum of
two Breit-Wigner functions or multi-isotopic target materials). Therefore, we envision that more beam energies would be required for more complex fits to the data. The isotope \({}^{226}\)Ra is an intriguing candidate: its photonuclear cross section would be of particular interest in view of the large-scale production of \({}^{225}\)Ac for targeted alpha therapy via the photonuclear route. With the presented methodology we laid the groundwork to accurately measure photonuclear cross sections with a relatively simple setup. The method can be easily applied at other facilities which might have access to higher beam energies or higher beam currents.
In sum, our study not only validates the methodology for measuring the photonuclear reaction cross section using the \({}^{197}\)Au\((\gamma,n)^{196}\)Au reaction, but also opens up new avenues for extending this approach to other isotopes and applications.
## Funding
This study is supported by Swiss National Science Foundation Sinergia grant PHOtonuclear Reactions (PHOR): breakthrough research in radionuclides for theranostics awarded to A. Türler, S. Braccini and C. Kottler; Schweizerischer Nationalfonds zur Förderung der wissenschaftlichen Forschung (CRSII5_180352).
## CRediT authorship contribution statement
**Saverio Braccini:** Supervision, Project administration, Funding acquisition, Writing - Review & Editing, **Pierluigi Casolaro:** Conceptualization, Methodology, Software, Validation, Investigation, Writing - Original Draft, Writing - Review & Editing, Visualization, Formal analysis, Data Curation, **Gaia Dellepiane:** Investigation, Writing - Review & Editing, **Christian Kottler:** Supervision, Project administration, Funding acquisition, Writing - Review & Editing, **Matthias Lüthi:** Conceptualization, Methodology, Software, Validation, Investigation, Writing - Original Draft, Writing - Review & Editing,
Visualization, Formal analysis, Data Curation, **Lorenzo Mercolli:** Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization, **Peter Peier:** Resources, Writing - Review & Editing, Supervision, **Paola Scampoli:** Supervision, **Andreas Türler:** Writing - Review & Editing, Project administration, Funding acquisition. |
2308.00054 | Eternal Distance-2 Domination in Trees | We consider the eternal distance-2 domination problem, recently proposed by
Cox, Meger, and Messinger, on trees. We show that finding a minimum eternal
distance-2 dominating set of a tree is linear time in the order of the graph by
providing a fast algorithm. Additionally, we characterise when trees have an
eternal distance-2 domination number equal to their domination number or their
distance-2 domination number, along with characterizing which trees are eternal
distance-2 domination critical. We conclude by providing general upper and
lower bounds for the eternal distance-k domination number of a graph, as well
as constructing an infinite family of trees which meet said upper bound and
another which meets the given lower bound. | Alexander Clow, Christopher M van Bommel | 2023-07-31T18:20:33Z | http://arxiv.org/abs/2308.00054v1 | # Eternal Distance-2 Domination in Trees
###### Abstract
We consider the eternal distance-2 domination problem, recently proposed by Cox, Meger, and Messinger, on trees. We show that finding a minimum eternal distance-2 dominating set of a tree is linear time in the order of the graph by providing a fast algorithm. Additionally, we characterize when trees have an eternal distance-2 domination number equal to their domination number or their distance-2 domination number, along with characterizing which trees are eternal distance-2 domination critical. We conclude by providing general upper and lower bounds for the eternal distance-k domination number of a graph, as well as constructing an infinite family of trees which meet said upper bound and another which meets the given lower bound.
**Keywords:** Graph theory, Trees, Domination, Eternal Domination
**MSC Classification:** 05C69, 05C05
## 1 Introduction
Consider the problem of stationing ambulances throughout a city. We desire that each location in the city is close to an ambulance, so that paramedics can respond to a call quickly, but we also need the response time to be maintained after a specific ambulance is assigned to a call. We can model this problem using the notion of _eternal domination_ of a graph, which we outline as follows. For a graph \(G\), we start with an initial set of vertices occupied by "guards" (or in our application, ambulances) that form a _dominating set_ of the graph, that is, every vertex not in the set is adjacent to a vertex in the set. Then in a series of steps, a vertex is chosen ("attacked", requires an ambulance), and we require a new set of vertices that include the chosen vertex and for which there is a matching between the previous set and the new set where pairs of vertices are identical or adjacent (intuitively, each guard or ambulance moves to an adjacent location or stays put). We say that a collection of sets \(\{D_{1},D_{2},\ldots,D_{m}\}\) eternally dominates \(G\) if for any sequence of attacks \(A_{1},A_{2},\ldots\), there is a sequence of sets \(D_{i_{1}},D_{i_{2}},\ldots\) such that \(D_{i_{j}}\) contains the attacked vertex \(A_{j}\), and there is a matching between the vertices of \(D_{i_{j}}\) and \(D_{i_{j+1}}\) such that pairs of vertices are identical or adjacent. The minimum number of guards required, that is the minimum cardinality of an eternal dominating set, is called the _eternal domination number_ of \(G\), and is denoted \(\gamma_{all}^{\infty}(G)\). Here, the subscript all refers to the fact that each guard is able to move in response to an attack, as introduced by Goddard, Hedetniemi, and Hedetniemi [10], in contrast to the version originally introduced by Burger et al. [3] where only one guard is permitted to move in response to an attack.
Several works have studied the eternal domination number in recent years, as surveyed by Klostermeyer and Mynhardt [14]. Goddard, Hedetniemi, and Hedetniemi [10] established the domination number as a fundamental lower bound and the clique-star cover number as a fundamental upper bound,
established exact values for cliques, complete bipartite graphs, paths, cycles, and Cayley graphs, and determined additional upper bounds in terms of the 2-domination number, independence number, and clique-connected cover number. Klostermeyer and MacGillivray [12] provide a linear-time algorithm for determining the eternal domination number of trees. In a subsequent paper, Klostermeyer and MacGillivray [13] characterize trees achieving various upper and lower bounds on the eternal domination number.
The eternal domination number of grids has been extensively studied. Beaton, Finbow, and MacDonald [1] considered the eternal domination number of \(4\times n\) grids. Finbow, Messinger, and van Bommel [8] establish upper and lower bounds on the eternal domination number of \(3\times n\) grids. Bounds on the eternal domination number of \(5\times n\) grids were established by van Bommel and van Bommel [18]. Lamprou, Martin, and Schewe [15] provided a general upper bound for grid graphs that is asymptotically optimal. Bounds for strong grid graphs were studied by McInerney, Nisse, and Pérennès [16] and Gagnon et al. [9].
Henning, Klostermeyer, and MacGillivray [11] demonstrated a tight upper bound for connected graphs with minimum degree 2, and improved the bound for the class of cubic bipartite graphs. Cohen et al. [5] study a generalization called the spy-game, which they show to be NP-hard. Finbow et al. [7] demonstrated the advantage of allowing multiple guards to occupy the same vertex, and give exponential-time algorithms for calculating eternal domination numbers. In this work we will allow guards to occupy the same vertex. Blažej, Křišťan, and Valla [2] determine bounds and a linear time algorithm for eternal domination numbers of cactus graphs.
Here, we assume an ambulance is not limited to moving to an adjacent location; rather, it is allowed to move to locations up to distance \(k\) away. Then we can model this relaxed version of the problem using the notion of _eternal distance-\(k\) domination_ of a graph, as introduced by Cox, Meger, and Messinger [6], defined as follows. For a graph \(G\), we start with an initial set of vertices occupied by "guards" (or in our application, ambulances) that form a _distance-\(k\) dominating set_ of the graph, that is, every vertex not in the set is at distance at most \(k\) from a vertex in the set. Then in a series of steps, a vertex is chosen ("attacked", requires an ambulance), and we require a new set of vertices that includes the chosen vertex and for which there is a matching between the previous set and the new set where pairs of vertices are within distance \(k\) (intuitively, an ambulance responds to the call, and the remaining ambulances are redistributed across the city to maintain their response time). We say that a collection of sets \(\{D_{1},D_{2},\ldots,D_{m}\}\) eternally dominates \(G\) if for any sequence of attacks \(A_{1},A_{2},\ldots\), there is a sequence of sets \(D_{i_{1}},D_{i_{2}},\ldots\) such that \(D_{i_{j}}\) contains the attacked vertex \(A_{j}\), and there is a matching between the vertices of \(D_{i_{j}}\) and \(D_{i_{j+1}}\) such that pairs of vertices are within distance \(k\). The minimum number of guards required, that is, the minimum cardinality of an eternal distance-\(k\) dominating set, is called the _eternal distance-\(k\) domination number_ of \(G\), and is denoted \(\gamma_{all,k}^{\infty}(G)\). Consider Figure 1 for two graphs whose domination, distance-2 domination, eternal domination, and eternal distance-2 domination numbers are given. Cox, Meger, and Messinger [6] considered general bounds on \(\gamma_{all,k}^{\infty}\), the computational complexity of computing it, and its exact values for small classes of graphs. Additionally, they developed reductions for trees and present the following open problems:
**Question 1.1**.: _For what class of graphs, \(\mathcal{G}\), is \(\gamma_{k}(G)=\gamma_{all,k}^{\infty}(G)\) for all \(G\in\mathcal{G}\)?_
**Question 1.2**.: _Let \(\mathcal{G}_{n,m}\) be the family of simple graphs on \(n\) vertices and \(m\) edges. For a fixed \(n\) and \(m\), what are the graphs with the smallest eternal distance-\(k\) domination number, or largest eternal distance-\(k\) domination number?_
**Question 1.3**.: _Given a fixed \(n\) and fixed \(k\), what possible values can \(\gamma_{all,k}^{\infty}\) take on, for graphs \(G\) of order \(n\)?_
**Question 1.4**.: _Which graphs \(G\) have the property that \(\gamma_{all,2}^{\infty}(G)=\gamma(G)\)? Can we characterize the trees with this property?_
**Question 1.5**.: _Suppose that for every minimum dominating set of a tree \(T\), each vertex in the dominating set has at least two private neighbours. Then is \(\gamma_{all,2}^{\infty}(T)=\gamma(T)\)?_
In this work, we focus primarily on the case of \(k=2\) in trees. We develop sufficient reductions to provide an algorithm to determine the eternal distance-2 domination number of trees in linear time. Our main approach is to consider the low-diameter subgraphs that are most leaf-like; we develop this idea formally in Section 3. Using these reductions, we provide a linear time algorithm for the computation of the eternal distance-2 domination number of trees. We then provide characterizations of trees for which \(\gamma_{all,2}^{\infty}=\gamma\), addressing Question 1.4 for the class of trees and allowing us to answer Question 1.5 in the negative. Additionally, we characterize the trees that are eternal distance-2 domination critical, as well as the trees for which \(\gamma_{all,2}^{\infty}=\gamma_{2}\), partially addressing Question 1.1. Finally, we present extremal families of trees for eternal distance-\(k\) domination, offering a solution to Question 1.2 in the case \(m=n-1\).
## 2 Preliminaries
We provide the following results relating domination parameters that will be used throughout. We begin with the following bound on the eternal distance-\(k\) domination number in terms of distance domination, as observed by Cox, Meger, and Messinger [6].
**Proposition 2.1**.: _[_6_]_ _For any graph \(G\) and integer \(k\geq 2\),_
\[\gamma_{k}(G)\leq\gamma_{all,k}^{\infty}(G)\leq\gamma_{\left\lfloor\frac{k}{2 }\right\rfloor}(G).\]
Equality between \(\gamma(G)\) and \(\gamma_{2}(G)\) for trees was characterized by Raczek [17]; by Proposition 2.1, such equality forces the eternal distance-2 domination number to coincide with both. We state the characterization here and consider the question of equality of the bounds in later sections. Let \(\mathbb{T}\) be the family of all trees \(T\) that can be obtained from a sequence \(T_{1},\ldots,T_{j}\) (\(j\geq 1\)) of trees such that \(T_{1}\) is the path \(P_{2}\), \(T=T_{j}\), and \(T_{i+1}\) can be obtained recursively from \(T_{i}\) by the operation \(\mathbb{T}_{1}\), \(\mathbb{T}_{2}\), or \(\mathbb{T}_{3}\):
* **Operation**\(\mathbb{T}_{1}\). The tree \(T_{i+1}\) is obtained from \(T_{i}\) by adding a vertex \(x_{1}\) and the edge \(x_{1}y\) where \(y\in V(T_{i})\) is a stem vertex of \(T_{i}\).
* **Operation**\(\mathbb{T}_{2}\). The tree \(T_{i+1}\) is obtained from \(T_{i}\) by adding a path \((x_{1},x_{2},x_{3})\) and the edge \(x_{1}y\) where \(y\in V(T_{i})\) is neither a leaf nor a stem vertex in \(T_{i}\).
* **Operation**\(\mathbb{T}_{3}\). The tree \(T_{i+1}\) is obtained from \(T_{i}\) by adding a path \((x_{1},x_{2},x_{3},x_{4})\) and the edge \(x_{1}y\) where \(y\in V(T_{i})\) is a stem vertex in \(T_{i}\).
Additionally, let \(P_{1}\) belong to \(\mathbb{T}\). Then the following is established.
**Theorem 2.2** ([17]).: _Let \(T\) be a tree. Then \(T\in\mathbb{T}\) if and only if \(\gamma(T)=\gamma_{2}(T)\)._
We next note the exact value for the eternal distance-\(k\) domination number has been calculated for paths by Cox, Meger, and Messinger [6]. In Section 7 we will see that paths are an extremal family of graphs in terms of eternal distance-\(k\) domination.
**Theorem 2.3**.: _[_6_]_ _For \(n\geq 1\) and \(k\geq 1\), \(\gamma_{all,k}^{\infty}(P_{n})=\left\lceil\frac{n}{k+1}\right\rceil\)._
Finally, to characterize the trees that are eternal distance-2 domination critical, we use the concept of the 1-sum of two graphs to build the family. For graphs \(G\) and \(H\), the _1-sum_ of \(G\) and \(H\) at vertex \(u\) of \(G\) and \(v\) of \(H\) is the graph formed by taking the disjoint union of graphs \(G\) and \(H\) and identifying vertices \(u\) and \(v\).
## 3 Reductions & Complexity for Eternal Distance-2 Domination in Trees
Let \(T=(V,E)\) be a tree. We say \(v\in V\) is a 0-leaf iff \(v\) is a leaf, and for \(k>0\), we say \(v\) is a \(k\)-leaf iff \(v\) is adjacent to a \((k-1)\)-leaf and all but perhaps one of the neighbours of \(v\) are \(t\)-leaves, where \(t<k\). For a 1-leaf, \(u\), let \(L(u)\) be the set of all leaves adjacent to \(u\), and let \(L[u]=L(u)\cup\{u\}\). Given a \(k\)-leaf,
\(v\in V\), \(k>1\), let \(L(v)=\cup_{u\in N(v)}L[u]\), where \(u\) is a \(t\)-leaf and \(t<k\), and let \(L[v]=L(v)\cup\{v\}\). We begin by demonstrating that a tree with a sufficiently large diameter contains a \(k\)-leaf.
**Lemma 3.1**.: _Let \(T=(V,E)\) be a tree. If the diameter of \(T\) is at least \(2k\), then there exists a \(v\in V\) which is a \(k\)-leaf._
Proof.: Let \(T=(V,E)\) be a tree with diameter at least \(2k\). Let \(P=v_{0}v_{1}v_{2}\ldots v_{m}\) be a longest path in \(T\). If \(v_{k}\) is not a \(k\)-leaf, then there exists a path \(Q=w_{0}w_{1}w_{2}\ldots w_{t}v_{k}\), where \(t\geq k\). But then if we replace the vertices \(v_{0}\ldots v_{k}\) of \(P\) with \(Q\), we obtain a longer path in \(T\), a contradiction. Hence, \(T\) contains a \(k\)-leaf.
Next, we demonstrate a reduction strategy for computing the eternal distance-2 domination number based on \(2\)-leaves.
**Lemma 3.2**.: _Let \(T\) be a tree with diameter at least \(4\). If \(v\) is a \(2\)-leaf in \(T\) such that \(T[L(v)]\) has diameter \(2\), let \(T^{\prime}=T-L[v]\), otherwise, let \(T^{\prime}=T-L(v)\). Then, \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T^{\prime})+1\)._
Proof.: Let \(T=(V,E)\) be a tree with diameter at least \(4\) and let \(v\in V\) be a \(2\)-leaf. Let \(T^{\prime}\) be defined as in the statement of the Lemma. We will show \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T^{\prime})+1\) by demonstrating that \(\gamma^{\infty}_{all,2}(T)\leq\gamma^{\infty}_{all,2}(T^{\prime})+1\) and \(\gamma^{\infty}_{all,2}(T)\geq\gamma^{\infty}_{all,2}(T^{\prime})+1\).
To begin observe that as \(v\) is a \(2\)-leaf and \(T\) is a tree, there must be at least \(1\) guard in \(L[v]\) in every distance-\(2\) dominating set of \(T\). Otherwise, there exists a vertex in \(L(v)\) which is not distance-\(2\) dominated. Hence, for every eternally distance-\(2\) dominating set there is at least \(1\) guard in \(L[v]\).
Case.1: \(T[L(v)]\) has diameter \(2\). Then for all vertices \(u\in L[v]\), a guard at \(u\) is within distance \(2\) of every other vertex in \(L[v]\). Hence, the one guard which must be in \(L[v]\) can eternally distance-\(2\) dominate \(L[v]\). This implies that \(\gamma^{\infty}_{all,2}(T)\leq\gamma^{\infty}_{all,2}(T^{\prime})+1\), as one extra guard is sufficient to guard \(L[v]\), while \(\gamma^{\infty}_{all,2}(T^{\prime})\) is sufficient to guard the rest of the graph.
Note that \(\gamma^{\infty}_{all,2}(T)\geq\gamma^{\infty}_{all,2}(T^{\prime})+1\) follows directly from the fact that there must always be a guard in \(L[v]\). This is because if the guard \(g_{1}\) in \(L[v]\) were to ever move to protect a vertex \(x\in V\setminus L[v]\), then another guard, \(g_{2}\), would have to enter \(L[v]\) to remain a distance-\(2\) dominating set. Given \(T\) is a tree, this is never advantageous for the guards, as if \(g_{2}\) is within distance \(2\) of \(L[v]\) while not being in \(L[v]\), this implies \(g_{2}\) is also within distance \(2\) of \(x\), so \(g_{2}\) can move directly to \(x\) and \(g_{1}\) can remain in \(L[v]\).
Case.2: \(T[L(v)]\) has diameter at least \(3\). Then we will demonstrate \(\gamma^{\infty}_{all,2}(T)\leq\gamma^{\infty}_{all,2}(T^{\prime})+1\) by providing a winning strategy for the guards using exactly \(\gamma^{\infty}_{all,2}(T^{\prime})+1\) guards. Place \(\gamma^{\infty}_{all,2}(T^{\prime})\) guards on \(V\setminus L(v)\) and let them proceed as if playing on \(T^{\prime}\); next, place an extra guard, \(g_{v}\), on \(v\). If the attacker attacks vertices in \(V\setminus L(v)\), then let the \(\gamma^{\infty}_{all,2}(T^{\prime})\) guards proceed as if playing on \(T^{\prime}\), while \(g_{v}\) does not move.
If a vertex in \(L(v)\) is attacked let \(g_{v}\) defend against the attack, while the other \(\gamma^{\infty}_{all,2}(T^{\prime})\) guards respond as if \(v\) was attacked in \(T^{\prime}\). This will place a guard \(g\) on \(v\). If the next attack is also on \(L(v)\), then let \(g\) move to respond to the attack, while \(g_{v}\) returns to \(v\) and the guards in \(V\setminus L[v]\) do not move. As long as the attacker attacks vertices in \(L(v)\), guards \(g\) and \(g_{v}\) continue this strategy where the guard at \(v\) responds, while the other moves to \(v\). Should the attacker attack a vertex outside of \(L(v)\), then the guard currently at \(v\), say \(g\) without loss of generality, assumes the role of one of the \(\gamma^{\infty}_{all,2}(T^{\prime})\) guards protecting \(T^{\prime}\), while the other guard (again without loss of generality) \(g_{v}\), returns to \(v\). As this returns the game to its initial state, \(\gamma^{\infty}_{all,2}(T)\leq\gamma^{\infty}_{all,2}(T^{\prime})+1\).
The fact that \(\gamma^{\infty}_{all,2}(T)\geq\gamma^{\infty}_{all,2}(T^{\prime})+1\) follows by a similar argument as case \(1\), where it is never advantageous for the guard \(g_{v}\) whose job it is to protect \(L[v]\) to protect a vertex \(x\) outside of \(L[v]\), as any guard \(g\) which begins outside of \(L[v]\) and would take on the role of defending \(L[v]\) could simply protect \(x\) directly.
This completes the proof as we have shown that \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T^{\prime})+1\) as desired in both cases.
Note that the approach used in Lemma 3.2 does not easily extend to \(\gamma^{\infty}_{all,k}\) when \(k>2\); for an example of this, see Figure 2. Note that when \(k>2\), we may conclude that \(\gamma^{\infty}_{all,k}(T)\leq\gamma^{\infty}_{all,k}(T^{\prime})+1\); however, we cannot guarantee that this bound reaches equality, because the guard \(g_{2}\) (see proof of case \(1\)) might have distance greater than \(k\) to \(x\). This is because, supposing \(g_{1}\) starts at \(v\) and \(g_{2}\) starts at \(z\), \(\operatorname{dist}(v,x)\leq k\) implies that the neighbour of \(v\) not in \(L[v]\), call it \(y\), has \(\operatorname{dist}(x,y)\leq k-1\), while \(\operatorname{dist}(z,y)\leq k-1\). Hence, \(\operatorname{dist}(z,x)\leq 2k-2\) is an upper bound that cannot be improved. For \(k\leq 2\) this is fine, as \(k\geq 2k-2\); however, for \(k>2\), \(k<2k-2\) implies \(g_{2}\) may be unable to reach \(x\) in a single
move. Notice that Figure 2 can be generalized to all cases \(k\geq 3\) by appending a path of length \(k-3\) to each leaf in \(T\) and \(T^{\prime}\) respectively.
It is also significant to point out that the trees \(T\) and \(T^{\prime}\) in Figure 2 are counterexamples to Proposition 3 from [6]. That is, in an identical way to how Lemma 3.2 does not generalize to the \(k>2\) case, Proposition 3 from [6] will not apply to \(k>2\).
When the diameter of the tree is small, the eternal distance-2 domination number can be directly computed as follows.
**Lemma 3.3**.: _Let \(T=(V,E)\) be a tree and let \(k>0\) be an integer and let \(d\) be the diameter of \(T\),_
* _if_ \(k<d\leq 2k\)_, then_ \(\gamma^{\infty}_{all,k}(T)=2\)_, and_
* _if_ \(d\leq k\)_, then_ \(\gamma^{\infty}_{all,k}(T)=1\)_._
Proof.: Recall that [6] showed that for all graphs \(G\) and integers \(k\), \(\gamma^{\infty}_{all,k}(G)=\gamma^{\infty}_{all,1}(G^{k})\). Suppose \(k<d\leq 2k\). Then \(T^{k}\) has a universal vertex but is not a complete graph. This implies \(\gamma^{\infty}_{all,1}(T^{k})=2\), which implies \(\gamma^{\infty}_{all,k}(T)=2\). Now suppose \(d\leq k\). Then \(T^{k}\) is complete, implying \(\gamma^{\infty}_{all,1}(T^{k})=\gamma^{\infty}_{all,k}(T)=1\).
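Since Lemma 3.3 reduces the small-diameter cases to a diameter computation, and the diameter of a tree can be found with two breadth-first searches, the following sketch (using our own adjacency-dict representation) illustrates how these cases can be checked in practice.

```python
from collections import deque

def tree_diameter(adj):
    """Diameter of a tree given as an adjacency dict {vertex: list of neighbours},
    computed with the standard double-BFS approach."""
    def farthest_from(start):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        far = max(dist, key=dist.get)
        return far, dist[far]

    u, _ = farthest_from(next(iter(adj)))
    _, diameter = farthest_from(u)
    return diameter

# Example: the path P4 has diameter 3, so Lemma 3.3 gives an eternal
# distance-2 domination number of 2 (and 1 for any k >= 3), matching Theorem 2.3.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(tree_diameter(path4))   # 3
```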
We conclude this section with an algorithm, based on the previous results, that will allow us to compute the eternal distance-2 domination number in linear time.
**Algorithm 3.4**.: _Let \(T\) be a tree rooted at a vertex \(r\). We compute the eternal distance-2 domination number of \(T\) as follows:_
1. _Set_ \(\gamma=0\) _and let_ \(S=\{r\}\) _be a stack._
2. _While the stack is not empty, let_ \(x\) _be the top vertex of the stack._
    1. _If the subtree of_ \(T\) _consisting of_ \(x\) _and its descendants has depth at least 3, add each child of_ \(x\) _whose subtree has depth at least two to the stack._
    2. _If the subtree of_ \(T\) _consisting of_ \(x\) _and its descendants has depth 2, remove_ \(x\) _from the stack, increment_ \(\gamma\)_, and replace_ \(T\) _with_ \(T-D[x]\) _if_ \(x\) _has one child, and_ \(T-D(x)\) _otherwise, where_ \(D(x)\) _is the set of descendants of_ \(x\)_._
    3. _Otherwise, remove_ \(x\) _from the stack._
3. _If_ \(T\) _is not empty, increment_ \(\gamma\)_._
4. _Return_ \(\gamma\)_._
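A direct, unoptimized Python transcription of Algorithm 3.4 is sketched below; the `children` dictionary representation is our own choice, and subtree depths are recomputed on demand rather than maintained incrementally, so this sketch does not attain the linear running time established in Theorem 3.5.

```python
def eternal_distance2_domination_number(children, root):
    """Sketch of Algorithm 3.4. `children` maps every vertex of the tree
    rooted at `root` to the (possibly empty) list of its children."""
    vertices = set(children) | {c for cs in children.values() for c in cs}
    removed = set()   # vertices deleted from T so far
    gamma = 0

    def kids(x):
        return [c for c in children.get(x, []) if c not in removed]

    def depth(x):
        # Depth of the remaining subtree rooted at x (0 for a leaf).
        below = kids(x)
        return 0 if not below else 1 + max(depth(c) for c in below)

    def descendants(x):
        found, todo = [], kids(x)
        while todo:
            v = todo.pop()
            found.append(v)
            todo.extend(kids(v))
        return found

    stack = [root]
    while stack:
        x = stack[-1]
        d = depth(x)
        if d >= 3:
            # Step 2(a): keep x on the stack and push its children whose
            # subtrees have depth at least two.
            stack.extend(c for c in kids(x) if depth(c) >= 2)
        elif d == 2:
            # Step 2(b): pop x, count one guard, and delete D(x)
            # (or D[x], i.e. D(x) together with x, when x has a single child).
            stack.pop()
            gamma += 1
            single_child = len(kids(x)) == 1
            removed.update(descendants(x))
            if single_child:
                removed.add(x)
        else:
            # Step 2(c): pop x.
            stack.pop()

    if any(v not in removed for v in vertices):   # Step 3: T is not empty
        gamma += 1
    return gamma


# Example: the path P7 rooted at an end vertex; Theorem 2.3 gives ceil(7/3) = 3.
children = {0: [1], 1: [2], 2: [3], 3: [4], 4: [5], 5: [6], 6: []}
print(eternal_distance2_domination_number(children, 0))   # 3
```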
**Theorem 3.5**.: _Let \(T\) be a tree. Then the output, \(\gamma\), of Algorithm 3.4, for an arbitrary vertex \(r\) of \(T\), is equal to \(\gamma^{\infty}_{all,2}(T)\). Moreover, the running time of Algorithm 3.4 for such an input is linear in the number of vertices._
Proof.: We proceed by induction on the number of vertices in the following way: for any rooted tree \(T\) of order less than \(n\) with root \(r\), Algorithm 3.4 will output \(\gamma^{\infty}_{all,2}(T)\).
As our base case, consider a tree \(T=(V,E)\) rooted at a fixed but arbitrary vertex \(r\) which has depth at most 2. Then \(T\) has diameter at most 4. Note in particular that this is the case for all trees on at most 3 vertices. If \(T\) has diameter at most 2, then by Lemma 3.3, \(\gamma^{\infty}_{all,2}(T)=1\). If \(r\) is a universal vertex of \(T\), then \(T\) has depth 1, so the algorithm returns 1. Otherwise, \(r\) is a leaf and \(T\) has depth 2, so \(D[r]=V\), hence \(T\) is replaced by \(\emptyset\), and the algorithm returns 1. If \(T\) has diameter 3 or 4, then by Lemma 3.3, \(\gamma^{\infty}_{all,2}(T)=2\). We have that \(T\) has depth 2 and \(r\) has multiple children. Thus \(T\) is replaced by the single vertex \(r\), so the algorithm returns 2.
Now let \(T\) be a tree rooted at \(r\) such that \(T\) has depth at least 3. Let \(v\) be the first vertex to appear on top of the stack for which the subtree of \(T\) consisting of \(v\) and its descendants has depth 2; clearly \(v\neq r\), furthermore, it should be clear that a vertex \(v\) will appear at some point during the run-time of
Algorithm 3.4 (as there must exist a descendant of \(r\) with the properties of \(v\) given \(T\) has depth at least \(3\), while vertices on top of the stack of type (a) add vertices to the stack whose subtree has smaller depth, vertices of type (b) are exactly vertices \(v\), while vertices of type (c) are removed from the stack not to be considered again). Then \(v\) is a \(2\)-leaf of \(T\), and the diameter of \(T\) is at least \(3\). If \(T\) has diameter at most \(4\), then by Lemma 3.3, \(\gamma^{\infty}_{all,2}(T)=2\). Then \(T\) is replaced with a nonempty tree of diameter at most \(2\). So the algorithm returns \(2\) as desired by induction, noting that the steps of the algorithm applied to this tree are a subset of the steps applied to \(T\).
If \(T\) has diameter greater than \(4\), then the ancestor of \(v\) is not in \(L[v]\). If \(\deg(v)=2\), then by Lemma 3.2, \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T-L[v])+1\), which by the induction hypothesis is precisely the value returned by the algorithm. If \(\deg(v)\geq 3\), then by Lemma 3.2, \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T-L(v))+1\), which by the induction hypothesis is precisely the value returned by the algorithm.
Finally, we observe that each vertex of the tree is only kept on the stack if its corresponding subtree has depth at least \(3\). When we return to this vertex, its descendants have depth at most \(1\), so it has depth at most \(2\), and is therefore removed from the stack. Furthermore, a vertex is considered while not on top of the stack only when first, second, or third ancestor is on top of the stack during the while loop. Thus, each vertex is considered at most a constant number of times and each consideration is linear time. Therefore, the algorithm runs in linear time. This completes the proof.
## 4 Trees with \(\gamma^{\infty}_{all,2}=\gamma\)
In this section we resolve the question posed in [6] of characterizing when a tree has eternal distance-\(2\) domination number equal to its domination number. The primary tool in our analysis is the reduction given in Lemma 3.2. We begin this section by pointing out that there are trees \(T\) where every vertex in a minimum dominating set has at least \(2\) private neighbours, but the eternal distance-\(2\) domination number is arbitrarily far from the domination number. This resolves another question in [6] in the negative.
For an example of such a tree, take a star \(K_{1,n}\) and add two leaves to each leaf of \(K_{1,n}\) to form \(T_{n}\). Then \(\gamma(T_{n})=n\), while every vertex in the unique minimum dominating set of this tree has exactly two private neighbours. However, as the diameter of \(T_{n}\) is \(4\), Lemma 3.3 implies \(\gamma^{\infty}_{all,2}(T_{n})=2\).
We first note the following lemma which is a special case of Proposition 1 from [6].
**Lemma 4.1** ([6]).: _Let \(G=(V,E)\) be a graph. Then, \(\gamma^{\infty}_{all,2}(G)\leq\gamma(G)\)._
In order to characterize the trees whose domination number is equal to their eternal distance-\(2\) domination number we must first define the following family of trees and explore several properties of this family. We define the family of trees \(\mathcal{T}\) recursively as follows:
1. Every tree with diameter at most \(3\) is in \(\mathcal{T}\), and
2. If \(T\in\mathcal{T}\), then the tree \(T^{\prime}\) formed by appending a star \(K_{1,m}\) with \(m\geq 2\) by adding an edge from a leaf of \(K_{1,m}\) to any vertex of \(T\) is also in \(\mathcal{T}\), and
3. If \(T\in\mathcal{T}\) and \(v\in V(T)\) is a leaf such that there is a minimum dominating set containing \(v\), then every tree \(T^{\prime\prime}\) formed by appending a star \(K_{1,m}\) where \(m\geq 1\) to \(v\) by adding an edge from \(v\) to the high degree vertex of the star, as well as appending \(t\geq 1\) leaves to \(v\), is also in \(\mathcal{T}\).
4. If \(T\in\mathcal{T}\) and \(v\in V(T)\) is a leaf such that \(\gamma(T-v)=\gamma(T)-1\), then the tree \(T^{\prime\prime\prime}\) formed by appending two stars \(K_{1,m}\) and \(K_{1,M}\) where \(m,M\geq 1\) to \(v\) by adding edges from \(v\) to the high degree vertices of the stars is also in \(\mathcal{T}\).
**Theorem 4.2**.: _If \(T\) is a tree, then \(\gamma^{\infty}_{all,2}(T)=\gamma(T)\) if and only if \(T\in\mathcal{T}\)._
Proof.: Suppose \(T\in\mathcal{T}\). If \(T\) has diameter at most \(2\), then \(T\) has a universal vertex, so \(\gamma^{\infty}_{all,2}(T)=\gamma(T)=1\). If \(T\) has diameter \(3\), then \(\gamma(T)=2\) since \(T\) does not have a universal vertex, but the two non-leaves form a dominating set, and \(\gamma^{\infty}_{all,2}(T)=2\) by Lemma 3.3.
Suppose all trees \(S\in\mathcal{T}\) on fewer than \(n\) vertices satisfy \(\gamma^{\infty}_{all,2}(S)=\gamma(S)\), and let \(T\in\mathcal{T}\) be a tree with \(n\) vertices and diameter at least \(4\). Suppose \(T\) is formed by appending \(K_{1,m}\), \(m\geq 2\), to some \(T^{\prime}\in\mathcal{T}\). Then we have \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T^{\prime})+1\) by Lemma 3.2. Moreover, \(\gamma(T)\leq\gamma(T^{\prime})+\gamma(K_{1,m})=\gamma(T^{\prime})+1\), and any minimum dominating set of \(T\) contains either the stem of \(K_{1,m}\) or its unique leaf neighbour, which dominates no vertex of \(T^{\prime}\), so \(\gamma(T^{\prime})\leq\gamma(T)-1\). Therefore, \(\gamma(T)=\gamma(T^{\prime})+1\). Since \(\gamma^{\infty}_{all,2}(T^{\prime})=\gamma(T^{\prime})\) by the induction hypothesis, it follows that \(\gamma^{\infty}_{all,2}(T)=\gamma(T)\).
Now suppose \(T\) is formed from some \(T^{\prime\prime}\in\mathcal{T}\) by taking a leaf \(v\) for which there is a minimum dominating set of \(T^{\prime\prime}\) containing \(v\), and appending to \(v\) a star \(K_{1,m}\) where \(m\geq 1\) by adding an edge to the
high degree vertex of the star, as well as appending \(t\geq 1\) leaves. Then \(v\in V(T)\) is a 2-leaf where \(T[L[v]]\) has diameter 3 and \(T^{\prime\prime}=T-L(v)\). Then Lemma 3.2 implies \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T^{\prime\prime})+1\). Furthermore, let \(D\subset V(T^{\prime\prime})\) be a minimum dominating set of \(T^{\prime\prime}\) where \(v\in D\). Then, \(D\cup\{u\}\) where \(u\) is the high degree vertex of the star appended to \(v\) in \(T\) is a dominating set of \(T\). This implies \(\gamma(T)\leq\gamma(T^{\prime\prime})+1\). As \(T^{\prime\prime}\in\mathcal{T}\) implies \(\gamma^{\infty}_{all,2}(T^{\prime\prime})=\gamma(T^{\prime\prime})\) and Lemma 4.1 implies \(\gamma(T^{\prime\prime})+1=\gamma^{\infty}_{all,2}(T^{\prime\prime})+1=\gamma^ {\infty}_{all,2}(T)\leq\gamma(T)\) we conclude that \(\gamma(T)=\gamma(T^{\prime\prime})+1\). Hence, \(\gamma^{\infty}_{all,2}(T)=\gamma(T)\) as required.
Finally, suppose \(T\) is formed by appending two stars \(K_{1,m}\) and \(K_{1,M}\) where \(m,M\geq 1\) to \(v\) by adding an edge from \(v\) to the high degree vertex of each star, where \(v\in V(T^{\prime\prime\prime})\) for some \(T^{\prime\prime\prime}\in\mathcal{T}\) and leaf \(v\) satisfying \(\gamma(T^{\prime\prime\prime}-v)=\gamma(T^{\prime\prime\prime})-1\). Then \(v\) is a 2-leaf in \(T\) and Lemma 3.2 implies \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T^{\prime\prime\prime})+1\). As \(\gamma(T^{\prime\prime\prime}-v)=\gamma(T^{\prime\prime\prime})-1\), all minimum dominating sets of \(T^{\prime\prime\prime}-v\) do not contain \(v\) or a neighbour of \(v\). Then, letting \(D\) be a minimum dominating set of \(T^{\prime\prime\prime}-v\) and letting \(u_{1}\) and \(u_{2}\) be the high degree vertices of the appended stars, we see that \(D\cup\{u_{1},u_{2}\}\) is a dominating set in \(T\), hence, \(\gamma(T)\leq\gamma(T^{\prime\prime\prime}-v)+2=\gamma(T^{\prime\prime\prime})+1\). It is easy to see that \(\gamma(T)\geq\gamma(T^{\prime\prime\prime}-v)+2=\gamma(T^{\prime\prime\prime})+1\) as \(T\) contains 2 leaves with no common neighbours adjacent to no vertices in \(T^{\prime\prime\prime}-v\). Thus, \(\gamma(T)=\gamma(T^{\prime\prime\prime})+1\) implying \(\gamma(T)=\gamma^{\infty}_{all,2}(T)\) as required.
Conversely, suppose there exists a tree \(X\) such that \(X\notin\mathcal{T}\), but \(\gamma^{\infty}_{all,2}(X)=\gamma(X)\). Let \(Y\) be a minimal such tree with respect to induced subgraph. Then the diameter of \(Y\) is at least 4. It follows from Lemma 3.1 that there exists a 2-leaf in \(Y\).
Suppose \(v\in V(Y)\) is a 2-leaf such that \(Y[L[v]]\) has diameter 2, and let \(Y^{\prime}=Y-L[v]\). By Lemma 3.2, we have \(\gamma^{\infty}_{all,2}(Y)=\gamma^{\infty}_{all,2}(Y^{\prime})+1\), and by Lemma 4.1, we have \(\gamma^{\infty}_{all,2}(Y^{\prime})\leq\gamma(Y^{\prime})\). Consider a leaf \(\ell\neq v\) in \(Y[L[v]]\). Then any dominating set of \(Y\) contains a vertex in \(L[v]\) in order to guard \(\ell\), and that guards no vertex in \(Y^{\prime}\). Thus \(\gamma(Y^{\prime})<\gamma(Y)\). But this implies \(\gamma(Y)-1=\gamma^{\infty}_{all,2}(Y)-1=\gamma^{\infty}_{all,2}(Y^{\prime}) \leq\gamma(Y^{\prime})\leq\gamma(Y)-1\). It follows that \(\gamma^{\infty}_{all,2}(Y^{\prime})=\gamma(Y^{\prime})\). By the minimality of \(Y\), we have \(Y^{\prime}\in\mathcal{T}\). But then by definition of \(\mathcal{T}\), we have \(Y\in\mathcal{T}\) by condition (2), a contradiction. Hence we may assume that for every 2-leaf \(v\) in \(T\), the diameter of \(Y[L[v]]\) is at least 3.
Suppose \(v\in V(Y)\) is a 2-leaf such that \(Y[L[v]]\) has diameter at least 3, and let \(Y^{\prime\prime}=Y-L(v)\). Observe \(v\) is a leaf in \(Y^{\prime\prime}\). By Lemma 3.2, we have \(\gamma^{\infty}_{all,2}(Y)=\gamma^{\infty}_{all,2}(Y^{\prime\prime})+1\), and by Lemma 4.1, we have \(\gamma^{\infty}_{all,2}(Y^{\prime\prime})\leq\gamma(Y^{\prime\prime})\). Let \(w\) and \(x\) be the ends of a longest path in \(Y[L[v]]\). Then \(x\) and \(w\) are leaves and \(3\leq\mathrm{dist}(w,x)\leq 4\). Then there is a minimum dominating set of \(Y\) that contains the unique neighbour of \(w\) and the unique neighbour of \(x\) both of which are members of \(N[v]\).
Suppose \(\mathrm{dist}(w,x)=3\) and assume without loss of generality that \(\mathrm{dist}(w,v)=2\). Note that as \(\mathrm{dist}(w,x)=3\) and \(\mathrm{dist}(w,v)=2\), \(v\) must be the neighbour of \(x\). Let \(D\) be such a minimum dominating set of \(Y\) containing \(v\) (the unique neighbour of \(x\)). It follows that the neighbour of \(w\) is not necessary to dominate any vertex in \(Y^{\prime\prime}\), so \(\gamma(Y^{\prime\prime})<\gamma(Y)\). As before this implies that \(\gamma(Y)-1=\gamma^{\infty}_{all,2}(Y)-1=\gamma^{\infty}_{all,2}(Y^{\prime\prime})\leq\gamma(Y^{\prime\prime})\leq\gamma(Y)-1\), so \(\gamma(Y^{\prime\prime})=\gamma(Y)-1\). Then \(D\setminus N(w)\) (which contains \(v\)) is a minimum dominating set of \(Y^{\prime\prime}\) and \(\gamma^{\infty}_{all,2}(Y^{\prime\prime})=\gamma(Y^{\prime\prime})\), and by the minimality of \(Y\), \(Y^{\prime\prime}\in\mathcal{T}\). But this implies \(Y\in\mathcal{T}\) by condition (3), a contradiction.
Suppose then that \(\mathrm{dist}(w,x)=4\). Then \(\mathrm{dist}(w,v)=\mathrm{dist}(x,v)=2\). Let \(D\) be a minimum dominating set of \(Y\) containing the neighbours of \(w\) and \(x\), call them \(u_{w}\) and \(u_{x}\) respectively. As both \(u_{x},u_{w}\) are neighbours of \(v\), \(D\setminus\{u_{x},u_{w}\}\) is a dominating set for \(Y^{\prime\prime}-v\); thus \(\gamma(Y^{\prime\prime}-v)\leq\gamma(Y)-2\). Additionally, \((D\cup\{v\})\setminus\{u_{x},u_{w}\}\) is clearly a dominating set of \(Y^{\prime\prime}\). Hence, \(\gamma(Y^{\prime\prime})\leq\gamma(Y)-1\). Again, \(\gamma(Y)-1=\gamma^{\infty}_{all,2}(Y)-1=\gamma^{\infty}_{all,2}(Y^{\prime\prime})\leq\gamma(Y^{\prime\prime})\leq\gamma(Y)-1\), so we conclude \(\gamma(Y^{\prime\prime})=\gamma(Y)-1\) and \(\gamma^{\infty}_{all,2}(Y^{\prime\prime})=\gamma(Y^{\prime\prime})\), implying \(Y^{\prime\prime}\in\mathcal{T}\). Observe that \(\gamma(Y^{\prime\prime})\leq\gamma(Y^{\prime\prime}-v)+1\), as the union of any dominating set of \(Y^{\prime\prime}-v\) with \(\{v\}\) dominates \(Y^{\prime\prime}\). Then \(\gamma(Y^{\prime\prime}-v)=\gamma(Y^{\prime\prime})-1\). But this is a contradiction as condition (4) implies \(Y\in\mathcal{T}\). This concludes the proof.
## 5 Eternal Distance-2 Domination Critical Trees
We say a graph \(G\) is eternal distance-2 domination critical if deleting any non-cut vertex \(v\) ensures that \(G-v\) has eternal distance-2 domination number strictly less than that of \(G\). Of course, if \(T\) is a tree this is equivalent to stating that deleting any leaf from \(T\) reduces the eternal distance-2 domination number of \(T\). In this section we characterize which trees are eternal distance-2 domination critical. We begin with the following observation.
**Lemma 5.1**.: _If \(T=(V,E)\) is a tree and \(v\in V\) is a vertex adjacent to two leaves \(u,w\), then \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T-u)\)._
Proof.: Suppose \(T=(V,E)\) is a tree and \(v\in V\) is a vertex adjacent to two leaves \(u,w\). It is clear that \(\gamma^{\infty}_{all,2}(T-u)\leq\gamma^{\infty}_{all,2}(T)\) so it is sufficient to show \(\gamma^{\infty}_{all,2}(T-u)\geq\gamma^{\infty}_{all,2}(T)\). By definition, there exists an initial configuration \(D\) in \(T-u\) such that for any infinite sequence of attacks \(\mathcal{A}^{\prime}\) there exists a strategy for deploying the guards \(\mathcal{D}^{\prime}\) which is eternally distance-2 dominating. Given \(\mathcal{D}^{\prime}\) we define \(\mathcal{D}\) to be the same strategy in \(T\), except if \(u\) is attacked the guards respond as if \(w\) were attacked in \(T-u\), with the exception of the guard who would move to \(w\) who instead moves to \(u\). As \(\mathrm{dist}(x,u)=\mathrm{dist}(x,w)\) for all \(x\in V\setminus\{u,w\}\), then \(\mathcal{D}\) is eternally distance-2 dominating. Hence, \(\gamma^{\infty}_{all,2}(T-u)\geq\gamma^{\infty}_{all,2}(T)\) as required.
Note that Lemma 5.1 implies that if \(T\) is eternal distance-2 domination critical, then every stem of \(T\) has exactly one leaf. Using this observation we are able to show the following result regarding the structure of 2-leaves in an eternal distance-2 domination critical tree.
**Lemma 5.2**.: _If \(T=(V,E)\) is eternal distance-\(2\) domination critical, then for all \(2\)-leaves \(v\in V\), \(L[v]\) is isomorphic to \(P_{3}\) or \(P_{4}\)._
Proof.: Suppose \(T=(V,E)\) is an eternal distance-2 domination critical tree and let \(v\in V\) be a 2-leaf in \(T\). Suppose \(v\) is adjacent to multiple 1-leaves \(x_{1},x_{2}\) with respective leaves \(y_{1},y_{2}\). By Lemma 3.2, \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T-L(v))+1\). Now, for \(T-y_{2}\), we have again by Lemma 3.2 that \(\gamma^{\infty}_{all,2}(T-y_{2})=\gamma^{\infty}_{all,2}(T-L(v))+1\), since \((T-y_{2})[L(v)\setminus\{y_{2}\}]\) has diameter at least 3. Thus \(\gamma^{\infty}_{all,2}(T)=\gamma^{\infty}_{all,2}(T-y_{2})\), which contradicts \(T\) being eternal distance-2 domination critical. It follows, together with Lemma 5.1, that \(T[L[v]]\) is isomorphic to \(P_{3}\) or \(P_{4}\).
In the following pair of results, we demonstrate that when these structured 2-leaves exist in an eternal distance-2 domination critical graph, it must have been built from an eternal distance-2 domination critical graph.
**Lemma 5.3**.: _Let \(T\) be a tree, and let \(T^{\prime}\) be formed by appending a path on three vertices to a leaf of \(T\). Then \(T^{\prime}\) is eternal distance-2 domination critical if and only if \(T\) is eternal distance-2 domination critical._
Proof.: Let \(v\) be a leaf in both \(T\) and \(T^{\prime}\). By Lemma 3.2, we have \(\gamma^{\infty}_{all,2}(T^{\prime})=\gamma^{\infty}_{all,2}(T)+1\) and \(\gamma^{\infty}_{all,2}(T^{\prime}-v)=\gamma^{\infty}_{all,2}(T-v)+1\). Thus \(\gamma^{\infty}_{all,2}(T-v)<\gamma^{\infty}_{all,2}(T)\) if and only if \(\gamma^{\infty}_{all,2}(T^{\prime}-v)<\gamma^{\infty}_{all,2}(T^{\prime})\).
Now, let \(\ell\) be the leaf of the appended path, and let \(x\) be the leaf of \(T\) to which the path was appended. By Lemma 3.2, we have \(\gamma^{\infty}_{all,2}(T^{\prime})=\gamma^{\infty}_{all,2}(T)+1\) and \(\gamma^{\infty}_{all,2}(T^{\prime}-\ell)=\gamma^{\infty}_{all,2}(T-x)+1\). Thus \(\gamma^{\infty}_{all,2}(T-x)<\gamma^{\infty}_{all,2}(T)\) if and only if \(\gamma^{\infty}_{all,2}(T^{\prime}-\ell)<\gamma^{\infty}_{all,2}(T^{\prime})\). Therefore, \(T^{\prime}\) is eternal distance-2 domination critical if and only if \(T\) is eternal distance-2 domination critical.
**Lemma 5.4**.: _Let \(T\) be a tree and let \(T^{\prime}\) be formed by appending a path on two vertices and a single vertex to a leaf of \(T\). Then \(T^{\prime}\) is eternal distance-2 domination critical if and only if \(T\) is eternal distance-2 domination critical._
Proof.: Let \(T=(V,E)\) be a tree and let \(v\in V\) be a leaf in \(T\). Form \(T^{\prime}\) by appending a path on two vertices and a single vertex to \(v\). Then Lemma 3.2 implies \(\gamma^{\infty}_{all,2}(T^{\prime})=\gamma^{\infty}_{all,2}(T^{\prime}-L(v))+1= \gamma^{\infty}_{all,2}(T)+1\) as \(T^{\prime}-L(v)=T\). Let \(u\) be the leaf of \(T^{\prime}\) resulting from the path of length 2 appended to \(v\) and let \(w\) be the lone vertex appended to \(v\).
Let \(x\) be a leaf in both \(T\) and \(T^{\prime}\). By Lemma 3.2, we have \(\gamma^{\infty}_{all,2}(T^{\prime})=\gamma^{\infty}_{all,2}(T)+1\) and \(\gamma^{\infty}_{all,2}(T^{\prime}-x)=\gamma^{\infty}_{all,2}(T-x)+1\). Thus \(\gamma^{\infty}_{all,2}(T-x)<\gamma^{\infty}_{all,2}(T)\) if and only if \(\gamma^{\infty}_{all,2}(T^{\prime}-x)<\gamma^{\infty}_{all,2}(T^{\prime})\).
By Lemma 3.2, we have \(\gamma^{\infty}_{all,2}(T^{\prime})=\gamma^{\infty}_{all,2}(T)+1\) and \(\gamma^{\infty}_{all,2}(T^{\prime}-w)=\gamma^{\infty}_{all,2}(T-v)+1\). Thus \(\gamma^{\infty}_{all,2}(T-v)<\gamma^{\infty}_{all,2}(T)\) if and only if \(\gamma^{\infty}_{all,2}(T^{\prime}-w)<\gamma^{\infty}_{all,2}(T^{\prime})\).
Moreover, it is clear that \(\gamma^{\infty}_{all,2}(T^{\prime}-u)\leq\gamma^{\infty}_{all,2}(T-v)+1\), since \(T^{\prime}[L[v]\setminus\{u\}]\) has diameter 2. Recalling that \(\gamma^{\infty}_{all,2}(T^{\prime})=\gamma^{\infty}_{all,2}(T)+1\), if \(\gamma^{\infty}_{all,2}(T-v)<\gamma^{\infty}_{all,2}(T)\), then \(\gamma^{\infty}_{all,2}(T^{\prime}-u)<\gamma^{\infty}_{all,2}(T^{\prime})\). So if \(T\) is eternal distance-2 domination critical, then \(T^{\prime}\) is eternal distance-2 domination critical. Conversely, suppose \(T^{\prime}\) is eternal distance-2 domination critical but \(T\) is not. Then there exists a leaf \(z\in V\) such that \(\gamma^{\infty}_{all,2}(T-z)=\gamma^{\infty}_{all,2}(T)\). If \(z\neq v\), then we have already shown this leads to a contradiction, since \(z\) is a leaf in both \(T\) and \(T^{\prime}\).
If \(z=v\), then \(\gamma^{\infty}_{all,2}(T-v)=\gamma^{\infty}_{all,2}(T)\). By our assumption that \(T^{\prime}\) is eternal distance-2 domination critical, \(\gamma^{\infty}_{all,2}(T^{\prime}-w)=\gamma^{\infty}_{all,2}(T^{\prime})-1\), since \(T^{\prime}\) requires more guards than \(T^{\prime}-w\), while placing a guard on \(w\) and defending the rest of \(T^{\prime}\) with \(\gamma^{\infty}_{all,2}(T^{\prime}-w)\) guards is sufficient. Recall that Lemma 3.2 implies \(\gamma^{\infty}_{all,2}(T^{\prime}-w)=\gamma^{\infty}_{all,2}(T-v)+1\). Hence,
\[\gamma^{\infty}_{all,2}(T^{\prime})=\gamma^{\infty}_{all,2}(T^{\prime}-w)+1= \gamma^{\infty}_{all,2}(T-v)+2=\gamma^{\infty}_{all,2}(T)+2\]
contradicting Lemma 3.2 which implies \(\gamma_{all,2}^{\infty}(T^{\prime})=\gamma_{all,2}^{\infty}(T)+1\). Thus, if \(T^{\prime}\) is eternal distance-2 domination critical, then \(T\) is eternal distance-2 domination critical. This concludes the proof.
The previous results allow us to provide a characterization of eternal distance-2 domination critical graphs.
**Theorem 5.5**.: _Let \(\mathcal{C}\) be the family of all trees \(T\) that can be obtained from a sequence \(T_{1},\ldots,T_{j}\) of trees such that \(T_{1}\) is the single vertex tree \(K_{1}\), \(T=T_{j}\), and \(T_{i+1}\) is the 1-sum of \(P_{4}\) and \(T_{i}\) at a leaf of \(T_{i}\) and any vertex of \(P_{4}\). If \(T\) is a tree, then \(T\) is eternal distance-2 domination critical if and only if \(T\in\mathcal{C}\)._
Proof.: Suppose \(T\in\mathcal{C}\). If \(T=K_{1}\), then \(T\) is trivially eternal distance-2 domination critical. If \(T=P_{4}\), we have \(\gamma_{all,2}^{\infty}(P_{4})=2\) and \(\gamma_{all,2}^{\infty}(P_{3})=1\) by Lemma 3.3, so \(T\) is eternal distance-2 domination critical.
Suppose all trees \(S\in\mathcal{C}\) on fewer than \(n\) vertices are eternal distance-2 domination critical, and let \(T\in\mathcal{C}\) be a tree with \(n\) vertices. Then \(T\) is the 1-sum of \(P_{4}\) and some \(T^{\prime}\in\mathcal{C}\) at a leaf of \(T^{\prime}\). It follows by Lemma 5.3 or Lemma 5.4 that \(T\) is eternal distance-2 domination critical.
Conversely, suppose there exists a tree \(X\) such that \(X\notin\mathcal{C}\) but \(X\) is eternal distance-2 domination critical. Let \(Y\) be a minimal such tree with respect to induced subgraph. If the diameter of \(Y\) is at most 2, then by Lemma 3.3, \(\gamma_{all,2}^{\infty}(Y)=1\); since deleting a leaf of \(K_{2}\) or of a star leaves the eternal distance-2 domination number at 1, criticality forces \(Y=K_{1}\), and \(Y\in\mathcal{C}\), a contradiction. If the diameter of \(Y\) is 3, then Lemma 5.1 implies \(Y=P_{4}\), so \(Y\in\mathcal{C}\), a contradiction. Hence, we may assume \(Y\) has diameter at least 4. Therefore, by Lemma 3.1, \(Y\) has a 2-leaf \(v\), and by Lemma 5.2, \(L[v]\) is isomorphic to \(P_{3}\) or \(P_{4}\). It follows that \(Y\) is the 1-sum of \(P_{4}\) with some tree \(Z\). By Lemma 5.3 and Lemma 5.4, \(Z\) is eternal distance-2 domination critical, so by minimality of \(Y\), \(Z\in\mathcal{C}\). Hence, by construction, \(Y\in\mathcal{C}\), which is the contradiction completing the proof.
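For illustration, the construction in Theorem 5.5 can be carried out mechanically. The following sketch (in Python, with trees stored as adjacency dictionaries; the helper names and random choices are illustrative and not part of the proof) generates a member of \(\mathcal{C}\) by repeatedly forming the 1-sum of \(P_{4}\) with the current tree at one of its leaves.

```python
import random

def one_sum_with_P4(adj, leaf):
    """1-sum of P4 with the current tree: a new P4 is created and one of its
    vertices (chosen arbitrarily) is identified with the given leaf."""
    base = max(adj) + 1
    p4 = [base, base + 1, base + 2, base + 3]            # new path a-b-c-d
    for a, b in zip(p4, p4[1:]):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    glue = random.choice(p4)                              # any vertex of P4 is allowed
    for w in adj.pop(glue):                               # identify `glue` with `leaf`
        adj[w].discard(glue)
        adj[w].add(leaf)
        adj[leaf].add(w)
    return adj

def random_member_of_C(steps):
    adj = {0: set()}                                      # T1 = K1
    for _ in range(steps):
        # for K1 the single vertex plays the role of the leaf in the first 1-sum
        leaves = [v for v, nb in adj.items() if len(nb) <= 1]
        adj = one_sum_with_P4(adj, random.choice(leaves))
    return adj

T = random_member_of_C(4)
print(len(T))   # each 1-sum adds exactly three vertices, so 1 + 3*steps = 13
```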
## 6 Trees with \(\gamma_{all,2}^{\infty}=\gamma_{2}\)
Recall the family \(\mathbb{T}\) introduced for Theorem 2.2. Perhaps surprisingly, we will show that \(\mathbb{T}\) is also the class of trees with \(\gamma_{2}(T)=\gamma_{all,2}^{\infty}(T)\). That is, there is no tree \(T\) such that \(\gamma_{2}(T)=\gamma_{all,2}^{\infty}(T)<\gamma(T)\). Equivalently, \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\) if and only if \(\gamma(T)=\gamma_{2}(T)\).
**Theorem 6.1**.: _Let \(T\) be a tree. Then \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\) if and only if \(T\in\mathbb{T}\)._
Proof.: If \(T\in\mathbb{T}\), then Lemma 2.2 implies \(\gamma(T)=\gamma_{2}(T)\). But \(\gamma_{2}(T)\leq\gamma_{all,2}^{\infty}(T)\) trivially while \(\gamma_{all,2}^{\infty}(T)\leq\gamma(T)\) by Lemma 4.1. Hence, \(\gamma(T)=\gamma_{2}(T)\) implies \(\gamma_{2}(T)=\gamma_{all,2}^{\infty}(T)\).
Let \(T\) be a tree with \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\). Let \(v_{0},v_{1},\ldots,v_{k}\) be a longest path in \(T\). If \(k\leq 2\), then \(T\) is \(P_{1}\) or a star \(K_{1,m}\) for some non-negative integer \(m\), and clearly \(T\) is in \(\mathbb{T}\).
If \(k\in\{3,4\}\), then \(\gamma_{2}(T)=1\), but \(\gamma_{all,2}^{\infty}(T)>1\). Hence, we may assume \(k\geq 5\). We proceed by induction on the number \(n(T)\) of vertices of a tree \(T\) with \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\). It is easy to check that there is no tree on fewer than 6 vertices with \(k\geq 5\) and \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\). Suppose then that \(n(T)\geq 6\). If \(n(T)=6\), then \(T\cong P_{6}\) is the only such tree, and \(P_{6}\in\mathbb{T}\) by applying operation \(\mathbb{T}_{3}\) to \(P_{2}\). Now let \(T\) be a tree with \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\) and \(n(T)\geq 7\), and assume that each tree \(T^{\prime}\) with \(n(T^{\prime})<n(T)\), \(k\geq 5\), and \(\gamma_{all,2}^{\infty}(T^{\prime})=\gamma_{2}(T^{\prime})\) is in \(\mathbb{T}\).
Suppose \(T\) contains a stem \(v\) with leaves \(x\) and \(y\). By Lemma 5.1, we have \(\gamma_{all,2}^{\infty}(T)=\gamma_{all,2}^{\infty}(T-x)\). It is clear that any distance-2 dominating set of \(T-x\) also distance-2 dominates \(T\), since the vertex that dominates \(y\) is within distance two of both \(x\) and \(y\). Thus \(\gamma_{2}(T)=\gamma_{2}(T-x)\). Therefore, \(\gamma_{all,2}^{\infty}(T-x)=\gamma_{2}(T-x)\), so \(T-x\in\mathbb{T}\) by the induction hypothesis. But then \(T\in\mathbb{T}\) by Operation \(\mathbb{T}_{1}\), as required. Thus, we may assume every stem of \(T\) has exactly one leaf.
As \(v_{0}\) must be a leaf, this implies \(\deg(v_{1})=2\). Suppose \(\deg(v_{2})>2\). Then \(v_{2}\) is adjacent to a stem or leaf \(x\). Let \(D\) be the configuration of a minimum set of guards to eternally distance-2 dominate \(T\) after an attack on \(x\). Then in order to defend against a possible attack at \(v_{0}\), at least one of \(v_{0},v_{1},v_{2}\in D\). Let \(D^{\prime}=(D-L[v_{2}])\cup\{v_{2}\}\). Then \(|D^{\prime}|<|D|\) and \(D^{\prime}\) distance 2-dominates \(T\), contradicting that \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\). Thus \(\deg(v_{2})=2\).
Suppose \(\deg(v_{3})>2\). If \(v_{3}\) is a stem with leaf \(x\), then let \(D\) be the configuration of a minimum set of guards to eternally distance-2 dominate \(T\) after an attack on \(x\). Then in order to defend against a possible attack at \(v_{0}\), at least one of \(v_{0},v_{1},v_{2}\in D\). Let \(D^{\prime}=(D-L[v_{2}]-\{x\})\cup\{v_{2}\}\). Then \(|D^{\prime}|<|D|\) and \(D^{\prime}\) distance-2 dominates \(T\), contradicting that \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\). Thus, \(v_{3}\) is not a stem of \(T\). Let
\(T^{\prime}=T-L[v_{2}]\). Since \(\deg_{T}(v_{3})>2\), \(v_{3}\) is not a leaf in \(T^{\prime}\), and since \(k\geq 5\), \(v_{3}\) is not a stem in \(T^{\prime}\). By Lemma 3.2, we have \(\gamma_{all,2}^{\infty}(T)=\gamma_{all,2}^{\infty}(T^{\prime})+1\), and \(\gamma_{2}(T)\leq\gamma_{2}(T^{\prime})+1\). Therefore, Lemma 4.1 implies,
\[\gamma_{2}(T)\leq\gamma_{2}(T^{\prime})+1\leq\gamma_{all,2}^{\infty}(T^{\prime })+1=\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T).\]
Thus, \(\gamma_{2}(T^{\prime})=\gamma_{all,2}^{\infty}(T^{\prime})\) so by the induction hypothesis, \(T^{\prime}\in\mathbb{T}\). Since \(T\) is obtained from \(T^{\prime}\) by operation \(\mathbb{T}_{2}\), we conclude \(T\in\mathbb{T}\).
Thus, we assume \(\deg(v_{3})=2\). Let \(D\) be a configuration of a minimum set of guards to eternally distance-2 dominate \(T\) after an attack on \(v_{3}\). Then in order to defend against a possible attack at \(v_{0}\), at least one of \(v_{0},v_{1},v_{2}\in D\). Let \(D^{\prime}=(D-L[v_{3}])\cup\{v_{2}\}\). Since \(|D^{\prime}|<|D|\), we have that \(D^{\prime}\) is not a distance-2 dominating set of \(T\). Thus \(v_{3}\in D\) and \(v_{4}\) has a neighbour, say \(u\), that is not distance-2 dominated by \(D^{\prime}\).
Consider the components of \(T-u\). Each component can be eternally distance-2 dominated by its vertices in \(D\), since no vertex of \(D\) except \(v_{3}\) is within distance 2 of \(u\). If \(\deg(u)>1\), then let \(z\neq v_{4}\) be a neighbour of \(u\), and let \(\hat{D}\) be the configuration of the guards after an attack on \(z\), starting from the configuration \(D\). It is clear that the guard which defends the attack at \(z\) must be in a component of \(T-u\) distinct from the one containing \(v_{4}\), as otherwise \(v_{4}\in D\), which would imply that \(u\) is within distance 2 of a vertex in \(D^{\prime}\). Let \(\hat{D}_{u}=\hat{D}\).
Then for all non-leaf neighbours \(u_{1},\ldots,u_{l}\) of \(v_{4}\) that are not distance-2 dominated by \(D^{\prime}\), we can define sets \(\hat{D}_{u_{i}}\) as above. Furthermore, as each attack on a vertex \(z_{i}=z\) (given \(u_{i}=u\)) calls for a distinct set of guards \(g_{i}\) to defend against it, there is an eternal distance-2 dominating set \(\mathcal{D}\) given by all guards \(g_{i}\) defending against attacks at \(z_{i}\), for all \(1\leq i\leq l\), while the rest of the guards in \(D\) do not move. Then \(\mathcal{D}^{\prime}=(\mathcal{D}-L[v_{3}])\cup\{v_{2}\}\) is a distance-2 dominating set of \(T\), since all vertices of the 2-neighbourhood of \(v_{3}\) are dominated, and \(|\mathcal{D}^{\prime}|<|\mathcal{D}|\), contradicting that \(\gamma_{all,2}^{\infty}(T)=\gamma_{2}(T)\). Hence every neighbour of \(v_{4}\) that is not distance-2 dominated by \(D^{\prime}\) is a leaf in \(T\), and therefore \(v_{4}\) is a stem. Let \(T^{\prime}=T-L[v_{3}]\). We can verify that \(\gamma_{all,2}^{\infty}(T^{\prime})+1=\gamma_{all,2}^{\infty}(T)\) and \(\gamma_{2}(T^{\prime})+1=\gamma_{2}(T)\). Thus, \(\gamma_{all,2}^{\infty}(T^{\prime})=\gamma_{2}(T^{\prime})\), so by the induction hypothesis, \(T^{\prime}\in\mathbb{T}\). Since \(T\) is obtained from \(T^{\prime}\) by operation \(\mathbb{T}_{3}\), we conclude \(T\in\mathbb{T}\).
## 7 Extremal Families of Trees for Eternal Distance-\(k\) Domination
Motivated by a question posed in [6], we explore the extreme values the eternal distance-\(k\) domination number can take in trees. We begin by giving a general upper bound for the eternal distance-\(k\) domination number of trees. As the eternal distance-\(k\) domination number of a graph is upper bounded by the eternal distance-\(k\) domination number of its spanning subgraphs, this implies a general upper bound for connected graphs. Note this generalises a result of Chambers, Kinnersley, and Prince [4]. Beyond this, we construct families of trees which meet this upper bound, as well as a family of trees whose eternal distance-\(k\) domination number equals \(\frac{n}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\), which is the lowest possible value the eternal distance-\(k\) domination number can take.
**Theorem 7.1**.: _If \(G\) is a connected graph of order \(n\), then \(\frac{n}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\leq\gamma_{all,k}^{\infty}(G) \leq\left\lceil\frac{n}{k+1}\right\rceil.\)_
Proof.: The lower bound is trivial as each vertex can distance-\(k\) dominate at most \(1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}\) vertices. So \(\gamma_{all,k}^{\infty}(G)\geq\gamma_{k}(G)\geq\frac{n}{1+\sum_{i=1}^{k} \Delta(\Delta-1)^{i-1}}\). It remains to be shown that \(\gamma_{all,k}^{\infty}(G)\leq\left\lceil\frac{n}{k+1}\right\rceil\).
As deleting edges will never help the guards defend the graph it is sufficient to prove the result for trees. Suppose then that \(G=T=(V,E)\) is a tree. When the diameter of \(T\) is strictly less than \(2k\) the result follows by Lemma 3.3. Otherwise, the diameter of \(T\) is at least \(2k\), in which case Lemma 3.1 implies that there exists a \(k\)-leaf \(v\in V\).
Suppose \(T\) is a smallest counterexample. Then the diameter of \(T\) is at least \(2k\). Let \(v\in V\) be a \(k\)-leaf in \(T\) and let \(T^{\prime}=T-L[v]\) when the diameter of \(T[L[v]]\) is \(k\), and \(T^{\prime}=T-L(v)\) otherwise. By the definition of a \(k\)-leaf this implies \(|V(T^{\prime})|\leq n-(k+1)\). By the minimality of \(T\), \(T^{\prime}\) is not a counterexample, hence,
\[\gamma_{all,k}^{\infty}(T^{\prime})\leq\left\lceil\frac{n-(k+1)}{k+1}\right\rceil =\left\lceil\frac{n}{k+1}\right\rceil-1.\]
Let \(\gamma^{\infty}_{all,k}(T^{\prime})\) guards defend the subgraph \(T^{\prime}\) of \(T\) and place a single guard on \(v\). Note that this will be at most \(\left\lceil\frac{n}{k+1}\right\rceil\) guards. If the diameter of \(T[L[v]]\) is \(k\), then the guard at \(v\) can protect \(L[v]\) indefinitely while the \(\gamma^{\infty}_{all,k}(T^{\prime})\) guards protect \(T^{\prime}\) indefinitely. Otherwise, the diameter of \(T[L[v]]\) is at least \(k+1\). In this case, let the guards proceed by the same strategy described in case 2 of the proof of Lemma 3.2. This strategy will protect \(T\) indefinitely.
Thus,
\[\gamma^{\infty}_{all,k}(T)\leq\gamma^{\infty}_{all,k}(T^{\prime})+1\leq\left \lceil\frac{n}{k+1}\right\rceil,\]
which concludes the proof.
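As a quick numerical sanity check of these bounds (with illustrative values \(k=2\) and \(\Delta=3\); the numbers are not taken from any result in this paper), both sides can be evaluated directly.

```python
from math import ceil

def theorem_7_1_bounds(n, k, delta):
    ball = 1 + sum(delta * (delta - 1) ** (i - 1) for i in range(1, k + 1))
    return n / ball, ceil(n / (k + 1))      # (lower bound, upper bound)

for n in (10, 30, 100):
    lo, hi = theorem_7_1_bounds(n, k=2, delta=3)
    print(n, round(lo, 2), hi)              # e.g. n=30 gives lower bound 3.0 and upper bound 10
```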
Note that [6] showed that paths are a family of graphs that meet the upper bound of Theorem 7.1. The reason is that any distance-\(k\) dominating set must keep a guard within distance \(k\) of each leaf; to protect against attacks on the leaves, these guards can never leave the distance-\(k\) neighbourhood of the corresponding leaf. Hence, more guards are required to protect vertices at distance slightly more than \(k\) from a leaf. By induction this forces paths to require the maximum possible number of guards.
Using similar ideas, we construct a large family \(\mathcal{T}_{M,k}\) of trees which also have eternal distance-\(k\) domination number \(\left\lceil\frac{n}{k+1}\right\rceil\). We define \(\mathcal{T}_{M,k}\) as follows. Let \(T^{\prime}\) be any tree and construct \(T\in\mathcal{T}_{M,k}\) by connecting a path \(P_{v}=v_{1}\ldots v_{k}\) to each vertex \(v\in V(T^{\prime})\). That is, for each \(v\), add an edge \((v,v_{1})\) so that \(P_{v}\) becomes part of \(T\).
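For concreteness, the construction can be sketched as follows (Python, adjacency dictionaries; the base tree chosen below is arbitrary and purely illustrative).

```python
def build_T_Mk(base_adj, k):
    """Append a path on k vertices to every vertex of the base tree T',
    as in the definition of T_{M,k}; returns the adjacency dictionary of T."""
    adj = {v: set(nb) for v, nb in base_adj.items()}
    nxt = max(adj) + 1
    for v in list(base_adj):
        prev = v
        for _ in range(k):            # attach v - v_1 - ... - v_k
            adj[nxt] = {prev}
            adj[prev].add(nxt)
            prev, nxt = nxt, nxt + 1
    return adj

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}   # illustrative base tree T' = K_{1,3}
T = build_T_Mk(star, k=2)
print(len(T))                                   # (k+1)*|V(T')| = 3*4 = 12 vertices
```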
**Theorem 7.2**.: _If \(T\in\mathcal{T}_{M,k}\), then \(\gamma^{\infty}_{all,k}(T)=\left\lceil\frac{n}{k+1}\right\rceil\)._
Proof.: Let \(T=(V,E)\in\mathcal{T}_{M,k}\); then \(T\) is given by appending a path on \(k\) vertices to each vertex of some tree \(T^{\prime}\). Notice that \(|V|=n=(k+1)|V(T^{\prime})|\). We aim to show that \(\gamma^{\infty}_{all,k}(T)=|V(T^{\prime})|\). By Theorem 7.1, \(\gamma^{\infty}_{all,k}(T)\leq|V(T^{\prime})|\), so all that remains to be shown is that \(\gamma^{\infty}_{all,k}(T)\geq|V(T^{\prime})|\).
Note that in each distance-\(k\) dominating set of \(T\), for each path \(P_{v}\) appended to a vertex \(v\in V(T^{\prime})\) there must be at least one guard in \(P_{v}\cup\{v\}\) in order to distance-\(k\) dominate the leaf at the end of \(P_{v}\). Since for distinct \(u,v\in V(T^{\prime})\) we have \((P_{v}\cup\{v\})\cap(P_{u}\cup\{u\})=\emptyset\), this implies \(\gamma_{k}(T)\geq|V(T^{\prime})|\). Hence, \(\gamma^{\infty}_{all,k}(T)\geq\gamma_{k}(T)\geq|V(T^{\prime})|\) as required. This completes the proof.
Now we construct a family of trees \(\mathcal{T}_{m,k,\Delta}\) which all have eternal distance-\(k\) domination number exactly \(\frac{n}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\) for odd \(k\). Let \(T_{k,\Delta}\) be the complete \((\Delta-1)\)-ary tree of depth \(\left\lfloor\frac{k}{2}\right\rfloor\), except that the root vertex has \(\Delta\) children rather than \(\Delta-1\). Let \(T_{k,\Delta}\in\mathcal{T}_{m,k,\Delta}\); we complete our definition of \(\mathcal{T}_{m,k,\Delta}\) as follows: for any \(T_{1},T_{2}\in\mathcal{T}_{m,k,\Delta}\), let \(T_{3}\) be any tree formed by adding an edge between a vertex of \(T_{1}\) and a vertex of \(T_{2}\), each of degree strictly less than \(\Delta\); then \(T_{3}\in\mathcal{T}_{m,k,\Delta}\). Notice that, like \(\mathcal{T}_{M,k}\), the family \(\mathcal{T}_{m,k,\Delta}\) contains trees with arbitrarily many vertices of high degree.
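A sketch of the base block \(T_{k,\Delta}\) follows (Python, adjacency dictionaries; the joining of two blocks by an edge between vertices of degree less than \(\Delta\) is not shown and proceeds exactly as in the definition above).

```python
def build_T_k_delta(k, delta):
    """Complete (delta-1)-ary tree of depth floor(k/2), except that the root has delta children."""
    depth = k // 2
    adj = {0: set()}
    frontier, nxt = [0], 1
    for level in range(depth):
        new_frontier = []
        for v in frontier:
            for _ in range(delta if level == 0 else delta - 1):
                adj[nxt] = {v}
                adj[v].add(nxt)
                new_frontier.append(nxt)
                nxt += 1
        frontier = new_frontier
    return adj

B = build_T_k_delta(k=3, delta=3)
print(len(B), max(len(nb) for nb in B.values()))   # 4 vertices, maximum degree 3 = delta
```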
**Theorem 7.3**.: _If \(T\in\mathcal{T}_{m,k,\Delta}\), then \(\gamma^{\infty}_{all,k}(T)=\frac{n}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\)._
Proof.: Let \(T=(V,E)\in\mathcal{T}_{m,k,\Delta}\). Theorem 7.1 implies that \(\gamma^{\infty}_{all,k}(T)\geq\frac{n}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\) so it is sufficient to show \(\gamma^{\infty}_{all,k}(T)\leq\frac{n}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\). Note that if \(T=T_{k,\Delta}\), then the result follows from Lemma 3.3. Suppose then that \(T\neq T_{k,\Delta}\) is a smallest counterexample.
By the definition of \(\mathcal{T}_{m,k,\Delta}\), there exists an edge \(e\in E\) such that \(T-e\) is a graph with two connected components \(T_{1}\) and \(T_{2}\), each of which is in \(\mathcal{T}_{m,k,\Delta}\). As \(T\) is a smallest counterexample, \(T_{1}\) and \(T_{2}\) are not counterexamples. Thus, if \(|V(T_{1})|=a\) and \(|V(T_{2})|=b\), we know \(n=a+b\) and \(\gamma^{\infty}_{all,k}(T_{1})=\frac{a}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\) and \(\gamma^{\infty}_{all,k}(T_{2})=\frac{b}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\).
It is not hard to see that, as \(V=V(T_{1})\cup V(T_{2})\) and \(T_{1}\), \(T_{2}\) are subgraphs of \(T\),
\[\gamma^{\infty}_{all,k}(T)\leq\gamma^{\infty}_{all,k}(T_{1})+ \gamma^{\infty}_{all,k}(T_{2})=\frac{a}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1} }+\frac{b}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\] \[=\frac{n}{1+\sum_{i=1}^{k}\Delta(\Delta-1)^{i-1}}\]
which completes the proof.
## 8 Conclusion
We have resolved a number of questions regarding the eternal distance-2 domination number of trees, in particular several questions raised in [6]. We do this by giving a polynomial time algorithm (Theorem 3.5) for calculating the eternal distance-2 domination number of a tree; then, using the reductions involved in this algorithm, we characterize which trees have domination number equal to their eternal distance-2 domination number (Theorem 4.2). Beyond this we characterize which trees are eternal distance-2 domination critical (Theorem 5.5) and which trees have eternal distance-2 domination number equal to their distance-2 domination number (Theorem 6.1). These results are similar to work by Klostermeyer and MacGillivray [13], who proved several characterizations for trees in the context of the eternal domination number.
Additionally, we generalize upper bounds on the eternal domination number given by Chambers, Kinnersley, and Prince [4] to the eternal distance-\(k\) domination number. We also give a lower bound for the eternal distance-\(k\) domination number in terms of order and maximum degree. To demonstrate that both of these bounds are tight, we construct infinite families of trees that meet each bound. We conclude the paper by listing several open problems and conjectures.
The first of these conjectures arises from our observation that eternal distance-\(k\) domination seems to have some fundamental differences in the \(k\leq 2\) and \(k>2\) cases. This is highlighted by the effectiveness of Lemma 3.2 for \(k\leq 2\) and its subsequent failure for \(k>2\). Despite this, we conjecture the following.
**Conjecture 8.1**.: _For all integers \(k>2\), \(\gamma^{\infty}_{all,k}(T)\) can be determined for any tree \(T\) in time bounded by a polynomial \(p(n)\) that is independent of \(k\)._
Next we observe that Theorem 6.1 combined with work in [17] implies that if \(\gamma_{2}(T)=\gamma^{\infty}_{all,2}(T)\), then \(\gamma^{\infty}_{all,2}(T)=\gamma(T)\). We conjecture that a similar result holds for \(k>2\). Recall that \(\gamma_{k}\leq\gamma^{\infty}_{all,k}\leq\gamma_{\lfloor\frac{k}{2}\rfloor}\) for all graphs.
**Conjecture 8.2**.: _For all \(k>2\) and trees \(T\), if \(\gamma_{k}(T)=\gamma^{\infty}_{all,k}(T)\), then \(\gamma^{\infty}_{all,k}(T)=\gamma_{\lfloor\frac{k}{2}\rfloor}(T)\)._
The following are several questions which merit study in future work as they are natural to consider yet outside the scope of this paper.
**Question 8.3**.: _What is the maximum \(m\), as a function of \(n\), such that there exists a graph on \(n\) vertices and \(m\) edges with eternal distance-\(k\) domination number \(\left\lceil\frac{n}{k+1}\right\rceil\)?_
**Question 8.4**.: _Is \(\gamma^{\infty}_{all,k}(G)\leq\frac{n}{t}\) for some \(t>k+1\) for all graphs \(G\) with treewidth at least \(N\), for some sufficiently large constant \(N\)?_
**Question 8.5**.: _Does there exist a family of graphs \(\mathcal{G}\) where determining \(\gamma^{\infty}_{all,k}(G)\) is polynomial time but determining \(\gamma^{\infty}_{all,t}(G)\) is not polynomial time for \(G\in\mathcal{G}\) and some \(k\neq t\)? If so, which families exhibit this dichotomy?_
## Acknowledgements
We would like to acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Canadian Graduate Scholarship - Master's program.
|
2309.05205 | A Review of the Applications of Quantum Machine Learning in Optical
Communication Systems | In the context of optical signal processing, quantum and quantum-inspired
machine learning algorithms have massive potential for deployment. One of the
applications is in error correction protocols for the received noisy signals.
In some scenarios, non-linear and unknown errors can lead to noise that
bypasses linear error correction protocols that optical receivers generally
implement. In those cases, machine learning techniques are used to recover the
transmitted signal from the received signal through various estimation
procedures. Since quantum machine learning algorithms promise advantage over
classical algorithms, we expect that optical signal processing can benefit from
these advantages. In this review, we survey several proposed quantum and
quantum-inspired machine learning algorithms and their applicability with
current technology to optical signal processing. | Ark Modi, Alonso Viladomat Jasso, Roberto Ferrara, Christian Deppe, Janis Noetzel, Fred Fung, Maximilian Schaedler | 2023-09-11T02:50:13Z | http://arxiv.org/abs/2309.05205v2 | # A Review of the Applications of Quantum Machine Learning in Optical Communication Systems
###### Abstract
In the context of optical signal processing, quantum and quantum-inspired machine learning algorithms have massive potential for deployment. One of the applications is in error correction protocols for the received noisy signals. In some scenarios, non-linear and unknown errors can lead to noise that bypasses linear error correction protocols that optical receivers generally implement. In those cases, machine learning techniques are used to recover the transmitted signal from the received signal through various estimation procedures. Since quantum machine learning algorithms promise advantage over classical algorithms, we expect that optical signal processing can benefit from these advantages. In this review, we survey several proposed quantum and quantum-inspired machine learning algorithms and their applicability with current technology to optical signal processing.
Quantum Machine Learning, Quantum Algorithms, Quantum Computing, 6G Communication, Quantum-Classical Hybrid Algorithms, Optical Communication
## I Summary
Artificial intelligence has made significant progress thanks to modern large-scale machine learning (henceforth referred to as ML), leading to the deployment of weakly intelligent cognitive systems in various aspects of daily and professional life. ML involves adjusting software agent parameters through training processes, allowing them to develop problem-solving skills. This progress relies on analyzing large amounts of task-specific training data to learn desired input-output behaviours. The success of modern ML is largely attributed to working with domain-agnostic models and training algorithms, with deep learning being especially successful. Deep learning utilizes artificial neural networks with billions of adjustable parameters, making them flexible and effective in various computational intelligence tasks. However, training deep neural networks requires vast amounts of representative data and considerable computational resources. To train state-of-the-art systems effectively, like OpenAI's GPT-3, dedicated compute clusters and high-performance computing hardware are necessary due to the immense computational demands. As a result, the practical feasibility and success of current ML applications are highly dependent on access to such advanced computing resources.
Researchers are increasingly exploring quantum computing as a potential solution to the computational demands of modern ML systems. A "quantum advantage" can manifest in various ways, primarily affecting time complexity or execution time, and accuracy. Quantum computing has made significant strides and promises faster computations in scientific and industrial applications. A number of works such as [1, 2, 3, 4, 5, 6, 7] claim to achieve a time advantage while works such as [8, 9, 10, 11, 12] show accuracy and convergence gains. Quantum computers operate on qubits, which exist in superposition and can carry more information than classical bits. Computation with qubits is probabilistic, and measurements cause decoherence, collapsing the qubit to a specific state. Quantum bits can be entangled, meaning the state of one qubit affects the state of others. We see (so far) that "quantum" advantage arises from these two key properties of quantum systems - entanglement and sampling. Sampling advantage is noticeable in linear algebraic quantum machine learning (henceforth contracted to QML) procedures; however, in the Noisy Intermediate-Scale Quantum (NISQ) era, classical replication of this advantage is possible with only linear slowdowns [13]. The paradigm changes when Quantum Random Access Memory (QRAM) becomes available since it allows for the amortization of state preparation costs over multiple iterations, making QML more efficient. On the other hand, entanglement, a quintessential quantum phenomenon, endows quantum systems with two primary advantages: (a) complex correlations: quantum entanglement enables the storage of intricate correlations within the data, facilitating the efficient representation of complex relationships in quantum machine learning models. (b) quantum parallelism: entanglement allows for quantum parallelism, enabling the simultaneous processing of multiple data points or states, substantially accelerating some computations. Entanglement allows quantum computers to work with exponentially larger search spaces, making them particularly useful for
combinatorial optimization in artificial intelligence and certain ML techniques.
Adiabatic quantum computers, like those produced by D-Wave, are designed for solving combinatorial optimization problems known as QUBOs, which have applications in ML tasks like data clustering and support vector machine training. Adiabatic quantum computing formulates problems as energy minimization tasks and uses Hamiltonian operators to find their ground states, which represent solutions to the problems. This approach utilizes the adiabatic theorem, allowing the system to transition from a known initial Hamiltonian to the problem Hamiltonian, finding the ground state in the process. This method also benefits from quantum tunnelling, enabling it to overcome local minima and potentially solve problems exponentially faster than classical optimization. Adiabatic quantum computing shares similarities with the classical paradigm of Hopfield neural networks, making it a viable quantum analogue of this classical approach.
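As a toy illustration of the QUBO formulation mentioned above, consider a hand-made three-variable instance solved by exhaustive enumeration rather than by an annealer (the matrix below is arbitrary and only meant to show the form of the problem).

```python
import itertools
import numpy as np

# Minimize x^T Q x over binary vectors x in {0,1}^3 (upper-triangular QUBO matrix).
Q = np.array([[-1.0, 2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best, float(np.array(best) @ Q @ np.array(best)))   # (1, 0, 1) with energy -2.0
```

An adiabatic device would instead encode \(Q\) in a problem Hamiltonian and let the annealing schedule drive the system toward the corresponding ground state.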
On the other hand, quantum gate computing manipulates qubits using quantum mechanical operators, resembling classical digital computing in terms of gates. Quantum circuits, composed of quantum gates, perform computations on qubits to achieve specific input/output behaviours. Designing effective quantum circuits is a challenging task, and classical ML is increasingly used as a tool for quantum circuit design. Quantum gate computing is attractive for ML due to its mathematical foundation in complex linear algebra and its ability to handle exponentially larger state spaces compared to classical computing. This makes it appealing for tasks involving large high-dimensional data vectors, as it is expected to offer quantum speedup and potentially solve intractable problems.
A QML "mini-revolution" has occurred recently, with numerous scientific reports proposing quantum circuits for various ML tasks, such as linear algebra routines, regression, and classification. Present-day approaches for QML involve variational quantum computing algorithms or hybrid quantum-classical methods. These methods use parameterized quantum circuits with tunable quantum gates and rely on classical optimization techniques to adjust the gate parameters for the desired computation.
Variational and hybrid quantum-classical algorithms are appealing because they reduce the quantum computing resources needed for successful QML. Researchers also consider parameterized quantum circuits as a quantum analogue of classical deep neural networks, but there are important differences. Quantum gates implement unitary operators, not non-linear functions like in classical neural networks, and reading out the internal states of a quantum circuit destroys their quantum coherence. As a result, variational or hybrid quantum-classical algorithms are currently the primary approach for optimizing parameterized quantum circuits in QML.
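A deliberately library-free caricature of this hybrid loop is sketched below: a single-qubit "circuit" with one tunable rotation angle whose expectation value is minimized by a classical parameter-shift update (purely illustrative; real variational circuits involve many entangling gates and noisy, sampled estimates).

```python
import math

def expectation_z(theta):
    # <Z> after applying RY(theta) to |0>: the state is (cos(theta/2), sin(theta/2)), so <Z> = cos(theta)
    return math.cos(theta)

theta, lr = 0.3, 0.4
for _ in range(30):
    # parameter-shift rule: d<Z>/dtheta = (<Z>(theta + pi/2) - <Z>(theta - pi/2)) / 2
    grad = 0.5 * (expectation_z(theta + math.pi / 2) - expectation_z(theta - math.pi / 2))
    theta -= lr * grad                     # the classical optimizer updates the circuit parameter
print(round(theta, 3), round(expectation_z(theta), 3))   # approaches theta = pi with <Z> = -1
```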
The growing literature on QML shows promising potential for mainstream applications. However, it is essential to temper overly optimistic expectations, especially given the limitations of current NISQ computing.
Quantum algorithm design abstracts away the limitations of physical qubits present in current NISQ devices. These devices have less than a hundred qubits, limited coherence times, and low fault tolerance due to noise and fluctuations. Creating and maintaining quantum states reliably over longer periods is challenging but expected to improve with technological advancements. Promising candidates to look out for include superconducting qubits and topological qubits. Quantum error correction mechanisms, similar to those used in classical computing, are crucial for fault tolerance.
Present-day quantum computers face difficulties in handling large quantum circuits due to error-prone quantum gate operations. Adiabatic quantum computers offer more reliable manipulation of larger qubit systems, but they are limited to specific energy minimization problems and lack the universality of quantum gate computers. Despite theoretical equivalence, emulating quantum circuits on adiabatic quantum computers requires unrealized qubit connectivity structures.
There are several practical limitations and challenges that need to be considered when applying quantum computing to ML tasks. They can be summarised as follows:
1. _Encoding and decoding data_: Quantum algorithms require careful consideration of how classical data is encoded into quantum states and decoded back to classical representations. The effort for preparing quantum states and reading them back into classical memory greatly impacts the quantum advantage, especially if it becomes exponential.
2. _Bit-level computing_: Present-day quantum computing is primarily focused on bit-level computing, lacking abstract data structures and control structures found in classical programming. This means that certain ML algorithms relying on these constructs are not realizable on current quantum computers.
3. _Quantum compilers and APIs_: Efforts are being made to develop quantum compilers and high-level application programming interfaces (APIs) for quantum computing. However, these tools are still in their early stages, and users need to think at the linear algebraic level of quantum computing.
4. _Simulated quantum processors_: Some APIs allow for efficient digital simulations of quantum information processing, but these simulations are based on universal quantum processors. Algorithms that work on simulated quantum computers may not necessarily work on existing physical quantum computers.
5. _Probabilistic nature_: Quantum computations involving measurements are inherently probabilistic, requiring repeated runs to obtain results. Any outcome needs to be interpreted in terms of expectations rather than deterministic outcomes.
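The probabilistic nature noted in item 5 can be made concrete with a minimal simulation (the target probability below is arbitrary): the statistical error of an estimated expectation value only shrinks as \(1/\sqrt{N}\) with the number of repeated runs \(N\).

```python
import random

p_true = 0.7                                    # hypothetical probability of measuring |1>
for shots in (100, 10_000, 1_000_000):
    ones = sum(random.random() < p_true for _ in range(shots))
    stderr = (p_true * (1 - p_true) / shots) ** 0.5
    print(shots, round(ones / shots, 4), "expected error ~", round(stderr, 4))
```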
Taking these limitations into account is crucial before making claims about the superiority of quantum algorithms in ML. Practical implementation challenges and the current state of quantum computing technology need careful consideration.
QML is a relatively new field, and its best practices and standards are still evolving. Similar to the early days of classical ML, QML is currently facing challenges related to verifiability and reproducibility. In classical ML's early stages, practical results were often reported without disclosing implementation details, data collection or processing protocols, and experimental procedures, leading to issues with the validity of claimed capabilities.
Presently, the field of QML often exhibits similar omissions in reporting crucial details of practical results. There is a lack of transparency in the scientific literature, making it difficult to evaluate rigorously the methods and reproducibility of reported outcomes. While this might be somewhat acceptable for a nascent field, it is essential to consider that the performance of current QML methods may not scale or generalize well to larger or different application settings.
To establish trust and credibility in QML, it will be vital for researchers to adopt practices similar to those in classical ML, providing code, data, and experimental protocols in their publications. By doing so, the field can progress towards ensuring the reliability and reproducibility of reported results and accelerate its growth.
Despite the caveats and limitations of QML, recent technological progress justifies serious engagement with the topic. While quantum computers and algorithms are not yet mature enough to impact ML practically, ongoing development and substantial investments suggest rapid improvement is likely. As the underlying technology advances, viable solutions may emerge, leading to unexpected developments and disruptions.
However, potential risks related to QML have not received as much attention as their benefits. Ethics, reliability, trustworthiness, and safety have been recognized as important topics in classical ML, but similar scrutiny is yet to be applied to QML. As QML may significantly impact artificial cognitive systems, it is crucial to assess potential security issues. Further studies investigating the reliability, vulnerability, and potential new forms of attacks or defense mechanisms for critical digital infrastructures in the context of QML are required. The assessment of QML from a cybersecurity perspective and a determination of measures to address security challenges is required.
In summary, QML is a promising, cutting-edge, and complex field that requires further development and exploration to unlock its full potential. As with any transformative technology, it will take time, research, and advancements in hardware and algorithms to fully understand its capabilities and limitations.
## II Application of QML in Optical Communication
QML is an emerging field that combines quantum computing principles with ML algorithms to solve complex problems. When applied to optical communication systems, QML can offer several advantages and applications. Here are some of the key areas where QML can be beneficial in optical communication systems:
1. _Channel Estimation and Equalization_: In optical communication, signal distortions can occur due to various factors like dispersion and noise. QML techniques can be used to estimate and equalize the channel conditions, enabling more reliable data transmission and improved communication performance [14].
2. _Fault Detection and Error Correction_: QML algorithms can be applied to detect and correct errors that arise during data transmission in optical communication systems. This can enhance the overall reliability and robustness of the communication network [15, 10].
3. _Optimal Resource Allocation_: QML can optimize the allocation of resources in optical communication networks, such as determining the best routes for quantum signals or optimizing the placement of quantum repeaters to extend the communication distance [16, 17].
4. _Adaptive Photonics_: QML can be applied to adaptive photonics, where the properties of photons are optimized in real-time to maximize the communication performance. This can lead to adaptive and self-optimizing optical communication systems [18].
As mentioned before, it is important to note that QML is still a developing area, and its practical implementation in optical communication systems is an active research field. Currently, most of the research output is theoretical and sometimes not in the NISQ context, making the determination of the state-of-the-art difficult. However a comparison with well-known methods for decoding M-QAM optical fibre signals is presented in [10]. As quantum technologies advance, the integration of QML with optical communication holds great promise for revolutionizing the way we transmit and process information.
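To make the signal-processing setting concrete, the following sketch (illustrative only, and not the method of the works cited above) demodulates a noisy 4-QAM constellation both by minimum distance to the ideal symbols and by a few k-means-style centroid updates of the kind discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])          # 4-QAM reference symbols
tx = rng.integers(0, 4, size=2000)                             # transmitted symbol indices
rx = ideal[tx] + 0.3 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))

# (a) minimum-distance decision against the ideal constellation
hard = np.argmin(np.abs(rx[:, None] - ideal[None, :]), axis=1)

# (b) data-driven centroids: a few k-means-style updates seeded at the ideal points
cent = ideal.copy()
for _ in range(5):
    lab = np.argmin(np.abs(rx[:, None] - cent[None, :]), axis=1)
    cent = np.array([rx[lab == j].mean() for j in range(4)])

print("symbol error rate, min-distance:", np.mean(hard != tx))
print("symbol error rate, k-means     :", np.mean(lab != tx))
```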
Fig. 1 summarises the application of ML in optical communication systems. In general, in most applications where ML can be used, there exists a competing QML algorithm. As with any real-world application, the general algorithm has to be engineered, at an algorithmic, software and hardware level to achieve the best possible results. A case in point is k-means clustering - if one replaces the k-means algorithm with a hybrid quantum-classical implementation of the general quantum k-means algorithm proposed in [6], the 'advantage' is quite questionable, as shown in [15]. However, as demonstrated in [10], with some engineering modifications and innovation, one can outperform the classical algorithm. Another important thing to consider is the emerging class of quantum-inspired algorithms [1, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33], which use methods and ideas inspired from quantum computing to optimise classical algorithms. For most of the applications mentioned in Fig. 1, there exist competing quantum or quantum-inspired methods that show promise in providing advantage over classical methods. Some of these algorithms are listed as follows:
* Quantum Neural Networks (QNN) [34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
* Quantum kernels [8, 9, 12, 53, 54, 55, 56, 57, 58, 59]
* Quantum PCA [60], Quantum Inspired PCA [13, 21, 22, 61, 62]
* Quantum SVM (QSVM) [63, 64, 65, 66, 67, 68, 9, 68]
* Quantum Expectation-Maximization Algorithm [69, 70, 71, 72]
* Quantum and Quantum-inspired kNN clustering [73, 10, 74, 13]
* Quantum Logic regression [75, 76, 77, 78]
* Quantum and Quantum-inspired random forest algorithms [79, 62, 80]
## III Conclusions
Quantum and Quantum-Inspired Machine Learning is an emerging field with significant potential and promise. However, like any nascent area of research, it faces several challenges and limitations. The following points describe the current state of QML.
* _Early Stage of Development_: QML is still in its early stages, and researchers are actively exploring its potential applications and limitations. Many of the algorithms and techniques are still being developed and refined.
* _Hardware Limitations_: Building and maintaining quantum computers with a sufficient number of qubits and low decoherence rates is a very challenging task. As of now, quantum computers have not reached the level of efficiency and scalability to outperform classical computers for most ML tasks, especially when considering an industrial deployment.
* _Sampling advantage_: in the NISQ era, the sampling advantage of linear-algebraic QML procedures can often be replicated classically with only linear slowdowns - as can be seen in the case of sampling-based quantum-inspired algorithms.
* _Entanglement advantage_: this promises some advantage but the applications have to be engineered carefully.
* _Algorithm Complexity_: Implementing and optimizing QML algorithms can be challenging and computationally expensive.
* _Data Requirements_: QML algorithms may require a large amount of high-quality quantum data, which is currently difficult to obtain. Obtaining and preparing such data for QML tasks can be a significant hurdle. The classical data loading problem, a result of the unavailability of stable quantum memory, is a significant issue that often introduces an exponential slowdown in the hybrid quantum-classical implementations of QML procedures.
While QML faces these challenges, current research and technological development in this field are ongoing. As quantum computing technology advances, the potential for QML to impact various fields, including optimization, cryptography, and material science, remains a subject of active investigation.
Fig. 1: Source: [19]. Summary of Machine Learning applications in Optical Communication Systems.
## Acknowledgement
This work was funded by the TUM-Huawei Joint Lab on Algorithms for Short Transmission Reach Optics (ASTRO). J.N. was funded by the DFG Emmy-Noether program under grant number NO 1129/2-1 and the Munich Center for Quantum Science and Technology (MCQST). C.D. and J.N. were funded by the Federal Ministry of Education and Research of Germany in the joint project 6G-life, project identification number: 16KISK002. C.D., J.N., and A.V.J. were funded by the Munich Quantum Valley (MQV), which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. C.D., J.N., and R.F. were funded by the Bavarian State Ministry for Economic Affairs, Regional Development and Energy in the project 6G and Quantum Technology (6GQT). A.M. and C.D. were funded by the Federal Ministry of Education and Research of Germany in the project QR.X with the project number 16KISQ028. We acknowledge useful discussions with Kareem H. El-Safry, and the use of ChatGPT 3.0 as a language tool.
|
2309.12779 | Terahertz scale microbunching instability driven by nonevaporable getter
coating resistive-wall impedance | Non-evaporable getter (NEG) coating is widely required in the next generation
of light sources and circular $e^+e^-$ colliders for small vacuum pipes to
improve the vacuum level, which, however, also enhances the high-frequency
resistive-wall impedance and often generates a resonator-like peak in the
terahertz frequency region. In this paper, we will use the parameters of the
planned Hefei Advanced Light Facility (HALF) storage ring to study the impact
of NEG coating resistive-wall impedance on the longitudinal microwave
instability via particle tracking simulation. Using different NEG coating
parameters (resistivity and thickness) as examples, we find that the impedance
with a narrow and strong peak in the high frequency region can cause
micro-bunching instability, which has a low instability threshold current and
contributes to a large energy spread widening above the threshold. In order to
obtain a convergent simulation of the beam dynamics, one must properly resolve
such a peak. The coating with a lower resistivity has a much less sharp peak in
its impedance spectrum, which is helpful to suppress the micro-bunching
instability and in return contributes to a weaker microwave instability. | Weiwei Li, Tianlong He, Zhenghe Bai | 2023-09-22T10:43:14Z | http://arxiv.org/abs/2309.12779v1 | # Terahertz Scale Micro-Bunching Instability Driven by NEG-Coating Resistive-Wall Impedance
###### Abstract
Non-evaporable getter (NEG) coating is widely required in the next generation of light sources and circular \(e^{+}e^{-}\) colliders for small vacuum pipes to improve the vacuum level, which, however, also enhances the high-frequency resistive-wall impedance and often generates a resonator-like peak in the terahertz frequency region. In this paper, we will use the parameters of the planned Hefei Advanced Light Facility (HALF) storage ring to study the impact of NEG coating resistive-wall impedance on the longitudinal microwave instability via particle tracking simulation. Using different NEG coating parameters (resistivity and thickness) as examples, we find that the impedance with a narrow and strong peak in the high frequency region can cause micro-bunching instability, which has a low instability threshold current and contributes to a large energy spread widening above the threshold. In order to obtain a convergent simulation of the beam dynamics, one must properly resolve such a peak. The coating with a lower resistivity has a much less sharp peak in its impedance spectrum, which is helpful to suppress the micro-bunching instability and in return contributes to a weaker microwave instability.
**PACS numbers**: 29.27.Bd, 41.75.Ht
## I Introduction
The non-evaporable getter (NEG) coating [1] has been successfully applied to the inner surfaces of many vacuum chambers of particle accelerators. It can provide distributed pumping along vacuum chambers; thus, the specified ultrahigh vacuum pressure level can be met with a reduced number and size of external pumps.
The resistive wall (RW) impedance, produced by the finite conductivity of the beam vacuum chamber, plays an important, often dominant, role in modern accelerators, especially in those with a small transverse size of the vacuum chamber [2; 3]. The presence of NEG coating films makes the surface resistance of the beam pipe higher than that of an uncoated pipe, so the resistive wall effect is more pronounced. This impact is especially important for large machines with small beam pipe dimensions such as circular \(e^{+}e^{-}\) colliders [4; 5] and diffraction-limited storage rings (DLSRs) [6; 7; 8]. In addition, there can also be uncertainty in the coating conductivity measurements in the high frequency region, which may give inaccurate predictions of the instability threshold. In order to reduce the RW impedance contribution and the uncertainty due to the mostly unknown coating resistivity, one of the best ways is to reduce the NEG coating thickness. Several DLSR projects [6; 7; 8] have set NEG coating thickness targets equal to or less than 1 \(\mu\)m. A recent study [8] also shows that there is a regime in which a coating with a lower resistivity produces even a larger loss factor than one with a higher resistivity.
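For orientation, a minimal sketch of the classic thick-wall, single-layer RW formula for a round pipe is given below (one common convention, with made-up illustrative parameters rather than the chamber parameters of Table 1; the NEG-coated multilayer case studied in this paper requires a dedicated field-matching impedance calculation and is not reproduced by this snippet).

```python
import numpy as np

mu0 = 4e-7 * np.pi       # vacuum permeability [H/m]

def rw_longitudinal(f, rho, b, L):
    """Thick-wall longitudinal RW impedance [ohm] of a round pipe of radius b and length L:
    Z(f) = (1 + 1j) * Zs * L / (2*pi*b), with surface impedance Zs = sqrt(mu0 * omega * rho / 2)."""
    omega = 2 * np.pi * f
    Zs = (1 + 1j) * np.sqrt(mu0 * omega * rho / 2.0)
    return Zs * L / (2 * np.pi * b)

f = np.logspace(9, 13, 5)                                        # 1 GHz to 10 THz
print(np.abs(rw_longitudinal(f, rho=1.7e-8, b=0.011, L=100.0)))  # copper-like wall, illustrative geometry
```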
The RW has a strong longitudinal impedance in the high frequency region, contributing to a sharp variation of the point-charge longitudinal wakefield over very short distances. The presence of the NEG coating will further enhance the high-frequency impedance and often generates a resonator-like peak [9; 10]. Multi-particle tracking simulations [11; 12; 13; 14] are widely used to study the beam dynamics but may face computational issues when studying the longitudinal microwave instability (MWI), where a large number of simulation particles is needed to study the response of small-scale bunch structures to high frequency wakefield components. If not equipped with suitable algorithms (i.e., smoothing/filtering techniques, fine grids, etc.), the simulation can fail to produce reliable results [15]. Since the collective behavior usually does not depend on the behavior of the wakefield/impedance at very small length scales/very high frequencies, one popular solution is to use the wake potential of a very short Gaussian bunch of rms length \(\overline{\sigma}_{s}\) (sometimes called the pseudo-Green function) in place of the point-charge wakefield [3; 4; 16; 17]. Then the impedance used in the tracking simulation becomes that of the point charge multiplied by a Gaussian filter. The remaining item is to determine the required length \(\overline{\sigma}_{s}\), which effectively means finding the frequency range over which the impedance affects the dynamics. Usually the guideline that \(\overline{\sigma}_{s}\) be 10 or 15 times smaller than the equilibrium bunch length gives a reasonable estimate [3]. However, the MWI simulations are quite sensitive to numerical noise, so it is necessary to vary different tracking parameters to make sure that the tracking results are accurate.
In this paper, the Hefei Advanced Light Facility (HALF) [18], a fourth generation light source in design, will be used to study the impact of the NEG coating parameters. The equilibrium rms bunch length at the zero current limit is 2.1 mm; however, in order to obtain a quasi-convergent simulation of the beam dynamics, the required length \(\overline{\sigma}_{s}\) should typically be even smaller than 0.02 mm, and millions of macro-particles are necessary. Under convergent simulations, we find that the coating
with a high resistivity (in the order of \(10^{-5}\)\(\Omega\,\)m), whose impedance has a sharp peak in the terahertz region, can cause an undesirable micro-bunching instability (MBI), with a low threshold and a large energy spread widening.
This paper is organized as follows. In Sec. II, the RW impedance with different NEG coating parameters of HALF will be presented. In Sec. III, the particle tracking method will be introduced. Section IV shows the simulation results with different tracking and coating parameters. The conclusions and discussions are presented in Sec. V.
## II Resistive wall impedance in HALF
We consider a simplified model of the ring consisting of 6 parts; the main parameters of the vacuum chambers are listed in Table 1. The resistivities of the materials are listed in Table 2.
The NEG resistivity value depends on the compound composition and coating method [19]. The resistivity measurement results also show very large discrepancies between different methods [20]. Thus three different resistivity \(\rho_{\text{NEG}}\) values will be taken into account: \(1\times 10^{-5}\)\(\Omega\,\)m, \(5\times 10^{-6}\)\(\Omega\,\)m and \(1\times 10^{-6}\)\(\Omega\,\)m. The target NEG coating thickness is \(d=1\) μm for the HALF project, but two different film thicknesses will be studied for comparison: 0.5 μm and 1 μm.
The resistive wall impedance is computed using the ImpedanceWake2D (IW2D) code [21], which solves for a circular geometry and then applies a Yokoya factor for elliptical or rectangular cases [22; 23]. The longitudinal impedance as a function of frequency is shown in Fig. 1. At low frequency, all the impedances are similar since the thickness of the coating is much smaller than its skin depth. At high frequency, the impedance is greatly enhanced with NEG coatings and exhibits a resonator-like peak in the terahertz region. If the coating resistivity or thickness is smaller, the impedance will be closer to that without coating. A smaller coating resistivity will reduce the quality factor and the peak impedance. A thinner coating will make the peak frequency shift upward.
It is useful to define the effective impedance as [24; 25; 26]
\[\left(\frac{Z_{\parallel}}{n}\right)_{\text{eff}}=\frac{\int_{-\infty}^{ \infty}Z_{\parallel}\left(\omega\right)\frac{\omega_{0}}{\omega}h\left( \omega\right)d\omega}{\int_{-\infty}^{\infty}h\left(\omega\right)d\omega}, \tag{1}\]
where \(n=\omega/\omega_{0}\) is the revolution harmonic number, \(\omega_{0}\) is the revolution angular frequency, \(h\left(\omega\right)=\tilde{\lambda}\left(\omega\right)\tilde{\lambda}^{*} \left(\omega\right)\) is the bunch power spectrum, \(\tilde{\lambda}\left(\omega\right)\) is the Fourier trans
\begin{table}
\begin{tabular}{c c} \hline \hline Material & Resistivity (\(\Omega\,\)m) \\ \hline Cu & \(1.68\times 10^{-8}\) \\ CuCrZr & \(2.3\times 10^{-8}\) \\ Al6063 & \(3.16\times 10^{-8}\) \\ Ni & \(6.93\times 10^{-7}\) \\ SS316L & \(7.41\times 10^{-7}\) \\ Inconel 625 & \(1.29\times 10^{-6}\) \\ \hline \end{tabular}
\end{table}
Table 2: Resistivities of the chamber materials
Figure 1: The real (top) and imaginary (bottom) parts of the longitudinal impedance for different coating parameters.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Type & Material + film (thickness: μm) & Shape & Aperture/Radii (mm) & Length (m) \\ \hline Main chamber & CuCrZr + NEG (d) & Round & 13 & 344.2 \\ Fast corrector & Inconel + NEG (d) & Round & 13 & 7.2 \\ Beam pipes with antechamber & Stainless steel + Cu (20) & Round & 13 & 48.2 \\ Out-vacuum insertion devices & Al & Elliptical & \(8\times 26\) & 43 \\ In-vacuum undulators & NdFeB + Ni (75) + Cu (75) & Rectangular & \(6\times 65\) & 5.4 \\ Others (i.e., bellows, flanges) & Stainless steel & Round & 13 & 32 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Main parameters of the Vacuum Chambers
form of the longitudinal charge density \(\lambda\left(t\right)\). Assuming a Gaussian bunch, \(h\left(\omega\right)=e^{-\omega^{2}\sigma_{t}^{2}}\), where \(\sigma_{t}\) is the rms bunch length in time. The effective impedance at the natural bunch length (7 ps) for different NEG coating parameters is listed in Table 3.
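For reference, Eq. (1) can be evaluated numerically from a tabulated impedance spectrum. The sketch below is a minimal illustration, not the code used for Table 3: it assumes a one-sided frequency grid with \(f>0\) (e.g., the spectrum behind Fig. 1), the Gaussian spectrum \(h(\omega)=e^{-\omega^{2}\sigma_{t}^{2}}\) defined above, and the HALF circumference and natural bunch length from Table 4; all array names are placeholders.

```python
import numpy as np

def effective_impedance(f_pos, Z_pos, sigma_t, f0):
    """Numerical evaluation of Eq. (1) for a Gaussian bunch spectrum h(w) = exp(-(w*sigma_t)^2).

    f_pos   : one-sided frequency grid in Hz (strictly positive)
    Z_pos   : complex longitudinal impedance on that grid, in Ohm
    sigma_t : rms bunch length in seconds (7 ps at the natural length)
    f0      : revolution frequency in Hz
    """
    # Build the two-sided spectrum using the reality condition Z(-w) = conj(Z(w)).
    f = np.concatenate([-f_pos[::-1], f_pos])
    Z = np.concatenate([np.conj(Z_pos[::-1]), Z_pos])
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    h = np.exp(-(w * sigma_t) ** 2)                   # Gaussian bunch power spectrum
    num = np.trapz(Z * (w0 / w) * h, w)               # numerator of Eq. (1)
    den = np.trapz(h, w)                              # normalisation
    return num / den                                  # complex (Z/n)_eff in Ohm

# Example with the Table 4 numbers (the impedance table f_hz, Z_ohm is a placeholder):
# f0 = 299792458.0 / 480.0                            # ~0.62 MHz revolution frequency
# Zeff = effective_impedance(f_hz, Z_ohm, 7e-12, f0)
# print(abs(Zeff) * 1e3, "mOhm")                      # compare with Table 3 (the real part
#                                                     # largely cancels for a symmetric bunch)
```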
## III Multi-particle tracking simulation
Most of the macroparticle tracking codes compute the wake potential as the convolution between the longitudinal wake function \(w_{\parallel}(t)\), i.e., the Green function of a point charge, and the bunch distribution \(\lambda\left(t\right)\): [12]
\[W_{\parallel}\left(t\right)=\int_{-\infty}^{t}w_{\parallel}\left(t-t^{\prime} \right)\lambda\left(t^{\prime}\right)dt^{\prime}. \tag{2}\]
The longitudinal wake function can be expressed in terms of the longitudinal impedance by an inverse Fourier transform
\[w_{\parallel}\left(t\right)=\frac{1}{2\pi}\int_{-\infty}^{\infty}Z_{\parallel }\left(\omega\right)e^{i\omega t}d\omega. \tag{3}\]
For the convenience of numerical calculation, the time coordinate in Eq. 2 should be equi-spaced so that discrete Fourier transforms can be used. The bin size is \(\Delta_{t}=\frac{0.5}{F_{m}}\), where \(F_{m}\) is the maximum frequency of the impedance in Eq. 3. \(\lambda\left(t\right)\) is obtained by counting the number of particles in each bin, and typically \(\gtrsim 1000\) particles per bin are required. However, the input RW wake function can cause problems for simulations based on this approach, since it covers a very high frequency range and a very large number of slices would be necessary, increasing the computational load. As discussed in Sec. I, the pseudo-Green function from a very short Gaussian pulse will be used instead. The pulse length \(\overline{\sigma}_{s}\) determines the frequency reach of the impedance calculation and needs to be much smaller than the real bunch length used in tracking simulations to cover the spectrum of interest. However, in order to resolve the impedance peak, the computational load is still very heavy.
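As an illustration of how such a pseudo-Green function can be produced, the sketch below filters a tabulated point-charge impedance with the spectrum of a Gaussian bunch of rms length \(\overline{\sigma}_{s}\) and transforms it back to the time domain; the resulting time grid automatically has the bin size \(\Delta_{t}=0.5/F_{m}\) quoted above. This is a simplified stand-in (the sign and normalisation conventions must match those of the impedance code actually used), and the variable names are placeholders.

```python
import numpy as np

def pseudo_green_wake(f, Z, sigma_s, c=299792458.0):
    """Wake potential of a very short Gaussian bunch (pseudo-Green function).

    f, Z    : uniform one-sided frequency grid [0, F_m] in Hz and impedance in Ohm
    sigma_s : rms length of the filtering Gaussian bunch in metres (e.g. 1e-5 m = 0.01 mm)
    Returns the time grid (s) and the wake potential (V/C).
    """
    sigma_t = sigma_s / c
    lam = np.exp(-0.5 * (2.0 * np.pi * f * sigma_t) ** 2)   # Gaussian bunch spectrum
    df = f[1] - f[0]
    n = 2 * (len(f) - 1)                   # length of the full two-sided spectrum
    # W(t) = Integral Z(f) lam(f) exp(2*pi*i*f*t) df, with Z(-f) = conj(Z(f))
    W = n * df * np.fft.irfft(Z * lam, n)
    dt = 0.5 / f[-1]                       # equals the bin size Delta_t = 0.5 / F_m
    t = np.arange(n) * dt
    return t, W
```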
The STABLE code [27] is used to conduct multi-particle tracking simulations for the longitudinal beam dynamics studies. It is implemented in a MATLAB environment with state-of-the-art graphics-processing-unit (GPU) acceleration, so that the tracking efficiency is significantly improved. The original version of STABLE is written for multi-bunch, multi-particle simulation, and a 2D matrix is used to store the macro-particle coordinates, with each column corresponding to one bunch. In order to accurately simulate the single-bunch dynamics, which usually requires millions or even tens of millions of macro-particles, we need only modify the STABLE code by dividing the macro-particles of a single bunch into multiple parts and storing them in the columns of the 2D matrix. We can separately count the bin distribution of each column, and then sum them to obtain the total bunch distribution. In addition, a fixed bin width, instead of a fixed number of bins, is set by default. Therefore, the number of bins increases as the bunch lengthens. The remaining operations, such as the convolution of the bunch distribution with the short-range wake (or short-bunch wake potential), and the interpolation to obtain the short-range wake kick of each macro-particle, remain the same as in the original version of STABLE.
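The single-bunch modification described above can be sketched as follows, with NumPy standing in for the MATLAB/GPU arrays actually used by STABLE; the function and variable names are illustrative only.

```python
import numpy as np

def bunch_profile(t_coords, bin_width):
    """Line density of a single bunch whose macro-particles are split over matrix columns.

    t_coords  : (n_per_column, n_columns) array of macro-particle arrival times (s)
    bin_width : fixed bin size Delta_t (e.g. 0.02e-12 s); the number of bins grows
                automatically as the bunch lengthens.
    """
    t_min, t_max = t_coords.min(), t_coords.max()
    n_bins = int(np.ceil((t_max - t_min) / bin_width)) + 1
    edges = t_min + bin_width * np.arange(n_bins + 1)
    # Count each column separately (one column = one part of the bunch), then sum.
    counts = sum(np.histogram(t_coords[:, j], bins=edges)[0]
                 for j in range(t_coords.shape[1]))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / (counts.sum() * bin_width)      # normalised lambda(t)

# The wake kick per macro-particle then follows by convolving lambda(t) with the
# pseudo-Green function and interpolating the result back onto the particle coordinates.
```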
## IV Numerical results for the parameters of half
In this section, the impact of NEG-coated resistive-wall impedance on the longitudinal beam dynamics is investigated by tracking simulations in the framework of HALF project. The main parameters of the HALF storage ring with insertion devices are summarized in Table 4.
### Convergence Study
We first carry out the convergence study with two typical examples, the coatings with \(\rho_{\text{NEG}}=1\times 10^{-5}\)\(\Omega\,\)m, \(d=1\) μm and \(\rho_{\text{NEG}}=1\times 10^{-6}\)\(\Omega\,\)m, \(d=1\) μm, respectively.
#### iv.1.1 Pseudo-Green Function
The wake potentials for the short Gaussian bunch of \(\overline{\sigma}_{s}=0.1\) mm and \(0.01\) mm are shown in Fig. 2, where the positive (negative) value means energy loss (gain), and they will be used instead of the wake function generated
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Parameter** & Symbol & Value \\ \hline Ring circumference & C & 480 m \\ Beam energy & \(E_{0}\) & 2.2 GeV \\ Nominal beam current & \(I_{0}\) & 350 mA \\ Longitudinal damping time & \(\tau_{z}\) & 14 ms \\ Momentum compaction & \(\alpha_{c}\) & \(9.4\times 10^{-5}\) \\ Natural energy spread & \(\sigma_{s}\) & \(7.3\times 10^{-4}\) \\ Harmonic number & \(h\) & 800 \\ Energy loss per turn & \(U_{0}\) & 400 keV \\ Voltage of MC & \(V_{RF}\) & 1.2 MV \\ Natural rms bunch length & \(\sigma_{t0}\) & 7 ps \\ \hline \end{tabular}
\end{table}
Table 4: Main parameters of HALF
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \hline d (μm) & \(\rho_{\text{NEG}}=1\times 10^{-5}\,\Omega\,\mathrm{m}\) & \(5\times 10^{-6}\,\Omega\,\mathrm{m}\) & \(1\times 10^{-6}\,\Omega\,\mathrm{m}\) \\ \hline
1 & 77.7 & 77.5 & 76.5 \\ \hline
0.5 & 67.2 & 67.1 & 66.6 \\ \hline \end{tabular}
\end{table}
Table 3: The effective impedance (\(\mathrm{m}\Omega\)) with natural bunch length for different NEG coating parameters
from a point charge. To calculate the effective impedance or loss factor for a perfect Gaussian bunch distribution with nominal bunch length \(\sigma_{t0}\) from the pseudo-Green function, using the wake potential of \(\overline{\sigma}_{s}\) = 0.1 mm usually gives sufficiently accurate results. Note that \(\overline{\sigma}_{s}\) = 0.1 mm is more than 20 times shorter than the natural bunch length \(\sigma_{t0}\).
The longitudinal impedance multiplied by the Fourier spectrum \(\tilde{\lambda}\left(\omega\right)\) of Gaussian bunches with different \(\overline{\sigma}_{s}\) is plotted in Fig. 3. In order to resolve the impedance peak clearly, \(\overline{\sigma}_{s}\) should be smaller than 0.02 mm.
#### iv.1.2 The Case for \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\) μm
Figure 4 shows the predicted energy spread and bunch lengthening from the tracking simulations as a function of the single bunch current for \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\)\(\upmu\)m with various values of \(\overline{\sigma}_{s}\), where the particle number \(N_{p}\) is 5 million (M), 40,000 turns are tracked for each current and the bin size \(\Delta_{t}\) is set to 0.02 ps. The corresponding \(F_{m}\) is 25 THz, which should be high enough to cover the frequency region of interest. Another guideline is that \(\Delta_{t}\) should be small enough to resolve the wake potential generated from the short Gaussian bunch with \(\overline{\sigma}_{s}\). The mean value and standard deviation of the bunch length and relative energy spread are computed over the last 10,000 turns. To benchmark the simulation results from the STABLE code, the Pelegant code [28] is also used with the same parameters except for fewer current and \(\overline{\sigma}_{s}\) values, and the results are also marked in Fig. 4. In order to obtain one data point in Fig. 4, it takes about 200 min for Pelegant using 80 CPU cores, while less than 10 min for STABLE using 3584 CUDA cores. Good agreement is achieved since their underlying physical models are the same. Thus we will only use the STABLE code for the other simulations.
To validate the choice of the bin size \(\Delta_{t}\) and the particle number \(N_{p}\), convergence studies are performed for the case of \(\overline{\sigma}_{s}=0.01\) mm, since a shorter \(\overline{\sigma}_{s}\) requires more severe convergence conditions. The corresponding energy spreads are shown in Fig. 5. There is no significant variation when \(\Delta_{t}\) decreases from 0.02 ps to 0.01 ps or \(N_{p}\) varies from 2 M to 10 M.
As seen in Fig. 4, at low currents below the MWI threshold, the simulations using a relatively long \(\overline{\sigma}_{s}=0.1\) mm already give convergent results. However, in order to accurately evaluate the MWI, one must properly resolve the resonator-like peak impedance. To obtain a fully convergent simulation for the coating with \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\)\(\upmu\)m, the required \(\overline{\sigma}_{s}\) is 0.02 mm. The MWI behavior can be significantly underestimated if the wakefield resolution \(\overline{\sigma}_{s}\) is not sufficient.
Figure 3: The real part (solid) and the absolute value of the imaginary part (dashed) of the longitudinal impedance multiplied by different Gaussian filters for \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m (top) and \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m (bottom). The coating thickness is \(d=1\)\(\upmu\)m.
#### iv.1.3 The Case for \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m and \(d=1\) μm
Figure 6 shows the predicted energy spread from the tracking simulations as a function of the single bunch current for \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m and \(d=1\) μm with various values of \(\overline{\sigma}_{s}\) and \(N_{p}\), where \(\Delta_{t}=0.02\) ps and the current step is 0.25 mA. For \(\overline{\sigma}_{s}=0.1\) mm, if \(N_{p}=2\) M is adopted, an obvious energy spread widening appears, but as \(N_{p}\) increases the widening becomes smaller, and when \(N_{p}\) reaches 50 M there is nearly no MWI within 2 mA. However, the peak impedance is still not resolved, so we further study the cases of \(\overline{\sigma}_{s}=0.01\) mm and 0.005 mm. With the same \(N_{p}\), their results are close, so \(\overline{\sigma}_{s}=0.01\) mm should be enough to cover the frequency region of interest. Within 1.5 mA, the energy spread widening becomes smaller as \(N_{p}\) increases; full convergence is still not achieved even when \(N_{p}=50\) M, but the energy spread widening becomes relatively small. Therefore we can conclude that there is no or only very weak MWI within 1.5 mA, and it is reasonable to use a long \(\overline{\sigma}_{s}\) such as 0.1 mm to filter out the high frequency wakefield components. For \(\overline{\sigma}_{s}=0.01\) mm or 0.005 mm at a high current of 2 mA, there is no significant variation when \(N_{p}\) varies from 10 M to 50 M and an obvious energy spread widening can be seen, so in this situation one should also use a small \(\overline{\sigma}_{s}\) to resolve the resonator-like peak impedance in order to accurately
Figure 4: The rms energy spread (top) and bunch length (bottom) versus single bunch current for \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\) μm with different \(\overline{\sigma}_{s}\). The solid lines are the mean values and the dashed lines including their fill areas represent the standard deviation obtained by STABLE with current step of 0.05 mA. The discrete error bars are obtained by Pelegant with current step of 0.5 mA.
predict the MWI behavior.
### Micro-bunching Instability Phenomena
In the previous subsection, it has been shown that the coating with \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\) μm causes a much more serious MWI than that with \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m and \(d=1\) μm, although the two are close in effective impedance at the natural bunch length, so there is a practical interest in exploring the underlying mechanism.
The turn-by-turn evolution of the energy spread for \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\) μm with \(\overline{\sigma}_{s}=0.01\) mm and \(N_{p}=20\) M at a current of 0.5 mA, together with the longitudinal phase space distributions at three different turns, is shown in Fig. 7. More simulated particles are used here only to make the phase space plots smoother and clearer. Strong sawtooth-shaped fluctuations of the energy spread appear over the turns. The error bars in Figs. 4 and 5 also characterize the amplitudes of these fluctuations. The MBI is visible in the phase space, corresponding to a modulation frequency of around 0.58 THz. This implies that the sharp peak in the impedance spectrum plays an important role in the MBI. A possible reason is that when the peak is narrowband (or has a high quality factor), the corresponding wakefield lasts for several oscillation cycles (as shown in Fig. 2), which allows the wakefields from micro-bunches far apart to be coherently enhanced. For \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m and \(d=1\) μm, in contrast, the peak is more broadband and the wakefield attenuates quickly with increasing distance, which prevents cooperation between micro-bunching fluctuations far apart.
### Impact of Coating Parameters
We have shown the coating with \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\) um can cause the MBI effect with a low current threshold for the HALF ring. To avoid its occurrence, there is a practical interest in exploring the dependence on the coating parameters.
We consider the coating parameters introduced in Sec. II, whose impedances are shown in Fig. 1. Figure 8 shows the predicted energy spread and bunch lengthening from the tracking simulations as a function of the single bunch current for different coating parameters, where the current step is 0.25 mA, \(\Delta_{t}=0.02\) ps, \(\overline{\sigma}_{s}=0.01\) mm, which is small enough to resolve the resonator-like peak impedance, and the particle number is 10 M except for the case of \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m and \(d=1\) μm where 50 M particles are used. The convergence stud
Figure 8: The rms energy spread (top) and bunch length (bottom) versus single bunch current for different coatings. The solid lines are the mean values and the dashed lines including their fill areas represent the standard deviation. The discrete error bars are obtained using \(N_{p}=20\) M to validate the convergence.
Figure 7: The rms energy spread evolution over turns for \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\) μm at 0.5 mA together with the longitudinal phase space plots (\(t-\delta\)) at three marked points.
ies for the cases \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m, \(d=1\) μm and \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m, \(d=1\) μm have been done in subsection IV.1. To validate the choice of \(N_{p}\) for the other cases, we also carry out the simulations using \(N_{p}=20\) M; the results are also plotted in Fig. 8 as error bars, and there is no significant variation for any case. With the same coating thickness \(d=1\) μm or \(0.5\) μm, the MWI for \(\rho_{\rm NEG}=5\times 10^{-6}\)\(\Omega\,\)m is less serious and has a higher threshold current than that for \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m, but there still exists MBI when the current exceeds the threshold. For the coating with resistivity \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m or \(5\times 10^{-6}\)\(\Omega\,\)m, reducing the thickness from \(1\) μm down to \(0.5\) μm is helpful to weaken the MBI, since the peak frequency in the impedance spectrum for \(d=0.5\) μm is higher than that for \(d=1\) μm, as seen in Fig. 1. The coating with \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m and \(d=1\) μm does not contribute to MBI, so it has a much higher instability threshold and a smaller energy spread widening. For the coating with \(\rho_{\rm NEG}=1\times 10^{-6}\)\(\Omega\,\)m, reducing the coating thickness from \(1\) μm down to \(0.5\) μm lowers the instability threshold, because the impedance peak of the latter is much sharper, as shown in Fig. 1, which can also lead to MBI.
### Impact of Bunch Lengthening with HHC
Bunch lengthening with higher harmonic cavities (HHCs) is also a very helpful means to fight most collective effects, including the MWI. A passive superconducting 3rd harmonic cavity will be installed in the HALF storage ring [29; 30]. Because of the heavy computational load, instead of multi-bunch simulations we carry out only single-bunch simulations, introducing an ideal HHC voltage potential that increases the bunch length by a factor of 2 or 3 at zero current. Figure 9 shows the predicted energy spread and bunch lengthening from the tracking simulations as a function of the single bunch current for the case of \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\) μm with different bunch lengthening factors. In order to obtain a convergent simulation, \(\overline{\sigma}_{s}\) should still be as small as that without HHC to resolve the resonator-like peak impedance. The bunch lengthening with HHC can raise the MWI threshold since it lowers the charge density.
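One simple way to emulate such an ideal HHC voltage in a single-bunch tracking loop is to add an \(n\)-th harmonic term to the main RF kick, with its relative amplitude \(k\) and phase \(\varphi_{h}\) treated as free knobs that are scanned until the zero-current bunch length reaches the target factor of 2 or 3. The sketch below uses the Table 4 parameters; the synchronous-phase convention and the knob values are assumptions of this illustration, not the HALF settings.

```python
import numpy as np

C, h, c = 480.0, 800, 299792458.0          # circumference, harmonic number, speed of light
f_rf = h * c / C                           # main RF frequency (~500 MHz)
V_rf, U0 = 1.2e6, 4.0e5                    # RF voltage and energy loss per turn (Table 4)

def energy_kick(t, k, phi_h, n_h=3):
    """Net energy kick (eV) per turn for a particle at time offset t (s) from the
    synchronous particle, with an ideal n_h-th harmonic voltage added to the main RF.
    k is the harmonic amplitude relative to V_rf and phi_h its phase; both are tuned
    (e.g. by a small scan) to reach the desired zero-current lengthening factor."""
    phi_s = np.pi - np.arcsin(U0 / V_rf)   # synchronous phase (one common convention
                                           # above transition; an assumption here)
    phi = 2.0 * np.pi * f_rf * t
    V = V_rf * (np.sin(phi_s + phi) + k * np.sin(n_h * phi + phi_h))
    return V - U0
```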
## V Conclusion and Discussion
In this paper, we have studied the impact of the NEG coating resistive-wall impedance on the longitudinal microwave instability (MWI) for the HALF storage ring via particle tracking simulation, where the wake potential of a very short Gaussian bunch with rms length \(\overline{\sigma}_{s}\) serves as the pseudo-Green function. In order to obtain a quasi-convergent simulation of the beam dynamics, \(\overline{\sigma}_{s}\) should be sufficiently small to resolve the peak impedance in the high frequency region. For the cases presented in this paper, \(\overline{\sigma}_{s}\) should be at most \(0.02\) mm, which is more than \(100\) times shorter than the equilibrium bunch length at the zero current limit. Otherwise, one is likely to underestimate the MWI behavior. Recent microwave instability studies [17] for the ESRF-EBS show that the measured current threshold is significantly lower than the simulated one, but all the wakefield models are computed with \(\overline{\sigma}_{s}=\)1 mm, which is far from resolving the high frequency RW components and could be a possible explanation for the discrepancy.
The effective impedance is often used to evaluate the
Figure 9: The rms energy spread (top) and bunch length (bottom) versus single bunch current for \(\rho_{\rm NEG}=1\times 10^{-5}\)\(\Omega\,\)m and \(d=1\) μm with different bunch lengthening factors (LFs). The solid lines are the mean values and the dashed lines including their fill areas represent the standard deviation.
MWI; however, our studies show that the characteristics of the peak in the high frequency region are also critical. A strong and narrowband peak can cause an undesirable micro-bunching instability (MBI), which has a low threshold current and makes the dynamics of the MWI more complex. For a high coating resistivity (in the order of \(10^{-5}\)\(\Omega\) m), reducing the thickness is helpful to weaken the MBI by shifting the peak impedance to higher frequency, but the MBI is still dangerous. Another effective way to suppress the MBI is to reduce the coating resistivity, which gives a broadband impedance. The bunch lengthening with HHC can raise the MWI/MBI threshold. A storage ring with a low-resistivity NEG coating applied to the inner surfaces of many vacuum chambers and a low frequency main cavity [31; 32] can still suffer from the MBI since it has a high single bunch charge.
The NEG-coating resistive-wall high frequency impedance can play an important role in the MBI, so accurate measurements of the NEG resistivity are very important in order to perform accurate simulations of their impact on the beam dynamics. The MBI will reduce the beam quality, but it also has the potential to tailor the emitted coherent synchrotron radiation (CSR) and its fluctuations for possible terahertz radiation applications.
###### Acknowledgements.
The authors would like to thank Sihui Wang at USTC and Na Wang at IHEP for useful discussions on NEG coatings and Biaobin Li at USTC for useful discussions on the numerical convergence. This work was supported by National Natural Science Foundation of China (No. 12105284 and No. 11875259) and the Fundamental Research Funds for the Central Universities (No. WK2310000090).
|
2309.15016 | Question-Answering Approach to Evaluating Legal Summaries | Traditional evaluation metrics like ROUGE compare lexical overlap between the
reference and generated summaries without taking argumentative structure into
account, which is important for legal summaries. In this paper, we propose a
novel legal summarization evaluation framework that utilizes GPT-4 to generate
a set of question-answer pairs that cover main points and information in the
reference summary. GPT-4 is then used to generate answers based on the
generated summary for the questions from the reference summary. Finally, GPT-4
grades the answers from the reference summary and the generated summary. We
examined the correlation between GPT-4 grading with human grading. The results
suggest that this question-answering approach with GPT-4 can be a useful tool
for gauging the quality of the summary. | Huihui Xu, Kevin Ashley | 2023-09-26T15:36:29Z | http://arxiv.org/abs/2309.15016v2 | # Question-Answering Approach to Evaluate Legal Summaries
###### Abstract
Traditional evaluation metrics like ROUGE compare lexical overlap between the reference and generated summaries without taking argumentative structure into account, which is important for legal summaries. In this paper, we propose a novel legal summarization evaluation framework that utilizes **GPT-4** to generate a set of question-answer pairs that cover main points and information in the reference summary. GPT-4 is then used to generate answers based on the generated summary for the questions from the reference summary. Finally, GPT-4 grades the answers from the reference summary and the generated summary. We examined the correlation between GPT-4 grading with human grading. The results suggest that this question-answering approach with GPT-4 can be a useful tool for gauging the quality of the summary.
## 1 Introduction
Due to the ever-increasing volume of legal information on the internet, there is an urgent need for automatically processing the information for legal professionals and the general public. Legal documents are usually long and hard to read or understand; shorter summaries are often useful to users [1].
Readers need summaries to convey a rough idea of what a case is about and why it is important. This enables users to connect a case to their personal needs and to decide whether to read the case in full. As a result, evaluating the quality of legal summaries is important. Commonly used summary evaluation metrics such as ROUGE scores [2] focus primarily on surface-level aspects like word overlap and grammatical correctness. These metrics do not consider factors such as contextual understanding or the alignment of the summary with the reader's specific goals or preferences.
In this work, we propose a novel method to evaluate the quality of a legal summary by leveraging automated question-answering while incorporating legal argumentative structure. The argument structure comprises three elements: **Issue** - legal question that a court addressed in the case; **Reason** - elaboration of why the court reached the conclusion; and **Conclusion** - the court's final decision regarding the issue. Our method
consists of three steps: (1) Given a reference legal summary, a question-answer generation model (GPT-4) produces a set of question-answer pairs based on the legal argumentative structure of the reference summary. (2) Then we use a question-answering model (GPT-4) to answer the questions from the reference summary based on the text of the generated summary. (3) Finally, GPT-4 compares the answers in step (1) from the reference summary with the answers in step (2) from the generated summary and assigns grades based on the similarity.
## 2 Related Work
Recently, the intersection of question answering (QA) and summarization has gained significant attention in the research community. [3] introduced the Stanford Question Answering Dataset (SQuAD). SQuAD has become the benchmark for QA research and sparked research on the intersection between summarization and QA. [4, 5] propose QA-based metrics for abstractive summarization evaluation and conclude that they are preferred by human evaluators. Taking QA-based evaluation methods even further, [6] tackles the unfaithfulness of neural abstractive summarization by proposing a QA-based automatic metric, FEQA.
LLM-based evaluators have gained attention in recent years because of their impressive ability to generate human-like texts. [7] proposed an evaluation framework, GPTScore, with generative pre-training models like GPT-3. The authors suggest that higher-quality text is more likely to be generated by following a given instruction in a given context. [8] presents a preliminary study of using ChatGPT as an NLG evaluator and shows ChatGPT evaluation is correlated with human judgments in most cases.
Few prior works apply a Question-Answering (QA) approach to evaluation in a legal context. Our approach leverages recent progress in large language models (LLMs) while taking legal argumentative structure into account. Before the rise of LLMs, QA approaches relied on correspondingly curated datasets [9, 10]. Our approach does not require a specific question-answering dataset; it generates questions and answers automatically.
## 3 Methodology
### Experimental Design
We utilize GPT-4 to generate question-answer pairs, incorporating the example prompt illustrated in Figure 1. This enhanced prompt enables us to generate not only question-answer pairs but also the corresponding question types. Subsequently, we use these questions as prompts for the model to predict responses based on the model-generated summaries. The prompt used for prediction is shown in Figure 2. The prompt to GPT-4 to evaluate the answers is shown in Figure 3.
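A minimal sketch of how the three GPT-4 calls can be chained with the OpenAI chat API is shown below. The prompt strings are short placeholders for the full templates of Figures 1-3, and the helper names are our own; this illustrates the pipeline structure, not the exact prompts or parsing used in our experiments.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt):
    resp = client.chat.completions.create(
        model="gpt-4", temperature=0,
        messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content

def evaluate_summary(reference_summary, generated_summary):
    # Step 1: QA pairs (with Issue/Reason/Conclusion types) from the reference summary.
    qa_pairs = ask("Generate typed question-answer pairs covering the Issue, Reason and "
                   "Conclusion of this legal case summary:\n" + reference_summary)
    # Step 2: answer the same questions using only the generated summary.
    predicted = ask("Answer the following questions using only the summary below.\n"
                    "Questions:\n" + qa_pairs + "\nSummary:\n" + generated_summary)
    # Step 3: grade each predicted answer against the reference answer on a 0-10 scale.
    grades = ask("Grade each predicted answer against its reference answer on a 0-10 "
                 "scale.\nReference QA pairs:\n" + qa_pairs +
                 "\nPredicted answers:\n" + predicted)
    return qa_pairs, predicted, grades
```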
Throughout our research, we experimented with three models for generating summaries: Longformer Encoder-Decoder (LED) [11], BART [12], and GPT-4. LED and BART require fine-tuning in order to generate reasonable summaries while GPT-4 can generate summaries in a zero-shot setting.
### Data
[13, 14] developed a type system to annotate Canadian legal case summaries. This type system includes three key components: Issue, Reason, Conclusion. The dataset initially consisted of 1,049 annotated summaries along with their corresponding full-case decisions. We use the same dataset to support this work.
We used 90% of the data for fine-tuning LED and BART models. The remaining 10% of the data was for testing purposes. GPT-4 was used to generate summaries for this 10% subset of data without fine-tuning. Considering the cost associated with GPT-4 and human evaluation, however, we opted to leverage our QA approach to evaluate 10 summaries generated by each model.
## 4 Results and Discussion
There are 48 question-answer pairs for 10 cases. The human evaluator assessed whether the generated question-answer pairs adequately captured the necessary information and were addressed correctly. The evaluation options were limited to "YES" and "NO". Based on the results, 42 out of 48 questions accurately captured the required information, while all 48 answers were correct and appropriately addressed the questions. Table 1 shows an example of GPT-4 generated QAs. This example shows GPT-4 can generate coherent and contextually relevant answers to specific types of questions. These QA pairs serve as ground truth when comparing against the predicted grading.
Figure 2 shows the prompt for predicting answers based on the previously generated questions and the generated summaries. Figure 3 shows the prompt we used for grading the predicted answers
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Type** & **Question** & **Answer** \\ \hline Issue & What was the warrant issued for? & The warrant was issued to search a dwelling house for weapons allegedly used in an attempted armed robbery. \\ Reason & What test did the judge apply to determine the validity of the warrant? & The judge applied the test that the justice of the peace ’must be satisfied on reasonable grounds.’ \\ Conclusion & What was the conclusion of the case? & Substantial compliance was found and the warrant was upheld. \\ \hline \end{tabular}
\end{table}
Table 1: GPT-4 generated QA examples.
Figure 1: Prompt template for generating question-answer pairs based on annotated sentences.
against the ground truth. We converted GPT-4 grades (0-10 scale) into binary labels by setting a threshold at a grade of 5: grades of 5 and above are labeled "YES" and grades below 5 are labeled "NO". Additionally, the human evaluator assessed whether the generated answer correctly addresses the given question in relation to the model-generated summary. The evaluation options were also limited to "YES" and "NO". During the assessment, the human evaluator found that some of the answers are legally correct but go well beyond the information provided in the generated summary. Hallucinations were also found in some answers.
Table 2 reports the two types of correlation we examined between the GPT-4 evaluation grades and the human evaluation. Pearson correlation measures the linear relationship between the GPT-4 evaluation and human evaluation, while Spearman correlation measures the monotonic relationship. The automatic evaluation of BART-generated summaries has the highest correlation with human evaluation on Issue-type answers in Pearson (0.67) and Spearman (0.72) correlations, LED-generated summaries have the highest correlation on Reason (0.43 in Pearson and 0.31 in Spearman), and GPT-4-generated summaries have the highest correlation on Conclusion (0.57 in both Pearson and Spearman). In terms of Reason-type answers, we notice that the GPT-4 evaluation has a negative correlation with the human evaluation on BART-generated and GPT-4-generated summaries in both Pearson and Spearman correlation.
Overall, LED exhibits robust linear (0.81) and monotonic (0.83) relationships; the correlation results of BART suggest that the relationship it captures is more consistently monotonic (0.80) than linear (0.61). GPT-4 has a stronger Pearson (0.74) than Spearman (0.60) correlation. The average correlations across all models are 0.72 (Pearson) and 0.74 (Spearman), indicating generally strong linear and monotonic relationships.
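The correlations in Table 2 and the binary conversion described above can be computed with scipy as in the sketch below; the example arrays are placeholders, not our data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-answer scores for one model / answer type.
gpt4_grades = np.array([9, 2, 7, 10, 4, 8])        # GPT-4 grades on the 0-10 scale
human_labels = np.array([1, 0, 1, 1, 0, 1])        # human "YES"/"NO" mapped to 1/0

gpt4_binary = (gpt4_grades >= 5).astype(int)        # threshold at a grade of 5

print("Pearson :", pearsonr(gpt4_binary, human_labels)[0])
print("Spearman:", spearmanr(gpt4_binary, human_labels)[0])
```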
Figure 3: Prompt template for evaluating predicted answer with the real answer.
Figure 2: Prompt template for predicting answers based on the model-generated summary.
In conclusion, the QA-based evaluation approach is strongly correlated with human evaluation, which makes it a fairly reliable method for assessing the quality of summarization. The strong correlation suggests that this approach aligns well with human perception and understanding of what constitutes a good summary. The proposed method is a valuable tool for assessing and improving summarization systems.
## 5 Future work
While we show that GPT-4 achieves reasonable correlation with human evaluation of summaries, there are limitations that provide directions for future work: (1) Since GPT-4's performance as an evaluation metric is sensitive to the construction of prompts, we plan to explore various prompts to achieve better performance. (2) We need to scale up the experiment and show more robust comparison results. (3) Quality control of model generation is greatly needed, especially when the input context gets longer and has a more complex structure.
## Acknowledgements
This work has been supported by grants from the Autonomy through Cyberjustice Technologies Research Partnership at the University of Montreal Cyberjustice Laboratory and the National Science Foundation, grant no. 2040490, FAI: Using AI to Increase Fairness by Improving Access to Justice. The Canadian Legal Information Institute provided the corpus of paired legal cases and summaries. This work was supported in part by the University of Pittsburgh Center for Research Computing through the resources provided. Specifically, this work used the H2P cluster, which is supported by NSF award number OAC-21176
\begin{table}
\begin{tabular}{c c c c} \hline Model & Type & Pearson & Spearman \\ \hline \multirow{3}{*}{BART} & Issue & 0.67 & 0.72 \\ & Reason & -0.07 & -0.17 \\ & Conclusion & 0.29 & 0.29 \\ \hline \multirow{3}{*}{LED} & Issue & 0.69 & 0.52 \\ & Reason & 0.43 & 0.31 \\ & Conclusion & 0.13 & 0.13 \\ \hline \multirow{3}{*}{GPT-4} & Issue & 0.56 & 0.56 \\ & Reason & -0.09 & -0.20 \\ & Conclusion & 0.57 & 0.57 \\ \hline BART & IRC & 0.61 & 0.80 \\ LED & IRC & 0.81 & 0.83 \\ GPT-4 & IRC & 0.74 & 0.60 \\ \hline \end{tabular}
\end{table}
Table 2: The correlation between GPT-4 evaluation grade and the human evaluation. IRC is short for Issue, Reason and Conclusion. |
2309.15770 | Generating Transferable Adversarial Simulation Scenarios for
Self-Driving via Neural Rendering | Self-driving software pipelines include components that are learned from a
significant number of training examples, yet it remains challenging to evaluate
the overall system's safety and generalization performance. Together with
scaling up the real-world deployment of autonomous vehicles, it is of critical
importance to automatically find simulation scenarios where the driving
policies will fail. We propose a method that efficiently generates adversarial
simulation scenarios for autonomous driving by solving an optimal control
problem that aims to maximally perturb the policy from its nominal trajectory.
Given an image-based driving policy, we show that we can inject new objects
in a neural rendering representation of the deployment scene, and optimize
their texture in order to generate adversarial sensor inputs to the policy. We
demonstrate that adversarial scenarios discovered purely in the neural renderer
(surrogate scene) can often be successfully transferred to the deployment
scene, without further optimization. We demonstrate this transfer occurs both
in simulated and real environments, provided the learned surrogate scene is
sufficiently close to the deployment scene. | Yasasa Abeysirigoonawardena, Kevin Xie, Chuhan Chen, Salar Hosseini, Ruiting Chen, Ruiqi Wang, Florian Shkurti | 2023-09-27T16:42:06Z | http://arxiv.org/abs/2309.15770v3 | # Generating Transferable Adversarial Simulation Scenarios for Self-Driving via Neural Rendering
###### Abstract
Self-driving software pipelines include components that are learned from a significant number of training examples, yet it remains challenging to evaluate the overall system's safety and generalization performance. Together with scaling up the real-world deployment of autonomous vehicles, it is of critical importance to automatically find simulation scenarios where the driving policies will fail. We propose a method that efficiently generates adversarial simulation scenarios for autonomous driving by solving an optimal control problem that aims to maximally perturb the policy from its nominal trajectory. Given an image-based driving policy, we show that we can inject new objects in a neural rendering representation of the deployment scene, and optimize their texture in order to generate adversarial sensor inputs to the policy. We demonstrate that adversarial scenarios discovered purely in the neural renderer (surrogate scene) can often be successfully transferred to the deployment scene, without further optimization. We demonstrate this transfer occurs both in simulated and real environments, provided the learned surrogate scene is sufficiently close to the deployment scene.
## 1 Introduction
Safety certification of a self-driving stack would require driving hundreds of millions of miles on real roads, according to [1], to be able to estimate miles per intervention with statistical significance. This could correspond to decades of driving and data collection. Procedural generation of driving simulation scenarios has emerged as a complementary approach for designing unseen test environments for autonomous vehicles in a cost-effective way. Currently, generation of simulation scenarios requires significant human involvement, for example to specify the number of cars and pedestrians in the scene, their initial locations and approximate trajectories [2], as well as selection of assets to be added to the simulator. In addition to being challenging to scale, having a human in the loop can result in missing critical testing configurations.
In this paper, we cast adversarial scenario generation as a high-dimensional optimal control problem. Given a known image-based driving policy that we want to attack, as well as the dynamics of the autonomous vehicle, we aim to optimize a photorealistic simulation environment such that it produces sensor observations that are 3D-viewpoint-consistent, but adversarial with respect to the policy, causing it to deviate from its nominal trajectory. The objective of the optimal control problem is to maximize this deviation through plausible perturbations of objects in the photorealistic environment.
Our optimal control formulation requires differentiation through the sensor model in order to compute the derivative of the sensor output with respect to the underlying state perturbation. However, most existing photorealistic simulators for autonomous vehicles are not differentiable; they can only be treated as black boxes that allow forward evaluation, but not backpropagation. Instead of using an off-the-shelf photorealistic simulator and adding assets to match the scene, we train an editable
neural rendering model that imitates the deployment scene, allowing us to insert new objects in the simulator and to optimize their texture through gradient-based optimization. This editable neural rendering model acts as a surrogate physics and rendering simulator, enabling us to differentiate through it in an efficient way in order to attack the driving policy's input sensor observations.
Unlike many existing types of adversarial attacks in the literature [3; 4; 5], our work aims to discover environment perturbations/attacks that satisfy the following properties: (a) **They are temporally-consistent**. The influence of the attack is not instantaneous, it is amortized through time via the optimal control loss function. (b) **They are transferable**. An attack discovered in the surrogate scene should ideally be adversarial in the actual deployment scene. (c) **They are object-centric**. The attack introduces and edits objects as opposed to unstructured high-frequency perturbations across all pixels. Specifically, we make the following contributions:
1. We formulate adversarial scenario generation across time as an optimal control problem that relies on a learned, surrogate NeRF simulator. The solution to this problem yields 3D-view-consistent, object-centric, adversarial attacks that often transfer to the deployment environment. We show how to solve this problem efficiently using implicit differentiation.
2. Differentiable rendering of our surrogate NeRF model enables gradient-based adversarial object insertion and scales to high dimensional parameterizations of multiple objects.
3. We show that our adversarial attacks discovered in the surrogate NeRF simulator can be realized in the real-world and retain their ability to disrupt the policy.
We experimentally validate our framework by reconstructing scenes using only pose-annotated images and generate adversarial object insertion attacks with multiple trajectories.
## 2 Related Work
**Adversarial scenarios for autonomous driving**. Perceptual adversarial attacks make modifications to prerecorded sensor data from real driving sessions to fool the perception system. Since this sensor data is fixed, they lack the ability to resimulate and typically only operate on the individual frame level. Previous works, [4; 6] attempt to attack a LiDAR object detection module by artificially inserting an adversarial mesh on top of car rooftops or objects in a prerecorded LiDAR sequence. They extend the scope of their attack further by incorporating textures to be able to attack image-based object detectors as well [3]. In both these works, the inserted object has a very low resolution
Figure 1: First-person-view (FPV) of our adversarial attack transfer to an RC car with overhead trajectory view on the right. Row 1: Unperturbed policy execution; Row 2: Random search texture attack; Row 3: Our adversarial attack directly transferred to the real deployment scene, without additional optimization; Row 4: Our adversarial attack discovered in the surrogate NeRF simulator.
and nondescript geometry. Recent self-driving simulators, such as DriveGAN [7], GeoSim [8] and UniSim [9] address these issues, with the latter enabling manipulable sensor-based simulators based on prerecorded datasets. These works, however, have not dealt with discovering attacks.
Another prominent line of works produce dynamic state-level adversarial attacks. These generally target the control/planning system only by perturbing trajectories of other agents in the scene. Without considering the perception system, these methods use simplified traffic and state-based simulators that do not incorporate 3D rendering [10; 11; 12].
Closest to our work, a few methods have proposed to attack end-to-end policies by adding perturbations to existing self-driving simulators. In [13], the trajectories of other agents in a CARLA scene are modified to generate a collision event. Due to the non-differentiability of the simulator, a black-box Bayesian optimization is used. Gradient-based attacks on top of simulators have also been investigated. However, the requirement of differentiability has so far limited their scope to very simplified geometries that are composited post-hoc onto renderings from CARLA. In [5], flatly colored rectangles are composited on top of frames from the CARLA simulator and optimized to cause maximal deviation of an end-to-end image-based neural network steering controller. Similarly, work in [14] attempts to play a video sequence of adversarial images on a billboard in the scene using image composition. To our knowledge, no works in this setting have been able to demonstrate transfer of adversarial attacks to the real world, as these attacks rely on a pre-existing simulator that they augment. Compared to these, our attacks are entirely performed on a surrogate neural simulator that is reconstructed from only posed images captured from any deployment scene. Furthermore, our surrogate neural simulator allows for inserting arbitrary objects reconstructed from posed images.
The driving simulator VISTA [15] generates high fidelity sensor data using a large collection of real world data. In our case, we are able to train a NeRF using the data, allowing us to generalize to a wider range of novel views. [16] samples adversarial masking of existing LiDAR data using reinforcement learning. Work on perception error models [17] avoids using a simulator altogether and instead focuses on learning a surrogate policy that uses lower dimensional salient features, which are attacked. However, it would be very difficult to infer the real world perceptual disturbance that would cause the attack, so these attacks are very challenging to transfer to the real world.
**Robust adversarial examples.** Adversarial attacks for classification have commonly used minimal perturbations on the input images [18] that may not always transfer to the physical world or another
Figure 2: Our method can be summarized in the four steps shown. (a) In the top left, we obtain posed images from the deployment scene which can be a simulator or the real world. (b) In the bottom left, we reconstruct a surrogate scene by fitting a NeRF to the posed images as a differentiable simulator and observe only minor perceptual gap. (c) Having the surrogate scene, we can insert objects, which are also represented as NeRFs, and attack their color fields to generate textural attacks. (d) The discovered adversarial objects are introduced back into the deployment scene.
domain. To enhance robustness to domain transfer, [19] proposes a class of adversarial transformations that optimize for adversarial attacks under a distribution of image transforms [20].
## 3 Background
### Neural Rendering
Neural 3D representations, such as neural radiance fields (NeRF), have seen significant activity in recent years due to their ability to reconstruct real world objects and scenes to very high detail using only posed images. A survey of recent progress in neural rendering can be found in [21].
In [22], physics simulations of known objects are combined with their learned NeRF to create high fidelity dynamic environments for training deep reinforcement learning agents that can transfer to the real world. In our work, we use composition of NeRFs to insert and optimize adversarial objects. This is shown in Fig. 3, and the details are in Sections 4.1 and 4.2. We render the scene using the volume rendering equation:
\[I(x,\omega)=\int_{0}^{T}\sigma(x+t\omega)\exp\left(-\int_{0}^{t}\sigma(x+\hat{t}\omega)\,\mathrm{d}\hat{t}\right)L(x+t\omega,-\omega)\,\mathrm{d}t \tag{1}\]
where \(I(x,\omega)\) is the intensity at a location \(x\) given in world space in the direction \(\omega\). \(L\) and \(\sigma\) are the learned color and density fields in NeRF. For the sake of performance, we choose to use grid-based volume representations. Structured grid NeRFs reduce computation cost by storing direct density and color variables [23] or latent features [24; 25] on explicit 3D grids. In essence, they trade extra memory utilization for large performance improvements. Instant Neural Graphics Primitives (iNGP) [26] uses multi-scale dense grids and sparse hash grids of features that are decoded to color and density by an MLP. We chose to use iNGP because it balances our performance and memory tradeoffs well.
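In practice, Eq. (1) is evaluated with the standard quadrature used by NeRF-style renderers: densities and colors are sampled at discrete points along each ray and accumulated front to back. A minimal sketch of this per-ray accumulation (assuming the per-sample densities, colors and spacings have already been queried from the grids) is:

```python
import torch

def render_ray(sigma, color, delta):
    """Discretized version of Eq. (1) along a single ray.

    sigma : (N,) densities at the sample points
    color : (N, 3) emitted colors L at the sample points
    delta : (N,) distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * delta)                       # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                       # contribution of each sample
    rgb = (weights[:, None] * color).sum(dim=0)                   # rendered pixel color
    opacity = weights.sum()                                       # accumulated opacity
    depth = (weights * torch.cumsum(delta, dim=0)).sum()          # expected depth along the ray
    return rgb, opacity, depth
```

The accumulated opacity and expected depth returned here are the quantities that the object-background compositing described later in the Method section relies on.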
## 4 Method
Our framework generates successful adversarial attacks of end-to-end image-based self-driving policies with only access to posed images from the deployment scene. An overview of the high-level steps in our framework is shown in Figure 2.
We now briefly describe the setting and our adversarial attack method. More details are included in Appendix B. Let \(x_{t}\) denote the state of the car at time \(t\), \(x^{*}\) denote a reference trajectory to track and CTE the cross-track error. Starting from Eqn. 6, we set the cost function \(C(x_{t})\) of our problem as the car's proximity to the reference \(x^{*}\):
\[C(x_{t})=-\text{CTE}(x_{t},x^{*}) \tag{2}\]
In other words, we want to maximize deviation from the desired trajectory. We set the constraint function \(G(x_{t},x_{t+1},\theta)=0\) to be the following set of constraints:
\[u_{t}=\pi_{\phi}(o_{t}) \tag{3}\]
\[o_{t}=h_{\gamma,\theta}(x_{t}) \tag{4}\]
\[x_{t+1}=f_{c}(x_{t},u_{t}) \tag{5}\]
where \(\pi\) is the fixed driving policy*, \(h\) is the neural rendering sensor model that outputs image observations \(o_{t}\) given the state of the car. The renderer depends on \(\theta\), the parameters of adversarial NeRF objects, and \(\gamma\), the fixed rendering parameters of the background scene NeRF. Finally, \(f_{c}\) denotes the dynamics of the ego vehicle that must be considered, since we want to find adversarial trajectories that are consistent across multiple frames.
Footnote *: We train our own policy and provide details in Appendix C.2.
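The constrained rollout of Eqs. (2)-(5) can be written as a short differentiable loop; in the sketch below, `policy`, `render` and `dynamics` stand for the frozen policy \(\pi_{\phi}\), the composed NeRF renderer \(h_{\gamma,\theta}\) and the kinematic car model \(f_{c}\), and `cte` is a placeholder for the cross-track-error computation.

```python
import torch

def rollout_cost(theta, x0, policy, render, dynamics, cte, reference, T):
    """Forward pass of the optimal control problem: J(theta) = -sum_t CTE(x_t, x*)."""
    x, J = x0, torch.zeros(())
    for _ in range(T):
        o = render(x, theta)       # Eq. (4): image observation from the surrogate scene
        u = policy(o)              # Eq. (3): frozen image-based driving policy
        x = dynamics(x, u)         # Eq. (5): kinematic car model
        J = J - cte(x, reference)  # Eq. (2): accumulate negative cross-track error
    return J
```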
### Differentiable Renderer
Traditional simulators like CARLA do not admit computation of gradients. Thus, prior works rely on artificially compositing simplistic textured geometries on top of rendered images from CARLA and
obtaining gradients with respect to the composited alteration [14]. We use NeRFs to learn surrogate models of the scene and sensor model instead. This surrogate model not only gives us an automated method to reconstruct scenes from pose-annotated images, but also provides efficient gradient computation giving us a differentiable form for the sensor \(h\). For the purposes of optimization, we found traditional NeRF representations to be intractable in terms of compute and memory requirements (during gradient computation). Thus, we opt to use the multi-resolution hash grid representation, Instant-NGP [26].
Note that, similar to existing work, we detach the gradients of the image observation with respect to the camera coordinates (which are attached to the ego vehicle) [27]. We include more details regarding this in Appendix B.3.
### Adversarial Object Insertion
We use insertion and texturing of multiple objects as our adversarial perturbations to the background scene. To do this, we first reconstruct regular objects, such as cars, as individual NeRFs from pose-annotated images. For our object NeRFs we simply store color values directly on the voxel grids of Instant-NGP, which are tri-linearly interpolated within each voxel. By choosing these color voxel grids as our adversarial parameters \(\theta\), we can perform independent adversarial texture attacks over multiple objects.
The object NeRFs can be easily composed with our background scene NeRF. This is done via alpha compositing, which leverages opacity and depth values that can be easily computed.
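A minimal per-pixel version of this compositing step, given the opacity-weighted color, accumulated opacity and expected depth produced by each renderer (e.g., by the volume-rendering quadrature sketched earlier), is to place whichever layer is closer in front with the standard "over" operator. Real implementations can instead merge and re-sort the samples along each ray; the sketch below is a simplification.

```python
import torch

def composite(rgb_a, alpha_a, depth_a, rgb_b, alpha_b, depth_b):
    """Depth-ordered 'over' compositing of two rendered layers.

    rgb_*   : (H, W, 3) colors already weighted by opacity
    alpha_* : (H, W) accumulated opacities, depth_* : (H, W) expected depths
    """
    a_in_front = (depth_a < depth_b)[..., None]                  # (H, W, 1) mask
    rgb_front = torch.where(a_in_front, rgb_a, rgb_b)
    rgb_back = torch.where(a_in_front, rgb_b, rgb_a)
    alpha_front = torch.where(a_in_front[..., 0], alpha_a, alpha_b)[..., None]
    return rgb_front + (1.0 - alpha_front) * rgb_back            # 'over' operator
```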
### Gradient computation via implicit differentiation
We use implicit differentiation for gradient computation [28], also known as the adjoint method, which enables constant memory gradient computation with respect to trajectory length. In discrete time, the adjoint method amounts to propagating gradients through an implicit relationship \(G\) for
Figure 4: Base car on the left; random texture in the middle; adversarial texture on the right.
Figure 3: A computation diagram of our algorithm for generating adversarial attacks. The inner driving loop consists of three components: the neural rendering model, the differentiable driving policy, and the differentiable kinematic car model. We inject the adversarial perturbation to the surrogate scene by composing the outputs of one or more neural object renderers (the single object case is shown above for simplicity) with the output of the neural scene renderer. The parameters of the object renderer(s) are optimized to maximize the deviation of the realized trajectory from the reference trajectory, while keeping the parameters of the driving policy and scene renderer frozen.
problems of the form:
\[\min_{\theta}J(\theta)=\sum_{t=0}^{T}C(x_{t})\text{ such that }G(x_{t-1},x_{t},\theta)=0 \tag{6}\]
Explicitly, the method performs a forward simulation to compute the variables \(x_{t}\) and then subsequently a backward pass to compute adjoint variables \(\lambda_{t}\) by solving the equations:
\[\frac{\partial G(x_{t-1},x_{t})}{\partial x_{t}}^{\top}\lambda_{t}=-\frac{ \partial C(x_{t})}{\partial x_{t}}^{\top}-\frac{\partial G(x_{t},x_{t+1})}{ \partial x_{t}}^{\top}\lambda_{t+1} \tag{7}\]
with the boundary condition:
\[\frac{\partial G(x_{T-1},x_{T})}{\partial x_{T}}^{\top}\lambda_{T}=-\frac{ \partial C(x_{T})}{\partial x_{T}}^{\top} \tag{8}\]
Finally, the gradient of the loss can be calculated as:
\[\nabla_{\theta}J=\lambda_{1}^{\top}\frac{\partial G(x_{0},x_{1},\theta)}{ \partial x_{0}}\frac{\partial x_{0}}{\partial\theta}+\sum_{t=1}^{T}\lambda_{ t}^{\top}\frac{\partial G(x_{t-1},x_{t},\theta)}{\partial\theta} \tag{9}\]
Throughout both passes we do not need to store large intermediate variables and only need to accumulate the gradient at each step.
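In code, the backward recursion of Eqns. (7)-(9) can be carried out with vector-Jacobian products through the one-step map. The sketch below uses toy stand-ins `phi` and `cost` for the closed-loop step and per-step cost (the signs of the adjoint variables are absorbed into the recursion); it illustrates the constant-memory pattern rather than reproducing the actual implementation.

```python
import torch

def phi(x, theta):        # toy stand-in for the one-step closed-loop map
    return x + 0.1 * torch.tanh(theta[:3] + theta[3:] * x)

def cost(x):              # toy stand-in for the per-step cost C(x_t)
    return x[1] ** 2

def adjoint_gradient(x0, theta, horizon):
    """Constant-memory gradient of J = sum_t C(x_t) with x_t = phi(x_{t-1}, theta)."""
    xs = [x0]
    with torch.no_grad():                      # forward pass: store only the small states
        for _ in range(horizon):
            xs.append(phi(xs[-1], theta))
    grad_theta = torch.zeros_like(theta)
    lam = torch.zeros_like(x0)                 # adjoint variable carried backwards
    for t in range(horizon, 0, -1):            # backward pass, one step at a time
        x_prev = xs[t - 1].detach().requires_grad_(True)
        th = theta.detach().requires_grad_(True)
        x_t = phi(x_prev, th)
        v = torch.autograd.grad(cost(x_t), x_t, retain_graph=True)[0] + lam
        g_x, g_th = torch.autograd.grad(x_t, (x_prev, th), grad_outputs=v)
        lam, grad_theta = g_x, grad_theta + g_th
    return grad_theta

grad = adjoint_gradient(torch.zeros(3), 0.1 * torch.randn(6), horizon=100)
```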
### Gradient-based Adversarial Attack
In principle, gradients for the problem in Eqn. (6) can be obtained with an autodifferentiation framework such as PyTorch [29]. In practice, however, naively computing the gradient via backpropagation runs into memory issues as we scale up trajectory lengths, because all the intermediate variables used to compute the integral in Eqn. (1) are stored until the end of the trajectory. We achieve drastic memory savings by using the adjoint method [30], which only keeps track of the adjoint variables \(\lambda\) along the trajectory. In our case, the adjoint variables are three-dimensional, allowing us to use only as much memory as it takes to compute a single Jacobian-vector product of the composition of models given by (5), (3), (4) in the optimization problem in Eqn. (6).
To summarize, the computation of our gradient-based adversarial attack proceeds as follows:
1. We roll out our policy in our surrogate simulator to compute the loss and the trajectory \(x_{1:T}\) in Eqn. (6).
2. We perform a backward pass to compute adjoint variables for gradient computation.
3. Using the adjoint variables, we compute the gradient \(\nabla_{\theta}J\) and update parameters \(\theta\).
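Putting these three steps together, the outer optimization loop is just gradient ascent on the deviation objective. The sketch below uses Adam with the learning rate and iteration count reported later in the experiments; `attack_objective` is a toy placeholder for the full surrogate-simulator rollout \(J(\theta)\), so this is an illustration of the loop structure only.

```python
import torch

def attack_objective(theta):               # placeholder for the rollout J(theta)
    return (torch.tanh(theta) ** 2).sum()  # toy deviation measure to be maximized

theta = 0.01 * torch.randn(64)             # adversarial object parameters
theta.requires_grad_(True)
opt = torch.optim.Adam([theta], lr=0.1)

for _ in range(50):
    opt.zero_grad()
    loss = -attack_objective(theta)   # step 1: rollout; maximize deviation
    loss.backward()                   # step 2: backward/adjoint pass for grad wrt theta
    opt.step()                        # step 3: update the adversarial parameters
```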
## 5 Experiments
To demonstrate the effectiveness of our framework, we aim to reconstruct a driving scenario from posed images, generate adversarial attacks and validate that those attacks transfer to the deployment scene. Through our experiments, we would like to answer the following key questions:
1. Can gradient-based optimization find better adversarial examples than random search?
2. Are NeRF models suitable surrogate renderers for gradient-based adversarial optimization?
3. Are adversarial attacks transferable from NeRF back to the deployment domain?
### Experimental Details
**CARLA Deployment Scenes.** We first validate our method in simulation, treating CARLA as a proxy for a real deployment scene. We perform experiments on a 3-way intersection in the CARLA [31] simulator, for which we consider 3 different trajectories to be followed by the ego vehicle. For the object models, we train surrogate NeRF models for two sample objects, a fire hydrant and a small car, using only posed images (any other object could be used). We
manually insert 2 small cars and 3 fire hydrants into the driving scene in an initial placement. Our adversarial attacks jointly optimize the NeRF color parameters and object rigid transforms.
**Real World Deployment Scenes.** Our real-world experiments are performed on an autonomous RC car driving around a square race track in an indoor room. It is difficult to manufacture adversarially attacked objects with complex shapes in the real world. Hence, for practicality, we insert a NeRF object representing a flat square texture pattern that can be projected by a display monitor in the real world, and we optimize its color parameters. To create a first version of the attack, we directly compose the adversarial texture onto the robot camera feed. We then move on to the more difficult task of physically realizing the attack; for this, we opted to display the texture on a monitor to simplify lighting conditions. Additional details of our real-world experimental setup are given in Appendix C.4.1.
### Evaluation Metrics
We measure the effectiveness of an attack with our adversarial objective, the cross-track error of the vehicle. We use the road center as the reference, so even an unperturbed driving policy has some non-zero deviation, which we report under "Unperturbed" in Table 1. To characterize the sensitivity of our method to random seeds, we run 5 separate attacks per scenario for both the gradient-based and random attacks, with different random initializations of the adversarial parameters, and report the mean and standard deviation of our metric.
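One simple way to compute such a total cross-track error against a discretized road-center polyline is sketched below; the nearest-point formulation is our assumption for illustration, not necessarily the exact metric used.

```python
import numpy as np

def total_cross_track_error(trajectory, centerline):
    """Sum over timesteps of the distance from the ego position to the road center.

    trajectory: (T, 2) ego x-y positions; centerline: (M, 2) road-center polyline.
    """
    d = np.linalg.norm(trajectory[:, None, :] - centerline[None, :, :], axis=-1)
    return d.min(axis=1).sum()          # nearest centerline point at each timestep

err = total_cross_track_error(np.random.rand(200, 2), np.random.rand(100, 2))
```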
Our proposed attack uses gradient-based optimization as outlined in Section 4.4, with \(50\) iterations of Adam and a learning rate of \(0.1\). Due to the high-dimensional parameterization, detailed in Appendix B.3.1, Bayesian optimization becomes computationally intractable. Therefore, as a baseline for our method, we perform a random-search attack on the NeRF surrogate model that samples parameters from a Gaussian distribution with mean zero and standard deviation \(5\); we chose this standard deviation to match the distribution over parameters found in our gradient attacks. We use the same number of function evaluations, selecting the best attack among the 50 random samples for the CARLA experiment. For the real-world experiments, we did not find much variation between random attacks in the surrogate simulator, which illustrates the difficulty of random search in high-dimensional parameter spaces.
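The random-search baseline then reduces to sampling parameter vectors from a zero-mean Gaussian with standard deviation 5 and keeping the best of 50 evaluations; a minimal sketch is given below, with `evaluate_deviation` as a toy placeholder for a surrogate-simulator rollout.

```python
import torch

def evaluate_deviation(theta):             # placeholder for a surrogate rollout
    return (torch.tanh(theta) ** 2).sum().item()

best_theta, best_score = None, float("-inf")
for _ in range(50):                        # same evaluation budget as the gradient attack
    theta = 5.0 * torch.randn(64)          # zero-mean Gaussian, standard deviation 5
    score = evaluate_deviation(theta)
    if score > best_score:
        best_theta, best_score = theta, score
```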
### Experimental Results
Example gradient attack trajectories are shown in Figure 5. We include more visualizations of results for deployments of adversarial attacks, both in CARLA simulation and in the real world, as well as preliminary results of retraining the CARLA policy using new data, in Appendix D. In Table 1 we compare the total cross-track errors caused by our adversarial attack against the expert lane-following controller.
We observe in all 3 CARLA scenarios (averaged over 5 seeds each) that our adversarial attacks using gradient optimization consistently produce significant deviation from the lane center. When transferring these attacks back into the deployment scene, we see that although the magnitude of the deviation is reduced, we still retain a significant increase over the unperturbed or random search setting. The difference is likely due to visual imperfections in our surrogate NeRF simulator compared to the deployment scene. The random search perturbations are far less effective, remaining near the baseline unperturbed trajectory for 2 out of the 3 cases.
Figure 5: Selected overhead views and snapshots from adversarial deployment trajectories in the real world (top row: monitor displays adversarial texture discovered in NeRF), and in CARLA (bottom row: adversarial objects inserted in the simulator).
For the real-world experiment, we observe a similar result. Random attacks consistently fail to elicit deviation from the driving policy in both the surrogate and deployment scenes: over 5 random seeds, not a single random attack was able to cause the vehicle to exit the track. Gradient-based optimization, on the other hand, reliably finds strong attacks with little variance in the surrogate scene. When transferring our attacks to the real world, we find that they retain their strength in the green-screen setup. The strength of the attack is diminished when using the monitor to project it, but it is nonetheless consistently higher than the random attack and causes the vehicle to understeer and exit the track on occasion. We suspect this is due to the display properties of the monitor, which can alter the appearance of the adversarial perturbation.
## 6 Limitations
Although our method can generate 3D-consistent adversarial scene perturbations, there are a few avenues for improvement. First, we assume that the vision-based driving policy is differentiable. Recent works have shown high potential for modular end-to-end learned policies that could leverage synergies between vision and planning, such as neural motion planning [32], TransFuser [33] and many others [34]. We discuss three potential methods to handle non-differentiable policies in Appendix E.1. Second, while we do optimize for both adversarial textures and object poses, our experiments in Appendix D.3 show that the latter produces significantly non-smooth loss landscapes, which necessitated multi-start gradient optimization methods to handle local minima.
## 7 Conclusion
We presented a method for generating 3D-consistent object-based adversarial perturbations in autonomous driving scenarios. Unlike previous approaches that rely on making edits on top of fixed pre-recorded data or black-box simulators, we develop a differentiable simulator directly from a neural radiance field representation of the geometry and texture of a scene, which admits gradients through the rendering of camera and depth observations. Through alpha compositing, we can introduce new objects, also represented as neural radiance fields, into the scene and optimize color perturbations of those objects. We validate our framework both in simulation and on a real-world RC car race-track driving scenario, showing successful sim-to-real transfer of discovered attacks. While our particular implementation is only a first step towards demonstrating NeRF-based adversarial attack generation, we believe that our framework shows a promising new direction for automatic evaluation of autonomous vehicles. We expect our method to benefit greatly from continued improvements to neural rendering and its wider adoption for AV/robotic simulation.
**CARLA deployment**

| Scenario | Unperturbed | Surrogate Scene (Random) | Surrogate Scene (Gradient) | CARLA Deployment (Random) | CARLA Deployment (Gradient) |
| --- | --- | --- | --- | --- | --- |
| Straight | 1166 | 1132 ± 7 | 2347 ± 49 | 1193 ± 19 | 1702 ± 160 |
| Right | 1315 | 2084 ± 10 | 4105 ± 847 | 1476 ± 12 | 2101 ± 75 |
| Left | 1448 | 1460 ± 8 | 4125 ± 124 | 1158 ± 163 | 2240 ± 574 |

**Physical deployment**

| Setup | Unperturbed | Surrogate Scene (Random) | Surrogate Scene (Gradient) | Physical Deployment (Random) | Physical Deployment (Gradient) |
| --- | --- | --- | --- | --- | --- |
| Green Screen | 48 | 34 ± 4 | 157 ± 1 | 46 ± 3 | 248 ± 72 |
| Monitor | 48 | 34 ± 4 | 157 ± 1 | 47 ± 3 | 76 ± 48 |

Table 1: Comparison of the total cross-track error for all the scenarios tested. Results are shown for the following cases: (1) no attack in the deployment scene (unperturbed), (2) an adversarial attack (random or gradient) in the surrogate NeRF scene, and (3) an attack in the deployment scene. We separate results from the CARLA and physical deployments, and show that gradients in our surrogate simulator are useful for finding adversarial attacks and that these attacks remain effective when transferred to the deployment environment. |
2301.00239 | Promoting the transition to quantum thinking: development of a secondary
school course for addressing knowledge revision, organization, and
epistemological challenges | We describe the development of a course of quantum mechanics for secondary
school designed to address the challenges related to the revision of classical
knowledge, to the building of a well-organized knowledge structure on the
discipline, and to the development of a plausible picture of the quantum world.
The course is based on a systemic approach to conceptual change, which relies
on its analysis in the transition from classical to quantum mechanics, and
coordinates cognitive and epistemological aspects. We show how our approach
drives the derivation of design principles, how these principles guide the
development of the instructional sequence and of its strategies, how their
implementation requires the blending of different research perspectives and
learning systems. The first challenge is addressed through a path of revision
of classical concepts and constructs which leverages prior knowledge according
to the dynamics of each notion in theory change. The second by adopting a
framework that promotes the construction of a unifying picture of quantum
measurement across contexts. The third by designing the course around a
modelling process that engages students in epistemic practices of the
theoretical physicist, such as generating and/or running thought experiments,
and mathematical modelling in a purely theoretical setting. All is aimed to
help students accept the quantum description of the world as a plausible
product of their own inquiry. This process is assisted by the discussion of the
facets of the foundational debate that are triggered by each of the suggested
interpretive choices, with the goal to promote an awareness of its cultural
significance, of the limits of the chosen stance, of the open issues. Data on the
cycles of refinement illustrate how a set of activities have been made
effective in addressing the challenges at a local level. | Giacomo Zuccarini, Marisa Michelini | 2022-12-31T16:09:37Z | http://arxiv.org/abs/2301.00239v8 | Promoting the transition to quantum thinking: development of a secondary school course for addressing knowledge revision, organization, and epistemological challenges
###### Abstract
We describe the development of a course of quantum mechanics for secondary school designed to address the challenges related to the revision of classical knowledge, to the building of a well-organized knowledge structure on the discipline, and to the development of a plausible and reliable picture of the quantum world. The course is based on a coordinated application of an analysis of conceptual change in the learning of a successive theory, of a framework describing the epistemic practices of theoretical physicists, and of a careful approach to interpretive themes. We show how they drive the derivation of the design principles, how these principles guide the development of the instructional sequence and of its strategies, how their implementation requires the blending of different research perspectives and learning systems. The first challenge is addressed through a path of revision of classical concepts and constructs which leverages student resources according to the trajectory of each notion. The second by adopting a framework that promotes the construction of a unifying picture of quantum measurement across contexts. The third by designing the course around a modelling process that engages students in epistemic practices of the theoretical physicist, such as generating and/or running thought experiments, and mathematical modelling in a purely theoretical setting. All is aimed to help students accept the quantum description of the world as a plausible and reliable product of their own inquiry. This process is assisted by the discussion of the facets of the foundational debate that are triggered by each of our interpretive choices, with the goal to promote an awareness of its cultural significance, of the limits of the chosen stance, of the open issues. Data on the cycles of refinement are used to illustrate the coherence between the principles and the activities designed to implement them, as well as the process by which the revision of the activities contributed to shaping the initial guidelines.
## I Introduction
Research on the teaching and the learning of quantum mechanics (QM) holds a special position in physics education and science education at large, since it is at the crossroads of general research threads and key topics in the field.
First of all, students learning QM face a substantial challenge in achieving an effective knowledge revision. In theory change, basic terms of classical physics, such as 'measurement' and 'state', undergo a shift in meaning. Students struggle to interpret the properties of their quantum counterparts, as reported by research conducted at different educational levels. Investigations on upper-division students elicited several issues with the new features of ideal quantum measurement [1]; at a sophomore level, the interpretation of its probabilistic character, and, as a result, of quantum uncertainty has been recognized as a major challenge to students [2]; in the context of photon polarization, research revealed difficulties to interpret the concept of quantum state, identified by secondary school students as a physical quantity [3]. The impossibility to visualize quantum systems and the unintuitive nature of the new versions of the concepts represent an educational bottleneck that can be overcome with the support of mathematical sense-making. However, also familiar constructs such as vectors and vector superposition change both in properties and representational role [3]. Not surprisingly, students struggle to develop a consistent physical interpretation of the quantum version of these constructs: even at the beginning of graduate instruction, they have difficulties to identify the referent of vector superposition in QM, as they tend to associate it with mixed states, which can be described classically as lack of knowledge about the state of the system [4].
Studies on knowledge revision represent a general line of research also in the initial learning of science [5; 6; 7]. Identifying analogies and differences between the introductory case and the quantum one might be useful for interpreting empirical results on student understanding and devising strategies to promote an effective revision.
Another challenge faced both by introductory science students and physics majors enrolled in a QM course is the difficulty to overcome knowledge fragmentation, respectively as regards introductory science [8] and the quantum model [9]. Research conducted at the end of upper-division QM courses and at the beginning of graduate instruction suggests that student reasoning is strongly context-dependent [10], and therefore that the development of a globally consistent knowledge structure may be only halfway even after prolonged periods of instruction. So far as we know, no investigation of this issue is available on secondary school students and non-physics-or-engineering majors who received traditional instruction on QM. However, the more limited scope of teaching/learning sequences (TLSs) designed for such student populations enhances the risk of promoting the construction of disconnected models valid only in the context of an individual phenomenon or experiment [11]. As in the case of knowledge revision, the causes and features of fragmentation in learning QM might be investigated and usefully contrasted with those described by research on introductory science students.
A challenge specifically related to the learning of QM is due to the controversial character of its scientific epistemology: the nature of the systems described by the mathematical formalism, the completeness or not of the information we can get on them, and the explanation of observations in the lab depend on the chosen interpretive stance. The traditional presentation of the theory comes with a seemingly counterintuitive picture of the world, which requires students to revise or renounce very basic tenets about nature such as the well-defined position of physical objects (e.g., [12]). Research indicates that QM can be accepted as a personally convincing description of physical reality only if the quantum model is perceived as plausible and reliable by the learner [13]. One may ask how to address this need.
Overall, a major goal of physics education research (PER) on QM is helping students overcome the many-fold challenges discussed in the previous paragraphs. For this purpose, the PER community has produced in recent years a number of instructional materials and TLSs drawing on various approaches to the subject matter and on the results of currently active research lines. For instance, C. Singh and the PER team at the University of Pittsburgh built on their own research on common difficulties to design and revise interactive tutorials (QuILTs) on several topics, with the aim to promote the construction of schemas consistent with QM principles (e.g., [14]). Favoring the development of an integrated model has been a basic aim of Malgieri _et al._, who implemented Feynman's sum over paths approach with the extensive use of GeoGebra simulations, so as to allow secondary school students to analyze different experimental setups with the same conceptual tools [11]. Wittmann and Morgan pursue the same goal, but their TLS for nonscience majors places special emphasis on personal epistemology (not to be confounded with scientific epistemology) as a means to help students work with nonintuitive contents and to strengthen their understanding of scientific modelling [15]. The controversial nature of the physical interpretation of QM and the discussion of students' beliefs about it has been made a topic onto itself by Baily and Finkelstein, who designed a reformed modern physics course for engineering majors aimed to help them develop more consistent views of quantum phenomena, more sophisticated views of uncertainty, and greater interest in QM [16].
Given the growing consensus in PER and science education at large to move the focus from difficulties to student resources, i.e. pieces of prior knowledge that can be productively used in the process of conceptual growth [17; 18], researchers are starting to ask how to put conceptual, symbolic and epistemological resources of students in the service of learning QM [19; 20]. However, as regards instructional materials and reformed courses in QM, there is a need to identify the possible links between specific sets of available knowledge elements or structures and potentially productive educational strategies, and to empirically test their effectiveness.
More in general, since the release of _The Structure of Scientific Revolutions_ by T. Kuhn [21], the theory change from classical mechanics (CM) to QM has been seen as an exemplary case of conceptual change in the history of science [22]. Educational research shows that this is a central element behind the challenges students face in learning QM [23; 24; 25]. With the rise of models of conceptual change in learning [e.g. 26], several researchers started to consider the problem of teaching QM as the design of strategies to effectively promote a conceptual change in individual learners [27; 28; 11; 22]. However, educational models of conceptual change have been primarily developed to account for the transition from naive to scientific knowledge [29], a process associated with the modification of conceptual structures formed in the context of lay culture. The change from CM to QM, instead, involves the modification of a knowledge structure concerning a scientific theory, and developed as a result of instruction. In order to account for this different type of change, advancing research on the interpretation of empirical data and the design of effective strategies for teaching QM, it is important to examine where and how conceptual change models need to be revised.
In this paper, we describe how (a) an analysis of conceptual change in the learning of a successive theory, (b) a framework describing the epistemic practices of theoretical physicists, and (c) a careful approach to interpretive themes are integrated in the development of a QM course for secondary school, with the goal to address the challenges involved in the revision of classical knowledge, in the building of a well-organized knowledge structure on QM, and in the building of a plausible and reliable picture of the quantum world. The analysis of conceptual change involves an in-depth characterization of the first two challenges, and suggests how to leverage prior knowledge for achieving a revision of classical knowledge. Student exploration of authentic practices in a theory-building activity proposes a strategy for promoting the development of a plausible and reliable picture of the quantum world, assisted by the development of an awareness of the foundational debate.
The course includes four units: 1) Introduction to quantum measurement and observables, 2) The quantum state and its vector, 3) Quantum superposition, 4) Propagation and entanglement. Starting from the context of polarization, the course moves on to examine the theoretically significant case of the hydrogen-like atom, and the treatment of the two contexts is presented in sequence
within each unit (see Fig. 9). Our course has been designed to be used either as a stand-alone introductory educational path on QM, or as a preliminary course to quantum information and communication. As a matter of fact, the linearly polarized photon can be examined as a simple form of two-state system and represents a possible physical support for the implementation of a qubit [e.g., 30]. In addition, the course covers most of the physical topics and mathematical structures needed for quantum computing: quantum measurement, state, superposition, interference and entanglement, that are described at a conceptual and mathematical level (the latter, by using a Dirac ket notation).
Since 2014, the course has been progressively refined in cycles of testing and revision conducted in the framework of design-based research [31] on secondary school students. Some preliminary results on the development of a mathematical modelling activity have been already published [3]. In the second part of this article, we report on cycles of refinement of a set of activities chosen to illustrate the implementation of each of the four principles of design. In particular, we show how it is possible to convert epistemic practices of the theoretical physicist such as thought experiments and mathematical modelling into active learning strategies that engage students in a theoretical form of inquiry.
## II Theoretical framework: the identification of the design principles
### A case of conceptual change in the learning of successive theories
Our analysis inspired the development of a model of the transition from the understanding of a theory to the understanding of its successor presented by Zuccarini and Malgieri [32]. It is an initial proposal, including an exploration of the impact of theory change on various factors of learning, its application to the case of QM, and the identification of strategies for promoting the understanding of the new content.
In the design of the course, we focused only on the case at hand, and only as regards two cognitive signatures of the knowledge of a scientific theory, which ideally represents the initial state of the learner. They are the understanding of, and the ability to use for descriptive, explanatory and problem-solving purposes
1. different public representations of relevant concepts: linguistic, mathematical, visual, etc. [33];
2. the exemplars of the theory: tasks and resolution strategies encountered in lectures, exercises, laboratory assignments, textbooks, etc. [34, p. 134].
Theory change is always accompanied by change in exemplars and in relevant concepts at different representational levels (new formation, evolution, disappearance [33]). Therefore, we need to consider not only ontological change in concepts, but also change in constructs used by the scientific community to represent these concepts, as well as the change in tasks and in resolution strategies. These features mark important differences with conceptual change processes at introductory level, since naive science is neither socially shared nor mathematical.
Research shows that trajectories of concepts and constructs from CM to QM often give rise to learning challenges. In general, conceptual dynamics such as new formation may involve coalescence of familiar entities in nonintuitive terms. Evolution may determine difficulties to identify which aspects of a familiar entity can be productively used in the new theory and which not, to develop a consistent understanding of the new aspects, and to clearly discriminate between the old and the new version. Disappearance may deprive students of important resources in organizing scientific knowledge. Change in exemplars - that may be strongly context-dependent - is reasonably related to knowledge fragmentation. However, it is clear that each factor of change may have an influence on both challenges, and therefore that overcoming these challenges requires a coordination of knowledge revision and knowledge integration strategies.
The analysis was developed from 2014 onwards in parallel with the course presented in this article. Design experiments described in Section V show how the principles of design were implemented or shaped during the cycles of refinement. After the end of the experiments, the framework underwent further development, e.g., integrating the evaluation of the impact of theory change on epistemic and affective factors, the relation between epistemological themes and conceptual change in the learning of QM, the adoption and revision of dynamic frames: a tool to visualize theory change in concepts and constructs. The custom syntax of the frames is ideal to illustrate which aspects of a notion can lead to productive reasoning in which theoretical context. Therefore, we present this tool in the next pages, explaining how it is related to the previous work on the course.
According to this framework, the basis for addressing knowledge revision and its organization in the transition from CM and QM are respectively the educational analysis of change in concepts/constructs, and of change in exemplars. We present them in two separate subsections.
#### ii.1.1 Change in concepts and constructs: the challenges and the strategies
The examination of this factor was initiated in the first cycle of refinement of the course. In order to delimit the scope of the analysis, we denominated as "concepts" the basic conceptual instruments used for the description of a physical system: _physical quantity_, _measurement_, _state_, _time evolution_, _general model_. We denominated as "constructs" the mathematical representations of these concepts and basic mathematical processes used to get information from or on the world: _vector_, _vector superposition_, _wave function_, _operator_; and the visual representation of
systems and mathematical constructs: _system diagram_, _wave diagram_.
All the notions under scrutiny evolve in theory change, with the exception of system diagrams, which disappear. An extensive analysis of existing research on student understanding of QM was performed in the search for the connections between common difficulties and individual aspects of the trajectory of each notion, which resulted in a map of specific cognitive demands. The analysis evidenced that, in addition to introductory-like challenges, new types of challenges arise due to different forms of change in the role of mathematical constructs that are familiar to students and of their visual representation.
The design of the course was informed by the description of educationally relevant changes in individual notions, which, according to tools used by researchers on conceptual change, were displayed in comparison tables (see, e.g., Vosniadou, 2008, table 1.1 [7]). Zuccarini and Malgieri [32] converted these tables into dynamic frames, an instrument used by philosophers of science to visualize aspects of the categorical structure of a concept in a scientific theory, and therefore to analyze its dynamics in theory change [35]. The traditional format of a single frame was subsequently adapted to the direct description of change, not only in concepts (ontological change) but also in constructs (representational change). For clarity, in this article we represent change in scientific notions by means of dynamic frames.
An example is provided by Fig. 1.a and 1.b. The first one displays the visualization of change in the concept of _system quantity_. This expression refers to physical quantities describing properties of systems and includes both dynamical variables, that in QM become observables, and parameters such as mass, that in non-relativistic QM behaves as a classical quantity. Fig. 1.b describes change in the _vector_ construct, that in CM is primarily used to represent physical quantities, while in QM typically refers to the state of a system.
In the frame representation, the categorical structure of each entity is visualized as a hierarchy of nodes that starts from the _superordinate concept/construct_ (on the left in the figures) and is organized into sets of _values_ (conceptual constituents, on the right), each set corresponding to a different _attribute_ that specifies the relation between the set and the superordinate concept. In our case, the superordinate concept is either a basic term of both CM and QM (1.a) or a construct evolving in theory change (1.b). A value is white if it pertains to an instance of the classical version of the superordinate notion, black if it pertains to an instance of the quantum one, gray to both theories.
From an educational perspective, this visual representation of conceptual dynamics from CM to QM is potentially productive in two ways. First, while other modern theories present a clear demarcation line between their phenomena and classical ones (a low \(v/c\) ratio in special relativity), in QM the so-called "classical limit" is a deep and controversial issue [36]. It appears that students need to bridge, at a conceptual and a formal level, the world of the new theory to that of the old one, in order to facilitate the transition between the two perspectives. A visualization of continuity and change in concepts and constructs allows us to offer them this kind of support, not in terms of limiting processes, but of categorical structure. Second, while we had already identified different patterns of change which informed strategies for the revision of classical knowledge, the frame format helps to pinpoint and describe these patterns in a compact way. For instance:
* _categorical generalization_: each value of an attribute either pertains to both theories or only to the quantum one (Fig. 1.a, but also _measurement_);
* _value disjunction_: each value of an attribute either pertains only to the classical theory or only to the quantum one (Fig. 1.b, but also _superposition_).
In Section V.2.1 and V.2.2, we show how these two patterns drove the development of different strategies to put prior intuition in the service of learning QM. The model of conceptual change described above advocates the use of frames in general, as a guide to curricular design in the learning of successive theories.
All this leads us to our first design principle:
PRINCIPLE OF KNOWLEDGE REVISION
the analysis of continuity and change in concepts and
constructs will be used for developing
* trajectory-dependent strategies for a smooth transition to their quantum versions
* end-of-unit tables containing interpretive tasks on selected aspects of their trajectory \(\Rightarrow\)
promoting the discrimination between the classical and the quantum version of a notion by identifying the correct context of application of each aspect
as a result, this approach to knowledge revision
provides an opportunity to address students' need of comparability with CM
#### ii.1.2 Change in exemplars: the challenges and the strategies
The analysis of challenges related to knowledge fragmentation in QM has played a fundamental role in the development of the course. A difficulty was represented by the search for quantum exemplars at secondary school level. As a matter of fact, quantum formalism is among the less common curriculum content in traditional TLSs for secondary school students, as well as real lab assignments and simulated experiments [38]. In upper-division courses, instead, students are exposed to the basic mathematical machinery of non-relativistic QM and to plenty of exercises in lectures, recitations, homework and exams. As a result, analyzing the nature of these tasks and corresponding resolution strategies became the key for contrasting classical and quantum exemplars.
This work fed into a recent publication on the structure of quantum knowledge for instruction [39]. According to it, textbooks and educational research mainly focus on the following tasks and related subtasks: finding information (1) on the results of the measurement of an observable on a state, (2) on the time evolution of the state, and (3) on the time evolution of the probability distribution of an observable on a state. Subtasks can be the solution of the energy eigenvalue problem for a given potential or the expansion of a state vector in terms of a different set of eigenstates. No classical equivalent exists for tasks (1) and (3), since physical quantities are assumed to have a definite value on a system. Task (2), instead, requires a coordinated use of notions that have evolved in theory change (e.g., state, superposition, operators). Compared with CM, the number of different quantum tasks included in an introductory upper-division course is minimal. However, the strategies for accomplishing them are radically different from those used in solving CM problems, and vary depending on the system (free particle, harmonic oscillator, etc.) and other conditions.
Discussing the issues related to the resolution of quantum tasks requires the adoption of a theoretical perspective suitable to understand how scientific concepts function in determining a particular class of information about the physical world. One perspective specifically designed for this purpose is the coordination class theory [40; 41]. In this framework, the aforementioned tasks become three different coordination classes. A support in describing their structure is provided by the concept maps presented in [39], which display general pathways of qualitative and quantitative solutions related to each task, that can be employed in every context. For instance, getting information on the measurement of an observable on a state is represented in three maps, respectively for a state expressed as a superposition of other states, as an eigenstate of a given observable, or as an eigenstate of a complete set of compatible observables. See Fig. 2 for the second map.
In the coordination class framework, these maps can be interpreted as a visualization of the quantum coordination classes. By analyzing Fig. 2 through the lens of coordination class terminology [41], we infer that the _extraction_, i.e. the initial information, is the knowledge of the state, of the observable we want to measure, and in some cases also of the Hamiltonian. The _inferential net_ is composed of the relevant knowledge elements (entities, prediction tools, procedures, etc.) and of the net of connections between them. The _readout strategy_ is a path from the extraction to the result, whose direction of travel is indicated by arrowheads on the lines connecting the elements. A _concept projection_ is the smaller map resulting by specifying the state, the observable and the Hamiltonian at hand. An instance of projection is the measurement of the momentum on an energy eigenstate of a harmonic oscillator, whose pathway includes only incompatibility with no need to evaluate whether the state is a simultaneous eigenstate of the two observables involved (empty kernel).
Coordination class theory hypothesizes two particular and characteristic challenges: _span_ (having adequate conceptual resources to operate the concept across a wide
Figure 1: Visualizing categorical change: (a) the concept of _system quantity_; (b) the _vector_ construct.
range of contexts) and _alignment_ (being able to determine the same concept-characteristic information across diverse circumstances) [42]. From the analysis of Fig. 2, it is immediate to identify at least two reasons behind the difficulty to build a global knowledge structure in QM. First, the context specific elements of quantum coordination classes are in turn complex objects, such as the concept of eigenstate of an observable, or the structure of the Hamiltonian of a system (its set of eigenstates and corresponding eigenvalues). Second, the subtasks related to using prediction tools and procedures are also complex, unfamiliar, and highly variable from context to context: the determination of the commutator of two observables, the resolution of the eigenvalue problem for energy, the change of basis, etc.
While these maps represent a general guide to the structure of quantum tasks, they are unsuitable for instruction at secondary school level. If we aim to provide school students with valuable support to make predictions on quantum processes across contexts, we need to considerably simplify the picture. The choices we made are the following: set aside time evolution to focus only on measurement; give priority to qualitative predictions; set aside operators, commutators, and eigenvalue equations. After this work of reduction performed on Fig. 2, we are left with the acquisition, the loss, and the retention of definite values of observables in measurement, and with the nature of this process (stochastic or determinate). As a first brick to discuss the relations between observables (compatibility and incompatibility), we rely on binary relations between their values.
In our course, a value of a _system quantity_ - classical or quantum - that can be said to be either possessed by a physical system (when the probability to measure it is 1) or not, is denoted as physical "property" and relations existing between values are denominated as "relations between properties". The language and the existence criterion for a property are borrowed from the Geneva-Brussels approach [see, e.g., 43]. The concept of property we use is a strongly restricted version of the original one, which includes not only values but also the union of disjoint intervals of values. Unless indicated otherwise, we will describe ideal measurements of discrete and continuous quantities only in terms of single values.
The relations between properties of interest to us are defined as follows: two different properties, \(P_{a}\) and \(P_{b}\), belonging respectively to the _system quantities_\(O\) and \(Q\), not necessarily distinct from each other, are
* _unacquirable_: if any system possessing one of them retains it and can never acquire the other in the measurement of the corresponding quantity. No system can ever possess \(P_{a}\) and \(P_{b}\) at the same time
Figure 2: Measurement on an eigenstate of an observable.
(mutual exclusivity);
* _incompatible_: if any system possessing one of them loses it and may stochastically acquire the other in the measurement of the corresponding quantity. No system can ever possess \(P_{a}\) and \(P_{b}\) at the same time (mutual exclusivity);
* _compatible_: if any system possessing only one of them retains it and may stochastically acquire the other in the measurement of the corresponding quantity. If the system possesses \(P_{a}\) and \(P_{b}\) at the same time, it retains them in the measurement of any of the corresponding quantities.
_Unacquirable_ properties are, in the first place, different properties of the same quantity, but also properties of different quantities that are mutually unacquirable due to physical constraints. An example of the latter situation is the following: if the azimuthal quantum number of a system is \(l=1\), it is not possible for this system either to possess \(m=4\) or to acquire it in the measurement of \(L_{z}\), and vice versa. An arbitrary value of position is always _incompatible_ with any value of its conjugate momentum. A property of spin is always _compatible_ with properties of spatial observables (position, momentum, kinetic energy, orbital angular momentum, etc.). As with the relations between observables, relations between properties are invariant across contexts except for those between energy properties and properties of other observables. For the latter, the term "any" mentioned in the definition is restricted to systems described by the same Hamiltonian.
Various features of the relations between properties make their use in education promising. First, unacquirability and incompatibility naturally arise in the exploration of spin or photon polarization measurements. In particular, it is possible to address both in a simple quantitative form (Malus's law for photon polarization and its equivalent for spin). Second, based on these empirical laws, the relations can be justified to students as empirical regularities that are specific to quantum systems (later we will see that, except for incompatibility, they can expressed also in classical terms). Third, moving on to the relations between _system quantities_ is almost immediate: two quantities are compatible if every property of each one is compatible with at least one property of the other, otherwise they are incompatible. Except in a limited number of cases1, relations between quantities can be qualitatively assessed in a similar way:
Footnote 1: When two quantities are incompatible, but admit simultaneous eigenstates.
The _system quantities_\(O\) and \(Q\) are
* _incompatible_: if any system possessing a property of one of them loses it in the measurement of the other quantity and stochastically acquires one property of the latter. No system can ever possess properties of \(O\) and \(Q\) at the same time;
* _compatible_: if any system possessing only a property of one of them retains it in the measurement of the other quantity and stochastically acquires one property of the latter. If the system possesses properties of \(O\) and \(Q\) at the same time, it retains them in the measurement of these quantities.
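These definitions support a purely qualitative bookkeeping of measurement outcomes; a minimal sketch is given below. The relation table and observable names are illustrative only and are not part of the course materials.

```python
# Which pairs of observables are incompatible (illustrative table only).
INCOMPATIBLE = {frozenset({"position", "momentum"})}

def after_measurement(definite, measured):
    """Given the set of observables with a definite value (`definite`), return the set
    that is definite after measuring `measured`, and whether the outcome is
    determinate (value already possessed) or stochastic."""
    retained = {o for o in definite if frozenset({o, measured}) not in INCOMPATIBLE}
    nature = "determinate" if measured in definite else "stochastic"
    return retained | {measured}, nature

# Measuring momentum on a system with definite position and spin component:
# position is lost (incompatible), the spin property is retained (compatible),
# and the outcome is stochastic.
print(after_measurement({"position", "spin_z"}, "momentum"))
```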
This formulation of _compatibility_ and _incompatibility_ allows us to qualitatively manage the measurement process in QM: by knowing which relations exist between the observables that initially have a definite value and the measured observable, it is possible to determine which of them are definite after the measurement and which not. In addition, the possess in advance or not of a property of the measured observable allows us to assess the nature of the process: determinate or stochastic. This task can be accomplished independently of the context and also in the presence of degeneracy. A further generalization to the case in which no initial properties of the system are known is possible by extending the use of the mathematical representation of the state beyond the context of particle spin or photon polarization, allowing us to formulate quantitative prediction in different physical situations. Last, the relations are a structure that can account for measurement outcomes also in CM. In the classical regime, all _system quantities_ are compatible with one another, and every point particle always possesses one property of each quantity. Thus, the emergence of incompatibility can be identified as an explanation of theory change with relation to measurement and the description of systems at a point in time. For the development of the framework of the relations between properties, see Section V.3, and for its use in the quantitative discussion of different contexts, see Fig. 9, activities 2.5, 2.9, and 3.5-3.7. All these features provide the basis for the second principle:
PRINCIPLE OF KNOWLEDGE ORGANIZATION
the framework of the relations between properties and then between observables will be developed together with students in the simple context of two-state systems, and will be used to
* promote the construction of a unifying picture of quantum measurement and the ability to manage it in problem-solving, allowing students to explore this process in other scientifically significant contexts (e.g., the hydrogen-like atom)
* promote a smooth transition to a quantum perspective and help address students' need of comparability between CM and QM, since it constitutes a transtheoretical framework
In Section V.3, we also describe the refinement of activities designed to implement this principle.
### Epistemic cognition and scientific epistemology
#### ii.2.1 Personal epistemology: theoretical modelling cycles
Personal epistemology may be introduced as an individual's answers to questions such as "how do you know?" and "why do you believe?" [15]. Recent reviews on conceptual change and epistemic cognition report that there is a convincing body of research establishing a connection between more sophisticated epistemologies and deeper conceptual understanding in a particular domain [44, 29]. QM represents an ideal context for exploiting this synergy: a focus on epistemology may promote the learning of counterintuitive quantum content; on the other hand, a course of QM may be an opportunity for studying the practices of scientific modelling. Wittmann and Morgan, for instance, structured large part of their course around activities in which students work to build new concepts and create new knowledge, using lecture time to discuss and debate ideas in a peer-instruction format [15].
In order to put the aforementioned synergy in the service of learning QM, we also chose to focus on knowledge-building activities. However, given the wide range of possible activities of this kind, we endeavoured to identify the most appropriate ones for the context at hand. According to Sandoval _et al._, the conceptual, procedural, and epistemic expertise of a discipline is bound up in its specific practices [45]. But what practices characterize the construction of QM as a knowledge domain? A peek at the history of physics in the early 20th century suggests that theory-building is at the core of these practices. We concluded that involving students in theoretical modelling activities could be a promising strategy for helping them accept the quantum description of the world as a plausible and reliable product of their own inquiry, developing theoretical reasoning skills in the process. However, educational research on the epistemic practices that characterize the work of theoretical physicists is currently lacking. In Fig. 3, we propose a list of historically significant practices of theoretical nature used by physicists for building new scientific knowledge. In Section III.3, we describe the frameworks used to convert mathematical practices and thought experiments into strategies designed to engage students in theoretical modelling cycles.
The third principle underlying the design of our course is the following:
EPISTEMIC PRINCIPLE
design the course around a modelling process that includes theoretical practices used by physicists in the historical development of the discipline, with the goal to help students
* accept the quantum description of the world as a plausible and reliable product of their own inquiry, thus promoting a smooth transition to a quantum perspective
* build theoretical reasoning skills
In Section V.4, we describe the development of activities exclusively designed to implement this principle, reporting data on their cycles of refinement.
#### ii.2.2 Scientific epistemology: approach to interpretation
Research on students transitioning from classical to quantum thinking shows that when interpretive themes are deemphasized, interest in QM decreases, while learners still develop a variety of (sometimes scientifically undesirable) views about the interpretation of quantum phenomena [16]. For this reason, we built our course around a _clearly specified_ form of standard approach [46], schematically set apart from other schools of thought by means of rules of correspondence between the structure of the theory and its physical referents in the world:
1. a pure state provides complete information on the behavior of an individual quantum system (ruling out statistical interpretations);
2. an observable of a system has a determinate value if and only if the quantum state of the system is an eigenstate of the operator representing the observable (ruling out modal interpretations);
3. the quantum description of processes includes two different types of state evolution: in the absence of measurement, the unitary evolution governed by the Schrodinger equation; in measurement, the evolution prescribed by the projection postulate (ruling out other no-collapse interpretations).
As mentioned at the end of each statement, all have been questioned by part of the scientific community, with the third being the most unsatisfactory one for a variety of reasons [46], starting from the measurement problem [47].
An additional interpretive choice concerns the wave-particle duality. Baily and Finkelstein adopt a "matter-wave perspective" [16], that allows students to interpret without paradoxes how a system can "know" whether two paths are open or only one of them in a "which-way" experiment. However, if the system propagates as a wave, students may ask what kind of medium supports or, equivalently, is perturbed by this wave. For this reason, in the construction of a full quantum model of a system, we adopt a field ontology, a perspective put forward in education also in recent years [e.g., 48].
In Section III.4.1, we show how the clear specification of these interpretive choices helps in strengthening the coherence of our educational proposal. In Section III.4.2, how it helps in structuring the discussion of different facets of the epistemological debate on QM. The fourth principle of design is the following:
## III Implementing the principles: development of the sequence and of educational strategies
Since the course is designed around the construction of a model, we briefly introduce the framework we used to model the process of modelling in science education (Section III.1). By means of this template, we show how the interplay of the first three design principles guided the selection and organization of the course content (Section III.2). A particularly complex task was turning the epistemic practices of the theoretical physicist into educational strategies that allow students to run these practices personally. In Section III.3, we describe the frameworks for building inquiry activities to engage students in mathematical modelling within a purely theoretical setting and in the generation and conduction of thought experiments. Section III.4 is devoted to the impact of our approach to interpretation on the course design.
### The Model of Modelling
The perspective from which we examine the process of modelling is the _Model of Modelling_[49, 50], a cycle composed of four phases: _Creation of the proto-model_, _Expression of the proto-model_, _Test of the model_, _Evaluation of the model_. The nature of the cycle is non-linear and non-predetermined. Models are understood by the authors as "epistemic artefacts, the purposes of which are related to scientific practices like simplifying, explaining, abstracting, arguing, predicting, representing, designing experiments and/or other models, etc." [50, p. 32].
This artifactual view ascribes particular importance to the process of the creation and expression of the model, and justifies the use of the term 'proto-model' for these initial stages, since the artifact is complete only after it has been expressed by means of an external mode of representation:
1. _Creation of the proto-model_: involves the integration of purposes, experiences and sources. The role of the second and the third component is essential, in that the creation process needs to be 1. supported by experiences that can be acquired in various manners: personal previous knowledge, the examination of relevant literature, the analysis of empirical data, etc.; 2. driven by appropriate sources, that may be an analogy or a mathematical tool, that are instrumental to establish relationships between elements of the experiences.
2. _Expression of the proto-model_ in a mode of representation (visual, virtual, gestural, mathematical, verbal, etc.) or in a combination of these modes. Its selection is guided by the purposes of the model together with
Figure 3: Practices of theoretical nature that have been historically used by physicists for building new knowledge.
1. the nature of the elements to be modelled (static or dynamic, concrete or abstract);
2. the epistemic practices that will be conducted with the manipulation of the model, which might be supported by certain modes and not by others;
3. its target public.
In this phase, the modeller also defines the _codes of representation_, that is, the meaning of specific details of the resulting artifact. For instance, in a concrete ball-and-stick model of a chemical compound, it is necessary to specify that the balls represent the atoms, that the sticks represent covalent bonds, and that different colours for the balls represent specific elements.
As regards the _Test_ and _Evaluation of the model_ (phases 3 and 4), the use of controlled experiments for testing hypotheses is not an essential requirement. A test can also be performed by means of a qualitative exploration or a thought experiment, as the overarching goal of this set of activities is not to 'test variables' but to develop and refine a scientific explanation in the form of a model.
### Structuring the content and the modelling process
The main source of inspiration and materials for this course has been an educational path for the introduction of QM in the context of polarization, developed and evaluated by the PER group of the University of Udine [e.g., 51, 52, 53]. The Udine path begins with the concept of state and the superposition principle, and makes use of hands-on activities with cheap experimental tools (polarizing filters, birefringent crystals), of quantitative measurements with light intensity sensors, and of JQM [54], an open-ended environment for computer-simulated experiments on photon polarization. However, the two curricula are substantially different with respect to their design principles, strategies, physical situations included and learning trajectory. The sequence of activities of our course, their nature, role and content, will be examined in Section IV, and displayed in full in Fig. 9. Here, we illustrate the bulk of the modelling process and of the learning trajectory, showing how the interplay of the _Principle of Knowledge Revision_, the _Principle of Knowledge Organization_ and the _Epistemic Principle_ determined its shape. The impact of the _Epistemological Principle_ on the design depends on the chosen learning trajectory, and will be addressed separately in Section III.4.
As a matter of fact, starting with polarization is compatible with the implementation of each of the principles. Since the phenomenon can be experienced by means of classical light beams and explained both in classical and quantum terms, it easily lends itself to a gradual building of a quantum model of the physical situation (_Epistemic Principle_) and to the revision of classical concepts and mathematical constructs (_Principle of Knowledge Revision_). In addition, two relations between properties (unacquirability and incompatibility) naturally arise in photon polarization measurements. Along with compatibility, they represent the conceptual tools needed for extending the qualitative examination of measurement to distant physical situations (in our case, the hydrogen-like atom), promoting the construction of a unifying picture across contexts (_Principle of Knowledge Organization_).
The introductory phases of our modelling cycle are the following:
1. _Creation of the proto-model_: 1. experiences for supporting its creation: (1) exploration of the phenomenology of the linear polarization of light (interaction of macroscopic beams with polarizing filters/birefringent crystals); (2) empirical determination of its quantitative laws (Malus's law for beams polarized at \(\theta\) incident on a filter with axis at \(\phi\): \(I_{out}=I_{in}\cos^{2}\left(\theta-\phi\right)\), reduction to half for unpolarized ones: \(I_{out}=I_{in}/2\)); (3) presentation of fundamental experiments on the detection [55, 56] and polarization of single photons (a modified version of the former); 2. sources: the heuristic criterion according to which the hypotheses on the behavior of individual photons must be compatible (1) with the experimental evidence on the detection and polarization of a photon, and (2) with the classical phenomenology and laws for macroscopic light beams.
2. _Expression of the proto-model_: a fundamental mode of representation used in this course is the iconic language of JQM for the depiction of idealized physical situations and experiments involving the polarization of single photons. The representation includes photons - visualized by means of their polarization property (Fig. 4) - and devices such as single photon sources, polarizing filters, calcite crystals, screens and counters (Fig. 5). This language will represent an essential support for the implementation of theoretical epistemic practices such as thought experiments and the interpretation of classical laws of polarization in terms of photons. Mathematical modes of representation accompany these activities (e.g., Malus's law) and support the implementation of mathematical modelling practices (e.g., hypothesizing a mathematical representation of the quantum state and interpreting the meaning of its properties);
Figure 4: Iconic representation of the photon polarization [54]. Students are informed that the segments are not to be intended as real physical representations of single photons, but as a support for theoretical reasoning about photon polarization and related physical situations.
After its creation and expression, the full-fledged model is developed and revised through a process conducted by means of theoretical epistemic activities (_Epistemic Principle_), where students need to reinterpret, at a single-photon level, macro-phenomena and macro-laws which have already been explored by means of cheap experimental tools. It starts as the model of an object (the photon) as regards its detection and polarization. It soon grows to become a model of the interaction between photons and devices composed of filters/crystals followed by counters. The interaction with crystals and detectors is interpreted as an instance of the quantum measurement process, leading students to identify the relations between the initial property and those that correspond to possible outcomes of measurement. By means of these interpretive keys, the discussion can go beyond the scope of polarization: the relations are applied at a global level (incompatibility: position or velocity measurement on a system) and in the context of the hydrogen-like atom (compatibility: measurements of \(E\), \(L\), \(L_{z}\), \(S_{z}\)). The relations between properties are then upgraded in terms of relations between observables. Next, the model is embedded into the algebraic language of the polarization state vectors. Another inroad into the context of the hydrogen-like atom is made to introduce and discuss its state vector in terms of quantum numbers and calculate transition probabilities by means of vector superposition. The model is thus ready to undergo a major revision, also incorporating the propagation of photons - wave-like interference included - and their entanglement, thus leading to the construction of a far-reaching model of radiation (the photon) and matter (the hydrogen-like atom).
A fundamental choice is addressing the quantum state, its vector and then quantum superposition only after the discussion of the concepts of measurement and observable. There are various reasons behind this choice. First, this sequence allows us to focus on the revision of one notion at a time (_Principle of Knowledge Revision_). This would not be possible if we started directly with the superposition principle, which in QM is inextricably linked to all the other notions. The possibility of postponing the introduction of the state and superposition is granted by the _Principle of Knowledge Organization_, which provides instruments for discussing quantum measurement and observables without resorting to the concept of state. Second, implementing the _Epistemic Principle_ involves structuring math modelling activities, e.g., related to the introduction of the state vector, that may cause a high cognitive load. In the context of polarization, building the mathematical representation of the state requires a consistent understanding of the single-photon interpretation of Malus's law as a probabilistic law of transition between different polarization properties: \(p(\theta\mapsto\phi)=\cos^{2}(\theta-\phi)\). Since this topic has been widely discussed in the unit on measurement, the state of polarization can be simply presented as a change of perspective on the same phenomena, without adding new physical content. This allows students to focus exclusively on the revision of the concept of state and on math modelling activities, thus reducing the cognitive load. One example is expressing the law of transition in terms of relations between state (ket) vectors2: \((\left|\theta\right\rangle\cdot\left|\phi\right\rangle)^{2}=\cos^{2}(\theta-\phi)\). Third, our course includes not only the context of photon polarization, but also that of the hydrogen-like atom. An immediate examination of the concept and mathematical representation of the quantum state of the latter would be too challenging for our student population. Instead, the knowledge of measurement processes on this type of system together with the discussion of the polarization state vector represent a natural basis on which to build the state of a hydrogen-like atom and the corresponding (ket) vector in terms of quantum numbers3: \(\left|n,l,m,s\right\rangle\).
Footnote 2: In the context of linear polarization, there is no need of complex numbers. Therefore, we do not introduce bra vectors and express the Born rule by using the square of a dot product.
Footnote 3: We restrict the mathematical discussion to superposition states with real coefficients: no need of bra and square moduli.
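As an aside for the reader, the following minimal NumPy sketch (ours, not the JQM environment used in the course) illustrates the single-photon reading of Malus's law: each photon is transmitted with probability \(p(\theta\mapsto\phi)=\cos^{2}(\theta-\phi)\), and the transmitted fraction converges to the macroscopic Malus's law for a polarized beam and to one half for an unpolarized one.

```python
import numpy as np

rng = np.random.default_rng(0)

def transmitted_fraction(theta_deg, axis_deg, n_photons):
    """Send n_photons, each prepared with polarization property theta_deg,
    onto an ideal polarizing filter with axis at axis_deg. Each photon is
    transmitted with probability cos^2(theta - axis) (single-photon reading
    of Malus's law); otherwise it is absorbed."""
    p = np.cos(np.radians(theta_deg - axis_deg)) ** 2
    return (rng.random(n_photons) < p).mean()

# Polarized beam: the transmitted fraction converges to cos^2(theta - axis).
for theta in (0, 30, 45, 60, 90):
    frac = transmitted_fraction(theta, axis_deg=0, n_photons=100_000)
    print(f"theta = {theta:2d} deg: simulated {frac:.3f}, "
          f"Malus {np.cos(np.radians(theta)) ** 2:.3f}")

# Unpolarized beam: each photon arrives with a random polarization angle,
# so the transmitted fraction converges to the average of cos^2, i.e. 1/2.
angles = rng.uniform(0.0, 180.0, 100_000)
passed = rng.random(angles.size) < np.cos(np.radians(angles)) ** 2
print(f"unpolarized beam: simulated {passed.mean():.3f} (expected 0.500)")
```

With \(10^{5}\) photons per setting, the simulated fractions match the corresponding classical laws to within statistical fluctuations, which is exactly the compatibility requirement stated in the creation of the proto-model.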
The inclusion of the hydrogen-like atom offers various educational opportunities. In the discussion of the state, it allows us to break the one-to-one correspondence between properties and states that characterizes linear polarization (identifying the state of a hydrogen-like atom requires the specification of four properties), as well as the identity of the angle between polarization properties and corresponding state vectors (directions in the state space of the hydrogen-like atom are clearly unrelated to directions in the physical space). In the case of superposition, linear combinations of \(\left|n,l,m,s\right\rangle\) vectors make it possible to generalize the discussion of measurement and observables to situations in which no known quantity is initially defined, and to address the normalization of the state vector after a measurement, that is trivial when the components of a superposition are limited to two terms, as in the context of polarization.
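As a concrete illustration of the last two points (a worked example of ours, with arbitrarily chosen real coefficients, not taken from the course worksheets), consider a hydrogen-like atom prepared in \(|\psi\rangle=\tfrac{1}{\sqrt{3}}\left(|1,0,0,+\tfrac{1}{2}\rangle+|2,1,0,+\tfrac{1}{2}\rangle+|2,1,1,+\tfrac{1}{2}\rangle\right)\). A measurement of the energy yields the value associated with \(n=2\) with probability \(\tfrac{1}{3}+\tfrac{1}{3}=\tfrac{2}{3}\); if this outcome is obtained, the surviving components must be renormalized, giving \(|\psi^{\prime}\rangle=\tfrac{1}{\sqrt{2}}\left(|2,1,0,+\tfrac{1}{2}\rangle+|2,1,1,+\tfrac{1}{2}\rangle\right)\). Neither step (adding probabilities over several components, renormalizing a residual superposition) arises in the two-term superpositions of the polarization context.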
A solid understanding of the concept of state, of its vector, and of quantum superposition represent a strong basis for building a consistent interpretation of quantum interference and entanglement at a conceptual and
mathematical level. Hence, the learning trajectory ends with the discussion of propagation ("which-path" experiments) and entanglement (first, of spatial and polarization modes of a photon, then of the polarization of different photons).
Figure 5: Iconic representation of a single photon source with a predetermined polarization property (vertical, in this case), one vertically polarized photon, a polarizing filter with an arbitrary axis (here at \(45^{\circ}\)), a birefringent crystal with a \(0^{\circ}\) and a \(90^{\circ}\) channel, a screen placed on the extraordinary one, two photon counters.
To sum up, we identified the following path of learning and concept revision from CM to QM as potentially productive: linear polarization \(\rightarrow\) measurement \(\rightarrow\) system quantity \(\rightarrow\) state \(\rightarrow\) vector \(\rightarrow\) superposition \(\rightarrow\) interference \(\rightarrow\) general model (of a system) \(\rightarrow\) correlation between internal components of the state (in QM, they can be entangled).
### Research perspectives for running theoretical epistemic practices
Converting the specific practices listed in Fig. 3 into authentic inquiry activities has been a central task in the design of our course. In particular, addressing mathematical modelling in a purely theoretical context and thought experiments required the examination of different perspectives and of the ISLE learning system [57]. In the next two sections, we show how they informed the development of this kind of activities.
#### iii.3.1 Mathematical modelling strategies
In order to provide insight on the ways in which mathematics can be put in the service of physical modelling, we drew on theoretical studies on the role and the language of mathematics in physics. Uhden _et al._ identified two fundamental aspects to consider [58]: the deeply tangled unity of mathematical and physical models, and the multifaceted nature of the role of mathematics in physics.
* _Deeply tangled unity of mathematical and physical models_: the authors argue that the geometric representation of physical situations (e.g. visualizing a light beam as a straight line in optics) and entities (e.g. drawing forces as vectors) often implies some mathematization from the very beginning, and in general that even a pure qualitative image can be seen only as a first stage of a physical-mathematical model instead of being a model _per se_.
This is especially true in QM, where systems cannot be visualized, and a purely qualitative description of a physical concept may not be possible at all. For instance, in order to define the basic notion of quantum state in a wave approach, we need to rely on the mathematical structure of probability distributions.
* _Technical and structural roles of math in physics_: in many cases mathematics can be seen as an external instrument, a _technical_ tool without any physical content (rote calculations, manipulations of variables and units or internal mathematical rules). However, at a deeper level, mathematics penetrates into the construction of the physical concept itself and, precisely at this point, the distinction between conceptual and mathematical notions becomes artificial. The _structural role_ of mathematics refers to this latter case: it is the role of math in structuring physical concepts and situations that emerges in the processes of mathematization and interpretation.
On this basis, they propose an approach to using mathematics for conceptual understanding that presents a gradual increase in the degree of mathematization accompanied by frequent interpretive steps, reducing to a minimum leaps into pure math for calculation.
A different perspective is provided by Redish and Kuo, who analyze the language of mathematics in physics by means of cognitive linguistics in a resources framework [59]. Their analysis suggests initially focusing on physical intuition and embodied experience rather than equations and principles. As our conceptual system is grounded in our interaction with the physical world, so is our understanding of many mathematical concepts (e.g., spatial orientation, bodily motion, object manipulation, etc.). Starting from the physical meaning and then explicitly mapping this meaning to the mathematics can help make this connection explicit for particular topics and help students see how to make this connection more generally. Secondly, checking for mathematical consistency instead of relying on authority is valuable and productive as it helps students to take an epistemological stance that provides coherence between physical meaning and mathematical formalism.
A common theme of both studies is the line of development of the discourse: from concrete to abstract (from physical issues to mathematics) followed by a new interpretive activity, aimed at clarifying further physical implications of the newly introduced structure (Fig. 6).
Variations on this theme will be used in the design of inquiry activities highlighting the structural role of math in the modelling of physical concepts and situations. Since the construction of an idealized representation of the physical situations discussed in our course has been performed in the creation and initial expression of
the proto-model, we can skip this step in the structural chain of activation described in Fig. 6.
Figure 6: Contrasting a traditional chain of activation and one highlighting the structural role of mathematics in physics.
In Section V.4.4, we present data on the development of a mathematical modelling activity that is based on the Model of Modelling and on the approach illustrated in the figure.
#### iii.3.2 Operationalizing thought experiments
Thought experiments may play a significant role in the presentation of modern physics, opening "a unique window to the strange and unknown world of super-large and super-small scales"[60], where real experiments are practically excluded from regular classroom activity. A definition of thought experiment that is potentially productive in education has been proposed by Stephens and Clement [61], who emphasize the process rather than the product: performing an untested thought experiment [..] "is the act of considering an untested, concrete system (the 'experiment' or case) and attempting to predict aspects of its behavior. Those aspects of behavior must be new and untested in the sense that the subject has not observed them before nor been informed about them." This emphasis on the relationship between the agent and the process allows us to widen the scope of thought experiments in educational practice: students making a prediction for an unfamiliar analogy, running a model for the first time, or applying a model to an unfamiliar transfer problem, are performing an untested thought experiment.
As regards creating and running a thought experiment, Gilbert and Reiner [62] propose an analytical schema composed of six steps:
1. posing a question or a hypothesis;
2. creating an imaginary world, consisting of entities (objects, or mental creations which can be treated as objects) relating to each other in a regulated manner;
3. designing the thought experiment;
4. performing the thought experiment mentally;
5. producing an outcome of the thought experiment with the use of the laws of logic;
6. drawing a conclusion.
It is possible to find important analogies between this process and learning systems that engage students in forms of reasoning similar to the ones used in physics for building its body of knowledge. One of them is the ISLE cycle [57]. The activity starts with students observing simple phenomena and finding patterns (observational experiment), in order to develop inductive reasoning. The students are then encouraged to propose different explanations and to design experiments whose outcome can be predicted on the basis of their explanations, ruling out some of them (testing experiment). This is when hypothetico-deductive reasoning is activated.
In our course, thought experiments play the role of a testing procedure on various occasions. We qualify these procedures as _humble thought experiments_, because they are not meant to achieve the purposes of historically significant thought experiments (e.g., Einstein's elevator); yet, their structure corresponds to that described by Gilbert and Reiner [62] for a thought experiment, and their execution may be within the reach of secondary school students. In Section V.4 we present data on the development of activities in which the instructor
* conducts, step by step, a thought experiment specifically designed by the instructor (Section V.4.2);
* provides a hypothesis and asks students to design a thought experiment to test it, to run the thought experiment, and to draw appropriate conclusions on the initial hypothesis (Section V.4.3).
### Implementing the epistemological principle
#### iii.4.1 Impact of the interpretive choices on the coherence of the design
Here we describe how the interpretive choices are used to strengthen the internal coherence of the course. In what follows, first we recall the individual choice, then we explain how it affects the design.
* a pure state provides complete information on the behavior of an individual quantum system (ruling out statistical interpretations);
In the course, we adopt a single system ontology. Therefore, we always refer to individual systems, favoring a probabilistic language over a statistical one. Ensembles of systems, identically prepared or not, are treated on a probabilistic basis, making use of the law of large numbers when appropriate. The implementation of this language choice played a productive role in the running of epistemic practices such as the interpretation of Malus's law in terms of photons (see Section V.4.1).
* an observable of a system has a determinate value if and only if the quantum state of the system is an eigenstate of the operator representing the observable (ruling out modal interpretations);
Since we do not use operators in the course, we do not introduce the terms "eigenstate" and "eigenvalue". However, the definition of the possession of a property by a system stands for the eigenstate-eigenvalue link: a system possesses a property if and only if the probability of measuring it is 1. The language of properties also helps suggest to students a coherent interpretation of quantum superposition. As a matter of fact, while the superposition of linear polarization states is usually interpreted as
a "neither, nor" situation (the system is in neither of the component states, and has neither of the corresponding properties), a superposition of two position eigenstates is sometimes interpreted as the system being "in both places." However, the link between possessing a property and measuring it with certainty allows us to reconcile this case with the general frame: the system has neither of the component position properties. Its position is indefinite. For a productive use of this language in the development of an activity, see Section V.4.4: after the passage of a photon through a calcite crystal, it is possible to prove that both position and polarization of the system are indefinite by using the same criterion.
* the quantum description of processes includes two different types of state evolution: in the absence of measurement, the unitary evolution governed by the Schrodinger equation; in measurement, the evolution prescribed by the projection postulate (ruling out no-collapse interpretations);
In the course, we always promote a clear distinction between measurement and propagation. While dealing with transitions in measurement, we make use of iconic representations showing an initial situation, e.g., in which a photon has just been emitted by a single-photon source (Fig. 7.a), and a final one, e.g., in which the photon has been absorbed and counted by a detector (Fig. 7.b). In these situations, we always direct student attention to the preparation and the measurement process. The only exception occurs near the end of the course, when we discuss the "which-way" experiment by means of a photon beam directed to a device composed of a sequence of two calcite crystals, one reversed with respect to the other, followed by a filter and a detector (Fig. 8). This shift in focus is basic both in the discussion of the wave-particle duality and in that of entanglement (see Section V.4.4).
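For completeness, one standard way to write the intermediate states behind Fig. 8 is the following (the notation is ours and is not necessarily the one used in the course materials). A photon prepared at \(45^{\circ}\), \(|45^{\circ}\rangle=\tfrac{1}{\sqrt{2}}\left(|0^{\circ}\rangle+|90^{\circ}\rangle\right)\), leaves the first crystal in a superposition of path and polarization, \(\tfrac{1}{\sqrt{2}}\left(|o\rangle|0^{\circ}\rangle+|e\rangle|90^{\circ}\rangle\right)\), where \(|o\rangle\) and \(|e\rangle\) label the ordinary and extraordinary channels. The reversed crystal recombines the two channels and restores \(|45^{\circ}\rangle\), so a final filter with axis at \(45^{\circ}\) transmits every photon (and one at \(135^{\circ}\) transmits none). If, instead, a screen blocks one channel, only half of the photons reach the filter, and each of them passes a \(45^{\circ}\) filter with probability \(\cos^{2}45^{\circ}=\tfrac{1}{2}\): making the path information available destroys the interference.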
* in the construction of a full quantum model for propagation and measurement, we adopt a field ontology;
While we feel that this model of a quantum system can be perceived as plausible by students, who can make a connection with already familiar classical fields (especially in the case of a photon), ascribing a quantum field ontology to physical systems is a controversial operation [e.g., 63, 64, pp. 133-135]. For this reason, we adhere to a cautious approach, suggesting that students model the system as a "field of actual and potential properties." This expression means that the field describes the system in terms of properties it possesses (e.g., one might be a property of a spin component) and of "potential properties that can possibly be actualized [...] through measurement processes" [65]. For more information on the concept of potential property and its transition to actuality, see also C. J. Isham [66]. Based on the examination of the "which-way" experiment, students are led to identify two further elements of revision in the concept of field: unlike a classical field, a quantum one displays a point-like interaction with detectors (we can identify the detector with which the interaction takes place), but this interaction affects the entire field at the same instant, i.e., in a non-local way.
#### iii.4.2 Structuring the discussion of epistemological themes
The three rules of correspondence naturally lend themselves to a discussion, respectively, of the completeness of the theoretical description, of indefiniteness and uncertainty, and of the measurement problem. Based on the examination of the "which-way" experiment and entanglement, it is also possible to add to the picture a discussion of the problem of locality.
Format, content and placement of the activities on the foundational debate need to be not only instrumental to the implementation of the _Epistemological Principle_, but also compatible with the educational level of the student population at hand, the structure of the learning trajectory, and the duration of the course (12 hours). The chosen format consists of a short introductory lecture given by the instructor, followed by a whole class discussion of the topic, which can be supported by pre-class reading assignments. The texts are preferably selected from works of leading scientists whose understanding does not require sophisticated mathematical or physical knowledge. Since the first three units of the course concern preparation, measurement, and their formalism, while propagation, wave-particle duality, and entanglement are addressed in the last unit, it is natural to discuss first the debates on indefiniteness, uncertainty, and completeness.
The first occasion to introduce the problem of indefiniteness and uncertainty may be the extension of the relations between properties to the case of position and velocity, in Unit 1, where students deal with the loss of
the property of one observable in the measurement of the other (a limiting case of the uncertainty principle). Another occasion is offered by an activity of the third unit, which is designed to promote the distinction between a superposition state such as \(|\psi\rangle=a|0^{\circ}\rangle+b|90^{\circ}\rangle\) and a mixture of a fraction of \(a^{2}\) photons prepared in \(|0^{\circ}\rangle\) and \(b^{2}\) in \(|90^{\circ}\rangle\), and to launch the discussion of related epistemological issues (see Section V.4.3 for a description of the goals and of the development of this activity). Completeness may be examined in Unit 2, during the discussion of the quantum state, or together with the other issues in the third unit.
Figure 7: (a) Preparation; (b) Measurement.
Figure 8: Iconic representation of a “which-way” experiment.
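A compact way to see why the two preparations mentioned in the activity above are physically different (a minimal worked example in the notation of the course, with \(a=b=\tfrac{1}{\sqrt{2}}\)) is to compare their statistics in a measurement whose outcome-properties are \(45^{\circ}\) and \(135^{\circ}\). For the superposition \(|\psi\rangle=\tfrac{1}{\sqrt{2}}\left(|0^{\circ}\rangle+|90^{\circ}\rangle\right)=|45^{\circ}\rangle\), the transition probability to \(45^{\circ}\) is \(\left(|45^{\circ}\rangle\cdot|45^{\circ}\rangle\right)^{2}=1\): every photon is found at \(45^{\circ}\). For the mixture, each photon is individually prepared either at \(0^{\circ}\) or at \(90^{\circ}\), so it is found at \(45^{\circ}\) with probability \(\cos^{2}45^{\circ}=\tfrac{1}{2}\): only half of the photons on average. The two preparations are therefore experimentally distinguishable, even though a \(0^{\circ}/90^{\circ}\) measurement gives identical statistics for both.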
In the initial versions of the course, we discussed Heisenberg's microscope thought experiment and Bohr's criticism of it in the first unit, in order to contrast a disturbance interpretation of the principle - where system properties are well-defined but it is not possible to measure them simultaneously with an arbitrary precision - with an interpretation in which they are not well-defined [67]. For the revision of this activity, see Section V.5. In the third unit, instead, we discussed the debate between Einstein and Bohr on the completeness of quantum mechanics (e.g., hidden variables) and - again - the uncertainty principle, leaving out the part on the EPR paradox, which will be taken up when dealing with entanglement [68]. The discussion of these issues was concluded in the fourth unit, where we proposed to students, as a plausible interpretation of the wave-particle duality, an ontology based on the "field of actual and potential properties."
After the conceptual and mathematical examination of the entanglement of two photons produced by parametric down-conversion, students are presented with the problem of locality, which is discussed only at a qualitative level. We explain that the simultaneous collapse at a distance of the entangled superposition following a measurement on one photon is incompatible with the relativistic notion of causality. Then, we present the statement of Bell's theorem and mention the empirical confirmation of the inequality violation [69]: an unexpected key to clarify the Einstein-Bohr debate on EPR, offering the opportunity to settle the question experimentally [70]. This discussion allows us to emphasize the importance of the foundational debate in the development of scientific knowledge. In the words of Alain Aspect, "there was a lesson to be drawn: questioning the 'orthodox' views, including the famous 'Copenhagen interpretation', might lead to an improved understanding of the quantum mechanics formalism, even though that formalism remained impeccably accurate" [70, p. xix]. As regards technological development, it is possible to illustrate to students that a deeper understanding of entanglement is at the root of a second quantum revolution that is now unfolding [e.g., 71], and that John Bell has been its prophet [70].
By having students work on the modelling and interpretation of the mathematical description of entanglement, we gain a further opportunity: ending the course with the discussion of the measurement problem. We illustrate the Schrodinger cat thought experiment and, more generally, the measurement problem, indicating three lines of solution proposed by members of the scientific community: 1) accept the standard interpretation and modify the dynamics of the theory; 2) accept the dynamics and modify the standard interpretation; 3) accept both the standard interpretation and the dynamics, and try to show that their conflict can be ignored for all practical purposes [72]. As an instance of the first, we mention the Ghirardi-Rimini-Weber theory [64], the second is illustrated by hinting at Everett's "many worlds" interpretation [64], the third is represented by the decoherence research program [47]. We explain that decoherence provides an answer to the nonobservability of interference effects on macroscopic scales. However, outside the scope of decoherence remains the explanation of why only a particular outcome is realized in each measurement [47]: one of the most significant open issues in modern physics, which also affects our proposed interpretation of the wave-particle duality.
## IV The course
### Structure of the course and types of activities
The course is designed for an optimal duration of 12 hours, even if some design experiments lasted only ten hours. The time devoted to each topic is organized as follows: four hours for Unit 1, two for Unit 2, two for Unit 3, four for Unit 4. Lessons are divided into two-hour blocks, which represent a compromise between the time required to engage secondary school students in a series of inquiry- and modelling-based activities they are not accustomed to, and the need to limit the cognitive load associated with the discussion of non-intuitive and novel content.
The structure of the sequence in terms of units, individual activities and their typology, is displayed in Fig. 9. By examining the figure, it is possible to see that, while each unit builds on the previous ones, individual units can be described as self-contained. In accordance with the implementation of the first and second design principles, each unit is concluded with a bird's-eye view across contexts on the revision, due to theory change, of the basic concepts and constructs addressed in it. However, except for Unit 1, all the others can be introduced by means of a driving question or a need emerging from previous units, which suggests to students the importance of acquiring further knowledge [15]. For Unit 2 on the quantum state, the driving question is an issue implicitly raised in Unit 1: how to prepare/identify identical quantum systems if some of the observables are necessarily indefinite (an activity explicitly displayed in the figure). Unit 3 on superposition is associated with the need to quantitatively determine the possible results of measurements on hydrogen-like atoms, which have been qualitatively explored in Units 1 and 2. For Unit 4, the question is how to describe propagation with the same mathematical tools introduced in Unit 3 for describing measurement.
Figure 9: The structure of the sequence as a composition of building blocks: units and individual activities. Two-colour boxes with a white half represent lectures aimed at implementing the design principle associated with the other colour. The other two-colour boxes represent active-learning strategies that play more than one role.
The typology of each activity has been displayed in Fig. 9 by means of a color code. By looking at the color distribution, it is evident that a large majority of the activities are linked to the implementation of the four principles. The following is a synthetic description of each type of activity:
* _Knowledge revision activity_: relying on the representation of the conceptual trajectory of a classical notion [32] with the aim of promoting a consistent interpretation of its quantum counterpart (often structured in terms of interpretive tasks);
* _Knowledge organization activity_: relying on the relations between properties/observables in order to build a coherent body of knowledge by using the same conceptual tools for the analysis of different physical situations;
* _Epistemic practice_: inquiry- and modelling-based activity that mirrors the processes used in theoretical physics for building new scientific knowledge. We remind the reader that, by running this kind of activity, students build knowledge that is _new for the learner_. In order to promote an awareness of the nature of each practice and of its significance in the development of the discipline, the activity is followed (less frequently: preceded) by what we call "a historical snapshot." It consists of a two-minute lecture on the practice and on a historically significant example of how it has been used by theoretical physicists in the building of classical physics knowledge (Fig. 3 includes the summary of a historical snapshot on each practice).
* _NoS and HoS debate_: discussion of issues concerning the scientific epistemology of QM and the historical development of the discipline. The activity involves a ten-minute lecture followed by a whole class discussion. Except for "the troubled history of light quanta" (Fig. 9, activity 1.3), which is instrumental in introducing the discrete nature of electromagnetic radiation, and in highlighting the tangled and non-linear relation between experiment and theory in scientific development [73], the other activities of this kind have already been described in Section III.4.2;
* _Empirical exploration_: of the polarization of macroscopic light beams by using cheap experimental materials such as polarizing filters and calcite crystals. During the exploration, their action on the beams is visualized on the wall by means of an overhead projector (see Fig. 10 and Fig. 11). The activity is conducted as a form of _demonstrated inquiry_ [74]: the instructor poses questions to the students, soliciting input in the design of the exploration, encouraging them to form hypotheses, to make predictions, and to explain the results. Three empirical explorations are scheduled at different points of the course: right at the start of the learning path, to introduce the phenomenology of the interaction of light with polarizing filters (Fig. 9, activity 1.1); after the probabilistic interpretation of Malus's law, to present the phenomenology of birefringence, thus providing the experience needed for the modelling of quantum measurement at a microscopic scale (Fig. 9, activity 1.8); at the beginning of Unit 4, to present a simple form of "which-way" experiment, paving the way for the discussion of propagation and entanglement (Fig. 9, activity 4.1).
### Instruments
The instruments we use in the course are the following: 1) worksheets, 2) cheap experimental tools, 3) the JQM environment for simulated experiments, 4) a specific use of language, 5) a slide presentation, 6) homework: reinforcement exercises, reading assignments, and slides used in the previous lessons.
Figure 11: (a) The phenomenon of birefringence; (b) The outgoing light beams are polarized, as shown by adding a filter on the crystal.
Worksheets are designed to emphasize written explanations of student reasoning. In this course, they represent the common thread underpinning the development of learning from beginning to end, and the main instrument for collecting data on student learning. For each unit, we designed a worksheet of two-three pages of tasks. Each worksheet is divided into blocks with a general goal that is split into conceptual micro-steps addressed in different questions. Steps are of just the right size for students to become actively involved. If the steps are too small, little thinking may be necessary. If the steps are too large, the students may become lost unless an instructor is by their side [75]. With the exception of lectures and of former worksheet questions that have been converted into oral ones, the sequence of activities displayed in Fig. 9 mirrors the structure of the worksheets. All worksheets but the last one end with a block containing one or more concept revision tables (Fig. 9, activities 1.18, 2.10, and 3.9). A table on vector superposition used in previous versions of the course is displayed in Fig. 18, as well as its revision (Fig. 19).
As we have seen in the last section, the exploration of the phenomenology of light polarization at a macro-level is performed thanks to kits including an overhead projector, passive filters, polarizing filters (Fig. 10), calcite crystals and tracing paper with a black dot in order to examine the phenomenon of birefringence (Fig. 11).
We already introduced the JQM environment for simulated experiments in Section III.2. With one notable exception, in the current version of the course, the adoption of this instrument is limited to its visual code, which is used both in the slide presentation and in the tasks proposed to students (see, e.g., Fig. 15). This code is instrumental to building a highly idealized environment designed to help students focus on essential theoretical aspects. The exception concerns the ISLE-like "which-path" activity, in which the simulation plays the role of a testing experiment (Fig. 9, activity 4.2).
Language in the slide presentation and in questions has been structured according to the following guidelines: first, the adoption of the language of "properties" and of their relations in order to provide a unified framework for describing measurement, state, and superposition at a point in time; second, the use of colloquial language and student sketches in whole class discussions (as in Fig. 39) and, when possible, in questions (e.g., describing activity 1.7 in terms of a "horoscope of the photon", as illustrated in Section V.4.1).
Every aspect of the lessons (lectures, worksheet questions, correct answers, discussion of the results of empirical explorations) is supported by slides on the multimedia board or on a projector. At the end of each lesson, the slide presentation used in the classroom is made available to students in the form of a pdf file.
We already discussed reading assignments in Section III.4.2. Homework exercises contain further interpretive questions (e.g., on the physical meaning of the sign of a superposition) and questions for deepening the development of specific aspects of the model (e.g., mathematically deriving the reduction to half of the intensity of an unpolarized beam of photons passing a filter).
The combined use of worksheets, slides, and of an instructor diary reporting student comments and reactions, offered us the possibility to monitor their learning paths during design experiments, identifying unsolved difficulties in the specific question or slide in which they were elicited. As a result, these instruments helped the researchers in their investigation of student ideas and in the refinement of the course.
### Methods
Worksheet activities are conducted in the following way: the instructor displays a slide containing the worksheet items at hand, reads them, and allows some minutes for completing the task (depending on the difficulty of the assignment). Students are asked to write the answer on their individual worksheet, but are allowed to discuss the task with their deskmate. During this time, the instructor walks through the class, listening, observing, checking the progress of each student, answering clarification requests and posing stimulus questions (when realizing that some students are stuck) to help them overcome difficulties and to support their reasoning. Finally, when all of the students have written their answer, a whole class discussion ensues. The instructor plays a facilitating role, e.g., asking a student to share her/his answer, inviting those who have given different answers to express their point of view in the attempt to convince their peers, asking further clarifications if the explanation is not fully clear to the other students, and going on in this process until a consensus has been reached. At the end, the answer of the instructor is displayed on the slide. Then, she/he moves on to the next activity. Oral questions are displayed on a slide and directly addressed in a whole class discussion, after which the answer of the instructor is shown. Also epistemological debates are addressed in a whole class discussion after the initial lecture by the instructor, which is performed with the aid of the slide presentation.
## V Cycles of refinement
### Design-Based Research: data collection and analysis
The course has been refined in cycles of testing and revision conducted in the framework of Design-Based Research (DBR). This framework is a collection of approaches devised for "engineering" teaching and learning sequences, and systematically studying them within the context defined by practices, activities and materials - in short, by the means - that are designed to support that learning [31]. DBR consists of cycles composed of three phases: preparation, design experiment, retrospective analysis. The results of a retrospective analysis feed
a new design phase. When patterns stabilize after a few cycles, the instructional sequence at hand can become part of an emerging instruction theory.
The course has been tested in classroom contexts of various kinds.
The first one is the Summer School of Excellence on Modern Physics, held every year at the University of Udine, Italy. It consists of a one-week full immersion program in modern physics topics. The course was held in the years 2014-2018. Participant students ranged from a minimum of 29 in 2014 to a maximum of 41 in 2015. They were selected among a large number of applicants from a wide range of Italian regions. All of them had just completed the penultimate year of secondary school.
The second context consists of regular classrooms from Italian secondary schools. The course was held in Liceo Statale Corradini, in the city of Thiene, in November 2018 and in Liceo Scientifico Statale Alessi, in the city of Perugia, in February 2019. In the Italian system, Liceo is a type of school attended by students who intend to continue their studies in university. The design experiment involved three classes of the final year from Liceo Corradini, for a total of 61 students, and two classes of the same year from Liceo Alessi, for a total of 39 students.
The third context concerned self-selected students from Liceo Scientifico Galilei, in the city of Trieste, at the end of March 2019. The course was offered as an optional study program, and was attended by 18 students.
In this work, we do not assess the effectiveness of the course as a whole, but report on the refinement of individual activities. For this purpose, the differences between the three kinds of student population did not represent an issue. Future directions include the analysis of a pre-post-test administered in regular classrooms. Here we report on cycles of refinement concerning a set of activities chosen to illustrate the implementation of each of the four principles of design. For each cycle, we describe the preparation phase, the worksheet items used to implement the design, and the retrospective analysis of design experiments. Except for a limited number of recently added activities, cycles were iterated until patterns stabilized.
Data sources consist of written answers to worksheet questions, occasionally enriched by notes reported in the instructor diary during design experiments. Data were analyzed for correctness and for student lines of reasoning, since both informed the revision of the activities. The second type of analysis was conducted according to qualitative research methods [76]: the identification of crucial conceptual content and the examination of literature on learning difficulties in QM guided the building of a-priori categories. Then, based on conceptual elements introduced by student answers, the categories were revised. This process led to the identification of clusters and coherence elements in student reasoning.
Since the sample changed from experiment to experiment, in order to improve readability and to enable comparison, the rates of answers as regards both correctness and student reasoning are reported as percentages.
### Knowledge revision activities
This section is devoted to the cycles of refinement of activities designed to support students in the revision of classical concepts and constructs. We report on two cases: the first concerning the ontological shift of a concept (measurement), the second the representational shift of a construct (vector superposition). Here we examine the path for the introduction of quantum measurement, and the end-of-unit table on superposition. Such tables are scheduled at the end of the first three units and are designed to implement the _Principle of Knowledge Revision_, by promoting the discrimination between the classical and the quantum version of a notion with a bird's-eye view on the revision process.
#### v.2.1 Measurement
In the transition to a quantum picture, the trajectory of the concept of _ideal measurement_ (see Fig. 12) and,
as a consequence, its revision, are of crucial importance. In the context of polarization there are two additional challenges to take into account. First, while the linear polarization of macroscopic light beams can have any orientation in the plane of polarization and is identified by measuring its angle, the linear polarization of a photon can also have any orientation, but its measurement gives one of two angles that may be different from the initial one. Research found that students have difficulties in interpreting the quantum case as a two-state system [24]. The second challenge concerns the need to interpret the absorption of a photon, either by a polarizing filter or by detectors placed on the output channels of a calcite crystal, as the result of a transition in state (equivalently, in polarization property).
Some textbooks opt for the context of filters, analyzing the superposition of state vectors (e.g., [77]). The same approach is used by Michelini and Stefanel [53] in their educational path, which represented the starting point for the development of this course.
Figure 12: Ideal measurement: concept trajectory from CM to QM [32].
In the initial version of the course (Summer School of Excellence, 2014, 28 students), we decided to follow a similar route, since the revision of measurement was scheduled right after extensive work in the context of polarizing filters, both at a macroscopic level and in terms of photons (see Section V.4.2). At this point of the course, the concept of state and its mathematical representation are not available to students. Therefore, as a first step, we planned to guide them to interpret quantum measurement in terms of information obtained on the polarization property of one photon as a result of its interaction with the measurement device. To suggest to students a productive framing of the interaction of a photon with a filter, we denoted the properties belonging to the measured polarization quantity with the expression "outcome-property." Secondly, we aimed to help students develop an understanding of the basic features of quantum measurement. As the trajectory of the concept of measurement is an instance of _categorical generalization_ (see Fig. 12 and Section II.1.1), we planned to start from the special case in which it is classical-like and determinate (when the initial property is an outcome-property). This case is familiar to students, since it can be interpreted as an ideal classical measurement. Then, we move on to discuss its new feature, its active and stochastic character (when the initial property is not an outcome-property), as a generalization of the first case. The main characteristics of strategies related to this pattern are described in the work of Zuccarini and Malgieri [32].
The worksheet block designed to support the conceptual development of students is summarized in Fig. 13. During the previous activities, they had gained enough experience with the physical situation under scrutiny to propose a statistical/probabilistic interpretation of Malus's law. Therefore, we assumed that they would be able to come to consistent conclusions on measurement by means of interpretive tasks, starting from the determinate case (item **C1**). However, while 75% of the students consistently answered **C1**, in the uncertain case (item **C2**), only 18% interpreted absorption in terms of acquisition of a property in the direction perpendicular to the axis of the filter. Most of them (57%) simply turned the sentence used for transmission into the negative form: "the photon had not - or had not acquired - a property in the transmission axis \(\Rightarrow\) it is not transmitted." As to **C3**, designed to help students identify the features of quantum measurement, even if the item started with a definition of the process, its results were again affected by the insufficient conceptual construction achieved in the previous step. Only 2 students gave a consistent answer: "It depends: [it does] not [acquire a new property] if the photon's property is = or \(\bot\) to the permitted direction, otherwise it acquires one of these two properties." Three answers were incomplete: students correctly stated that measurement is passive if the photon's initial property coincides with the axis of the filter or is orthogonal to it, but they did not discuss how the property may change if the angles are different. The relative majority of the students (32%) focused only on transmission; 21% answered 'it depends', giving no explanation; the remaining students focused on irrelevant aspects.
A year later (Summer School of Excellence, 2015, 41 students), we revised the design. Measurement was defined from the start in terms of outcome-properties, and - most importantly - we added two diagrams depicting the possible transitions of the initial property in case of transmission and of absorption (see Fig. 14). In this way, we intended to suggest that students interpret the transition associated with absorption in the uncertain case as a generalization of what is known to happen in the determinate case.
However, this support was not effective. In the probabilistic case, only 17% interpreted absorption in terms of acquisition of a property in the direction perpendicular to the axis. Quantum superposition would allow us to describe this situation in a more productive way: if a photon is prepared, say, at \(|45^{\circ}\rangle=\frac{|0^{\circ}\rangle+|90^{\circ}\rangle}{\sqrt{2}}\), and then is absorbed by a filter with axis at \(0^{\circ}\), this process can be naturally framed as a result of a transition to \(|90^{\circ}\rangle\). Without a consistent understanding of quantum superposition which, as we know, will be discussed only in Unit 3, promoting the understanding of quantum measurement in the context of a photon-filter interaction is a tricky task.
As regards the conditionality of the active nature of measurement, only 5% of the students interpreted it consistently: "for uncertain interactions, measurement determines the property acquired by the system", while 17% said that measurement can be active or not, but without specifying when: "the initial property may change in some cases." Even worse, some students wondered how the whole situation could be described as a measurement, and not as "just a weird interaction altering the property!".
Given the need to provide students with a context where the results of this process can always and clearly be described in terms of outcome-properties, in 2016 (Summer School of Excellence, 27 students) we resolved to use a measurement device composed of a birefringent crystal and two counters. We designed an empirical exploration of the physical situation both at the macroscopic scale, performed with real instruments (Fig. 9, activity 1.8), and at the single photon level by means of a predict-observe-explain sequence [78] (Fig. 9, activity 1.9). The latter was conducted with the aid of JQM screenshots on single photons prepared with properties at \(0^{\circ}\), \(90^{\circ}\), \(45^{\circ}\) that go through a measurement device composed of a calcite crystal with \(0^{\circ}\) and \(90^{\circ}\) channels and a detector on each one (see Fig. 7). After that, students were administered a revised worksheet block on measurement (see Fig. 15). Item **C2** is not represented in the figure, because it is not related to the issue at hand, and will be discussed in Section V.4.1.
Item **C1** is an elementary form of thought experiment designed to promote a consistent interpretation of the absorption of the photon by the counters in terms of a transition in polarization property. 85% of the students identified the outcome-properties of a photon prepared at \(45^{\circ}\) (probabilistic case) as \(0^{\circ}\) and \(90^{\circ}\). Most students (59%)
designed consistent experiments to prove their statement, using one filter on each channel, one with axis at \(0^{\circ}\) and the other at \(90^{\circ}\), either corresponding to the polarization associated with the channel (all photons pass the filters, \(33\%\)), or the opposite case (all absorbed by the filters, \(26\%\)). The experiment with absorbed photons is better designed than the other, as we get a transition in photon polarization directly on the filters, while in the other case the two filters are transparent to the photon and the transition takes place in the counters. Still, both lines of reasoning were productive.
All students but one (26) interpreted the situation described in item **C3** (determinate interaction) as a classical measurement, half of them explicitly adding that we get to know the initial property according to the channel/detector in which the photon is counted.
In **C4**, concerning the revision of the concept of measurement, \(78\%\) of the students gave consistent answers, recognizing the conditionality of the active nature of quantum measurement and the nature of its constraints, e.g. "if the property does not coincide with the outcome-properties, the system collapses into one of them. If it coincides, measurement does not change it." Even more important, no student showed a reluctance to interpret the interaction as a measurement both in the determinate and the stochastic case, probably due to the productive framing promoted by item **C3**.
Given the high rate of success (see the progression in Fig. 16), in the following design experiments the items have been converted into oral questions, since we intended to focus on the refinement of later parts of the course.
#### v.2.2 Superposition
Quantum superposition is the subject of an entire unit. Hence, the development of the course was heavily influenced by the need to promote an effective revision of the representational properties of vector superposition. The design of these activities is based on a careful examination of the trajectory of this construct in theory change, which is displayed in Fig. 17. The attributes included in the figure identify the main representational features of vector superposition and the changes it undergoes in the transition to the new paradigm. Since interference and entanglement are addressed in Unit 4 together with propagation, the revision process related to these aspects (in the figure: _Ability to produce interference_ and _Factorizability into component vectors_) was postponed to the following unit. In Unit 3, we focused on the remaining features of superposition, where the pattern of _value disjunction_ (see Section II.1.1) occurs in two out of three cases. This pattern suggests a different strategy to address the revision process: starting from the development of an understanding of the new features of the construct, and then contrasting them with those of its classical counterparts, in order to identify which of the familiar features lead to unproductive reasoning in a quantum context [32]. Prior intuition is used as a contrast at the end of the instructional sequence on the topic.
The new constraints on the number of component vectors (maximum number equal to the dimension of the state space) and on their directions (orthogonal to one another) were dealt with by means of interpretive tasks scheduled at the beginning of Unit 3 (Fig. 9, activity 3.1), while the procedure and the goal (decomposition of the state vector in a given basis to obtain information on the measurement of the corresponding observable) were discussed in worksheet items concerning the measurement of different polarization observables on the same state (Fig. 9, activity 3.4).
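As an illustration of the target understanding of activities 3.1 and 3.4 (the notation below is ours and is not the one used on the worksheets), a polarization state prepared at \(\theta\) is decomposed in the basis associated with a device with channels at \(0^{\circ}\) and \(90^{\circ}\) as
\[
|\psi_{\theta}\rangle=\cos\theta\,|\psi_{0^{\circ}}\rangle+\sin\theta\,|\psi_{90^{\circ}}\rangle,
\]
where the component vectors are orthogonal to each other, their number cannot exceed the dimension of the state space (here two), and the squared coefficients \(\cos^{2}\theta\) and \(\sin^{2}\theta\) give the probabilities of the two measurement outcomes.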
For the Summer School of Excellence 2017 (32 students), we structured an end-of-unit table in order to help students contrast the features of the familiar forms of vector superposition (of forces and waves) represented in Fig. 17 with quantum superposition. As a matter of fact, the representational role of this mathematical process in QM is very different from that of the most commonly
Figure 14: Figures added to the worksheet block in 2015 Summer School of Excellence, Udine. The diagrams depict a polarizing filter with vertical axis and the possible transitions in the polarization property. Left: two photons prepared respectively with horizontal and vertical polarization: determinate outcome. Right: one photon prepared with polarization at \(45^{\circ}\): stochastic outcome.
Figure 13: Worksheet block on quantum measurement: 2014 version. The correct answers are in green.
used forms of superposition in CM. In particular, a consistent interpretation of the fact that quantum superposition concerns the decomposition of one vector is a prerequisite for addressing the quantum notion of interference, which is totally internal to the individual system. From this derives the seemingly contradictory statement of Dirac: "Each photon then interferes only with itself" [79, p. 9]. Besides, since students found it difficult to discriminate between system properties and state vectors [3], one row of the table was devoted to this issue. Thus, we seized the occasion to reinforce the understanding of a fundamental difference between classical vectors (primarily used to represent physical quantities and lying in the Euclidean space) and the state vector (defined in an abstract Hilbert space), as displayed in Fig. 1.b.
The end-of-unit task used in 2017 is represented in Fig. 18, and corresponds to activity 3.9. At that stage of development of the course, no discussion of the superposition of states of the hydrogen-like atom had been designed yet. Therefore, the correct answers included in the figure have been structured according to the learning goals of activities 3.1 and 3.4 on the superposition of polarization states.
As regards the first statement, all students identified the forces depicted in the figure as physical quantities, and 88% did the same for (the amplitude of) waves. In the quantum box, 84% of them answered "no", sometimes adding consistent explanations: "state vectors are unit vectors with no measurement units" (15%), "they are dimensionless" (15%), "they express probability" (9%). A small minority gave inconsistent interpretations of the concept of state, seen as "a percentage", "a set of values", or a "dimensionless number."
Also the identification of constraints - or their absence -, discussed in the second statement, did not pose a serious challenge. All students but two agreed that superposition of forces and of waves has a physical meaning independently of the number of component vectors and of the angles between them. In the quantum case, a large majority of the students consistently identified at least one constraint (72%). Of them, 18% gave a complete answer (e.g.: "No, it is necessary that the vectors are 2 and are orthogonal, because they are the only mutually unacquirable conditions"), 36% focused only on orthogonality, another 18% stated there must be no more than two vectors, the rest mentioned the mutual unacquirability of the corresponding properties or a combination of two constraints. Other students did not identify any constraint, but recognized the importance of the angle and/or of the
Figure 16: Worksheet results in 2014-2016.
Figure 17: Vector superposition: concept trajectory from CM to QM [32].
Figure 15: Worksheet block on quantum measurement: 2016 version.
number of vectors in quantum superposition (16%). The rest gave irrelevant or inconsistent answers, e.g.: "it [the statement] has no physical meaning in the quantum case because a flux of photons cannot have different states."
The interpretation of the last two statements on the goal and the referent of superposition proved the most difficult for students. As to the third, while in the classical contexts all students agreed that "The goal of superposition is to determine the resultant", in the quantum one, only 43% consistently explained why this is not the case. Of them, 58% correctly stated that the resultant is decomposed to obtain information on measurement, 36% that the resultant is known from the start, the others that the goal is to calculate transition probabilities. Inconsistent explanations of the same answer were given by 16%, while another 19% did not assess whether finding the resultant is the goal or not. The remaining students either did not answer (9%) or agreed with the statement also in the quantum case, proposing explanations such as "yes, because in order to find the resultant, we need to superimpose the two components" (13%). This shows that the classical framing of superposition tends to be transferred to the quantum case, even after specific activities designed to promote a consistent interpretation of the goal of the procedure.
The fourth statement involved various aspects at the same time, i.e., the goal, the procedure, and the referent: "for obtaining physical information, the _only_ procedure is decomposing the resultant into orthogonal components." This task elicited substantial issues even in the classical contexts. We thought that, by emphasizing the term "only" and by giving this task after assessing whether the goal of each form of superposition is adding vectors to "determine the resultant", students would come to the conclusion that this statement does not fit the superposition of forces and waves. However, almost half of the students put a cross either in both boxes (16%) or in only one of them (22% forces, 6% waves). Our best guess on the reasons behind this issue is that students interpreted the statement as proposing a role _also_ for decomposition, and not an exclusive one.
The assessment of the quantum case showed that the activities included in Unit 3 had not been sufficient to promote a solid understanding of state superposition as orthogonal decomposition. Only 43% of the students agreed with the statement, providing a consistent explanation, e.g., "yes, its goal is to calculate the probability of alternative results." Another 22% also answered yes, but with inconsistent or irrelevant explanations. For instance: "yes, because the components are obtained from measurement" or "yes, because of the use of trigonometric functions." Others did not assess the statement (13%) merely saying that decomposition is a possible operation, or wrote "no", adding inconsistent statements such as "the decomposition is not sufficient to obtain info because QM is stochastic." The rest of the students left the item blank.
These results prompted us to revise both the previous activities of Unit 3 and the table. We modified activity 3.4, adding a section relying on embodied cognition described in Zuccarini and Malgieri [32], in which the perceptual experience of passive rotations (simulating the passage of the same state from superposition in the basis of one observable to that in the basis of another) was put in the service of promoting a correct interpretation of the conceptual referent of quantum superposition. Then, in order to integrate student knowledge, providing an additional context with more general and significant features, we included a discussion of the superposition of eigenstates of the hydrogen-like atom in terms of quantum numbers (see Section III.2). Finally, we revised the wording of the statements and their order (Fig. 19). We
Figure 18: End-of-unit table on superposition filled with correct answers: 2017 version.
moved the fourth statement to the first row and changed it radically in order to address only the referent of superposition (one physical entity in the quantum case), leaving the discussion of the goal to the third statement. The awareness of the subtle conceptual issues related to the dual role of superposition in classical contexts (determining the resultant and decomposing it) led to a slight change in the third statement (from _the_ to \(a\) "goal of superposition is to determine the resultant"). We assumed that, by reinforcing the activities on the interpretation of the referent and the goal, and by separating the two aspects in the table (first the former, then the latter), we would support students in addressing both.
The new design was tried out in the Summer School of Excellence 2018 (30 students). The issues on the classical boxes were successfully solved: 87% recognized the vectors as physical quantities, and all students but one put a cross in the other boxes.
As regards quantum superposition, a comparison of the consistent results obtained in 2017 and 2018 is displayed in Fig. 20.
A strong improvement is evident in the answers in the two boxes on the referent (first on the right) and the goal (third on the right) of quantum superposition. In both boxes, 87% of the students answered correctly, providing consistent explanations. In the one on the referent, 53% did not limit themselves to stating that quantum superposition concerns one entity, but interpreted the process as a decomposition of this entity. As to the goal, besides recognizing that determining the resultant is not among the goals of this form of superposition, most students identified its objectives as "calculating [transition] probabilities" (37%) or "decomposing the state vector" (30%).
With relation to the other two statements, the results of the two design experiments were comparable, with a slight decrease in performance in 2018. Students assessed the second statement with similar reasoning in both years. As regards the statement on constraints, we need to take into account the greater complexity introduced by the superposition of states of the hydrogen-like atom (where, for bound states, we can have a countably infinite number of components, all orthogonal to one another). Most students discussed the statement by referring to the context of polarization, e.g. "In polarization, for instance, we can have up to two components \(\rightarrow\) two values" (47%). However, some students gave global explanations by connecting the number of vectors to the spectrum of the measured observable: "No, it depends on the number of values that a property can assume" (13%), which is true in the absence of degeneracy.
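A minimal example of the kind of state discussed in the new context (the coefficients are illustrative) is a superposition of two bound states of the hydrogen-like atom written in terms of quantum numbers,
\[
|\psi\rangle=c_{1}\,|n=1,l=0,m=0\rangle+c_{2}\,|n=2,l=1,m=0\rangle,\qquad |c_{1}|^{2}+|c_{2}|^{2}=1,
\]
where a measurement of the energy yields \(E_{1}\) with probability \(|c_{1}|^{2}\) and \(E_{2}\) with probability \(|c_{2}|^{2}\); the components are mutually orthogonal, and their maximum number is set by the number of distinct values the measured observable can assume (in the absence of degeneracy, as noted above), which is precisely the connection made by the students quoted last.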
### Knowledge organization activities
The framework of the relations between properties has been used from the start, initially limited to unacquiability and incompatibility, with the aim to promote the understanding of quantum uncertainty in the context of polarization. After 2017, while studying the knowledge fragmentation challenge in conceptual change [7; 8] and in the learning of QM (which has been the object of a specific work on the topic [39]), we realized that, by defining the relations in a well-formalized way, and by adding compatibility to the picture, it became possible to discuss physical situations that go beyond the case of photon polarization: measurements of position and velocity (at a qualitative level), and other scientifically significant contexts. Discrete state systems, for instance, could be studied both at a qualitative and quantitative level, by extending to them the discussion of the vector representation of the state, exploiting the transformation of the relations into algebraic constraints. In particular, mutual unacquiability of properties becomes orthogonality of the corresponding states, and paves the way for examining the superposition of a finite number of state vectors. In the course, we decided to include the context of the hydrogen-like atom. The main reasons behind this choice are that its bound states can be described in terms of quantum numbers, and therefore are suitable to be discussed at a quantitative level within the constraints of school mathematics, and that the context naturally lends itself to an interdisciplinary approach in collaboration with the chemistry teacher on topics such as orbitals and the atomic structure. Another educational opportunity offered by the complete picture of the relations is to use them as a further instrument for addressing student's need of comparability between CM and QM, since unacquiability and compatibility can be expressed also in classical terms (see Section II.1.2). With these tools
Figure 20: Boxes on quantum superposition: correct answers with consistent explanations in 2017 and 2018
Figure 19: Table statements: from 2017 to 2018. In red, the old wording that has been modified. In green, the new one
at our disposal, the _Principle of Knowledge Organization_ took the current form.
#### V.3.1 Introduction of the relations between properties
In accordance with the _Epistemic Principle_, we decided to introduce the relations by means of an interpretive activity that mirrors a practice of the theoretical physicist: "deepening the theoretical investigation of a phenomenology by adopting multiple perspectives." As a matter of fact, all that is needed to identify the relations of unacquirability and incompatibility is the work already done on photon polarization measurement: at a qualitative level, the discussion of the case of a photon prepared with a generic property at \(\theta\), the polarization of which is measured by a device composed of a calcite crystal with output channels at \(\phi\) and \(\phi+90^{\circ}\), and a detector on each channel; at a quantitative level, the probabilistic interpretation of Malus's law for calculating the transition probabilities, i.e., \(p(\theta\to\phi)=\cos^{2}(\phi-\theta)\) and \(p(\theta\to\phi+90^{\circ})=\cos^{2}(\phi+90^{\circ}-\theta)\). Therefore, we are dealing with a change in perspective on the same phenomenon: from the revision of the concept of measurement to the analysis of what relation is established by measurement between two given properties. In accordance with the true meaning of the practice, such a change of perspective significantly extends the scope of this trip through the quantum realm.
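In compact form, and limited to the polarization context, the relations that the activity aims at can be read off the transition probabilities (this rendering is ours, not the wording used in class):
\[
p(\theta\to\phi)=\cos^{2}(\phi-\theta):\qquad
p=1\ \text{for}\ \phi=\theta\ \text{(identity)},\qquad
p=0\ \text{for}\ |\phi-\theta|=90^{\circ}\ \text{(mutual unacquirability)},\qquad
0<p<1\ \text{otherwise (incompatibility: stochastic outcome)}.
\]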
The first version of the activity (number 1.11 in Fig. 9) was administered during the Summer School of Excellence 2018 (30 students). Since both the stochastic and determinate cases are governed by Malus's law, this version relied exclusively on its interpretation in terms of photons, leaving out any details on the measurement device. Given a photon prepared with a property \(P_{a}\), and given a different property \(P_{b}\), students are asked to assess four statements concerning the retention, loss, acquisition or not of these properties in measurement, specifying whether the event occurs and, if it does, under which conditions (see Fig. 21).
The level of abstraction of this task is much higher than that of activities discussed in previous parts of the course, since the properties at hand are totally arbitrary. However, we assumed that the work done on transition probabilities and on the revision of measurement represented an adequate basis for the analysis of the statements. After the activity, each statement was identified by the instructor as the core of the definition of a relation between properties. The statements are four because, to complete the picture, we added a further relation: identity, embodying the fact that if the initial property (\(P_{a}\)) is also an outcome-property of the measurement, the result is certainly \(P_{a}\). This statement may seem trivial at first, but corresponds to an important feature of quantum systems: when the system has a property of an observable at a given instant (either as a result of preparation or acquired after a measurement), sufficiently rapid measurements of the same observable will certainly provide this property again. After defining all the relations, students were shown a summary table, reported also on the worksheet in a new sheet (see Fig. 22).
The results of the task are displayed in Fig. 23. An answer is considered _consistent_ if its content matches that of the correct one reported in Fig. 21, both in terms of outcome-properties and (if needed) of angular relations between \(P_{a}\) and \(P_{b}\). _Partial_ means that the student has identified one of the conditions for the occurrence of the event described in the statement but not all of them, or that has added unneeded conditions. For instance, in statement 3 a student may recognize that \(P_{a}\) is not an outcome-property, but neglect the fact that \(P_{b}\) must be one (e.g., "The system always loses \(P_{a}\), unless it is an outcome-property"), while in statement 1 may add an angular relation between \(P_{a}\) and \(P_{b}\), when all you need is that \(P_{a}\) is an outcome-property (e.g., "I must use a filter parallel to \(P_{a}\) and \(\perp P_{b}\)").
By looking at the histogram, we see that most students consistently discussed statement 1 and 4, half of them correctly identified the conditions for statement 2, while statement 3 on incompatibility was largely unsuccessful. From the content of inconsistent answers, we see that often students interpreted the task differently from what we intended: e.g., focusing on the mathematical description of Malus's law (e.g., "Yes, if \(p(P_{a}\to P_{b})=\cos^{2}(P_{a}-P_{b})\)") instead of specifying conditions on the outcome properties. This issue with the framing of the task is mirrored in the small number of partially correct answers. In addition, the lack of details on the measurement device, which was meant to guide the students towards an abstract and general perspective, did not discourage them to use concrete tools to support their reasoning. The relative majority of students mentioned polarizing filters (35%), others mentioned crystals (11%), while another 33% reasoned abstractly on outcome properties and angles, and the remaining 21% answered without giving explanations (e.g., in statement 4: "Never", "No", "Impossible"). Probably, the reference to Malus's law in the item text, which had been previously discussed in the context of the interaction with filters, activated the use of this resource. In the case of statements 1, 2, and 3 (statement 4 was not an issue), the reference to filters was mostly unproductive: only 44% of those who mentioned filters provided a consistent answer, 60% for crystals, 64% for abstract answers. Last, by reasoning on filters, an old issue reappeared: the difficulty to interpret the absorption of the photon as a consequence of a transition in property (see Section V.2.1). For instance, in response to statement 1: "The only outcome-property is \(P_{a}\)"; to statement 2: "Only \(P_{a}\) is an outcome property."
Consequently, we decided to revise the item by removing any reference to Malus's law, using calcite crystals as a context, and limiting the arbitrariness of the role of \(P_{b}\), by making it one of the two outcome-properties of the measurement. The statements assessed in the task were left unchanged, but the introduction was radically modified (see Fig. 24). The activity was tried out at Liceo Statale Corradini, Thiene, in November 2018. It
was administered in two of the three classrooms involved in the course, for a total of 40 students. In the third classroom, the task was discussed orally for lack of time. The correct answers are exactly the same as before, except for statement 3, where there is no need to identify \(P_{b}\) as an outcome-property (detail specified in the introduction).
Results are displayed in Fig. 25. The rate of consistent answers is almost identical to that in the Summer School, except for statement 3, in which it increases from 30% to 53%. This represents a definite achievement, if we consider that Summer School students are selected among a large number of applicants from all over Italy, while the design experiment at Liceo Statale Corradini involved regular classrooms. In addition, students from the Liceo Statale Corradini generally interpreted the task as intended by the researchers, which is mirrored in the much higher rate of partially correct answers, and practically all students adopted an abstract perspective. This shows that the context and the visual representation of polarization measurement by means of a crystal and two counters favored the use of a global approach to the task. Reasons for failure include the difficulty of identifying all the conditions for the occurrence of an event, both as regards the identification of the outcome properties and of the possible angle between \(P_{a}\) and \(P_{b}\). Difficulties of both kinds are noticeable in this answer to statement 4: "Yes, \(P_{a}\) is an outcome-property, \(P_{a}\perp P_{b}\)." The improvement is even more evident if we compare the sum of consistent and partial answers (see Fig. 26).
#### V.3.2 Extending the use of the relations between properties to other physical situations
Understanding how the relations between properties could be adopted as the organizing principle of quantum knowledge on measurement, state and superposition at a point in time is of paramount importance in this course. In the previous section we examined the activity designed to introduce the relations. Here, we describe the activity designed to extend their use to other physical situations, namely, for making predictions on quantum
Figure 23: Results of the task number 1.11: Summer School of Excellence, Udine, 2018 version.
Figure 21: Identifying the possible relations between properties: Summer School of Excellence, Udine, 2018 version.
Figure 22: Definition of the relations between properties: summary table.
Figure 24: Identifying the possible relations between properties, introduction of the worksheet block: Liceo Statale Corradini 2018 version.
measurement at a global level and in the context of the hydrogen-like atom. As before, this activity too is an epistemic practice of the theoretical physicist: "starting from results found in one context and extending or adapting them to other contexts." The task is designed in terms of a _structured inquiry_ [74]. This means that the instructor defines the problem (extending the use of the relations) and the procedure (here: a sequence of inferential and interpretive questions), while students generate an explanation based on the theoretical knowledge they have at their disposal. This knowledge is the revision of the concept of measurement and the definition of the relations between properties (Fig. 22), which are initially presented to students as testable empirical regularities in the behavior of quantum systems in measurement.
The tasks are organized into two separate worksheet blocks, one on position and velocity measurements at a global level, the other on measurements in the context of the hydrogen-like atom. Here, we will discuss the results obtained in the Summer School of Excellence and in the three classrooms of Liceo Statale Corradini (61 students). The worksheet blocks used in the two design experiments are identical, except for a question added to the second block in the design experiment at Liceo Statale Corradini.
The set of questions on position and velocity is displayed in Fig. 27. Basically, students are required to deduce that all the properties of the same quantities are unacquirable, and to qualitatively determine the results of the measurement of an observable (position) on a system which has a property of an incompatible observable (velocity) in terms of change in properties and type of process, either determinate or stochastic. Students are informed in the item text that the properties of position are incompatible with those of velocity, but the definition of the relations alone does not offer a clear advice on the result of measurement, since it only mentions the _possible_ acquisition of a given property of the measured quantity. In order to come to the right conclusion, students must activate resources on the revision of measurement: a property of the measured quantity is always obtained also in QM.
The comparison of the rate of consistent answers given by students in the two design experiments is presented in Fig. 28. As we can see, while the student populations are very different from each other, results were very similar, with the notable exception of the most difficult question (the fourth one, on changes in incompatible properties), in which regular classrooms performed slightly better than summer school students. A possible explanation of this result is the revision of activity 1.11 discussed in the previous section, which saw a high rate of success exactly on the behavior of incompatible properties (see Fig. 26, statement 3). Most students who consistently answered the fourth question interpreted the expression "how do the properties [...] change" in a position measurement exclusively in terms of loss and acquisition (60% in the summer school, 78% in the liceo), a tiny minority exclusively in terms of the determinate or probabilistic nature of the process (5% in the liceo), while 15% of students of regular classrooms reported the definition of incompatible properties. We decided to accept the latter as a consistent answer based on the instructor's diary. During the completion of the task, instructors asked privately those students who were reporting the definition: "what about position after the measurement? Does the system possess a property or not?", to which they replied that, yes, it was obvious because quantum measurement always gives one value.
While students performed very well in this activity, assigning a physical meaning to their own answers to the third and the fourth question was a totally different matter. Students belonging to regular classrooms were puzzled by a phenomenon they had not encountered in the context of polarization: the absence of any property of a physical quantity. Some of those attending the summer school (20%) suggested an explanation in their answers to the task, in terms of a "perturbation" introduced by the measurement device. In those versions of the course, the next activities concerned the discussion of Heisenberg's microscope thought-experiment and Bohr's criticism of it (see Section V.4 for their refinement), followed by a revision of the concept of _system quantity_: classical quantities vs. the new concept of observable, which can be indefinite, and that of quantum parameter.
Moving on to the introduction of the hydrogen-like context, the worksheet block on the topic is displayed in Fig. 29. Question **F1.2** was added in the version for Liceo Statale Corradini. As a matter of fact, the situation described in the item can and will be discussed also in Unit 3 in the form of a superposition of bound states of the hydrogen-like atom. Therefore, we wanted to tighten the coherence of the course, proposing the same issue at both the qualitative and the quantitative level. In addition, we intended to investigate whether students were able to answer a question which is analogous to the last question of the block on position and velocity, but in the compatible case. In this block, students are required to perform the following tasks: 1) to recognize that, if a system can have properties of different observables at the same time, these properties need to be compatible; 2) to qualitatively determine the results of the measurement of an observable (\(L\)) on a system which has no property of that observable,
Figure 26: Comparison of the results of the task number 1.11.
but possesses a property of a compatible observable (\(E\)); 3) in the case of multiple properties possessed by the system at the same time (\(E\), \(L\), \(L_{z}\), \(S_{z}\)), some of which are compatible and some incompatible with the observable to measure (\(x\)), determine whether compatibility or incompatibility prevails (actually the latter: the system cannot also have a position property); 4) to qualitatively determine the results of the measurement of \(x\) on the bound state at hand.
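For concreteness, the expected answers to questions 2) and 4) can be condensed as follows (our schematic rendering, not part of the worksheet): since \(L\) is compatible with \(E\),
\[
\{E\ \text{definite},\ L\ \text{indefinite}\}\ \xrightarrow{\ \text{measurement of}\ L\ }\ \{l\ \text{one of its allowed values},\ E\ \text{retained}\},
\]
whereas, since position is incompatible with \(E\), \(L\) and \(L_{z}\) but compatible with \(S_{z}\),
\[
\{E,L,L_{z},S_{z}\ \text{definite}\}\ \xrightarrow{\ \text{measurement of}\ x\ }\ \{E,L,L_{z}\ \text{indefinite},\ S_{z}\ \text{retained}\}.
\]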
Results of the two design experiments in terms of rate of consistent answers are shown in Fig. 30. While in both experiments more than half of the students consistently answered all of the questions, their results were generally worse than those obtained in the first block, with summer school students outperforming regular classrooms. As to the latter, we still get a very high percentage of consistent answers to question 1 (85%) and question 3 (81%), while 60% consistently answered question 4. Question 2 turned out to be the most difficult for them, with 53% of consistent answers. Many students found it difficult to associate the situation described with the relation of compatibility (28% of inconsistent answers, including "unacquirability", "incompatibility", and an innovative "direct relation"), even if most of them had correctly answered question 1. Others said that if the properties are compatible, nothing changes in measurement (18%). Explanations provided in response to the fourth question, instead, were quite similar to those reported for the fourth item on position and velocity. In general, the physical situations presented in questions 2 and 4 were totally new to students, who had never encountered phenomena related to compatible observables in the previous parts of the course. Here, the carefully selected and motivated students of the Summer School of Excellence were quicker to respond to the challenge posed by the new context.
In general, a positive remark on the two activities concerns the ease with which students replaced the concept of "outcome-property" with the more suitable expression "property of an observable." The former had been introduced in the peculiar context of the polarization of the photon, where it is not immediate to interpret interactions of photons with devices as measurements on the photon (as we have seen in Section V.2.1), and to describe the two possible outcomes as values of an observable of polarization. However, also thanks to the wording of the definitions of the relations between properties and of the items of the two blocks, all students referred to properties of observables, and none mentioned outcome-properties anymore. In addition, since relations between observables will be defined by referring to properties that are acquired in measurement (see Section II.1.2), the fact that students spontaneously interpreted the result of measurement as the acquisition of one property of the measured observable (and not the _possible_ acquisition of a generic property \(P_{b}\), as in the definition of the relations) represented a progress towards the upgrade to the relations between observables. Last, in the second block some students used the expression "indefinite" with relation to observables (e.g., in answering question 4: "\(E\), \(L\), \(L_{z}\) become indefinite while \(S_{z}\) is retained"), which means that this aspect of the revision of the concept of _system quantity_ has been internalized by them. After these activities, the work in Unit 1 is almost completed: all we have to do is applying the relations between properties to ideal classical measurements, which resulted trivial for students (virtually all identified unacquiability and compatibility, while incompatibility is out of the picture), defining the relations between observables, and administering the summary table on the revision of the concepts of measurement and _system quantity_.
Figure 28: Comparison of the results of the task number 1.13 in Fig. 9.
Figure 27: Using the relations between properties to discuss ideal measurement at a global level (position and velocity): worksheet block administered in the Summer School of Excellence, Udine, 2018, and Liceo Statale Corradini, Thiene, 2018.
### Epistemic practices
In the previous sections on the DBR cycles, we already presented activities corresponding to theoretical practices for the construction of scientific knowledge: the elementary thought experiment in Section V.2.1, the change in perspective described in Section V.3.1 and the extension of results found in one context to other contexts in V.3.2. These activities are instrumental to the implementation of more than one principle of design at a time. Here we present activities that are exclusively designed to implement the _Epistemic Principle_, with a focus on thought experiments and mathematical modelling practices.
#### V.4.1 Interpreting already known laws within the framework of new models: Malus's law
The development of this inquiry activity was not only useful for mirroring a basic practice of the theoretical physicist, but also helped us strengthen the coherence of the course, by making us aware of the importance of language in the implementation of our interpretive proposal. The case at hand concerns the gradual transition from a mixed framing of quantum objects, oscillating between ensembles and individual systems, to a language carefully focused on the latter (see Section III.4.1).
Our initial intention was to guide students to actively develop a probabilistic perspective by means of a _structured inquiry_ strategy [74] in which we engage students in a predict-observe-explain sequence [78] with simulated experiments in the JQM environment (Fig. 9, activity 1.6), followed by an interpretive question on its results (Fig. 9, activity 1.7). The worksheet block used in the Summer School of Excellence 2014 (28 students) is displayed in Fig. 31.
All students but one answered **B1.1** by saying they expected half of the photons to reach the detector: 37% of them explicitly used the formula for macroscopic light beams (e.g., "5/10, because \(I=I_{0}\cos^{2}45^{\circ}\)"). The others based their prediction on the knowledge of the angle (e.g., "because the angle is \(45^{\circ}\)").
In the 2015 version of the worksheet, the interpretive question on Malus's law was given a title drawn from lay culture, the "horoscope of the photon", and was clearly formulated as a prediction on a single photon. A further specification was added to the item to highlight the global nature of the request: asking students to take into account the polarization property of the photon and the axis of the filter. The number of previous tasks was reduced to focus on the last question (see Fig. 33).
Items **B1.1**-**B1.7** of the block guide students through predicting, observing and interpreting the transmission of single photons, and then of small beams, prepared at \(45^{\circ}\) and sent through a polarizing filter with horizontal axis. The block closes with the expected conclusion: Malus's law acquires a probabilistic meaning, i.e., a photon prepared with polarization at \(45^{\circ}\) has a probability equal to \(\cos^{2}45^{\circ}=1/2\) of being transmitted by a filter with horizontal axis and, in general, a photon with polarization at \(\theta\) incident on a filter with axis at \(\phi\) has a probability equal to \(\cos^{2}(\phi-\theta)\) of being transmitted.
The results obtained in 2015 on the interpretation of Malus's law are displayed in Fig. 34. The majority of the students (61%) discussed the item in terms of probability: 52% of them in the case of a photon polarized at 45\({}^{\circ}\) passing through a filter with axis at 0\({}^{\circ}\), 48% writing the general formula (e.g., "The probability is \(cos^{2}\alpha\), which is the angle between the polarization property and the axis of the filter"). A significant minority of the students (17%) kept focusing on beams. The remaining answers were irrelevant.
In 2016 (Summer School of Excellence, 27 students), there was no change in the activity. However, as we saw in Section V.2.1, the subsequent worksheet blocks underwent a major revision: substituting calcite crystals for filters in order to introduce quantum measurement (Fig. 9, activities 1.8-1.10). While this revision represented a strong improvement as regards the understanding of quantum measurement, it had a significant impact on the activity concerning Malus's law, eliciting an issue that had not come to light before: the difficulty to transfer the calculation of the transition probability to the interaction between photons and crystals+counters. Evidence on this issue was provided by answers to item **C2**, administered in 2016, which is reported in Fig. 35.
Only 37% of the students wrote a consistent answer. The others either assumed that the photon was initially polarized at 45\({}^{\circ}\) (15%), wrote in both cases \(\cos^{2}\theta\) (another 15%), or gave highly inconsistent answers (33%): e.g., "the probabilities are respectively 0% and 100%", or "79 and 21." A need emerged to encourage students to focus from the start on the unifying features of the interactions between the photon and filters plus counter or crystals plus counters. These are: 1) the fact that most interactions had an uncertain outcome, but in two special cases the outcome was determinate; 2) the existence of a transition between the prepared angle of polarization and the resulting angle after the interaction (in the case of filters, this was evident at least when the photon was transmitted).
The following year (2017) saw the last change in the
Figure 33: Worksheet block on the interpretation of Malus's law for single photons: 2015 version.
Figure 32: Results of item A7: 2014 version.
Figure 34: Results of item A1.3, the “horoscope of the photon”: 2015 version.
Figure 35: Transfer of Malus’s law to the context of birefringent crystals.
design of the worksheet block. First, reformulation of the introductory items in terms of situations concerning a single photon, focusing from the start on the fundamental dichotomy between interactions with a determinate outcome and interactions with an uncertain outcome. Second, addition of a specific item designed to generalize the results of the horoscope of the photon to the case of an arbitrary initial angle of preparation (\(\theta\)) and resulting angle after the interaction (\(\phi\)). Last, rewording of item **C2**, on the calculation of transition probabilities in the context of calcite crystals: beams are replaced by a single photon. The new worksheet block is displayed in Fig. 36. The goal was twofold: increasing the number of students developing a probabilistic interpretation of the law of Malus, and increasing the number of students consistently transferring the calculation of the transition probability from the context of filters to the context of crystals. We also highlight that, in this version of the worksheet, we eliminated the use of simulated experiments in JQM, converting this activity into a purely theoretical form of inquiry.
The results of **B3** (horoscope of the photon) are displayed in two comparison tables reporting also those of equivalent items in the design experiments of 2014 and 2015. The tables discuss two different dimensions of student reasoning on Malus's law: first, the alternative between a probabilistic and a statistical interpretation, presented in Fig. 37; second, local reasoning (transition probability for an angle of \(45^{\circ}\) between the initial property and the outcome-property) vs. global reasoning (general formula), presented in Fig. 38. Since we promote a probabilistic interpretation and a global form of reasoning, we see that in both respects there has been a steady improvement over the years. After administering **B3**, answering **B4** was a trivial task: all but one student answered correctly. The only student giving a wrong answer put a plus instead of a minus between the angles: \(cos^{2}(\theta+\phi)\). This means that, if we consider **B3** and **B4** as components of a single task on the interpretation of Malus's law, virtually all students developed a consistent understanding of the subject. By using this sequence of items and, in particular, by discussing transition in terms of the angle between the initial property and the outcome-property, question **C2** on the transition probability in the context of crystals became straightforward. All students correctly answered the item, even if there was a long teaching/learning session in between (activity 1.8 and most of 1.9) before administering this question.
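The expected answer to **C2** amounts to a direct application of the general formula to the two channels of the crystal; we reproduce the computation here for the reader (the numerical case is illustrative):
\[
p(\theta\to 0^{\circ})=\cos^{2}\theta,\qquad p(\theta\to 90^{\circ})=\cos^{2}(90^{\circ}-\theta)=\sin^{2}\theta,\qquad p(\theta\to 0^{\circ})+p(\theta\to 90^{\circ})=1,
\]
so that, for instance, a photon prepared at \(30^{\circ}\) is counted in the \(0^{\circ}\) channel with probability \(3/4\) and in the \(90^{\circ}\) channel with probability \(1/4\).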
#### V.4.2 Thought experiment: Description of an unpolarized beam in terms of photons
This activity marks the start of student work in modelling and of the use of worksheets in the course. In this section and in the next one, we provide practical examples of the development of thought experiments designed to engage students in a theoretical form of testing experiment. Then, we discuss the refining process of these tasks, illustrating how data on student learning guided us to improve the structure and the wording of the worksheet activities.
It is important to highlight that thought experiments may play a variety of roles [80]. Beyond representing a method to facilitate a conclusion drawn from available experiences and sources (as in this section) or an argument against a given explanation (Section V.4.3), they can act as a tool for illustrating some of the counter-intuitive or unsatisfying aspects of a theory (Maxwell's demon, Schrödinger's cat), or for finding new constraints that help guide positive modifications of a theory (Einstein's elevator). The different roles of thought experiments are briefly discussed in the _historical snapshot_ (see Section IV.1) that follows the task.
At this point of the course, students have explored the polarization of macroscopic light beams by means of cheap experimental materials and discussed evidence on the detection of the photon and on its polarization after the passage through a filter. The task students are asked to perform is extending the model, describing unpolarized beams in terms of photons (Fig. 9, activity 1.5). Since this is the beginning of the theoretical modelling cycle, its goals are manifold: a) using the activity as an instrument to invite students to engage in modelling tasks, and to design thought-experiments as a theoretical form of testing experiments; b) using it as an opportunity to explain the basic heuristic principle that drives the modelling process: our hypotheses on the behavior of individual photons must be compatible with the classical quantitative laws for macroscopic beams; c) minimizing the axiomatic basis of the course; d) putting prior intuition in the service of learning QM.
The last goal leads us to the genesis of the activity. In the first stages of the development of this course, we decided to investigate spontaneous models of photon polarization. After exploring the interaction of light beams with polarizing filters and presenting some empirical evidence on the detection of light in discrete energy quanta, we asked students to draw a sketch of a vertically polarized beam in terms of photons, and then of an unpolarized beam.
About half of the students of the Summer School of Excellence 2015 (41 students) interpreted vertically polarized light as composed of photons uniformly polarized in the vertical direction. They represented them either as vertical segments or double arrows, with written answers reporting the content of their sketch: e.g., "photons polarized in the same direction are used for polarized light." This is coherent with the exploration performed on macroscopic light beams, in which students observed that, by rotating a filter by \(180^{\circ}\), the intensity of the transmitted light does not change, and therefore that its polarization property can be identified with a line in a plane perpendicular to the direction of propagation. These sketches also show that the representation of the photons used in JQM can be perceived as intuitive by students. The answers of the other half of the students proved that ascribing a polarization property to the individual
photon is not the only natural solution: many of them interpreted polarization as a group property. For instance, vertically polarized photons were represented as balls or dots arranged in vertical rows.
As regards unpolarized light, those students who interpreted polarization as a property of the individual photon drew three different kinds of sketches (see Fig. 39): segments or double arrows oriented in different directions (79%), some of them explicitly adding "randomly oriented"; stars, which represented photons polarized in all directions (11%); empty circles or dots, which represented unpolarized photons.
In the Summer School of Excellence 2017 (32 students), we investigated how they interpret unpolarized light in terms of photons after learning that polarization is a property of the individual photon, and after looking in JQM at the visual representation of a beam of photons transmitted by a filter with vertical axis. We asked them to draw a sketch of an unpolarized beam by using a photon model of light.
Almost all students (88%) interpreted a beam of unpolarized light as made of photons polarized in different directions, drawing segments oriented at various angles. Almost half of them (46%) explicitly added that the angles are "randomly distributed." As in 2015, alternative interpretations included only unpolarized photons, represented by empty balls (7%), and photons polarized at all angles, represented by stars (2%).
Based on these results, we assumed that students following a similar learning trajectory would propose some or all of the explanations at hand. More important, we realized the possibility of settling the matter by means of a structured modelling process in which students are asked to rule out some hypotheses and identify the one that is compatible with the available evidence. This kind of activity can be described as a testing experiment of theoretical nature or, in short, as a thought experiment. As a first step, we verified whether experiences and sources were conceptually sufficient to run it, and concluded in the affirmative: at that point, students were expected to know that 1) polarization is a property of the single photon; 2) photons of a polarized beam are all polarized in the same direction; 3) the intensity of the light is related to the number of photons emitted at a given instant; 4) the intensity of a beam of unpolarized light passing through a filter is reduced to half (Malus's law for unpolarized light); 5) the intensity of a beam of polarized light passing through a filter with a different axis is reduced according to its polarization direction, and therefore some photons are absorbed (Malus's law for polarized light).
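For the reader (students at this stage have not yet met the probabilistic reading of Malus's law), the compatibility of the first hypothesis with point 4) can be made quantitative: assuming the polarization directions \(\theta\) of the photons are uniformly distributed, the fraction transmitted by a filter with vertical axis is on average
\[
\big\langle \cos^{2}(90^{\circ}-\theta)\big\rangle=\big\langle \sin^{2}\theta\big\rangle=\frac{1}{\pi}\int_{0}^{\pi}\sin^{2}\theta\,d\theta=\frac{1}{2},
\]
i.e., exactly the reduction to half, whereas photons polarized in all directions at once or unpolarized photons, as interpreted in the task, would all be transmitted.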
In order to structure a task in which students are actively engaged in running a testing experiment, we referred to the ISLE learning framework [57], in which the
Figure 37: Comparison table: statistical vs. probabilistic interpretations.
Figure 38: Comparison table: local vs. global reasoning.
Figure 36: Worksheet block on the interpretation of Malus’s law, including its transfer to the context of calcite crystals: 2017 version.
phases of this process are clearly identified and associated with scientific abilities. These abilities are described in rubrics introduced in Etkina et al. [81], which are to be used as self-assessment instruments. For scientific abilities related to testing experiments, see Etkina [57], Appendix B.
After examining the phases of a testing experiment, we decided to structure the following task: a form of _guided inquiry_ [74], in which the instructor provides the issue to explore, encouraging students to generate different hypotheses in a whole class discussion (in this case: photons polarized in different directions, photons polarized in all directions at the same time, unpolarized photons). Then, based on each hypothesis, students are asked to make an assumption on the action of the device (a polarizing filter with vertical axis) on a photon beam (respectively: reducing the number of photons and polarizing the transmitted ones, eliminating all polarization properties that differ from 90\({}^{\circ}\), adding a polarization property at 90\({}^{\circ}\)). After that, for each hypothesis, students are asked to make a prediction on the transmission process (according to the first hypothesis, a possible reduction in the number of photons, according to the second and the third one, transmission with certainty). The final task is drawing an appropriate conclusion on each hypothesis by comparing the corresponding prediction with the known result, i.e., the reduction to half. Since in the case of stars and empty circles no photon is absorbed, the only hypothesis left is the first one. In this inquiry, the role of the observational experiment is played by the class discussion in which students identify different explanations, presumably some or all of the previously discussed hypotheses. The worksheet block administered at Liceo Scientifico Statale Alessi in 2019 (39 students attending the lesson) is shown in Fig. 40.
The analysis of the answers is structured according to the scientific abilities involved in the task, which are a subset of those described by Etkina for testing experiments. Based on the format of the activity, the abilities have been reformulated as follows:
(a) Is able to identify the possible action of the experimental device (the filter with vertical axis) on the systems, based on the hypothesis;
(b) Is able to make a reasonable prediction (on the number of transmitted photons) based on the assumption on the role of the experimental device;
(c) Is able to decide whether the prediction on the beam of photons is compatible with the expected outcome prescribed by the macroscopic laws (reduction to half).
The order of the abilities corresponds to the sequence of steps needed to successfully run the experiment. The application of ability (a) is considered successful depending on the hypothesis under scrutiny: for stars and empty balls, if the answer is compatible with the hypothesis (respectively, elimination of all polarization properties that differ from 90\({}^{\circ}\), addition of a polarization property at 90\({}^{\circ}\)); for segments oriented in different directions, which are supposed to have one polarization property, if the answer is compatible with Malus's law for polarized beams. The application of the ability (b) is successful if it is coherent with the role ascribed to the filter (regardless of the consistency of the assumption). The application of the ability (c) is successful when the answer is coherent with those given before, and uses as empirical term of comparison either the reduction to half or its qualitative version (reduction of the number of photons).
The rate of consistent application of the abilities according to each hypothesis in Liceo Scientifico Statale Alessi is displayed in Fig. 41. Since only two alternatives are displayed in the worksheet block, students had to choose which hypotheses they wished to discuss. All of them opted for photons polarized in different directions (hypothesis 1) and photons polarized at all angles (hypothesis 2), that were proposed by them during the class discussion. The possibility that photons are not polarized was added by the instructor at the end of the discussion, but no student considered it. As expected, assessing hypothesis 1) was much harder for students than assessing hypothesis 2). The discussion of quantum uncertainty and the stochastic interpretation of Malus's law were scheduled only after the task. As a consequence,
Figure 40: Worksheet block: unpolarized light in terms of photons, Liceo Alessi, Perugia, 2019
Figure 39: Spontaneous models of unpolarized light in terms of photons: Summer School of Excellence, Udine, 2015
deciding whether the number of transmitted photons according to hypothesis 1) could be half the number of the incident ones was not an easy task. Only 21% of the students consistently applied ability (a) with relation to the first hypothesis. A qualitative approach proved more productive than a quantitative one: 75% of consistent answers focused on the fact that polarizing the light means/implies lowering the intensity or that the filter selects only some of the photons. Inconsistent answers revealed that the main issue with Malus's law is the idea that "the filter selects only photons oriented as its axis" (33%). Such a condition is far too restrictive (infinitesimal), but this pattern might explain why a further 15% of the students wrote that no photon is transmitted by the filter: e.g., "it blocks the photons", "it does not let any photon pass through." As to ability (b), students were generally able to formulate a prediction that was consistent with the hypotheses and with their assumption on the action of the filter. However, when it came to assessing the validity of hypothesis 1), further difficulties arose: 28% of the students left the item blank (only 12% in hypothesis 2), and another 28% wrote incoherent or irrelevant answers, sometimes trying to use Malus's law for polarized beams instead of the reduction to half. This shows that while 41% of the students were able to draw coherent conclusions on hypothesis 1) - often based on wrong premises - the most significant difficulty concerned the use of Malus's law. In general, only two students consistently answered the whole task. Another aspect is worth mentioning: despite the good results obtained in the assessment of hypothesis 2), an issue arose in relation to it, i.e., the idea that "by removing all components but the vertical one, the intensity of the four transmitted photons is reduced by half" (15% of the students).
Shortly after the design experiment in Perugia, we held the course at Liceo Scientifico Galilei, Trieste. In view of the new design experiment, we revised the previous part of the course on the introduction of light quanta, adding that in all considered cases the intensity of a light beam is not dependent on the polarization of its photons. Since the task was considered a preliminary activity designed to show students how to run a theoretical testing experiment, we revised its structure by clearly articulating the phases of such a procedure. In addition, we weakened the conditions of acceptance of a hypothesis, replacing the mathematical expression "satisfy the experimental results" with a more qualitative one ("are compatible with empirical evidence"). The worksheet block is displayed in Fig. 42.
Also in this case, all students opted for discussing photons polarized in different directions (hypothesis 1) and photons polarized at all angles (hypothesis 2). However, the self-selected students of Liceo Galilei (18 attending the lesson) achieved much better results than regular classrooms. First, 66% of them were able to apply ability (a) with relation to the first hypothesis (21% in Perugia). Surprisingly, while we had not mentioned uncertainty and probability before, half of them spontaneously adopted a probabilistic approach to Malus's law: "if they are not vertical, there is a certain probability", "photons may be stochastically transmitted or not." Others, while not mentioning probability, still displayed a global approach to the application of the law in terms of photons: "photons pass or not based on the angular difference." The only issue with this part of the task was the same as in Perugia: the idea that photons are transmitted exclusively if their polarization is identical to the axis of the filter. In general, almost 30% of the students consistently completed the task. An additional 11% identified the portion of transmitted photons according to hypothesis 1) as half of the emitted ones, assigned the same value to the expected outcome, but wrote no conclusion. The issue with hypothesis 2) appeared to be completely solved: all students consistently assessed the hypothesis.
After this experiment, we revised the task, leaving out quantitative elements in favor of qualitative ones: we now ask whether, based on each hypothesis, there can be a reduction in the number of photons as a result of the transmission process. Since we wanted students to assess all three possible hypotheses, we added another space for hypothesis 3): unpolarized photons.
#### V.4.3 Thought experiment: Superposition as statistical mixture of component states?
While the thought experiment described in the previous section plays a platonic role [80], both destructive
Figure 41: Consistent application of the abilities, Liceo Alessi, Perugia, 2019
Figure 42: Worksheet block: unpolarized light in terms of photons, Liceo Galilei, Trieste, 2019
and constructive (ruling out some hypotheses and identifying the one that is compatible with available evidence), this thought experiment plays only a destructive role: excluding the possibility that quantum superposition can be interpreted as a statistical mixture. The activity has two goals: 1) helping students distinguish between superposition states and mixed states; 2) launching the discussion of the Einstein-Bohr debate on quantum uncertainty and the completeness of QM. The first goal corresponds to addressing one of the most persistent issues in the learning of the theory (see Section I). The claim we ask students to assess is quite similar to a statement used by Passante et al. [4] in an investigation on junior-level students, but is adapted to the context of polarization and to the educational level of secondary school students. The original statement is: "Consider the superposition state \(\psi=1/\sqrt{3}\,\psi_{1}+\sqrt{2/3}\,\psi_{2}\); the particle can be thought of as coming from a procedure that produces \(\psi_{1}\) one-third of the time and \(\psi_{2}\) two-thirds of the time." Since this statement implies that a measurement on a single system is deterministic, that the observable to be measured is definite, and that the use of probability is due to lack of knowledge about the state of the system, its physical content lends itself also to our second goal. As a matter of fact, by analyzing and rejecting the claim, we pave the way for introducing one of the main problems with the standard interpretation of QM: how is it possible that identical systems interacting with the same measurement device in the same conditions give different and unpredictable results?
For discussing the topic, we propose to students a _guided inquiry_[74] of a different kind than that described in Section V.4.2. Here the hypothesis is provided by the instructor, pretending it has been advanced by students of the previous years. In order to make the most of it for our second goal, we guide students to unfold the physical consequences of the hypothesis with a series of questions. Then, we ask them to design a thought experiment to test the hypothesis, and to run the thought experiment. The version used in February 2019 in Liceo Scientifico Statale Alessi (33 students attending the lesson) is displayed in Fig. 43. As we see, the situation is isomorphic to that already presented in the worksheet block on the revision of measurement (see Section V.2.1, 2016 version). Therefore, we expected that students could easily answer question **A1.1.1**. The answer to the following item, **A1.1.2**, is a logical consequence of the former. **A1.1.3** requires students to shift focus from the single photon to the beam and, as the first question, has been discussed orally in the introduction to quantum measurement. The thought experiment is elementary, since all we need is to direct the beam to a filter with axis at \(30^{\circ}\). The prediction associated with the hypothesis is that some of the photons will be absorbed (on average \(3/8\)). However, we know that at a macroscopic level, all the light polarized at \(30^{\circ}\) will be transmitted by the filter. The hypothesis is false.
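The prediction can be made explicit with a short calculation (our sketch of the expected reasoning, using the standard decomposition \(|30^{\circ}\rangle=\cos 30^{\circ}|0^{\circ}\rangle+\sin 30^{\circ}|90^{\circ}\rangle\)): under the statistical-mixture hypothesis, a fraction \(\cos^{2}30^{\circ}=3/4\) of the photons would be polarized at \(0^{\circ}\) and a fraction \(1/4\) at \(90^{\circ}\), so the transmission through a filter with axis at \(30^{\circ}\) would be

\[T_{\mathrm{mix}}=\frac{3}{4}\cos^{2}30^{\circ}+\frac{1}{4}\cos^{2}60^{\circ}=\frac{9}{16}+\frac{1}{16}=\frac{5}{8},\]

i.e., on average \(3/8\) of the photons would be absorbed, whereas a beam actually prepared in \(|30^{\circ}\rangle\) is entirely transmitted by such a filter.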
The results of the thought experiment are displayed in Fig. 44. Partially consistent and consistent answers are classified according to the scientific abilities that are associated with the conduction of the thought experiment:
1. Is able to design a reliable experiment that tests the hypothesis;
2. Is able to make a reasonable prediction based on a hypothesis;
3. Is able to decide whether the prediction and the outcome disagree.
Also in this case, the order of the abilities corresponds to the sequence of steps needed to successfully run the experiment. Here we did not consider the ability to make a reasonable judgment about the hypothesis since, in this case, the recognition of a discrepancy between the prediction and the outcome practically coincides with a rejection, and the recognition of an agreement with a confirmation. Moving on to discuss the results, we observe that only \(36\%\) of the students consistently ran the thought experiment. Among these students, most used a filter (\(82\%\)), the others a crystal. Most tested, as expected, the discrepancy in the transition probability (\(82\%\)). Others, interestingly, tested a logical consequence of the hypothesis: the fact that only photons polarized at \(0^{\circ}\) and \(90^{\circ}\) should exist. By using arbitrarily oriented filters, these three students came to a consistent conclusion: e.g., "direct the photon beam prepared in the state \(|30^{\circ}\rangle\) to a filter with axis at an angle \(\theta\) that is different from \(0^{\circ}\) and \(90^{\circ}\). Transmitted photons acquire a polarization property at \(\theta\), and the hypothesis is not satisfied." Some students gave partially consistent answers, correctly applying only one or two of the abilities needed. This shows that running a self-generated thought experiment, as simple as it can be, requires the coordination of different abilities, and that the previous thought experiments students ran during the course were not necessarily sufficient to develop an awareness of all the needed steps. Worse, many students did not even try to write anything and left the answer blank (\(30\%\)). A noteworthy aspect concerns \(12\%\) of the students, who either used a filter at \(0^{\circ}\) or a crystal at \(0^{\circ}\) and \(90^{\circ}\), thus coming to the conclusion that the hypothesis was confirmed: both the testing experiment and the prediction proposed by these students were identical to the hypothesis itself. A need emerged to give more content support in the item text, specifying that we intend to know whether the hypothesis is also valid for the measurement of different polarization observables.
As regards the previous questions, a large majority of students answered all three consistently (\(79\%\)). Inconsistent answers to **A1.1.1** were all due to the same issue elicited in Sections V.2.1 and V.4.1: students focused on the beam instead of the single photon, thus coming to the conclusion that measurement is probabilistic and the observable is indefinite. Also, \(35\%\) of the students who consistently answered **A1.1.1** and **A1.1.2** wavered when it came to deciding between the two options. At first, they focused on the beam, only to change their mind later: from "the nature of the interaction is stochastic" to "it is certain", from "indefinite" to "definite." These students clearly understood the hypothesis, since in **A1.1.3** they
stated that the uncertainty was due to a lack of knowledge about the single photons and not to the intrinsic stochasticity of quantum measurement, but were probably misled by something in the wording of item **A1.1.1**, or simply by the fact that the framing of the quantum objects has proved to be a tricky issue.
Consequently, we revised the task, following the guidelines derived from the data. In particular, we added to the hypothesis its logical consequence (photons are polarized only at \(0^{\circ}\) or \(90^{\circ}\)), which had been productive for some students, and more specifications in the description of the task concerning the thought experiment. For the following design experiment, conducted one month later in Liceo Scientifico Galilei (16 self-selected students attending the lesson), we did not change **A1.1.1** (now **A1.2.1**), but provided a verbal prompt for focusing on the single photon, asking students to think back to the initial task on measurements performed on mixtures of photons already prepared at \(0^{\circ}\) and \(90^{\circ}\). The new worksheet block is displayed in Fig. 45.
All the students of Liceo Scientifico Galilei consistently answered the first three questions, and 75% of them successfully ran the thought experiment, most using a filter. The only student who designed an experiment with a crystal wrote a very clear and complete answer: "I rotate the crystal by \(30^{\circ}\), so that the channels are at \(30^{\circ}\), \(120^{\circ}\). In this case the property of a beam polarized at \(30^{\circ}\) is one of the outcome-properties \(\Rightarrow\) certain result. On the contrary, not all photons of the random mixture are transmitted, the process is stochastic. False." Of the 4 students who did not answer consistently, two designed reliable experiments that tested the hypothesis, but did not provide a correct prediction; one proposed an experiment for measuring the \(0^{\circ}\)-\(90^{\circ}\) observable, thus confirming the hypothesis. The last one left the answer blank.
#### V.4.4 Identifying and interpreting mathematical constructs for describing physical situations and deriving new results: Entangled superposition
The refinement of the first mathematical modelling activity included in the course, i.e., the introduction of the vector representation of the quantum state, has been briefly described in a previous work [3]. The main issue concerned the discrimination between quantum states and measurable properties, which was challenging in the context of linear polarization. As a matter of fact, the correspondence between polarization properties, that are represented by directions in the plane of polarization, and quantum states, that are represented by vectors with the same angular relations as the properties (e.g. \(0^{\circ}\rightarrow|0^{\circ}\rangle\), \(90^{\circ}\rightarrow|90^{\circ}\rangle\)), hinders the recognition of the abstract nature of this vector and suggests its identification with the property or, in general, with a physical quantity. The issue was solved by adding an interpretive question on the physical dimensions of the state vector and by asking about its nature only after introducing the state vector of the hydrogen-like atom, which breaks the one-to-one correspondence between properties and states and the relation between the directions of the properties in the physical space (which is not relevant to scalar observables) and those of the corresponding vectors in the state space.
Here we illustrate the development of the activity designed for the identification and interpretation of an entangled superposition of modes (spatial and polarization mode of the photon), which lays the basis for the physical and mathematical description of a new situation: the entanglement of the polarization states of two photons emitted by parametric down-conversion. The activity we propose to students is a _structured inquiry_[74] which is very similar in format to that used in Unit 1 for applying the relations between properties at a global level
Figure 44: Results of the thought experiment: Liceo Alessi, Perugia, 2019
Figure 43: Worksheet block on the interpretation of quantum superposition: Liceo Alessi, 2019
and in the context of the hydrogen-like atom (see Section V.3.2). The difference is that, while in Unit 1 the tasks were of qualitative nature, the present activity involves the building of a mathematical construct.
In order to describe its development, we need to consider the placement of the activity within the course. Students have just concluded the part on propagation, establishing that the position of a photon between a direct and reversed calcite crystal (see Fig. 8) is indefinite, identifying a new form of interference, and building a full quantum model of a system for measurement and propagation. They have access to the mathematical representation of the polarization state of the photon, of the state of the hydrogen-like atom (in terms of quantum numbers), and to their superposition, which were addressed in Units 2 and 3. They are also expected to know that a state written as \(|n,l,m,s\rangle\) is the composition of two states, \(|n,l,m\rangle\) and \(|s\rangle\), and that the first expression is the contracted form of \(|n,l,m\rangle|s\rangle\). In the course, we leave out any reference to the mathematical construct known as tensor product, but explain that the last expression is a way to denote a state (the global state of the atom) that depends on two component states (its spatial state and its spin state).
The key for a smooth and compact discussion of entanglement is the usual situation of a photon incident on a calcite crystal followed by two detectors (an apparatus that is isomorphic to a Stern-Gerlach device followed by a screen). The only difference from the cases discussed in Units 1-3 is that here we do not focus on preparation or measurement (see Fig. 7), but on the properties of the particle beyond the crystal and just before the measurement. It is worth noting the richness of this simple context: if we choose to discuss position, we address the wave-particle duality; if we discuss polarization, we are led to the entanglement of modes.
In order to activate the modelling cycle for building the superposition of entangled states, we need to specify the relevant experiences - concerning the description of photon polarization after the crystal and of its measurement - and the sources - concerning the basic ingredients of the formal representation:
* Experience 1: the knowledge of the fact that polarization is indefinite after the crystal;
* Experience 2: the knowledge of the fact that by measuring position you also measure polarization and vice versa;
* Source 1: the mathematical representation of the spatial state of the photon in an elementary form;
* Source 2: the product of spatial and polarization states;
* Source 3: the physical interpretation of the component vectors and of the coefficients of a superposition state.
The two experiences need to be provided to students. Source 3 is already available from the beginning of Unit 3; analogues of sources 1 and 2 are available for the hydrogen-like atom and need to be applied to the photon. The modes of representation are available to students from the first units of the course, and are the iconic language of JQM (Unit 1) and the ket representation of product vectors (Unit 2).
The sequence of the activities is dictated by the structural chain of activation described in Section III.3.1:
1. exploration of the physical situation, highlighting those aspects that are relevant to the issue at hand (experience 1 and 2);
2. introduction of the mathematical ingredients needed to derive the new construct (sources 1 and 2);
3. mathematization: task for supporting the identification of the construct (source 3);
4. interpretation: task for analyzing the new construct (rediscovering and deepening the content of the qualitative experiences).
Experience 1: just before starting the part on entanglement, we use the definition of the possession of a property (a system possesses a property if and only if the probability to measure it is 1) to guide students to determine again, from this perspective, that after the crystal, the position of a photon prepared in the superposition state \(|\psi\rangle=1/\sqrt{2}|0^{\circ}\rangle+1/\sqrt{2}|90^{\circ}\rangle\) is indefinite, since the photon will be stochastically collected by one or the other
Figure 45: Worksheet block on the interpretation of quantum superposition: Liceo Galilei, Trieste, 2019
detector. The same criterion is applied to the horizontal-vertical polarization observable, leading to the conclusion that this observable, too, is indefinite after the crystal. We propose additional tasks to determine that no other observable of polarization is definite: after the crystal, the photon possesses no polarization property.
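The application of the criterion to this case can be summarized compactly (our sketch of the intended reasoning, writing \(P(\cdot)\) for the probability of the corresponding measurement outcome on a photon prepared in \(|\psi\rangle=1/\sqrt{2}|0^{\circ}\rangle+1/\sqrt{2}|90^{\circ}\rangle\)):

\[P(x_{1})=P(x_{2})=\tfrac{1}{2}\neq 1,\qquad P(0^{\circ})=P(90^{\circ})=\tfrac{1}{2}\neq 1,\]

so neither a position property at the detectors nor a horizontal-vertical polarization property is possessed after the crystal.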
Experience 2: we direct student attention to a fact that has been observed from the start, but that was not emphasized until now, i.e., a measurement of position after the crystal coincides with a polarization measurement. We also discuss the opposite situation: by replacing the detectors with filters as in Fig. 46, and by adding calorimeters, we determine which filter absorbs the photon, thus showing that, if we measure polarization, we also measure position.
Experience 1 and 2 are represented in Fig. 9 as activity 4.6.
Sources 1 and 2: after the second experience, position has clearly come into play. We therefore propose that students analyze the global state of the photon, using as a reference the description of the hydrogen-like atom. For this purpose, we introduce the spatial state of the photon in terms of three position (eigen)states:
* localized immediately after the source: \(|x\rangle\)
* localized at the entrance of the detector on the ordinary channel at \(0^{\circ}\): \(|x_{1}\rangle\)
* localized at the entrance of the detector on the extraordinary channel at \(90^{\circ}\): \(|x_{2}\rangle\)
With these tools available, we ask students about the global state of the photon at the time of its collection by a detector, when its polarization is prepared in one of the basis states: \(|x\rangle|0^{\circ}\rangle\Rightarrow|x_{1}\rangle|0^{\circ}\rangle\); \(|x\rangle|90^{\circ}\rangle\Rightarrow|x_{2}\rangle|90^{\circ}\rangle\).
The worksheet block displayed in Fig. 47 reports the mathematization and interpretation tasks, corresponding respectively to activities 4.7 and 4.8 of Fig. 9. The worksheet was administered for the first time in Liceo Scientifico Galilei, Trieste, 2019 (17 students attending the lesson).
Most students consistently answered the mathematization task (**C5**), proposing a superposition state that is compatible with the situation at hand: \(a|x_{1}\rangle|0^{\circ}\rangle+b|x_{2}\rangle|90^{\circ}\rangle\) (76%). Another student wrote a similar expression, but using the square of the coefficients: \(a^{2}|x_{1}\rangle|0^{\circ}\rangle+b^{2}|x_{2}\rangle|90^{\circ}\rangle\). More than half of them added consistent explanations, either focusing on the interpretation of superposition or on the change in state from the initial situation to the final one. For the first line of reasoning, "the state of the photon is a superposition of the state corresponding to \(0^{\circ}\), that is \(|x_{1}\rangle|0^{\circ}\rangle\) and the state corresponding to \(90^{\circ}\), that is \(|x_{2}\rangle|90^{\circ}\rangle\). The probability to find the photon in that states are \(a^{2}\) and \(b^{2}\)", for the second one, "we do not have \(|x\rangle|\theta\rangle\) anymore because the photon is after the crystal, and since it is only probabilistic, both outcomes must be included [in the superposition]." The others wrote a consistent formula without adding any explanation. Inconsistent answers included one student who wrote a separable state with equal coefficients: \((1/\sqrt{2}|0^{\circ}\rangle+1/\sqrt{2}|90^{\circ}\rangle)(1/\sqrt{2}|x_{1} \rangle+1/\sqrt{2}|x_{2}\rangle)\). Another student wrote the initial expression of the global state, just replacing the arbitrary coefficients with the square of the usual ones: \(|x\rangle(1/2|0^{\circ}\rangle+1/2|90^{\circ}\rangle)\). The third student inappropriately transferred the knowledge acquired in the context of the hydrogen-like atom, writing a very inconsistent expression in terms of quantum numbers: \(a^{2}b^{2}|n_{1}+n_{2},l_{1}+l_{2},m_{1}+m_{2},s_{1}+s_{2}\rangle\). As we will see, negative transfer from this context represented a serious issue in the last question of the worksheet (**C6.2**). A concluding remark on this item: all students used the plus sign in the superposition, which mirrors the sign of the initial superposition. However, in QM it is not possible to reconstruct a state by means of a measurement, as we do not get information on the phases [82]. Given that we focused on the identification of the superposition of entangled states, this issue was left out from the discussion.
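The expected construct can also be obtained through a short linearity argument (a minimal sketch of one possible derivation, which combines the basis-state rules given above with the initial polarization superposition; it is not the only route students may follow):

\[|x\rangle\bigl(a|0^{\circ}\rangle+b|90^{\circ}\rangle\bigr)=a\,|x\rangle|0^{\circ}\rangle+b\,|x\rangle|90^{\circ}\rangle\;\longrightarrow\;a\,|x_{1}\rangle|0^{\circ}\rangle+b\,|x_{2}\rangle|90^{\circ}\rangle,\]

where each product term evolves through the crystal as in the basis cases \(|x\rangle|0^{\circ}\rangle\Rightarrow|x_{1}\rangle|0^{\circ}\rangle\) and \(|x\rangle|90^{\circ}\rangle\Rightarrow|x_{2}\rangle|90^{\circ}\rangle\).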
Moving on to examine the interpretive task (**C6.1**) on the possession of properties of position and polarization before measurement, also in this case a large majority of students gave consistent answers (71%). These students displayed different forms of productive reasoning: some by using the definition of the possession of a property ("no, since I do not have a probability equal to 1 to measure them"), or by linking superposition to uncertainty ("no, given the fact that it does not possess any property for sure, it is represented by a superposition") or focusing on the change in state from the initial situation to the final one ("at the beginning it possesses them, but at the end it does not for a state at an arbitrary angle"). Some students also added that the two properties are compatible and correlated. As we talked about compatibility between position and spin properties only in Unit 1 (see Section V.3.2), this is a case of productive transfer from the context of the hydrogen-like atom. The remaining students either said that the system possesses one or both of the involved properties, or did not assess the question, focusing instead on the relation between the two observables: "not compatible before measurement." Again, this is a knowledge transfer from the context of the hydrogen-like atom, this time a negative one.
The last item (**C6.2**) asked how system properties change if we measure one of the two observables at hand. This question was by far the least successful of the three: only 47% of the students gave consistent answers. In general, these answers merely included the essential physical content: by measuring one observable, we acquire both
Figure 46: (a) emphasis on position measurement; (b) emphasis on polarization measurement.
properties. One student highlighted the correlation between the two observables: "I acquire a property and the corresponding property of the other observable." This is notable, since we had not emphasized this aspect of entanglement before. Another one productively referred to the discussion of experience 2, quoting a sentence we used in the slide presentation: "if I measure position, I also measure polarization and vice versa." Except for one student, who left the answer blank, the others gave inconsistent answers, 75% of them by inappropriately transferring their knowledge of the hydrogen-like atom. The majority wrote sentences such as "if I measure \(x\), I certainly lose \(E\), \(L\), \(L_{z}\), and retain spin, if I possess it." This is perfectly in line with the answer to item **F2.2** (see Fig. 29) on the measurement of position on an atom in a bound state. After all, the wording of this question was very similar to that of **F2.2**. Others claimed that nothing changes in measurement, "since the properties of position and spin are compatible", another statement that was used only in relation to the atom.
In general, while the mathematization task was very successful, the interpretive ones require some revision. As a matter of fact, the issue concerning negative transfer from the context of the hydrogen-like atom heavily affected the results of **C6.2**. The main suggestions come from productive lines of reasoning used by consistently answering students. In both **C6.1** and **C6.2**, we added content support, guiding students to activate productive knowledge elements. In the first one, we suggested that they refer to the definition of the possession of a property by a system. In the second one, we added a reference to the work made on experience 2, which does not involve the hydrogen-like atom.
We conclude the section by observing that these activities allow us to immediately apply the conceptual and mathematical discussion of the entanglement of modes to a new physical situation: the purely quantum entanglement of different systems. A new mathematical modelling activity can be implemented by describing the physical situation of two photons emitted by parametric down-conversion. Information provided to students is the following: the possible results of polarization measurements on one of the photons, the effect of this measurement on the other photon, and the transition probability. Based on these elements, students can pass from an expression like \(\frac{|x_{1}\rangle|0^{\circ}\rangle\pm|x_{2}\rangle|90^{\circ}\rangle}{\sqrt{2}}\) to a structurally identical formula such as \(\frac{|0^{\circ}_{1}\rangle|90^{\circ}_{2}\rangle\pm|90^{\circ}_{1}\rangle|0^{ \circ}_{2}\rangle}{\sqrt{2}}\) (see Fig. 9, activity 4.9).
### Epistemological debates
Since epistemological debates are addressed in a whole class discussion without the use of worksheets or other written assignments, their refinement could be based only on the instructor's diary.
Here we limit ourselves to reporting on the revision of the first debate on the problem of indefiniteness and uncertainty. As described in Section III.4.2, in the initial versions of the course, we discussed Heisenberg's microscope thought experiment and Bohr's criticism of it in Unit 1, after the application of the relations between properties to the case of position and velocity. Students were generally at ease with an interpretation of uncertainty as caused by measurement disturbance, which they could reconcile with their classical intuition on point-like particles. However, when this view was questioned, raising the possibility that uncertainty is an intrinsic property of quantum systems, some students clearly showed their discomfort. In Thiene, one of the best performing students explicitly complained that, if that was the case, we should conclude that QM is an absurd theory and makes no sense. Up until then, she had taken an active part in all the worksheet activities and in the whole-class discussions that ensued. After that, and for the rest of the lesson, the level of her engagement significantly declined.
In the retrospective analysis, we considered the possibility to add more content support to this activity, including an anticipation of the discussion on the wave-particle duality. However, this would have subverted the structure of the course which, in accordance with the gradual construction of content in spin-first approaches [39] and recent textbooks written in collaboration with physics education researchers [37], scheduled the discussion of propagation only after a careful examination of the system at a point in time and of its behavior in measurement.
For this reason, we opted for providing students with an operative idea of an indefinite quantity, one that could give empirical meaning to this situation and be immediately connected with the now familiar context of polarization: "a quantity of a system is called indefinite when the ideal measurement of this quantity on a large ensemble of identical systems gives different results according to a probabilistic
Figure 47: Worksheet block on the derivation and interpretation of entangled superposition: Liceo Galilei, Trieste, 2019
distribution" (see Fig. 9, activity 1.14). Of course, this begged the question of how to establish whether two systems are identical according to QM. Therefore, we told students that this would be the driving question of the next unit since, in order to give a reasonable answer, we would need the concept and the formal representation of the quantum state.
The discussion of Heisenberg's microscope was moved to Unit 4, activity 4.5, where, based on the adoption of a field ontology, it was possible to contemplate the idea that quantum uncertainty is due to a measurement disturbance, and to reject it without regret.
With this revision, the discussion of the issue at hand did not cause any visible discomfort.
## VI Conclusions
As shown in comprehensive reviews on learning difficulties [24; 25], the shift from the classical picture of the world to the quantum one is a central element behind the strong challenges students face in learning QM. In order to deepen the interpretation of empirical results and to help students overcome these challenges, there is a need to identify how they are connected with specific aspects of the paradigm change. Multiple links are suggested by an analysis that fed into a model of conceptual change in the learning of successive theories [32]. In this article, we describe the development of a course for secondary school that is based on this analysis, with the aim to address the challenges related to the revision of classical knowledge, to the building of a well-organized knowledge structure on QM, and to the building of a plausible and reliable picture of the quantum world.
The design principles that guide the development of the course are generated by a coordinated application of the analysis of conceptual change, of a framework describing the epistemic practices of theoretical physicists, and of a careful approach to interpretive themes. They are called _Principle of Knowledge Revision_, _Principle of Knowledge Organization_, _Epistemic Principle_, and _Epistemological Principle_.
The first one relies on the examination of continuity and change in basic concepts and constructs to promote the understanding of their quantum counterparts and the ability to discriminate between aspects of the old and the new notions, thus identifying their correct context of application. The instruments used in this process suggest strategies to leverage student resources according to specific patterns of change in the trajectory of each notion.
The second principle concerns the development of conceptual tools denoted as _relations between properties_, designed to promote the construction of a unifying picture of quantum measurement across contexts.
The third one proposes to design the course around a modelling process that includes epistemic practices of the theoretical physicist, with the goal to help students accept the quantum description of the world as a plausible and reliable product of their own inquiry.
The last principle proposes to design the course around a clearly specified form of interpretation, so as to identify and discuss the facets of the foundational debate that are triggered by each choice, with the aim to help students develop an awareness of the cultural significance of the debate, of the limits of the chosen stance, and of the open issues.
In order to structure the content and the modelling process, the first three principles have been blended in the template of the model of modelling [50], a framework devised to examine the process of modelling in science education. The result is a model that starts from the description of a property of an object (photon polarization), and is developed and revised through a process conducted by means of theoretical epistemic practices (_Epistemic Principle_), gradually incorporating quantum measurement, state, superposition, propagation and entanglement (_Principle of Knowledge Revision_). Thanks to the _Principle of Knowledge Organization_, each step allows students to advance in parallel in the development of an elementary model of the hydrogen-like atom.
Special attention is devoted to the conversion of epistemic practices of the theoretical physicist into active learning strategies: different perspectives on the role of mathematics in physics, such as that of Uhden et al. [58] and of Redish and Kuo [59], converge to structure the chain of activation used for mathematical modelling in a purely theoretical context, while the ISLE learning framework and the rubrics of scientific abilities [57; 81] are adopted as guidelines to convert thought experiments into theoretical testing procedures.
Then, we show how the _Epistemological Principle_ guides us to strengthen the coherence of the course and to design the discussion of epistemological themes.
The course is presented in Section IV, which includes an outline of its structure, of the types of activities that are designed to implement the principles, of the instruments and methods. A bird's eye view of the sequence and the types of activities is provided in Fig. 9.
The second part of the article describes the cycles of refinement of a set of activities chosen to illustrate the implementation of each of the four principles of design. During the analysis, the frequent references to the previous sections illustrate also the compactness of the proposal and the process by which the revision of the activities contributed to shape the initial guidelines.
In this work, we do not test the global effectiveness of the course, but show how the derivation of the design principles aimed at addressing the challenges in learning QM can be driven by a coordinated application of different frameworks, how these principles guided the development of the instructional sequence and of its strategies, how their implementation required a coordination of different research perspectives, and how the refinement of the activities influenced in turn the development of the guidelines. In particular, we describe the conversion of theoretical epistemic practices into innovative forms of inquiry for engaging students in the development of theoretical skills (e.g., generating and/or running thought experiments).
Future directions include the analysis of a pre-post-test administered in regular classrooms, in order to evaluate the effectiveness of the course.
|
2309.17167 | DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks | Large language models (LLMs) have achieved remarkable performance in various
evaluation benchmarks. However, concerns are raised about potential data
contamination in their considerable volume of training corpus. Moreover, the
static nature and fixed complexity of current benchmarks may inadequately gauge
the advancing capabilities of LLMs. In this paper, we introduce DyVal, a
general and flexible protocol for dynamic evaluation of LLMs. Based on our
framework, we build graph-informed DyVal by leveraging the structural advantage
of directed acyclic graphs to dynamically generate evaluation samples with
controllable complexities. DyVal generates challenging evaluation sets on
reasoning tasks including mathematics, logical reasoning, and algorithm
problems. We evaluate various LLMs ranging from Flan-T5-large to GPT-3.5-Turbo
and GPT-4. Experiments show that LLMs perform worse in DyVal-generated
evaluation samples with different complexities, highlighting the significance
of dynamic evaluation. We also analyze the failure cases and results of
different prompting methods. Moreover, DyVal-generated samples are not only
evaluation sets, but also helpful data for fine-tuning to improve the
performance of LLMs on existing benchmarks. We hope that DyVal can shed light
on future evaluation research of LLMs. Code is available at:
https://github.com/microsoft/promptbench. | Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, Xing Xie | 2023-09-29T12:04:14Z | http://arxiv.org/abs/2309.17167v3 | # DyVal: Graph-informed Dynamic Evaluation of Large Language Models
###### Abstract
Large language models (LLMs) have achieved remarkable performance in various evaluation benchmarks. However, concerns are raised about potential data contamination in their considerable volume of training corpus. Moreover, the static nature and fixed complexity of current benchmarks may inadequately gauge the advancing capabilities of LLMs. In this paper, we introduce **DyVal**, a novel, general, and flexible evaluation protocol for dynamic evaluation of LLMs. Based on our proposed dynamic evaluation framework, we build graph-informed DyVal by leveraging the structural advantage of directed acyclic graphs to dynamically generate evaluation samples with controllable complexities. DyVal generates challenging evaluation sets on reasoning tasks including mathematics, logical reasoning, and algorithm problems. We evaluate various LLMs ranging from Flan-T5-large to ChatGPT and GPT-4. Experiments demonstrate that LLMs perform worse in DyVal-generated evaluation samples with different complexities, emphasizing the significance of dynamic evaluation. We also analyze the failure cases and results of different prompting methods. Moreover, DyVal-generated samples are not only evaluation sets, but also helpful data for fine-tuning to improve the performance of LLMs on existing benchmarks. We hope that DyVal can shed light on the future evaluation research of LLMs.
## 1 Introduction
Large Language Models (LLMs) have recently achieved unprecedented performance across diverse tasks (OpenAI, 2023; Bubeck et al., 2023). Such strong performance has led to positive speculation on the possibility of LLMs being precursors of artificial general intelligence, necessitating the creation of nuanced evaluations. By pinpointing gaps for improvement, evaluation becomes the bedrock that enhances the understanding of current models and ensures AI's continued progression.
Efforts to evaluate LLMs have intensified significantly. Liang et al. (2023) introduced HELM, which offers a holistic assessment of LLMs in various scenarios. Similarly, Chatbot Arena (Zheng et al., 2023) evaluates LLMs by contrasting their generated outputs. Other benchmarks that have set the standard in the realm of LLM evaluations include AlpacaEval (Li et al., 2023c), C-Eval (Huang et al., 2023), ARB (Sawada et al., 2023), API-Bank (Li et al., 2023a), Socket (Choi et al., 2023), and Big-Bench (bench authors, 2023). Moreover, manual experiments have emerged as a complementary approach to these benchmarks, with works such as Bubeck et al. (2023) and Bang et al. (2023). Complementing these, human evaluators have also been instrumental in gauging the prowess of LLMs, as discussed by Ziems et al. (2023) and Zecevic et al. (2023).
Despite the proliferation of LLM evaluations, current evaluation benchmarks face two fundamental challenges. First, **data contamination.** Many benchmarks source their data from the Internet, causing potential overlap with the vast corpus on which LLMs are trained, leading to the debate of "Generalization vs. Memorization" (Bender et al., 2021; Magar & Schwartz, 2022; Carlini et al., 2023; Biderman et al., 2023): _Are the model's results stemming from genuine ability or just memorization of the training data?_ A recent example is provided by Zecevic et al. (2023): LLMs can deduce the conclusion that _altitude influences temperature_ based on given data. However, since such a conclusion also frequently appears online, for example on Wikipedia, it is ambiguous whether LLMs truly exhibit causal reasoning capabilities or are merely regurgitating pre-trained knowledge. Similarly,
Berglund et al. (2023) found that LLMs trained on "A is B" fail to infer "B is A", which raises doubts about whether the abilities of LLMs stem from genuine reasoning or from memorization. Second, **static dataset and fixed complexity.** As LLMs progress at a rapid pace, existing datasets usually fail to match the models' ever-evolving capabilities, because the _complexity_ level of existing benchmarks is usually static and fixed. As Dziri et al. (2023) demonstrated, while handling simple problems fairly well, LLMs fail to solve complex problems. The inability to automatically and dynamically increase the complexity levels based on existing data prevents the static benchmarks from being adapted to accurately select, compare, and advance LLMs. A few dynamic benchmarks do exist, such as DynaBench (Kiela et al., 2021) and DynaBoard (Ma et al., 2021), but they rely on crowd-sourcing efforts for continuous evaluation data collection, which can be expensive and tedious.
In this paper, we introduce **DyVal**--a novel, general, and flexible evaluation protocol for the _dynamic_ evaluation of LLMs (Sec. 3.1). The core of DyVal is to dynamically _generate_ evaluation samples on the fly instead of collecting a fixed set of data. DyVal consists of three components: 1) the generation algorithm \(\mathcal{G}\) to generate test samples with diversities; 2) the constraint \(\mathcal{C}\) to modulate sample complexity and validity; and 3) the description function \(\mathcal{F}\) to translate the generated samples into natural languages. Based on this framework, we propose a graph-informed DyVal (Sec. 3.2, Figure 1) to generate data using graphs. Specifically, inspired by techniques such as the compiler principle (Alfred V et al., 2007) and parsing trees which decompose complexities (Klein & Manning, 2003; Vinyals et al., 2015), we employ directed acyclic graphs (DAG) (Thulasiraman & Swamy, 2011) to _compose_ fundamental elements into more intricate problems, with each unit symbolized as a graph node. The extendable, stochastic nature of graph generation effectively regulates the complexity levels. Additionally, the hierarchical attributes of graphs suit them for multi-step inferential tasks like arithmetic and logic. Problems generated by DyVal not only require a profound understanding of problem solving rather than simple memorization, but also echo the human approach to incremental problem-solving and solution derivation. Being general and flexible, DyVal co-exists and co-evolves with existing benchmarks for better LLM evaluation and evolution.
We leverage DyVal to synthesize \(7\) reasoning tasks as case studies 1, encompassing: (1) Mathematics: arithmetic and linear equations; (2) Logical reasoning: boolean, deductive, and abductive logic; (3) Algorithm: reachability and maximum sum path problems. We then re-examine the state-of-the-art LLMs ranging from Flan-T5-large (Chung et al., 2022), phi-1.5 (Li et al., 2023d), Xwin-13B (Team, 2023), Llama2-13B-chat (Touvron et al., 2023), Vicuna-13B-v1.3 (Chiang et al., 2023), WizardMath-13B (Luo et al., 2023), to ChatGPT (OpenAI, 2023a) and GPT-4 (OpenAI, 2023b) with DyVal. We also test with recent prompting techniques including Few-shot (Brown et al., 2020), CoT (Wei et al., 2022), Least to Most prompting (Zhou et al., 2023b), Automatic Prompt Engineering (Zhou et al., 2023c), and Skills-in-Context prompting (Chen et al., 2023). Finally, we perform a human study involving \(82\) human evaluators for comparison, as well as fine-tuning experiments using DyVal-generated evaluation samples. Furthermore, experiments on existing benchmarks also show that fine-tuning LLMs with data generated by DyVal can directly improve models' abilities without extra careful collection of training data (Zhou et al., 2023a). Our key findings are:
Footnote 1: We choose reasoning tasks mainly due to (1) the intrinsic connection between reasoning proficiency and intelligence; (2) the notable progress LLMs have achieved in reasoning-centric tasks (Sawada et al., 2023).
* **Results on DyVal evaluation are not always consistent with those on existing benchmarks, indicating possible low training data quality and/or data contamination of existing LLMs (Sec. 4.2). For instance, WizardMath-13B, phi-1.5, and Xwin-13B perform poorly on DyVal while claiming huge improvements on existing benchmarks.**
* **As difficulty increases, LLMs tend to perform worse and their performance gap becomes larger, emphasizing the lack of compositionality of current LLMs and the importance of evolving complexity evaluations (Sec. 4.2).**
* **Our error analysis based on DyVal evaluation exhibits various failure patterns which shed light on how to further improve LLMs. (Sec. 4.3).**
* **No prompt engineering methods can perform best in all of our evaluation sets; and larger model sizes tend to achieve better performances (Sec. 4.4).**
* **DyVal can further be utilized to generate training data to improve the abilities of LLMs (Sec. 5).** For instance, fine-tuning the Llama2 models with our DyVal-generated data demonstrates enhanced results on \(6\) existing benchmarks.
To sum up, this paper makes the following contributions:
* **A novel evaluation protocol.** We introduce **DyVal**, a novel, general, and flexible LLM evaluation protocol designed to generate test samples dynamically, mitigating the issues of data contamination and static complexity.
* **A graph-informed DyVal algorithm for evaluation of the reasoning abilities of LLMs.** We use DAGs to compose \(7\) reasoning problems spanning mathematics, logical reasoning, and algorithms.
* **Extensive experiments and analysis.** We conduct extensive experiments to provide insights for evaluating and improving LLMs.
## 2 Related Work
**Evaluating LLMs.** While neural networks are recognized as the universal function approximators (Cybenko, 1989) with remarkable data fitting capabilities (Zhang et al., 2021; Arpit et al., 2017), debates (Bender et al., 2021; Zhang et al., 2021; Tanzer et al., 2022; Magar and Schwartz, 2022; Carlini et al., 2023; Wu et al., 2023; Tang et al., 2023; Zecevic et al., 2023; Kocon et al., 2023; Schaeffer, 2023; Biderman et al., 2023; Zhu and Li, 2023) persist regarding the true nature of LLMs' generalization abilities: _do they genuinely generalize across diverse tasks or predominantly draw from their extensive memorized datasets?_ The growing prominence of LLMs necessitates rigorous benchmarks (Hendrycks et al., 2021; Li et al., 2023b; Zhong et al., 2023; HuggingFace, 2023). Recent benchmarking trends include: (1) human-centric evaluations (Gao et al., 2022; Ribeiro and Lundberg, 2022), (2) crowd-sourced testing (Kiela et al., 2021; Ma et al., 2021), and (3) specialized task challenges (Liang et al., 2023; Tian et al., 2018; Ribeiro et al., 2020; bench authors, 2023). Complementing these, our DyVal introduces a dynamic evaluation system, consistently relevant in the swiftly evolving landscape of AI. Although Krause et al. (2018) introduced the term "dynamic evaluation", our DyVal differs considerably in its approach and goals.
**Complex-to-simple problem decomposition and evaluation set construction.** Employing _graphs_ to deconstruct complex tasks has been an enduring and effective strategy across domains. Compilers, as seen in computational theory (Alfred V et al., 2007), effectively break down high-level constructs, while in NLP, parsing trees bring clarity to intricate syntactic and semantic structures (Klein and Manning, 2003; Vinyals et al., 2015). Roy and Roth (2015) displayed the potency of this method in arithmetic, using trees for solving multi-step problems. Additionally, several contemporary techniques have prompted LLMs to decompose complex problems (Wei et al., 2022; Zhou et al., 2023b; Khot et al., 2022; Zhang et al., 2023). Several studies have leveraged graph-based approaches for constructing compositional tasks, particularly in the domains of first-order logic (Sinha et al., 2019; Clark et al., 2020; Tian et al., 2021) and causal reasoning (Jin et al., 2023). DyVal presents notable distinctions in the following aspects: (1) _Objective and scope of applications:_ We focus on dynamic evaluations that are uniquely generated and evolve in tandem with the progression of LLMs. DyVal is flexible and works for various tasks. (2) _Methodology:_ While all methodologies employ graph representations, our approach emphasizes adaptability during the evaluation phase rather than just sophisticated dataset creation.
## 3 DyVal
In this section, we first elucidate our general dynamic evaluation protocol to address the challenges of data contamination with dynamic data generation and controllable complexity in Sec. 3.1. We then adapt this general protocol for reasoning tasks by leveraging the Directed Acyclic Graphs (DAGs) in Sec. 3.2. More analysis on the flexibility of DyVal is in Sec. 3.3.
### General Dynamic Evaluation Description Language
Before delving into our graph-informed DyVal, we first introduce the general description language of the dynamic evaluation protocol. Given a task \(T\), a dynamic evaluation algorithm is formulated as \(\mathcal{A}_{T}=\mathcal{F}(\mathcal{G}(\mathcal{C}))\), where **(1)**\(\mathcal{G}\) is the **sample generation algorithm**, incorporating randomness to guarantee the uniqueness of each sample. The randomness may vary on different tasks such as the numbers in math problems and the logic chains in a logic reasoning task. **(2)**\(\mathcal{C}=\{\mathcal{C}_{T},\mathcal{C}_{\mathcal{G}}\}\)
denotes **constraints** on \(\mathcal{G}\), where \(\mathcal{C}_{T}\) is the task constraint for task \(T\) such as the legality guarantee of the generated samples in the context of the task. \(\mathcal{C}_{\mathcal{G}}\) is the complexity constraint for generation process such as the sampling strategy for the value in each node and the number of perturbations added into the evaluation samples. **(3)**\(\mathcal{F}=\{\mathcal{F}_{T},\mathcal{F}_{\mathcal{G}}\}\) is the **description function** to translate the raw evaluation samples generated by \(\mathcal{G}\) into natural language descriptions. \(\mathcal{F}_{\mathcal{G}}\) elucidates the characteristics and properties of samples generated by \(\mathcal{G}\). \(\mathcal{F}_{T}\) is the description for task \(T\) such as task objective and expected outcomes.
Overall, an evaluation sample can be represented as \(d_{\text{eval}}=\mathcal{F}_{T}(\mathcal{F}_{\mathcal{G}}(\mathcal{G}( \mathcal{C}_{\mathcal{G}},\mathcal{C}_{T})))\) using the above description language. \(\mathcal{G}\) first produces a sample adhering to complexity constraint \(\mathcal{C}_{\mathcal{G}}\) and task constraint \(\mathcal{C}_{T}\). Then it undergoes transformation by description function \(\mathcal{F}_{\mathcal{G}}\) into a natural language format and finally goes through the task description function \(\mathcal{F}_{T}\). The above description language naturally (1) avoids data contamination by dynamic generation via \(\mathcal{G}\), and (2) promises dynamic datasets and controllable complexity by \(\mathcal{C}\). Specifically, by varying constraints in \(\mathcal{C}\), we can generate evaluation samples of different difficulties, allowing "co-evolution" of both the LLMs and the evaluation process. The description language is flexible since it allows for different generation algorithms and complexity control by changing \(\mathcal{G}\) and \(\mathcal{C}\) accordingly.
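To make the protocol concrete, below is a minimal, self-contained sketch of how the three components could compose for a toy single-operation task; the function and variable names are our own illustrative choices and do not reflect the official implementation:

```python
import random

# C = {C_T, C_G}: task constraints (admissible values and operations) and complexity constraints.
C_T = {"values": list(range(1, 11)), "operations": ["+", "-", "*"]}
C_G = {"num_operands": 2}

def G(c_t, c_g):
    """Sample generation algorithm: draw a random raw sample under the constraints."""
    operands = [random.choice(c_t["values"]) for _ in range(c_g["num_operands"])]
    return {"operands": operands, "op": random.choice(c_t["operations"])}

def F_G(sample):
    """Sample description function: turn the raw sample into natural language."""
    a, b = sample["operands"]
    return (f"The value of A is {a}. The value of B is {b}. "
            f"The value of C is derived by applying '{sample['op']}' to A and B.")

def F_T(description):
    """Task description function: append the task objective."""
    return description + " What is the value of C?"

# d_eval = F_T(F_G(G(C_G, C_T))): a fresh evaluation sample on every call.
d_eval = F_T(F_G(G(C_T, C_G)))
print(d_eval)
```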
### Graph-informed Dynamic Evaluation for Reasoning Tasks
In this section, following the general evaluation description language, we implement DyVal for reasoning tasks by taking inspiration from the graph structure. Given the intrinsic multi-step inferential nature of reasoning tasks, they inherently exhibit structural characteristics, making directed acyclic graphs (DAGs) a natural choice for modeling these tasks. DAGs also facilitate dynamic samples generation by modulating the internal structure and a fine-grained control over problem difficulty by adjusting the structural complexity. More background of DAGs can be found at Appendix A.
#### 3.2.1 Generation Algorithm \(\mathcal{G}\): DAG Construction
The generation algorithm is established upon the graph construction process. We categorize DAGs as Tree-based DAGs (T-DAGs) and General DAGs (G-DAGs), illustrated in Figure 1. T-DAGs are inherently hierarchical, making them apt for tasks that proceed from a set of initial premises to a final inference, such as arithmetic problems and logical reasoning tasks. Each node in T-DAGs represents a foundational subproblem. These subproblems are chained by the links between nodes and finally form a complex problem. Conversely, G-DAGs excel in mapping intricate relationships, especially in tasks demanding understanding of non-linear interactions. They are ideal for algorithmic
Figure 1: The pipeline of the graph-informed DyVal. Up: the general evaluation framework; down: an arithmetic example. More details can be found at Sec. 3.2 and Appendix B.
challenges involving complex dependencies; for instance, imagine modeling a system where a change in one entity might impact multiple others in a cascading fashion, or tasks that require finding different potential pathways between entities. The generation process for these two types of DAGs is presented in Appendix B.1.
**Randomness in the DAG generation process.** T-DAG randomness arises from the operations assigned to nodes and the initial values of leaf nodes. For instance, in arithmetic, the operation can be "\(+\)", with leaf nodes receiving random numbers. For G-DAGs, each node is endowed with a random value (if needed for a given problem). For every node, the number of children is determined randomly, and the maximum number of children depends on the input. We then establish the links by selecting target child nodes at random.
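The randomized construction of a T-DAG can be sketched as follows (an illustrative simplification; the `Node` structure, naming scheme, and constraint values are our own assumptions for the sketch, not the code of the released toolkit):

```python
import itertools
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    op: str = None                          # operation, set only for non-leaf nodes
    value: int = None                       # random value, set only for leaf nodes
    children: list = field(default_factory=list)

def build_tdag(depth, width, operations, values, names=None):
    """Randomly build a T-DAG: leaves receive random values, internal nodes random operations."""
    if names is None:
        names = (f"node{i}" for i in itertools.count())
    if depth == 1:                          # leaf node: sample from the value set V
        return Node(next(names), value=random.choice(values))
    children = [build_tdag(depth - 1, width, operations, values, names) for _ in range(width)]
    return Node(next(names), op=random.choice(operations), children=children)

# Example arithmetic-style constraints: values 1-10, four basic operations; depth 4, width 2.
root = build_tdag(depth=4, width=2, operations=["+", "-", "*", "/"], values=list(range(1, 11)))
```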
Theorems 3.1 and 3.2 formally guarantee the dynamic generation process by exploring the probability that two samples generated by a T-DAG and a G-DAG, respectively, are identical. We focus exclusively on the base case, setting aside additional complexities like the integration of random links or the embedding of random descriptions, which would further diminish the likelihood of two DAGs being identical.
**Theorem 3.1**.: Given a tree-based DAG with depth \(d\) and width \(w\), if the operation set for non-leaf nodes has \(k\) distinct operations and the value set for leaf nodes contains \(n\) distinct values, the probability that two independently generated DAGs are identical is: \(P=\left(k^{\frac{w^{d-1}-1}{w-1}}\times n^{w^{d-1}}\right)^{-1}\).
**Theorem 3.2**.: Given a general DAG with \(n\) nodes where each node has a minimum of \(l\geq 1\) links, the probability that two randomly selected DAGs are identical is bounded by \(\frac{1}{(n-1)!}\).
Proofs can be found at Appendix C. These theorems guarantee that the odds of producing identical evaluation samples are considerably low. For instance, in the arithmetic task (where \(k=6,n=10\)) with \(d=4\) and \(w=2\), the odds that two DAGs are identical hover around \(1e^{-15}\).
#### 3.2.2 Constraints \(\mathcal{C}\) for Graph Generation
**Task constraint \(\mathcal{C}_{T}\).** Task constraints vary across tasks. Take node creation for instance: 1) What distribution should the node value adhere to? 2) What set of operations is permissible? 3) How should a node's value be computed from its children's values? In the arithmetic task, \(\mathcal{C}_{T}\) includes ensuring that a divisor is nonzero, avoiding overflow, etc. Here we concentrate on two general task constraints: (1) _Value distribution \(\mathcal{V}\)_: Specifies the permissible range or distribution from which leaf node values can be assigned. For example, in logic reasoning tasks, the premises (leaf nodes) are either assigned as \(\mathrm{True}\) or \(\mathrm{False}\). (2) _Operation set \(\mathcal{O}\)_: Lists the operations allowed within the DAG. The operation set constraint is usually used for tree-based DAGs. For example, in the arithmetic task, the allowed operation set can be defined as the basic arithmetic operations \(\{+,-,\times,/\}\).
**Complexity constraint \(\mathcal{C}_{\mathcal{G}}\).** We investigate \(4\) techniques to inject complexity into DAGs (Figure 5): (1) _Change width and depth for T-DAGs:_ The natural way to control tree complexity. (2) _Change number of nodes and links for G-DAGs:_ We control the overall number of nodes in G-DAGs. The number of links of each node is selected randomly from a predefined range, e.g., \([1,5]\). (3) _Add extra random links:_ For each node, we may introduce an additional link to another random node. (4) _Embed random descriptions:_ Add random descriptions to the primary DAG's descriptions. More details of complexity can be found at Appendix B.2 with Figure 7 as illustrations.
Table: overview of the evaluation tasks, listing for each field (mathematics, logical reasoning, algorithm) and task the generation algorithm \(\mathcal{G}\) (tree-based or general DAG), the task constraint \(\mathcal{C}_{T}\) (e.g., value set \(\mathcal{V}\) and operation set \(\mathcal{O}\)), the complexity constraint \(\mathcal{C}_{\mathcal{G}}\) (e.g., depth, width, extra links, random descriptions), the number of classes, and the description function \(\mathcal{F}\); for instance, the arithmetic task uses a tree-based DAG with \(\mathcal{V}=\{1,2,\ldots,10\}\).
#### 3.2.3 Description Function \(\mathcal{F}\)
After constructing DAGs with certain constraints, we then need to convert them into comprehensible natural language descriptions using the description function \(\mathcal{F}\).
**DAG description function \(\mathcal{F}_{\mathcal{G}}\).** We describe the DAG node by node and then assemble the node descriptions into sequences. The interpretation of each node in natural language depends on its position and the task. Leaf nodes, which represent primary inputs or premises, can be described as: "The value of [Name] is [Value]." For instance, a node denoting the number 5 could be expressed as: "The value of node A is 5." For T-DAGs, where intermediate nodes typically denote operations performed on their child nodes, the description can be formulated as: "The value of [Name] is derived by [Operation] the values of [Children's Names]." For G-DAGs, intermediate nodes are usually described through their connections: "The [Name] points to [Children's Names]". Note that the natural language descriptions can be replaced according to custom needs and can be further incorporated with textual adversarial attacks (Li et al., 2019; Gao et al., 2018; Jin et al., 2020; Li et al., 2020).
Moreover, complexity is also influenced by the _order_ that nodes are described. We design three orders: _topological_, _reversed topological_, and _random_ orders, where each offers a unique challenge in comprehending the DAGs. The details of these orders are presented in Appendix B.4.
**Task description function \(\mathcal{F}_{T}\).** The construction of \(\mathcal{F}\) highly depends on the context of the task. Notably, this construction is also highly flexible. For instance, incorporating adversarial prompts (Zhu et al., 2023) into the task description can make the problems more challenging. Here we present the task description functions for the arithmetic and reachability tasks, which are representative of T-DAGs and G-DAGs, respectively. Appendix B.3 presents the details and examples for the remaining \(5\) tasks.
_Arithmetic:_ Given a T-DAG, the DAG description function has already described the premises (the leaf nodes) and the intermediate inference steps (the non-leaf nodes). We then select the root node as the variable to solve for and append the question "What is the value of [Root]?" to the description, where [Root] is filled with the name of the root variable (Figure 8).
_Reachability:_ The reachability task asks whether two nodes are connected in a graph. For a G-DAG, the DAG description function has already described the connections between nodes. The task description for the reachability task is: "Can the [Node \(i\)] be reached by [Node \(j\)]?", where Node \(i\) and Node \(j\) are randomly selected from the nodes in the G-DAG (Figure 9).
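To make the pipeline concrete, the following is a minimal Python sketch (ours, not the released DyVal implementation; names such as `gen_arithmetic_tdag` and `describe` are illustrative) that builds a small arithmetic T-DAG bottom-up, computes node values, and emits the natural-language description followed by the root question. Division is omitted to sidestep the nonzero-divisor constraint.

```python
import random
import string

# Division is omitted to sidestep the nonzero-divisor task constraint.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

class Node:
    def __init__(self, name, op=None, children=None, value=None):
        self.name = name
        self.op = op
        self.children = children or []
        self.value = value

def gen_arithmetic_tdag(depth=3, width=2, value_range=(1, 10)):
    """Build a tree-based DAG bottom-up: leaves hold random values, internal nodes hold operations."""
    names = iter(string.ascii_uppercase)
    level = [Node(next(names), value=random.randint(*value_range))
             for _ in range(width ** (depth - 1))]
    nodes = list(level)                      # children precede parents: a topological order
    while len(level) > 1:
        parents = []
        for i in range(0, len(level), width):
            children = level[i:i + width]
            op = random.choice(list(OPS))
            value = children[0].value
            for child in children[1:]:
                value = OPS[op](value, child.value)
            parent = Node(next(names), op=op, children=children, value=value)
            parents.append(parent)
            nodes.append(parent)
        level = parents
    return nodes, level[0]                   # all nodes plus the root

def describe(nodes, root):
    """Turn the DAG into a natural-language problem description plus the final question."""
    lines = []
    for n in nodes:
        if not n.children:
            lines.append(f"The value of {n.name} is {n.value}.")
        else:
            kids = " and ".join(c.name for c in n.children)
            lines.append(f"The value of {n.name} is derived by '{n.op}' the values of {kids}.")
    lines.append(f"What is the value of {root.name}?")
    return "\n".join(lines), root.value

random.seed(0)
nodes, root = gen_arithmetic_tdag()
prompt, answer = describe(nodes, root)
print(prompt)
print("# ground truth:", answer)
```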
### DyVal coexists and co-evolves with existing benchmarks.
DyVal is complementary to existing benchmarks. First, tasks with an intrinsic structure benefit significantly from DyVal since it can modulate complexity and randomness by adjusting the generation process. Efforts such as CheckList (Ribeiro et al., 2020), data augmentation (Andreas, 2020; Zhang et al., 2022), and reasoning dataset synthesis (Sinha et al., 2019; Zhao et al., 2019; Clark et al., 2020; Tian et al., 2021; Jin et al., 2023) can be easily integrated into DyVal. Conversely, tasks without a well-defined structure may present challenges for DyVal's implementation. For example, narrative generation tasks, which require crafting coherent stories, might not be an ideal fit for DyVal. Second, DyVal can be enhanced by existing benchmarks to formulate more challenging scenarios. For instance, the description function \(\mathcal{F}\) operates entirely on natural language text, so it can be easily combined with adversarial attacks (Li et al., 2019; Jin et al., 2020; Zhu et al., 2023) or out-of-distribution prompts (Yang et al., 2023) to assess the robustness of LLMs.
## 4 Experiment
### Setup
**Tasks and complexity level.** We mainly describe the constraints used in each task. Since test sets are generated dynamically, measured accuracy may vary across generations; to balance evaluation time against this variability, we produce 500 samples for each dataset. To mitigate the impact of randomness on evaluation results, we assess each dataset three times. We define \(4\) complexity levels (D1\(\sim\)D4) for each task. For tasks that use general DAGs, the number of nodes is set to be \(\{7,10,15,20\}\) with each node having \(\{3,4,6,8\}\)
maximum links and \(1\) minimum link. For tasks that use tree-based DAGs, tree depths and widths are \((2,2),(3,2),(3,3),(4,2)\), respectively. More details of D1\(\sim\)D4 are presented in Appendix D.
**Evaluation metric.** Our primary evaluation metric is accuracy. For tasks where answers are numerical, we employ relative precision (Burden et al., 2015) to determine the correctness of a prediction, i.e., an answer is deemed correct if its relative precision is within a specified threshold \(\sigma\) (e.g., \(0.01\%\)) of the ground truth value. Concretely, a prediction is accepted if \(\lvert\mathrm{pred}-\mathrm{gt}\rvert/(\lvert\mathrm{gt}\rvert+\epsilon)\leq\sigma\), where \(\mathrm{gt}\) represents the ground truth value, \(\mathrm{pred}\) is the model's prediction, \(\lvert\cdot\rvert\) is the absolute value function, \(\sigma\) is the desired relative precision threshold, and \(\epsilon\) is a small value introduced to prevent division by zero.
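As a small illustration, the correctness check can be written as below (the helper name and default values are ours; we use \(\lvert\mathrm{gt}\rvert\) in the denominator):

```python
def is_correct(pred: float, gt: float, sigma: float = 1e-4, eps: float = 1e-8) -> bool:
    """Accept a numeric answer if its relative error w.r.t. the ground truth is within sigma."""
    return abs(pred - gt) / (abs(gt) + eps) <= sigma

assert is_correct(100.004, 100.0)        # within 0.01% of the ground truth
assert not is_correct(101.0, 100.0)      # 1% off: counted as wrong
```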
**LLMs.** Our evaluated LLMs include Flan-T5-large (Chung et al., 2022), phi-1.5 (Li et al., 2023d), WizardMath-13B (Luo et al., 2023), Xwin-13B (Team, 2023), Llama2-13B-chat (Touvron et al., 2023), Vicuna-13B-v1.3 (Chiang et al., 2023), ChatGPT (OpenAI, 2023a), and GPT-4 (OpenAI, 2023b). Temperature is set to \(0\) to avoid randomness. We set the generation length to be directly proportional to the input length. Specifically, for ChatGPT and GPT-4, the generation length is set to twice the input length; for the remaining models, it is set to five times the input length. We designed prompts for each task, incorporating demonstrations of rules, particularly for reasoning and algorithm tasks. To ensure formatted output, we further ask LLMs to explicitly output their predictions between "\(\langle\langle\)" and "\(\rangle\rangle\)". All implementations are based on Huggingface.
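For illustration, a simple way to parse such formatted output could look as follows (our sketch, not the paper's parser; ASCII "<<" and ">>" stand in for the delimiters above):

```python
import re
from typing import Optional

def extract_prediction(response: str) -> Optional[str]:
    """Return the last '<< ... >>'-delimited span of a model response, if any."""
    matches = re.findall(r"<<(.*?)>>", response, flags=re.DOTALL)
    return matches[-1].strip() if matches else None

print(extract_prediction("The value of A is 5, so the answer is <<5>>."))  # -> 5
```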
### Results for Math, Logical reasoning, and Algorithm tasks
Before presenting the main results, note that **the results of Flan-T5-large, phi-1.5, WizardMath-13B, and Xwin-13B on all tasks are 0**, so we no longer report them. We run experiments using three random seeds. Figure 2 shows the results of all tasks averaged over three generation orders and three random seeds (full results in Appendix D.4). GPT-4 performs best, followed closely by ChatGPT. Llama2-13B-chat's performance is subpar, with Vicuna-13B-v1.3 occasionally outperforming Llama2-13B-chat. More findings are as follows.
**Inconsistent performance between existing static benchmarks and DyVal:** Despite the excellent results of phi-1.5, Xwin-13B, and WizardMath-13B on existing benchmarks, their poor performance in our evaluations highlights the potential pitfalls of evaluating LLMs solely on static benchmarks, as well as possible issues of low training data quality or data contamination.
**Difficulty with complex datasets:** Performance mostly declines sharply from D1 to D4, highlighting LLMs' struggles with increasing complexity. For example, ChatGPT's performance drops by 23% on the arithmetic task as complexity increases. Notably, performance in abductive logic (inferring premises from conclusions) is much lower than in deductive logic (deriving conclusions from premises), as supported by Berglund et al. (2023), which shows LLMs excel more in "A is B" than "B is A". Further, the performance differential between GPT-4 and ChatGPT, while subtle in simpler tasks like D1, becomes prominent in complex tasks. These observations indicate the value of intricate and evolving tasks to effectively differentiate and evaluate models. We also present more interesting observations in Appendix D.4.
Figure 2: Results on 7 tasks with complexity from D1 to D4 (averaged on 3 description orders and 3 seeds). Xwin-13B, phi-1.5, and WizardMath-13B are not shown as their results are all 0.
**Human study:** We recruited 82 human evaluators with at least a bachelor's degree2, to gauge their skills against the LLMs on the most complex dataset (D4) for mathematical and logical reasoning tasks. Every participant tackled 5 problems from each dataset. As depicted in Figure 3, both GPT-4 and ChatGPT consistently showed high competence in most tasks, surpassing average human results. The reason could be that the generated problems are generally harder for humans but easier for LLMs. Nevertheless, GPT-4 struggled in areas like linear equations and abductive logic. This indicates that future development could involve more data from specific domains.
Footnote 2: The results may not represent the highest level of human performance. Demographics are in Appendix D.8.
### Case Study
In an endeavor to comprehensively understand the behavior of LLMs, we meticulously examined the failure modes. Our focus is especially on the most challenging datasets of arithmetic, deductive logic, abductive logic, and reachability tasks according to the performance of GPT-4. We randomly selected \(20\) failure samples for each task and summarized the failure modes in Figure 4. The detailed failure cases are presented in Appendix D.5. The error types vary, indicating there is large room for improvement.
**Partial calculation error:** GPT-4 occasionally errs in intermediate steps while keeping the remaining steps correct. We emphasize that the errors may be as simple as \(20/7=37.28\). This aligns with (Dziri et al., 2023), noting that LLMs sometimes give partially correct multi-digit multiplication results. **Incorrect reasoning and self-contradiction:** In reasoning tasks, GPT-4 may misinterpret rules. Given an abductive logic rule \(A\lor B\to C\) with \(C\) being False, the premises \(A,B\) must both be False. However, GPT-4 inaccurately abduced that either A or B _might_ be False. Further, GPT-4 occasionally contradicts itself in its assumptions for the same inference in the abductive logic task. **Unsubstantiated response:** In reasoning and algorithm tasks, GPT-4 often answers without any inference or justification. Its answer-only responses suggest possible memorization or shallow understanding. **Instructional oversight:** Occasionally, GPT-4 adeptly arrives at the correct computation but stumbles when adhering to the output instructions laid out in prompts, for example, the required relative precision of a mathematical calculation.
### Ablation Study
**Impact of complexity constraints \(\mathcal{C}_{\mathcal{G}}\):** In Figure 5, we vary complexity for ChatGPT by adjusting constraints as described in Sec. 3.2.2 and observe how performance shifts across the arithmetic, boolean logic, and deductive logic tasks. Notably, as task intricacy rises due to augmented complexity parameters, performance diminishes. Depth emerges as the predominant challenge in tree-based DAGs, emphasizing the LLMs' difficulty with extended inference steps.
Figure 4: Failure modes distribution.
Figure 5: Comparison results across different complexity constraints.
Figure 3: Human vs. LLMs results.
**Prompt engineering:** We evaluate five prompt engineering (PE) techniques on our toughest datasets, as outlined in Table 2 and Appendix D.7. No PE method performs best across all tasks. While APE notably boosts the Linear Equation task by 10%, it negatively impacts deductive and abductive logic. These varied outcomes highlight the importance of task-specific PE selection and development.
**Influence of model size:** We further evaluate the performance of Llama2 at different model sizes on the arithmetic, boolean logic, and reachability tasks using their simplest dataset D1. Table 3 shows that larger models produce better results, but mostly still do not surpass GPT-4 or humans.
## 5 DyVal Helps Fine-tuning
In this section, we show that DyVal-generated data can further be utilized to fine-tune LLMs to improve their capabilities of solving complex tasks. Specifically, we generate training data for the 7 tasks to fine-tune Llama2-13B-chat. The details of fine-tuning and training sample generation are in Appendix E. We then test the model under different settings: (1) _in-distribution_ samples with the same difficulty as the training data; (2) _out-of-distribution_ samples, whose difficulty levels are higher than the training data. To further demonstrate the effectiveness of our generated data, we test the models with few-shot examples on **existing benchmarks** including GSM8K (Cobbe et al., 2021) and SVAMP (Patel et al., 2021) to evaluate math abilities, FOLIO (Han et al., 2022) and RACO (bench authors, 2023) to evaluate logical reasoning abilities, and DP (Dziri et al., 2023) and LCS (bench authors, 2023) to evaluate algorithm abilities. Results in Figures 6 and 10 show that the performance of the fine-tuned model improves on all tasks. This shows that DyVal is effective not only as a benchmark but also in enhancing the performance of LLMs on existing benchmarks via fine-tuning on its generated samples. The improvement might stem from the similarities between various benchmarks and DyVal-generated samples. For instance, GSM8K samples can be interpreted as trees of depth \(2\) or \(3\). Interestingly, even though no dynamic programming tasks were included in our fine-tuning data, the fine-tuned model also showed improved performance on the DP and LCS datasets. This underscores the potential learning capability of LLMs and the efficacy of the training samples generated by DyVal.
## 6 Conclusion and Discussion
We proposed DyVal, a dynamic LLMs evaluation protocol to mitigate the data contamination and static complexity of existing benchmarks. We designed the graph-informed DyVal for reasoning tasks. The strength of DyVal lies in its dynamic generation of samples, with inherent flexibility for difficulty adjustment. We observed several interesting findings in experiments using our benchmark. More importantly, DyVal-generated samples can not only be used as evaluation samples, but also act as fine-tuning data for LLMs to enhance their performance in existing benchmarks.
Our work has several limitations. (1) Tasks: We currently focus on reasoning tasks. While DyVal supports other tasks, it requires designing the generation algorithm \(\mathcal{G}\). We are optimistic that DyVal will pave the way for further explorations across various tasks. (2) Samples: Our experiments utilized a limited set of test samples due to resource constraints. Evaluations on larger sets may reveal more findings. (3) Fine-tuning: We only fine-tuned Llama2-13B models, and further investigations with diverse models on more datasets could offer deeper insights into DyVal.
Table 2: Comparison of five prompt engineering techniques on the most complex datasets.
## Disclaimer
The purpose of this research is to present a dynamic and evolving evaluation protocol in response to the rapid development of LLMs. We make the following claims. First, the generation mechanism of DyVal does not contain any potentially harmful words or expressions, only mathematical, logical, and algorithmic descriptions. In the future, the usage of DyVal on other natural language tasks should be handled with caution so as not to include any harmful or irresponsible language. Second, human subjects are involved in this study to act as LLMs' competitors for performance comparison and analysis. All human studies were conducted in accordance with the applicable laws and regulations of the relevant countries. Third, the experiments on ChatGPT and GPT-4 conducted in this paper are based on their latest versions as of September 2023. The authors recommend using the same versions of these services for reproducibility. While we tried our best to tune the best prompts for our experiments, it is well-known that LLMs are highly sensitive to prompts. Therefore, the experiments in this paper are only based on our prompt design and codebase. Finally, although we conclude that some LLMs achieved poor performance on our benchmark, this does not mean these models are not good or cannot be used in practice. The authors remain optimistic that all evaluated LLMs will continue to become stronger.
|
2309.07374 | Beta quantile regression for robust estimation of uncertainty in the
presence of outliers | Quantile Regression (QR) can be used to estimate aleatoric uncertainty in
deep neural networks and can generate prediction intervals. Quantifying
uncertainty is particularly important in critical applications such as clinical
diagnosis, where a realistic assessment of uncertainty is essential in
determining disease status and planning the appropriate treatment. The most
common application of quantile regression models is in cases where the
parametric likelihood cannot be specified. Although quantile regression is
quite robust to outlier response observations, it can be sensitive to outlier
covariate observations (features). Outlier features can compromise the
performance of deep learning regression problems such as style translation,
image reconstruction, and deep anomaly detection, potentially leading to
misleading conclusions. To address this problem, we propose a robust solution
for quantile regression that incorporates concepts from robust divergence. We
compare the performance of our proposed method with (i) least trimmed quantile
regression and (ii) robust regression based on the regularization of
case-specific parameters in a simple real dataset in the presence of outliers.
These methods have not been applied in a deep learning framework. We also
demonstrate the applicability of the proposed method by applying it to a
medical imaging translation task using diffusion models. | Haleh Akrami, Omar Zamzam, Anand Joshi, Sergul Aydore, Richard Leahy | 2023-09-14T01:18:57Z | http://arxiv.org/abs/2309.07374v1 | # Beta quantile regression for robust estimation of uncertainty in the presence of outliers
###### Abstract
Quantile Regression (QR) can be used to estimate aleatoric uncertainty in deep neural networks and can generate prediction intervals. Quantifying uncertainty is particularly important in critical applications such as clinical diagnosis, where a realistic assessment of uncertainty is essential in determining disease status and planning the appropriate treatment. The most common application of quantile regression models is in cases where the parametric likelihood cannot be specified. Although quantile regression is quite robust to outlier response observations, it can be sensitive to outlier covariate observations (features). Outlier features can compromise the performance of deep learning regression problems such as style translation, image reconstruction, and deep anomaly detection, potentially leading to misleading conclusions. To address this problem, we propose a robust solution for quantile regression that incorporates concepts from robust divergence. We compare the performance of our proposed method with (i) least trimmed quantile regression and (ii) robust regression based on the regularization of case-specific parameters in a simple real dataset in the presence of outlier. These methods have not been applied in a deep learning framework. We also demonstrate the applicability of the proposed method by applying it to a medical imaging translation task using diffusion models.
Haleh Akrami*\({}^{1}\), Omar Zamzam*\({}^{1}\), Anand Joshi\({}^{1}\), Sergul Aydore\({}^{2}\), Richard Leahy\({}^{1}\)\({}^{1}\)Department of Electrical Engineering, University of Southern California, USA
\({}^{2}\) Amazon Web Services, New York, USA
Footnote *: These authors contributed equally to this work.
Quantile regression, Diffusion models, Robust divergence
## 1 Introduction
Quantile regression offers an alternative to mean regression in various applications where accurate predictions and their associated reliability are crucial. For instance, in clinical diagnosis, a realistic assessment of prediction uncertainty is essential for determining disease status and planning appropriate treatment. In the context of deep learning, two types of uncertainties are encountered: aleatoric and epistemic. Aleatoric uncertainty arises from the inherent stochasticity of the data, while epistemic uncertainty -- often referred to as model uncertainty -- is due to limitations in the model itself. It's worth noting that an infinite amount of training data would not reduce aleatoric uncertainty, although it could mitigate epistemic uncertainty. A multitude of methods exist for estimating these uncertainties, including Gaussian process regression, uncertainty-aware neural networks, Bayesian neural networks, and ensemble methods[1, 2, 3].
Recent studies have proposed to use conditional quantile regression to estimate aleatoric uncertainty in neural networks [2, 4, 5, 6, 7] and showed that it can compute well-calibrated intervals. The most common application of quantile regression models is in cases where parametric likelihood cannot be specified [8]. Similar to the classical regression analysis which estimates the conditional mean, the \(\alpha\)-th quantile regression \((0<\alpha<1)\) seeks a solution to the following minimization problem [8]:
\[\operatorname*{arg\,min}_{\theta}\sum_{i}\rho_{\alpha}(y_{i}-f_{\theta}(x_{i})), \tag{1}\]
where \(x_{i}\) are the inputs, \(y_{i}\) are the responses, \(f\) is the model parameterized by \(\theta\), and \(\rho_{\alpha}\) is the _check function_ or _pinball loss_ [8] defined as:
\[\rho_{\alpha}(y_{i}-f_{\theta}(x_{i}))=\begin{cases}(y_{i}-f_{\theta}(x_{i}) )\alpha,&\text{if }y_{i}\geq f_{\theta}(x_{i})\\ (f_{\theta}(x_{i})-y_{i})(1-\alpha),&\text{if }y_{i}<f_{\theta}(x_{i})\end{cases}\]
It has been shown that minimization of the loss function in (1) is equivalent to maximizing the likelihood function formed by combining independently distributed asymmetric Laplace densities [8],
\[\operatorname*{arg\,max}_{\theta}L(\theta)=\frac{\alpha(1-\alpha)}{\sigma} \exp\left\{\frac{-\sum_{i}\rho_{\alpha}(y_{i}-f_{\theta}(x_{i}))}{\sigma} \right\}.\]
where \(\alpha\) is the quantile and \(\sigma\) is the scale parameter.
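For concreteness, a minimal PyTorch sketch of the pinball loss in (1), together with a toy quantile fit, is given below (variable names and the toy data are ours):

```python
import torch

def pinball_loss(y: torch.Tensor, y_hat: torch.Tensor, alpha: float) -> torch.Tensor:
    """Mean check/pinball loss: alpha*(y - y_hat) if y >= y_hat, else (1 - alpha)*(y_hat - y)."""
    diff = y - y_hat
    return torch.mean(torch.maximum(alpha * diff, (alpha - 1.0) * diff))

# Toy usage: fit the 0.9 quantile of noisy linear data with a one-layer model.
torch.manual_seed(0)
x = torch.rand(256, 1)
y = 2.0 * x + 0.3 * torch.randn(256, 1)
model = torch.nn.Linear(1, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = pinball_loss(y, model(x), alpha=0.9)
    loss.backward()
    opt.step()
```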
Recently, quantile regression has been employed for uncertainty estimation in regression tasks such as image translation [9] and anomaly detection [7] in medical imaging. In these domains, obtaining a reliable uncertainty estimate is of critical importance. Compared to alternative methods for uncertainty estimation, such as sampling using generative models [7] or Bayesian uncertainty estimation, quantile regression offers computational efficiency and speed, and does not require sampling.
Statistical machine learning models that involve maximizing likelihood are particularly sensitive to outliers [10]. Although quantile regression is quite robust to outlying response observations, it can be sensitive to outlier covariate observations (features). It has been shown that perturbing a single (\(x_{i}\), \(y_{i}\)) data point in an arbitrary manner can force all quantile regression hyperplanes to intersect at the perturbed point [11]. Despite that, it's important to highlight that only a limited number of papers have explored the robustness of quantile regression in the context of covariate observations, particularly within deep learning frameworks.
We outline our contributions in this paper as follows: (i) We propose a robust quantile regression approach that leverages concepts from robust divergence. (ii) We compare the performance of our proposed method, particularly in the presence of outliers, to existing techniques such as Least Trimmed Quantile Regression [11], which serves as the only available baseline, and robust regression methods that rely on the regularization of case-specific parameters, in both a simple dataset and a simulated dataset. (iii) Finally, to illustrate the practical utility of the proposed method, we apply it to a medical imaging translation task, employing state-of-the-art diffusion models.
## 2 Method
We start by briefly explaining the formulation of least-trimmed quantile regression [11] and robust regression based on the regularization of case-specific parameters.
### Least Trim Quantile Regression (TQR)
The objective function for TQR is defined as:
\[\operatorname*{arg\,min}_{\theta}\sum_{I_{C}}\rho_{\alpha}(y_{i}-f_{\theta}(x _{i})) \tag{2}\]
where \(I_{C}\) is the subset of \(C\) samples from the training dataset that generates the smallest error. The optimization is similar to quantile regression with an additional iterative process. After initializing with \(C\) random samples, at each iteration the samples with the smallest error are chosen for training in the next iteration, and the process is repeated until there is no significant change in the loss value compared to the previous iteration. We utilized TQR within a gradient descent optimization framework, where we used only the subset of the batch with the lowest error for backpropagation.
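A minimal PyTorch sketch of one such trimmed update is shown below (our naming; the authors' exact training loop may differ): only the fraction of the mini-batch with the smallest pinball loss is back-propagated.

```python
import torch

def per_sample_pinball(y, y_hat, alpha):
    """Per-sample pinball loss, averaged over any output dimensions."""
    diff = y - y_hat
    return torch.maximum(alpha * diff, (alpha - 1.0) * diff).view(y.shape[0], -1).mean(dim=1)

def tqr_step(model, optimizer, x, y, alpha=0.5, keep_ratio=0.9):
    """One trimmed update: back-propagate only the keep_ratio fraction of the batch with the smallest loss."""
    losses = per_sample_pinball(y, model(x), alpha)
    k = max(1, int(keep_ratio * losses.numel()))
    trimmed, _ = torch.topk(losses, k, largest=False)   # k smallest per-sample losses
    loss = trimmed.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```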
### Robust regression based on regularization of case-specific parameters (RCP)
She and Owen [12] proposed a robust regression method using the case-specific indicators in a mean shift model with the regularization method. By generalizing their method to quantile regression, the final loss can be simplified to:
\[\operatorname*{arg\,min}_{\theta}\sum_{i}\rho_{\alpha}(y_{i}-f_{\theta}(x_{i}) -\gamma_{i})+\lambda\sum_{i}|\gamma_{i}| \tag{3}\]
This optimization can be solved using an alternating approach with soft thresholding. RCP can be used for any likelihood-based model.
### \(\beta\)-quantile regression (\(\beta\)-QR)
For parameter estimation, maximizing the likelihood is equivalent to minimizing the KL-divergence between the empirical distribution of the input and statistical model \(q(\phi)\). Similarly a robust \(\beta\)-loss (\(L_{\beta}\)) can be derived by replacing the KL-divergence with the \(\beta\)-divergence \(D_{\beta}\)[13, 14, 15].
\[D_{\beta}(f(x)||g(x))=\frac{1}{\beta}\int\left(f(x)^{\beta}-g(x)^{\beta}\right)f(x)dx-\frac{1}{\beta+1}\int\left(f(x)^{\beta+1}-g(x)^{\beta+1}\right)dx\]
\[L_{\beta}=\frac{1}{N}\sum\frac{\exp(\beta l(x_{i},q(\phi)))-1}{\beta}+\frac{1 }{\beta+1}\int q(\phi)^{\beta+1}\]
where \(l(x_{i},q(\phi))\) denotes the log-likelihood of observation \(x_{i}\). This loss assigns a weight to each observation based on the likelihood's magnitude, mitigating the influence of outliers on model training [13]. In the case of quantile regression, the loss can be simplified to:
\[L_{\beta\alpha}=\frac{1}{N}\sum\frac{\exp\left(-\beta\rho_{\alpha}\left((y_{i}-f_{\theta}(x_{i}))/\sigma\right)\right)-1}{\beta} \tag{4}\]
The hyperparameter \(\sigma\) can be assumed to be 1 for simplicity. This loss can be interpreted as an M-estimate. The hyperparameter \(\beta\) specifies the degree of robustness.
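Below is a hedged PyTorch sketch of this loss (names are ours). It is written directly as a quantity to be minimized, chosen so that the \(\beta\to 0\) limit recovers the ordinary pinball loss; large residuals saturate at \(1/\beta\), which damps the influence of outliers:

```python
import torch

def beta_pinball_loss(y, y_hat, alpha: float, beta: float, sigma: float = 1.0):
    """Robust quantile loss: bounded-influence transform of the pinball loss; beta -> 0 recovers it."""
    diff = y - y_hat
    rho = torch.maximum(alpha * diff, (alpha - 1.0) * diff)   # per-element pinball loss
    if beta == 0.0:
        return rho.mean()
    # Large residuals saturate at 1/beta instead of growing linearly, damping outlier samples.
    return ((1.0 - torch.exp(-beta * rho / sigma)) / beta).mean()
```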
### Quantile regression for diffusion models for regression tasks
Diffusion probabilistic models [16] are primarily composed of two essential processes: a forward process that gradually adds Gaussian noise to a data sample, and a reverse process that transforms Gaussian noise to an empirical data distribution through a gradual denoising process. Conditional diffusion models [17] incorporate input samples to condition the denoising process. Image translation problems can be modeled as conditional diffusion models, represented as: \(p(y|x)\), where \(y\) is the target image and \(x\) is the input conditioning image. In this paper, we deal with image translation problems where the input images \(x\) are T1-weighted brain MRI images and the targets \(y\) are the corresponding T2-weighted images. The diffusion model \(f_{\theta}(x)\) is trained to recover T2-weighted
images \(y\) from Gaussian noise \(\epsilon\sim\mathcal{N}(0,I)\) conditioned on the input T1-weighted images \(x\). For the details of the diffusion and conditional diffusion models, we refer to multiple works that provide a complete treatment of the mathematical formulations [18, 19, 17, 16, 20]. Instead of minimizing the mean squared error loss between the targets \(y\) and the estimates \(f_{\theta}(x)\) that yields a mean regression problem, the minimization problem in (1) is adopted to predict the \(\alpha\) quantiles of the target images. We show that in the presence of outliers in the training set, replacing the loss function in (1) with the proposed loss function in (4) yields a model that is minimally affected by the outliers, making it closer to a model trained only using inlier samples. The details of the conducted experiments are presented in the following section.
## 3 Experiments and Results
In this section, we evaluate our proposed method on a simple real dataset, a simulation-based dataset, and a medical image translation problem.
### Star cluster CYG OB1
First, we start with a simple dataset on the star cluster CYG OB1, which was analyzed in [11]. This dataset consists of 47 observations, of which four points with high leverage do not follow the trend of the rest of the data. It has one explanatory variable, the logarithm of the effective temperature at the surface of the stars. The dependent variable is the logarithm of the star's light intensity. The authors have shown the efficacy of least trimmed quantile regression compared to quantile regression using linear programming optimization to find the model's parameters. However, our goal is to investigate robustness in neural networks, where the solution is calculated using stochastic gradient descent (SGD). We estimate the 0.25, 0.5, and 0.75 quantiles with a neural network.
We implement the linear quantile regression problem with a one-layer neural network with linear activation. We then applied the three suggested robust methods TQR, RCP, and \(\beta\)-QR. We used GD with the ADAM optimizer to train the network. We chose the hyperparameters for each model (trimming percentage, \(\lambda\), and \(\beta\)) using a grid search. We used a batch size of 47 and performed 5000 iterations.
The results are shown in Fig. 1. For a quantitative comparison of the models, we calculated the Frobenius norm between each estimated quantile and the solution learned only using the inliers (Table 1 and Fig. 1). The \(\beta\)-QR method shows the best performance among the methods. For optimizing the RCP cost, we used the Alternating Direction Method of Multipliers (ADMM), in which we split the objective into \(\sum_{i}\rho_{\alpha}(y_{i}-f_{\theta}(x_{i})-\gamma_{i})\) and \(\lambda\sum_{i}|\gamma_{i}|\). We optimized the former using GD with the ADAM optimizer, and for the latter, we used a proximal method for the \(L_{1}\) objective:
\[\mathrm{prox}_{\lambda,l_{1}}(x_{i}):=\begin{cases}x_{i}-\lambda&\text{if }x_{i}>\lambda\\ x_{i}+\lambda&\text{if }x_{i}<-\lambda\\ 0&\text{otherwise.}\end{cases} \tag{5}\]
We iterated between optimization of the two components of the cost function until convergence.
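One way this split could be implemented is sketched below (our naming; the authors' exact schedule may differ): an ADAM step on the shifted pinball term for both \(\theta\) and \(\gamma\), followed by the closed-form soft-thresholding step of Eq. (5) applied to \(\gamma\).

```python
import torch

def soft_threshold(x: torch.Tensor, lam: float) -> torch.Tensor:
    """Proximal operator of lam * |.|_1, i.e. the soft-thresholding map of Eq. (5)."""
    return torch.sign(x) * torch.clamp(torch.abs(x) - lam, min=0.0)

def rcp_step(model, optimizer, gamma, x, y, alpha=0.5, lam=0.1):
    """One alternation: ADAM step on the shifted pinball term, then soft-thresholding of gamma."""
    # gamma is a torch.nn.Parameter (one entry per training sample) registered in `optimizer`
    diff = y - model(x) - gamma
    rho = torch.maximum(alpha * diff, (alpha - 1.0) * diff).mean()
    optimizer.zero_grad()
    rho.backward()
    optimizer.step()
    with torch.no_grad():
        gamma.copy_(soft_threshold(gamma, lam))   # proximal step for the lam * sum |gamma_i| term
    return rho.item()
```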
### Toy example for uncertainty estimation
Here we used the simple synthetic dataset introduced in [4], to which we added 1% outliers. Tagasovska and Lopez-Paz [4] applied simultaneous Quantile Regression (SQR) to estimate aleatoric uncertainty and suggested estimating all the quantile levels simultaneously. We modeled the data using a three-layer neural network with a ReLU activation function. We then applied the three robust methods TQR, RCP, and \(\beta\)-QR. We used SGD with the ADAM optimizer for training. We trained TQR and \(\beta\)-QR with a batch size of 128 and ran each for 500 epochs. RCP was trained for ten epochs and 500 steps of iterative optimization. We estimated the performance of the robust models for the 0.25, 0.5, and 0.75 quantiles. Our results show that \(\beta\)-QR estimates robust quantiles, with results comparable to TQR (Fig. 2).
### Quantile regression for uncertainty estimation in diffusion models
In this section, we present an experiment aimed at showcasing the effectiveness of our proposed robust quantile regression
Figure 1: Robust linear quantile regression using TQR, RCP, and \(\beta\)-QR for the star cluster CYG OB1 dataset.
Figure 2: Robust non-linear quantile regression using TQR, RCP, \(\beta\)-QR using a simple neural network for a toy example.
approach in a medical imaging task. Specifically, we focus on addressing the outlier problem in an image translation task, where we employ a diffusion model to predict various quantiles of T2-weighted brain MRI images based on input T1-weighted images.
Our training dataset consists of two distinct groups of subjects: (i) lesion-free subjects from the Cam-CAN dataset (inliers) [21], representing individuals without any brain lesions, and (ii) lesion subjects sourced from the BRATS dataset [22], who do have brain lesions, thereby introducing outliers into the dataset. Training the diffusion model solely on the Cam-CAN data and using the loss function in (1) yields a reliable model that successfully captures the relationship between T1 and the quantiles of T2 images. However, introducing the "outlier" lesioned brain images from the BRATS dataset into the training set and using the same loss function significantly perturbs the training process, resulting in a notably less reliable model and corrupted quantiles. To mitigate the adverse effects of the outlier samples and restore model reliability, we integrate the proposed robust loss function presented in (4) into the training process. This loss function is designed to down-weight the influence of outliers during training, effectively bringing the model's performance closer to that of the model trained solely on clean data from the Cam-CAN dataset. The robust loss function was employed to train the model on the combined Cam-CAN and BRATS datasets. The results of this experiment are illustrated in Fig. 3, providing a qualitative comparison of the trained models. Table 2 shows our quantitative results. These results demonstrate that the inclusion of the robust loss function during model training significantly enhances the model's robustness to outliers, resulting in a reliable model that closely approximates the performance of the model trained exclusively on clean data. We estimated the 0.05, 0.5, and 0.95 quantiles for this dataset. To compare the robust and non-robust models, we calculated: (1) the MSE between the estimated quantiles and the quantiles predicted by the outlier-free model; and (2) the MSE between the predicted median and the ground-truth T2 image. We tuned the \(\beta\) parameter using a validation set.
## 4 Conclusion
In this paper we introduced a robust quantile regression approach designed to enhance the reliability of deep learning models in the presence of outliers. Our method leverages concepts from robust divergences to down-weight outlier influence during training. We demonstrated the effectiveness of our approach on a simple yet real dataset, showcasing its ability to improve quantile regression accuracy compared to existing robust quantile regression methods. Extending the application to medical imaging, and demonstrating its practical utility, the proposed approach proved effective in mitigating outlier effects on training a diffusion model to translate MRI brain images from a T1-weighted to T2-weighted modality, bringing the performance closer to that of the model trained solely on clean data. The presented findings highlight the practical value of the proposed method, particularly in training scenarios compromised by outliers.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Method & CYG-Q1 & CYG-Q2 & CYG-Q3 \\ \hline TQR & 1.04 & 1.12 & 1.12 \\ \hline RCP & 3.43 & 4.82 & 3.74 \\ \hline \(\beta\)-QR & 0.93 & 0.77 & 0.85 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of the performance of TQR, RCP, and \(\beta\)-QR. Each entry shows the Frobenius norm of the difference between the estimated quantiles and their (outlier-free) ground truth for the star cluster CYG OB1 dataset.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Method & Prediction error & Quantile error \\ \hline Outlier free & 0.0086 & - \\ \hline Baseline & 0.0132 & 0.0097 \\ \hline \(\beta\)-QR & 0.0074 & 0.0013 \\ \hline TQR & 0.0107 & 0.0015 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of \(\beta\)-QR and TQR with the outlier-free and baseline models. The prediction error is the MSE between the ground-truth T2 image and the median predicted by each model; the quantile error is the MSE between the estimated quantiles and those of the outlier-free model.
Figure 3: Estimating T2 MRI QL(0.05),QH(0.95),QM(0.5) for diffusion models from T1 MRI. Comparing the estimated quantiles using the non-robust and robust (\(\beta\)-QR) model with the outlier free model. |
2309.11516 | Private Matrix Factorization with Public Item Features | We consider the problem of training private recommendation models with access
to public item features. Training with Differential Privacy (DP) offers strong
privacy guarantees, at the expense of loss in recommendation quality. We show
that incorporating public item features during training can help mitigate this
loss in quality. We propose a general approach based on collective matrix
factorization (CMF), that works by simultaneously factorizing two matrices: the
user feedback matrix (representing sensitive data) and an item feature matrix
that encodes publicly available (non-sensitive) item information.
The method is conceptually simple, easy to tune, and highly scalable. It can
be applied to different types of public item data, including: (1) categorical
item features; (2) item-item similarities learned from public sources; and (3)
publicly available user feedback. Furthermore, these data modalities can be
collectively utilized to fully leverage public data.
Evaluating our method on a standard DP recommendation benchmark, we find that
using public item features significantly narrows the quality gap between
private models and their non-private counterparts. As privacy constraints
become more stringent, models rely more heavily on public side features for
recommendation. This results in a smooth transition from collaborative
filtering to item-based contextual recommendations. | Mihaela Curmei, Walid Krichene, Li Zhang, Mukund Sundararajan | 2023-09-17T11:13:52Z | http://arxiv.org/abs/2309.11516v1 | # Private Matrix Factorization with Public Item Features
###### Abstract
We consider the problem of training private recommendation models with access to public item features. Training with Differential Privacy (DP) offers strong privacy guarantees, at the expense of loss in recommendation quality. We show that incorporating public item features during training can help mitigate this loss in quality. We propose a general approach based on collective matrix factorization (CMF), that works by simultaneously factorizing two matrices: the user feedback matrix (representing sensitive data) and an item feature matrix that encodes publicly available (non-sensitive) item information.
The method is conceptually simple, easy to tune, and highly scalable. It can be applied to different types of public item data, including: (1) categorical item features; (2) item-item similarities learned from public sources; and (3) publicly available user feedback. Furthermore, these data modalities can be collectively utilized to fully leverage public data.
Evaluating our method on a standard DP recommendation benchmark, we find that using public item features significantly narrows the quality gap between private models and their non-private counterparts. As privacy constraints become more stringent, models rely more heavily on public side features for recommendation. This results in a smooth transition from collaborative filtering to item-based contextual recommendations.
recommendation system, differential privacy, side features, matrix factorization
## 1 Introduction
Recommender systems trained on private user feedback present the risk of leaking sensitive information about users' activity or preferences (Zhang et al., 2021; Calandrino et al., 2011), and thus, providing formal privacy protections is increasingly important. Differential privacy (DP) (Dwork et al., 2014) has emerged as the de facto standard for formalizing and quantifying privacy protections. These DP guarantees often come at the expense of some degradation in model quality, as DP training involves adding noise to quantities derived from user data (for example, adding noise to the gradients (Abadi et al., 2016)). Recent progress in private recommendation algorithms (Jain et al., 2018; Chien et al., 2021; Krichene et al., 2023) has significantly improved the privacy/utility trade-offs, but there still remains a large quality gap between private models and their non-private counterparts.
It was recently shown (Krichene et al., 2023; Chien et al., 2021) that these quality losses are due to degradation in item representation as a result of the noise added to ensure DP (particularly for tail items, which have fewer ratings and are more impacted by noise). Making item embeddings robust to the added noise may be the key to narrowing the quality gap between private and non-private models. One promising direction is to utilize public item features to improve item representation while maintaining strict user-privacy guarantees.
In this work, we investigate methods to utilize such item features to improve the quality of privacy-preserving recommenders. We take inspiration from the literature on Collective Matrix Factorization (CMF) (Singh and Gordon, 2008), which learns shared embeddings from collections of related matrices, rather than a single matrix. Throughout the paper, we will distinguish between _private user feedback_, which is sensitive and needs to be protected, and _public item features_, which represent non-sensitive, publicly available information that does not need privacy protection.
### Contributions
* **Formulation**: We model both public item features and sensitive user-item feedback as matrices. Two low-rank factorizations are learned simultaneously. One factorization approximates the user feedback matrix and the other approximates the item feature matrix. Importantly, the item representation is shared between the two factorizations, which enables item embeddings to benefit from public features. This setup is versatile as it can encode various modalities of public information. For instance, features can represent public item metadata. The setup can also encode pairwise item similarity derived from public data, where the 'features' correspond to items and represent similarity scores. Finally, we can encode user feedback, for instance, from users who choose to make their ratings or reviews publicly available, here, the 'features' are users and represent the affinity between a user and an item.
* **Method**: To provide DP guarantees, we propose Differentially Private Collective Matrix Factorization (DP-CMF), that extends the recently proposed DPALS algorithm (Chien et al., 2021) to the CMF formulation. DP-CMF works by adding noise to the sufficient statistics derived from sensitive data, while using exact statistics derived from public data.
* **Evaluation**: We evaluate DP-CMF on the same private recommendation benchmark used in (Jain et al., 2018; Chien et al., 2021; Krichene et al., 2023). We find that incorporating public item features significantly narrows the quality gap between private and non-private models, particularly so when privacy requirements are high. This study offers a promising direction for improving privacy-utility trade-offs in recommender systems by leveraging public data sources while preserving user privacy.
### Related Work
_Differential privacy in recommender systems._ The importance of privacy in recommender systems has been recognized for a long time (Narayanan and Shmatikov, 2008), and some early attempts were made (McSherry and Mironov, 2009; Kapralov and Talwar, 2013) to develop differentially private algorithms that offer strong protection, but this usually required significant losses in model quality. Recent work (Jain et al., 2018; Chien et al., 2021; Krichene et al., 2023) developed new algorithms that narrowed this quality gap, by using alternating minimization (Chien et al., 2021; Jain et al., 2021), and developing methods to adaptively allocate privacy budgets (Krichene et al., 2023). Our proposed algorithm builds on these recent improvements, by extending the DPALS technique (Chien et al., 2021) to incorporate public item data. While utilizing public data to improve DP models has been explored in other domains (as described below), our work is the first to carry out a systematic study for private recommenders.
_Using side features in recommenders._ User and item side information are commonly employed to address the "cold-start" problem for users and items with limited or no interaction data (Gantner et al., 2010; Saveski and Mantrach, 2014; Kula, 2015; Deldjoo et al., 2019; Cortes, 2018). Furthermore, side information can tackle fairness concerns and mitigate popularity bias in recommendations (Shi et al., 2014). Side features can be integrated into MF models through Collective Matrix Factorization (CMF) (Singh and Gordon, 2008; Shi et al., 2014; Dong et al., 2017; Liang et al., 2016; Jenatton et al., 2012), also known as Joint Matrix Factorization (Zhu et al., 2007), which originated in the Statistical Relational Learning literature (Getoor and Taskar, 2007). Our work leverages the CMF approach and extends it to private recommendations. While in recommender systems, both user and item side information can be useful, in the privacy context, it is more natural to consider only item side information, as it generally represents non-sensitive data, while user side information (such as demographic features) is sensitive and would require privacy protection. Our paper will hence focus on item features.
_Using public data to improve private models._ Leveraging public information to enhance privacy/utility trade-offs has been explored in various contexts. Existing approaches fall in two broad categories. The first is public pre-training followed by private fine-tuning. Empirically, this approach is effective in domains with abundant public data, such as
natural language processing (Li et al., 2021; Yu et al., 2021; Behnia et al., 2022) and vision (Golatkar et al., 2022; Xu et al., 2022). The second is to directly incorporate public data into the private learning process. These techniques are based either on projecting private gradients onto a low-dimensional subspace estimated from public gradients (Kairouz et al., 2021; Yu et al., 2021; Zhou et al., 2021), or utilizing public data to modify the objective function (Bassily et al., 2020; Amid et al., 2022; Li et al., 2022). For an extensive review, see (Cummings et al., 2023). These approaches often make the restrictive assumption that public and private data come from the same distribution (Kairouz et al., 2021; Amid et al., 2022; Wang and Zhou, 2020; Zhou et al., 2021) (so that public and private gradients lie on the same subspace). Our approach can work even if the public data comes from a different distribution: access to item metadata can be informative about item similarity, even if this data is of an entirely different nature than user feedback. Another notable difference is that existing work focuses on _gradient-based_ methods, while ours is, to the best of our knowledge, the first to explore the benefits of public data on _second-order_ methods (Alternating Least Squares).
## 2 Preliminaries
### Setup & Notation
Throughout, \(\mathbf{M}\in\mathbb{R}^{n\times m}\) denotes the user-item feedback matrix, and \(\mathbf{S}\in\mathbb{R}^{s\times m}\) the item-feature matrix, where \(n,m,s\) are the numbers of users, items, and features, respectively. We denote by \(\Omega\) a subset of \([n]\times[m]\) representing the indices of the observed entries in \(\mathbf{M}\). We define \(\Omega_{i}:=\{j\in[m]:(i,j)\in\Omega\}\) as the set of items rated by user \(i\), and \(\Omega_{:j}:=\{i\in[n]:(i,j)\in\Omega\}\) as the set of users that rated item \(j\). Further, we denote by \(\Omega^{\prime}\) the subset of \([s]\times[m]\) representing the observed entries of the item-feature matrix; analogously, \(\Omega^{\prime}_{k}\) denotes the set of items having feature \(k\), and \(\Omega^{\prime}_{:j}\) the set of features of item \(j\). For instance, \((k,j)\in\Omega^{\prime}\) if item \(j\) has the public feature token \(k\) (e.g., \(j\equiv\) Titanic, \(k\equiv\) director: James Cameron). The goal of CMF is to learn two low-rank factorizations: \(\mathbf{M}_{\Omega}\approx\mathbf{U}\mathbf{V}^{\top}\), which approximates the user feedback matrix, and \(\mathbf{S}_{\Omega^{\prime}}\approx\mathbf{F}\mathbf{V}^{\top}\), which approximates the item feature matrix, where \(\mathbf{U}\in\mathbb{R}^{n\times d}\), \(\mathbf{V}\in\mathbb{R}^{m\times d}\), and \(\mathbf{F}\in\mathbb{R}^{s\times d}\) are \(d\)-dimensional embeddings corresponding to users, items, and features, respectively. The notation \(\mathbf{M}_{\Omega}\) means that approximate equality is desired only with respect to the entries \(\mathbf{M}_{ij}\) for \((i,j)\in\Omega\).
For a vector \(\mathbf{v}\in\mathbb{R}^{d}\), \(\|\cdot\|\) denotes the usual Euclidean \(\ell^{2}\) norm. For two vectors \(\mathbf{u},\mathbf{v}\in\mathbb{R}^{d}\), \(\langle\mathbf{u},\mathbf{v}\rangle\) and \(\mathbf{u}\otimes\mathbf{v}\) denote the inner and the outer product, respectively. By \(\Pi_{\text{PSD}}(\cdot)\), we denote the projection operation on the set of positive semidefinite matrices. By \(\|\cdot\|_{frob}\), we denote the Frobenius norm of a matrix. For a matrix \(\mathbf{U}\), \(\mathbf{u}_{i}\) specifies the \(i\)-th row. Finally, we use \(\mathcal{N}^{d}\) to denote the standard multivariate normal distribution and \(\mathcal{N}^{d\times d}\) to denote the distribution of symmetric \(d\times d\) matrices whose upper triangular entries are i.i.d. standard normal.
### Privacy considerations
Following (Jain et al., 2018; Chien et al., 2021; Jain et al., 2021), we adopt the notion of _user-level_ DP (Dwork et al., 2014), where the goal is to protect _all of the ratings_ from a user. Intuitively, the user-level DP guarantee limits the impact that any user can have on the algorithm's output. More formally, let \(D=\{d_{1},d_{2},\ldots d_{n}\}\) be a set of inputs corresponding to the \(n\) users, and let \(\mathcal{A}:\mathcal{D}^{n}\rightarrow\mathcal{Y}\) be a randomized algorithm that produces an output \(y\in\mathcal{Y}\). In our case, \(d_{i}\) are the ratings associated with user \(i\) and \(y\) is the set of all item embeddings \(\mathbf{V}\) and feature embeddings \(\mathbf{F}\). Denote by \(D_{-i}\) the inputs for all users except \(i\). Two sets of inputs \(D\), \(D^{\prime}\) are said to be _adjacent_ if they differ in at most one user; i.e. \(D=\{d_{i},D_{-i}\}\) and \(D^{\prime}=\{d^{\prime}_{i},D_{-i}\}\).
**Definition 2.1** (User-level Differential Privacy (Kearns et al., 2014)).: An algorithm \(\mathcal{A}\) satisfies user-level \((\varepsilon,\delta)\)-DP if for all adjacent data sets \(D\) and \(D^{\prime}\), and any measurable set of outputs \(Y\subset\mathcal{Y}\), the following holds: \(\Pr(\mathcal{A}(D)\in Y)\leq e^{\varepsilon}\Pr(\mathcal{A}(D^{\prime})\in Y )+\delta\).
Intuitively, \((\varepsilon,\delta)\) are privacy parameters that control the "indistinguishability" between the outputs of the algorithm when it processes two datasets that differ in a single user's data. The smaller the values of \(\varepsilon\) and \(\delta\), the stronger the privacy guarantee provided by the algorithm. The parameter \(\delta\) is typically taken to be \(\leq 1/n\) (\(n\) is the number of users). The values of \(\varepsilon\) depend on the domain, studies typically report values ranging from \(\varepsilon=0.1\) (high privacy regime) to \(\varepsilon=10\).
_Remark 2.2_ (User-level vs. rating-level Differential Privacy).: Some prior techniques (Dwork et al., 2014; Kapralov and Talwar, 2013) provide rating-level DP guarantees, meaning that neighboring datasets are allowed to differ in at most a single rating. In other words, rating-level DP limits risk of leakage from each individual rating, but this offers a
much weaker protection at the user-level, (since users typically have many ratings, and the leakage risk compounds with the number of ratings). In contrast, user-level DP (Kearns et al., 2014; Jain et al., 2018) ensures that a user's full set of ratings is protected. This makes user-level DP both more challenging to accomplish, but also more practically significant and relevant in terms of privacy protection of a user's data.
## 3 Differentially Private Collective Matrix Factorization
We now introduce the DP-CMF algorithm for private recommendations with public item features. We first recall the Alternating Least Squares (ALS) algorithm for (non-private) CMF, then introduce the necessary modifications to satisfy user-level DP.
### ALS for (non-private) CMF
CMF jointly optimizes the following weighted loss function to find low-rank approximations of \(\mathbf{M}_{\Omega}\approx\mathbf{U}\mathbf{V}^{\top}\) and \(\mathbf{S}_{\Omega^{\prime}}\approx\mathbf{F}\mathbf{V}^{\top}\).
\[\begin{split}\mathcal{L}(\mathbf{U},\mathbf{V},\mathbf{F})=& \sum_{(i,j)\in\Omega}\mathbf{W}_{ij}\left(\langle\mathbf{u}_{i},\mathbf{v}_{j} \rangle-\mathbf{M}_{ij}\right)^{2}+\alpha\sum_{(k,j)\in\Omega^{\prime}}\left( \langle\mathbf{f}_{k},\mathbf{v}_{j}\rangle-\mathbf{S}_{kj}\right)^{2}+\\ &+\lambda\left(\|\mathbf{U}\|_{frob}^{2}+\|\mathbf{V}\|_{frob}^{2}\right) +\lambda^{\prime}\|\mathbf{F}\|_{frob}^{2},\end{split} \tag{1}\]
where \(\mathbf{W}_{ij}\) is the weight associated with the contribution of user \(i\)'s rating of item \(j\) to the loss function, \(\lambda\) is the regularization weight for user and item embeddings and \(\lambda^{\prime}\) is the regularization weight for feature embeddings. Finally, \(\alpha\) is a hyper-parameter that controls the relative importance of fitting the public versus private data. A small \(\alpha\) means that item embeddings \(\mathbf{V}\) will primarily depend on user-item feedback; whereas a large \(\alpha\) means that item embeddings will depend more on the item-feature matrix. Although the loss is not jointly convex, for fixed item embeddings \(\mathbf{V}\), it is a convex quadratic with respect to \((\mathbf{U},~{}\mathbf{F})\) and vice-versa. ALS takes advantage of this fact, and alternates between updating \((\mathbf{U},\mathbf{F})\) and updating \(\mathbf{V}\), as follows \(\forall i\in[n],\forall k\in[s]\) and \(\forall j\in[m]\), respectively :
\[\mathbf{u}_{i}^{t}\leftarrow\arg\min_{\mathbf{u}}\sum_{j\in\Omega_{i}}\mathbf{W}_{ij}\left(\left\langle\mathbf{u},\mathbf{v}_{j}^{t-1}\right\rangle-\mathbf{M}_{ij}\right)^{2}+\lambda\|\mathbf{u}\|_{2}^{2}=\left[\sum_{j\in\Omega_{i}}\mathbf{W}_{ij}\,\mathbf{v}_{j}^{t-1}\otimes\mathbf{v}_{j}^{t-1}+\lambda I\right]^{-1}\left[\sum_{j\in\Omega_{i}}\mathbf{W}_{ij}\mathbf{M}_{ij}\mathbf{v}_{j}^{t-1}\right]; \tag{2}\]
\[\mathbf{f}_{k}^{t}\leftarrow\arg\min_{\mathbf{f}}\,\alpha\sum_{j\in\Omega_{k}^{\prime}}\left(\langle\mathbf{f},\mathbf{v}_{j}^{t-1}\rangle-\mathbf{S}_{kj}\right)^{2}+\lambda^{\prime}\|\mathbf{f}\|_{2}^{2}=\left[\alpha\sum_{j\in\Omega_{k}^{\prime}}\mathbf{v}_{j}^{t-1}\otimes\mathbf{v}_{j}^{t-1}+\lambda^{\prime}I\right]^{-1}\left[\alpha\sum_{j\in\Omega_{k}^{\prime}}\mathbf{S}_{kj}\mathbf{v}_{j}^{t-1}\right]; \tag{3}\]
\[\mathbf{v}_{j}^{t}\leftarrow\arg\min_{\mathbf{v}}\sum_{i\in\Omega_{:j}}\mathbf{W}_{ij}\left(\left\langle\mathbf{u}_{i}^{t},\mathbf{v}\right\rangle-\mathbf{M}_{ij}\right)^{2}+\alpha\sum_{k\in\Omega_{:j}^{\prime}}\left(\langle\mathbf{f}_{k}^{t},\mathbf{v}\rangle-\mathbf{S}_{kj}\right)^{2}+\lambda\|\mathbf{v}\|_{2}^{2}=\left[A_{j}^{t}\right]^{-1}\left[b_{j}^{t}\right];\]
where \(A_{j}^{t}:=\sum_{i\in\Omega_{:j}}\mathbf{W}_{ij}\,\mathbf{u}_{i}^{t}\otimes\mathbf{u}_{i}^{t}+\alpha\sum_{k\in\Omega_{:j}^{\prime}}\mathbf{f}_{k}^{t}\otimes\mathbf{f}_{k}^{t}+\lambda I\) and \(b_{j}^{t}:=\sum_{i\in\Omega_{:j}}\mathbf{W}_{ij}\mathbf{M}_{ij}\mathbf{u}_{i}^{t}+\alpha\sum_{k\in\Omega_{:j}^{\prime}}\mathbf{S}_{kj}\mathbf{f}_{k}^{t}\).
The ALS updates for user and feature embeddings (Eqs. (2) and (3)) are decoupled and can happen simultaneously. In essence, item features (e.g. genre:comedy) can be treated as "fictitious users".
_Remark 3.1_ (Implicit feedback and binary features).: When the user feedback is implicit (e.g. clicks, views), or when the public item features are categorical, we use the implicit ALS formulation (Hu et al., 2008) that penalizes non-zero predictions outside of the observation sets \(\Omega\) and \(\Omega^{\prime}\), by adding terms \(\|\mathbf{U}\mathbf{V}^{\top}\|_{frob}\) and \(\|\mathbf{F}\mathbf{V}^{\top}\|_{frob}\) to the optimization objective in Eq. (1). This results in changes to the update equations that are standard in the literature.
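For concreteness, a minimal NumPy sketch of one (non-private) item-embedding update following the closed form above is given below; `rated_by[j]` and `feats_of[j]` stand for \(\Omega_{:j}\) and \(\Omega^{\prime}_{:j}\), and all names are ours:

```python
import numpy as np

def update_item_embedding(j, U, F, M, S, W, rated_by, feats_of, alpha, lam):
    """Closed-form update of item embedding v_j from user feedback and public item features."""
    d = U.shape[1]
    A = lam * np.eye(d)
    b = np.zeros(d)
    for i in rated_by[j]:                    # users who rated item j (sensitive feedback)
        A += W[i, j] * np.outer(U[i], U[i])
        b += W[i, j] * M[i, j] * U[i]
    for k in feats_of[j]:                    # public features attached to item j
        A += alpha * np.outer(F[k], F[k])
        b += alpha * S[k, j] * F[k]
    return np.linalg.solve(A, b)
```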
### Differentially Private CMF
To ensure user-level DP, we introduce DP-CMF (see Algorithm 1), which extends the DPALS procedure (Chien et al., 2021) to CMF with public features. DP-CMF computes and releases the item and feature embeddings \((\mathbf{V}^{t},\mathbf{F}^{t})\) with DP protection on a trusted centralized platform (server-side). Meanwhile, each user \(i\) independently updates their
embedding \(\mathbf{u}_{i}^{t}\) on their own device (client-side). As a result, the user embedding update (step 2 of Algorithm 1) is identical to the non-private update in Eq. (2), with the additional assumption that the update is unweighted (i.e., \(W_{ij}=1\)). Furthermore, the feature embedding update (step 3) only depends on \(\mathbf{S}\) (public data) and \(\mathbf{V}^{t-1}\) (which is DP-protected), hence by the DP post-processing property, it requires no additional noise and can be computed as in Eq. (3). On the other hand, the item embedding update (step 4) depends on private data \(\mathbf{M},\mathbf{U}^{t}\), and must be modified to guarantee DP. This requires two modifications: the first is to limit the impact of each user on the item embeddings; this is done by clipping the magnitude of individual ratings (step 6), clipping the user embedding norm (step 7), and weighting the ratings of each user with appropriately chosen weights \(\mathbf{W}\) (step 9). The second is to add noise to the sufficient statistics (steps 8-11) via the Gaussian mechanism (Vu and Slavkovic, 2009; Foulds et al., 2016; Wang, 2018). Note that the statistics \(\hat{\mathbf{A}}_{j},\hat{\mathbf{b}}_{j}\) (step 9) depend on sensitive data and are protected via noise, while the statistics \(\mathbf{A}_{j}^{\prime},\mathbf{b}_{j}^{\prime}\) (step 10) depend only on public data and are computed exactly.
_Remark 3.2_.: The item embedding update intuitively highlights the potential benefit of using item features: the item embedding is the solution of a linear system \(A\mathbf{x}=b\) with \(A=\hat{\mathbf{A}}_{j}+\alpha\mathbf{A}_{j}^{\prime}\), and \(b=\hat{\mathbf{b}}_{j}+\alpha\mathbf{b}_{j}^{\prime}\), where \(\hat{\mathbf{A}}_{j},\hat{\mathbf{b}}_{j}\) are noisy quantities derived from user feedback, while \(\mathbf{A}_{j}^{\prime},\mathbf{b}_{j}^{\prime}\) are derived from public features and are exact. A larger \(\alpha\) makes the solution more robust to the noise, but favors fitting the item features. When the item features are informative (e.g., they accurately capture item-item similarity), this can improve the item representation compared to only using noisy user feedback \((\alpha=0)\).
_Remark 3.3_ (Computational cost of DP-CMF).: One step of DPALS (Chien et al., 2021) consists of forming the sufficient statistics (a cost of \(O(|\Omega|d^{2})\)), then solving \(m+n\) linear systems (a cost of \(O((m+n)d^{3})\)). In DP-CMF (Algorithm 1), the sufficient statistics computation cost increases to \(O((|\Omega|+|\Omega^{\prime}|)d^{2})\), and the linear system cost increases to \(O((m+n+s)d^{3})\). Hence, the added cost of using public features remains reasonable if \(|\Omega^{\prime}|\) is comparable in size to \(|\Omega|\), and the total number of features \(s\) is smaller than or comparable to \(m+n\).
_Remark 3.4_ (Threat model).: Observe that in this model, the recommendation platform broadcasts the item embeddings \(\mathbf{V}\) and the feature embeddings \(\mathbf{F}\). The user embeddings \(\mathbf{U}\) are never published. Rather, each user \(i\) can compute her own embedding \(\mathbf{u}_{i}\) (by solving a least-squares problem involving her own ratings along with the published
item embeddings \(\mathbf{V}\), see Eq. (1)), then use it to generate recommendations by computing scores \(\mathbf{u}_{i}^{\top}\mathbf{V}\). This captures a very strong notion of privacy, as it protects user \(i\) even against potential collusion of the remaining \(n-1\) users (i.e. an adversary with access to \(\mathbf{V},\mathbf{F}\) and \(D_{-i}\)), while allowing the user to take full advantage of her data to generate recommendations. Importantly, the platform hosting the recommendation system is a trusted entity (it has access to the raw user ratings and user embeddings when computing the noisy sufficient statistics). The goal is to protect against privacy attacks from malicious users or external agents, not the recommender system itself. However, if the recommender itself is considered untrusted, these algorithms (DP-ALS and DP-CMF) can potentially be implemented using secure aggregation algorithms (Bonawitz et al., 2017), although this comes at an increased computational cost.
**Proposition 3.1** (Privacy Guarantee).: _For all \(\varepsilon>0,\delta\in(0,1)\), if the inputs to Algorithm 1 satisfy \(\sum_{j\in\Omega_{i}}\mathbf{W}_{ij}^{2}\leq\frac{\varepsilon^{2}}{47(\log(1/ \delta)+\varepsilon)}\forall i\in[n]\), then the algorithm is \((\varepsilon,\delta)\) user-level DP._
_Remark 3.5_.: The weights \(\mathbf{W}\) are used to control a user's impact on the model. The simplest way to generate weights that satisfy the condition of the proposition is to assign a uniform weight to each user. Specifically, given a desired privacy level \((\varepsilon,\delta)\), let \(\beta=\frac{\varepsilon^{2}}{47(\log(1/\delta)+\varepsilon)}\); then simply setting \(\mathbf{W}_{ij}=\sqrt{\beta/|\Omega_{i}|}\) satisfies the inequality (notice that a user with more ratings, i.e. a larger \(|\Omega_{i}|\), will have lower weights, to limit the solution's sensitivity to that user's data). A more sophisticated method was developed in (Krichene et al., 2023) that adapts to the item frequencies by putting more weight on infrequent items. We use the latter in our experiments.
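For illustration, a short sketch of the uniform weighting scheme from this remark; this is the simple baseline described above, not the frequency-adaptive scheme of (Krichene et al., 2023) used in the experiments.

```python
import numpy as np

def uniform_dp_weights(num_ratings_per_user, eps, delta):
    """W_ij = sqrt(beta / |Omega_i|), so that sum_j W_ij^2 = beta for every user i."""
    beta = eps**2 / (47.0 * (np.log(1.0 / delta) + eps))
    counts = np.asarray(num_ratings_per_user, dtype=float)
    # one weight per user, shared by all of that user's ratings
    return np.sqrt(beta / np.maximum(counts, 1.0))

# e.g. uniform_dp_weights([12, 340, 7], eps=1.0, delta=1e-5)
```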
Proof.: First, we argue that it suffices to prove the result for \(\alpha=0\). Indeed, by step 10, the statistics \(\mathbf{A}_{j}^{\prime},\mathbf{b}_{j}^{\prime}\) only depend on the public feature matrix \(\mathbf{S}\) and on the feature embeddings \(\mathbf{F}^{t}\), which in turn only depends on \(\mathbf{S}\) and \(\mathbf{V}^{t-1}\) (by step 3). Since \(\mathbf{V}^{t-1}\) is released with DP protection, there is no additional privacy cost for computing \(\mathbf{A}_{j}^{\prime},\mathbf{b}_{j}^{\prime}\) (by the post-processing property of DP (Dwork et al., 2014a, Proposition 2.1)). Therefore the privacy guarantees of the algorithm with \(\alpha=0\) and \(\alpha>0\) are identical. When \(\alpha=0\) (no features), the algorithm becomes identical to DPALS, and the guarantee is proved in (Krichene et al., 2023, Theorem 3.3).
## 4 Empirical Evaluation
We evaluate DP-CMF on a standard DP recommendations benchmark used in (Jain et al., 2018; Chien et al., 2021; Krichene et al., 2023), based on the MovieLens datasets (Harper and Konstan, 2015). The benchmark considers a rating prediction task on the MovieLens 10M (ML10M) dataset, which records over \(10\) million ratings ranging from 1 to 5 for \(n=69878\) users and \(m=10677\) movies. For the feature-item matrix we consider 3 sources of public data:
Item metadata.We construct a categorical feature dataset by cross-referencing movie IMDb identifiers with data available on Wikidata.org. For each movie in the ML10M dataset, we collect genre, topic, and cast information. We construct a binary feature matrix \(\mathbf{S}\), where each row corresponds to a feature token (e.g., the first row is labeled as director:James Cameron, and non-zero entries in this row correspond to movies directed by James Cameron). The metadata dataset comprises \(s=12637\) feature tokens with an overall feature density of \(0.13\%\).
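A sketch of how such a binary feature-token matrix could be assembled from per-movie metadata; the token strings and the data layout are illustrative, not the exact Wikidata pipeline.

```python
from scipy.sparse import csr_matrix

def build_feature_matrix(movie_tokens, num_movies):
    """movie_tokens: dict {movie index j: iterable of tokens such as 'director:James Cameron'}.
    Returns a binary (s x m) sparse matrix S with one row per feature token."""
    token_to_row = {}
    rows, cols = [], []
    for j, tokens in movie_tokens.items():
        for tok in tokens:
            k = token_to_row.setdefault(tok, len(token_to_row))
            rows.append(k)
            cols.append(j)
    data = [1.0] * len(rows)
    S = csr_matrix((data, (rows, cols)), shape=(len(token_to_row), num_movies))
    return S, token_to_row
```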
Item to item similarity scores.We create an item-to-item similarity dataset from a non-private recommendation model trained on a variant of the ML20M dataset, as proposed in (Liang et al., 2018). This dataset is commonly used for benchmarking recommender performance on implicit feedback, as the training data is a binary matrix corresponding to ratings \(\geq 4\). We first train a Matrix Factorization model on the dataset and use the resulting item embeddings to identify, for each movie, the \(k\) most similar movies based on similarity scores (we experiment with inner product and cosine similarity). Each row in the feature matrix \(\mathbf{S}\) corresponds to a movie, with non-zero values \(\mathbf{S}_{ij}\) indicating similarity between movies \(i\) and \(j\). Finally, we consider both actual and binarized scores in the \(\mathbf{S}\) feature matrix.
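The following minimal sketch derives such an item-to-item feature matrix from item embeddings of a pretrained (non-private) model; `item_emb`, `k`, and the dense output are assumptions for illustration.

```python
import numpy as np

def topk_item_similarity(item_emb, k=100, metric="dot", binarize=False):
    """Build an (m x m) feature matrix where row j holds the k most similar items to item j."""
    X = np.asarray(item_emb, dtype=float)
    if metric == "cosine":
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
    scores = X @ X.T
    np.fill_diagonal(scores, -np.inf)            # exclude the item itself
    S = np.zeros_like(scores)
    for j in range(scores.shape[0]):
        nn = np.argpartition(-scores[j], k)[:k]  # indices of the k largest scores
        S[j, nn] = 1.0 if binarize else scores[j, nn]
    return S
```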
Public user ratings.We use the ML20M dataset and select the observations corresponding to 70152 users that do not overlap with the ML10M users. We consider this set of user-item feedback as public item side data. More specifically, each user whose data is considered public plays the same functional role as a feature token. We observe that in this case, the public data is of the same semantic type as the private data. This setup is closest to the common assumption in the literature, that private and public data come from the same or similar distributions. In experiments, we use subsets of public users of various sizes, ranging from very small values (\(s=100\)) to the full available data (\(s=70152\)). Finally, we consider both raw ratings (from the original ML20M) as well as binarized ratings.
### Experimental procedure
We follow the procedure of (Lee et al., 2016) to partition the ML10M into train, test and validation datasets. For the privacy parameters, following (Jain et al., 2018; Chien et al., 2021; Krichene et al., 2023), we consider a range of \(\varepsilon=[1,5,10,20]\) and fix \(\delta=10^{-5}\). For each privacy setting we use the optimal hyper-parameters, such as the number of ALS iterations \(T\), regularization weight \(\lambda\), and clipping norm \(\Gamma_{U}\), tuned by (Krichene et al., 2023) for the same task without side features. With these pre-tuned hyper-parameters, for each \(\varepsilon\) and each public data source, we tune only4 the hyper-parameters corresponding to side features: the side feature weight \(\alpha\) and side feature regularization \(\lambda^{\prime}\). We select hyper-parameters based on performance on the validation set, and finally we report performance on the test set. Performance is measured using the Root Mean Squared Error (RMSE) between true and predicted ratings: \(RMSE(\mathbf{U},\mathbf{V})=\sqrt{\frac{1}{|\Omega|}\sum_{(i,j)\in\Omega}(\mathbf{M}_{ij}-\langle\mathbf{u}_{i},\mathbf{v}_{j}\rangle)^{2}}\). It is important to note that, although the training loss considers side feature-item data and the learned embeddings for features play a crucial role in updating item embeddings, the final performance is measured solely based on rating data (user feedback). In addition, we report performance metrics sliced by item popularity to gain a deeper understanding of how public item information impacts the quality of private models across frequent and infrequent items.
Footnote 4: Retuning the full set of hyper-parameters may lead to even stronger performance, but our experiments show that tuning only the two parameters \(\alpha,\lambda^{\prime}\) already achieves quality improvements. This leads to a simple procedure, where one can first tune the DP-ALS hyper-parameters, then separately tune the CMF-related parameters \(\alpha,\lambda^{\prime}\).
### Results
In Figure 1(a), we compare DP-CMF's performance to the DPALS algorithm without side features from (Chien et al., 2021; Krichene et al., 2023) (blue curve), which is the current state-of-the-art on the ML10M benchmark. We also report for reference the non-private ALS baseline (dashed line). Incorporating public item information significantly narrows the existing quality gap between private and non-private models. The relative improvement depends on the public data source. Public user rating data (red curve) consistently outperforms other sources, as expected, since it comes from a closely related distribution. However, even tangentially related public item data, such as item metadata from Wikidata, substantially improves model quality. Furthermore, the different public item data modalities are composable, leading to compounded accuracy improvements (purple curve). The gap between private and non-private models is largest for high privacy requirements (low \(\varepsilon\)), with side features closing up to 60% of the performance gap.
#### 4.2.1 Tail performance
Figure 1(b) shows performance across four popularity buckets for models trained under privacy parameter \(\varepsilon=1\), with each bucket containing roughly 2500 items. Due to the skewed distribution of ratings, the buckets hold 86.6%, 9.4%, 3%, and 1% of all ratings, respectively. Thus, popular items have a greater impact on overall performance. The
Figure 1: Impact of public item features on private recommendation accuracy. Wiki Metadata corresponds to categorical genre, topic and cast features. Item-Item Similarity considers \(k=100\) similar items according to dot product; ML20M Public Users considers binary observations for ratings \(\geq 4\)
performance ordering of head items (bucket 1) matches the global ordering. However, for tail items (buckets 2 through 4), the order is reversed, with Wikidata features showing the most improvement for tail items.
We posit that Wikidata movie metadata outperforms on tail items due to its less pronounced bias towards popular items. As Fig. 2(a) illustrates, 90% of feature-item observations from ML20M public users correspond to popular items, while only 37% of Wikidata feature matrix entries do so. However, feature density alone does not fully account for this performance difference: item-to-item similarity does not perform as well on tail items, despite being perfectly balanced (by construction, we select the same number of neighbors for all movies). One hypothesis is that side features are most beneficial for tail items when they can transfer information from popular items. Fig. 2(b) supports this. For each feature, the figure shows the fraction of top-bucket movies among all occurrences of that feature (a fraction of \(r\) means that \(r\) of the occurrences of that feature fall in the top bucket, while the rest fall in other buckets). While public users primarily rate popular movies, Wikidata features are less frequent but describe both popular and tail items. This balance may explain why Wikidata outperforms other public data sources on tail items.
#### 4.2.2 Performance across public data sources
We find that cast information alone captures most of the performance lift achieved by Wiki Metadata features (Figure 3). Cast information is the most effective side feature for both head and tail items. This phenomenon is potentially explained by the fact that cast information is very granular and plausibly correlates with user preferences.
In Figure 4 we consider variants of pairwise item similarity. We find that the performance improves with the number of similar items for both cosine and inner product similarity scoring. Inner product scores generally outperform cosine similarity scores, in part because they take into account the magnitudes of the two vectors, not just their angle. Given
Figure 3: Performance comparison of DP-CMF using Genre, Topic, and Cast data from Wikidata Metadata
Figure 2: Popularity bias of Wikidata metadata, item-item similarity and ML20M public users’ data
that higher magnitudes typically correspond to more popular items, this leads to popularity bias, reflected in the comparatively weaker performance of dot product similarities on tail items.
Finally, in Figure 5 we consider public item data derived from public ratings. Increasing the number of features (in this case, users with public ratings) significantly enhances model performance. This improvement is more pronounced when using non-binarized ratings, with the private model's performance approaching that of the non-private model even under strict privacy settings. While the most considerable accuracy gains are achieved with large amounts of public data (\(s=50000\)), even modestly sized sources of in-distribution data (\(s=1000\)) yield performance improvements comparable to the best gains achieved through Wiki Metadata.
## 5 Discussion
In this work, we introduce DP-CMF, a method aimed at improving the privacy-accuracy trade-off of private recommendation models. Our technique incorporates public item feature data into private recommendations that satisfy \((\varepsilon,\delta)\)-DP. This approach is simple to implement, easy to tune, and highly scalable. DP-CMF allows for the integration of public side item information, pairwise item similarities, and public rating data. This is achieved within the same formulation, without requiring any changes in privacy accounting. Our experimental results demonstrate practical improvements in the privacy-accuracy trade-off by utilizing public item features.
Identifying public features that align with user interests and enhance recommendation performance remains a challenge, and the task is domain-dependent. In general, access to high-quality annotations is beneficial, and this may be harder to obtain in some domains, for instance when content creation is cheap and annotations are relatively more expensive. In such cases, another potential source is learning unsupervised, content-based similarity (Jansen et al., 2018).
Future work includes comparing DP-CMF with pre-training approaches and extending our methodology beyond CMF. This could involve exploring other models that utilize item side features, such as Inductive Matrix Completion (Gantner et al., 2010; Xu et al., 2013; Chiang et al., 2015; Jain and Dhillon, 2013; Goldberg et al., 2010) which enjoys favorable theoretical guarantees.
Figure 4: Comparing DP-CMF performance across varying similarity functions and numbers of selected similar items
Figure 5: Comparing DP-CMF performance for varying number of ML20M Public Users (in thousands) |
2303.17896 | Exploring the Limits of Deep Image Clustering using Pretrained Models | We present a general methodology that learns to classify images without
labels by leveraging pretrained feature extractors. Our approach involves
self-distillation training of clustering heads based on the fact that nearest
neighbours in the pretrained feature space are likely to share the same label.
We propose a novel objective that learns associations between image features by
introducing a variant of pointwise mutual information together with instance
weighting. We demonstrate that the proposed objective is able to attenuate the
effect of false positive pairs while efficiently exploiting the structure in
the pretrained feature space. As a result, we improve the clustering accuracy
over $k$-means on $17$ different pretrained models by $6.1$\% and $12.2$\% on
ImageNet and CIFAR100, respectively. Finally, using self-supervised vision
transformers, we achieve a clustering accuracy of $61.6$\% on ImageNet. The
code is available at https://github.com/HHU-MMBS/TEMI-official-BMVC2023. | Nikolas Adaloglou, Felix Michels, Hamza Kalisch, Markus Kollmann | 2023-03-31T08:56:29Z | http://arxiv.org/abs/2303.17896v2 | # Exploring the Limits of Deep Image Clustering using Pretrained Models
###### Abstract
We present a general methodology that learns to classify images without labels by leveraging pre-trained feature extractors. Our approach involves self-distillation training of clustering heads, based on the fact that nearest neighbors in the pretrained feature space are likely to share the same label. We propose a novel objective to learn associations between images by introducing a variant of pointwise mutual information together with instance weighting. We demonstrate that the proposed objective is able to attenuate the effect of false positive pairs while efficiently exploiting the structure in the pretrained feature space. As a result, we improve the clustering accuracy over \(k\)-means on \(17\) different pretrained models by \(6.1\)% and \(12.2\)% on ImageNet and CIFAR100, respectively. Finally, using self-supervised pretrained vision transformers we push the clustering accuracy on ImageNet to \(61.6\)%. The code will be open-sourced.
## 1 Introduction
Given a plethora of publicly available pretrained vision models, we ask the following questions: a) how well-structured is the feature space of pretrained architectures with respect to label-related information, and b) how to best adapt this structure to unsupervised tasks. To answer these questions, we focus on unsupervised image classification, also known as image clustering. Image clustering is the task of assigning a semantic label to an image, given an a priori finite set of classes. Ultimately, image clustering consists of simultaneously learning the relevant representations and the cluster assignments. Regarding representation learning, multiple approaches have been recently developed, consisting of supervised (Touvron et al., 2021), self-supervised (Chen et al., 2020), semi-supervised (Sohn et al., 2020) and natural language supervised (Radford et al., 2021) methods.
To begin addressing the aforementioned questions, we present the key challenges regarding image clustering. First, even if we can roughly estimate the number of ground-truth labels, the underlying distribution among classes is hard to infer from the data and is typically assumed to be uniform. Second, images should ideally be classified both highly consistently (images of the same class are grouped together) and highly confidently (one-hot distributed prediction probability). Consistency can be achieved by either learning features that are invariant under transformations of the same image (e.g. cropping, color jitter, etc.), or invariant w.r.t. substitution by other images that belong to the same semantic class. Since the described balance between class utilization, class consistency, and confidence is hard to achieve in practice, clustering methods are typically prone to degenerate solutions (Amrani et al., 2022). In other words, samples tend to collapse into a single cluster or the prediction probability spreads out uniformly.
It is well-established that representation learning plays a critical role in image clustering (Chang et al., 2017). Recent progress in self-supervised representation learning has advanced computer vision (Chen et al., 2020). The self-supervised learned features are typically more transferable to new tasks than features from supervised learning (Ericsson et al., 2021), even for non-contrastive objectives (Tian et al., 2021). The frequently used joint-embedding architectures (Grill et al., 2020; Zbontar et al., 2021; Caron et al., 2021) are by design invariant to strong image transformations that preserve label information and, unlike contrastive objectives (Wang and Isola, 2020), they allow for a strongly
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Architecture & NMI (\%) & ACC (\%) & ARI (\%) \\ \hline SeLa (YM. et al.) & Resnet50 & 65.7 & 30.5 & 16.2 \\ SCAN (Van Gansbeke et al.) & Resnet50 & 72.0 & 39.9 & 27.5 \\ SSCN (Amrani et al.) & Resnet50 & 73.3 & 41.1 & 29.5 \\ \hline _Our method_ & & & & \\ TEMI (DINO pretraining) & Resnet50 & 74.5 & 45.2 & 31.3 \\ TEMI (DINO pretraining) & ViT-B/16 & 81.4 & 58.0 & 45.9 \\ TEMI (MSN pretraining) & ViT-L/16 & **82.5** & **61.6** & **48.4** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Unsupervised image classification (clustering) results for the ImageNet validation set, without using the ground-truth labels or additional data.** Evaluation metrics include clustering accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI). All our models are pretrained on ImageNet.
inhomogeneous distribution of samples in feature space. That renders these architectures promising candidates for image clustering (Van Gansbeke et al., 2020), which has not been thoroughly explored at scale (Zhou and Zhang, 2022). Even though the transferability of self-supervised models (Ericsson et al., 2021) and vision transformers (ViTs) (Naseer et al., 2021) has been separately established, limited research has been conducted to study the transferability of self-supervised ViTs. Since no labels are required for pretraining, large-scale models can be trained and adapted to new tasks on challenging datasets. Concurrently, how the recent progress in natural language supervision (i.e. CLIP, Radford et al., 2021) trained on massive image-text pairs transfers to unsupervised downstream tasks is unknown.
How to adapt a pretrained model for image clustering is non-trivial. For instance, it is well known that \(k\)-means is sub-optimal, as it often leads to imbalanced clusters (Van Gansbeke et al., 2020). In general, \(k\)-means is primarily suitable for evenly scattered data samples around their centroids (Yang et al., 2017). Interestingly, applying \(k\)-means to self-supervised ViTs already yields remarkable results (Dosovitskiy et al., 2020; Assran et al., 2022) on ImageNet (see Fig. 1). Another way to get images with the same semantic class is by mining the nearest neighbors (NN) based on their feature similarity (Dwibedi et al., 2021; Huang et al., 2019). In this way, a clustering head can be trained based on pairs (Huang et al., 2022) or triplets (Wu et al., 2019). Still, images that are close in the feature space do not always share the same class (Van Gansbeke et al., 2020) and therefore must be considered as noisy pairs.
In this paper, a two-stage method that aims to separate feature learning and clustering is proposed. In contrast to Van Gansbeke et al. (2020), where features are learned from scratch for each downstream dataset, we leverage existing large-scale pretrained models. Considering that a pretrained model has already captured label-related features, this work focuses on learning the cluster assignments. Our contributions are summarized as follows:
1. A self-distillation clustering framework is introduced using a novel objective based on temperature-scaled pointwise mutual information and instance weighting.
2. A comprehensive experimental study across models and datasets is conducted. Therein, we report an average gain of \(6.1\)% and \(12.2\)% in clustering accuracy compared to \(k\)-means on ImageNet and CIFAR100
Figure 1: **Clustering accuracies on ImageNet (left) and CIFAR100 (right) across 17 pretrained models. Supervised and self-supervised models (MSN, MoCoV3, DINO) were pretrained on ImageNet. R50 stands for ResNet50 (He et al., 2016), C for ConvNext (Liu et al., 2022), and V for Vision Transformer (Dosovitskiy et al., 2020). Small (S), Base (B), and Large (L) indicate the size of the models. The vertical distance of each data point to the diagonal (dashed line) shows the improvement over \(k\)-means. Results for the Masked Autoencoder (He et al., 2022) (Appendix B.1) are not shown as the accuracies are significantly below R50. Best viewed in color.**
across \(17\) different pretrained models, as illustrated in Figure 1. Overall, we show that ViTs capture the most transferable label-related features. We additionally find that self-supervised ViTs (Assran et al., 2022) achieve state-of-the-art results (\(61.6\)% clustering accuracy) on ImageNet, without using the ground-truth labels or external data (Table 1).
## 2 Related Work
**Single-stage Deep Image Clustering Methods**. Deep image clustering approaches can be roughly divided into single and multi-stage methods. The majority of single-stage methods alternate between learning the features and the clusters, i.e. in an expectation-maximization (EM) manner. For instance, in DAC, Chang et al. (2017) formulate a binary pairwise-classification task, where at each iteration pairs are selected based on their feature similarity. Next, the computed pairs are used to train a convolutional neural network (CNN). In the same direction, in DeepCluster (Caron et al., 2018), the authors alternate between clustering the features of a CNN with \(k\)-means (Lloyd, 1982) and using the obtained cluster assignments as pseudo-labels to optimize the parameters of the CNN. Later on, YM. et al. (2020) demonstrate that DeepCluster is prone to degenerate solutions that are avoided via particular hyperparameter choices. To that end, the authors design a multi-step pseudo-label extraction framework, called SeLa. The latter iteratively estimates the pseudo-label assignment matrix under the equipartition constraint and then uses the pseudo-labels in a standard supervised setting. In PCL (Li et al., 2020), the authors formulate clustering as learning the cluster centroids with \(k\)-means in parallel with optimizing the network via contrastive learning (Chen et al., 2020). To overcome the class collision of the negative pairs that may be caused by contrastive learning, Huang et al. (2022) extend PCL in a proximal framework called ProPos. ProPos only maximizes the distance between the cluster centroids with contrastive learning, while mining NN in the embedding space as positive pairs for neighboring sample alignment (Grill et al., 2020). However, most of the existing approaches still rely on \(k\)-means for estimating the clusters (pseudo-labels).
Several single-stage approaches exist, which aim to jointly learn the feature representations and clusters. Such single-step methods, or simply end-to-end methods, are known to be sensitive to weight initialization (Dang et al., 2021). In this direction, DCCM was developed (Wu et al., 2019) to progressively mine NN in the feature space as well as high-confident samples. Another single-stage end-to-end example is IIC, wherein Ji et al. (2019) derive a mutual information-based objective for paired data to train a CNN. This objective is close to ours as they are both grounded in information theory. Nevertheless, the aforementioned approaches only consider stochastic transforms of the same image to obtain a pair. They are hence limited to solely learning invariances w.r.t. image augmentations, which cannot cover the variability of a given class (Dwibedi et al., 2021). More recently, Amrani et al. (2022) presented a single-stage end-to-end method, called SSCN, that employs a variant of the cross-entropy loss whilst considering a queue of NN.
**Multi-stage Deep Image Clustering Methods**. Multi-stage methods initially design a pretext task in order to learn semantically meaningful features. As an example, early multi-stage methods use denoising autoencoders (Xie et al., 2016) as a pretext task. A major breakthrough in deep image clustering was established by the adoption of contrastive self-supervised learning (Chen et al., 2020; He et al., 2020). For instance, Van Gansbeke et al. (2020) decouple image clustering into three distinct steps, starting with contrastive learning. Subsequently, the authors train a head to cluster the mined NN from the extracted features. Lastly, they use the pseudo-labels from the confidently assigned samples to fine-tune the whole architecture. In a similar approach, called NNM (Dang et al., 2021), the authors first aim to learn contrastive-based representations. NN are then mined, both from the batch and dataset features, which makes this method hard to scale to large datasets. Recently, Zhou and Zhang (2022) leverage self-supervised pretrained ViTs (Caron et al., 2021) and train a clustering head on small-scale datasets, which is closer to our method. However, their approach (TSP) heavily relies on \(k\)-means for the weight initialization phase.
Surprisingly, very few image clustering approaches (Van Gansbeke et al., 2020; Amrani et al., 2022; YM. et al., 2020) have been successfully applied on large-scale datasets such as ImageNet (Deng et al., 2009). Besides, most methods report results only with the Resnet50 (He et al., 2016) architecture, while superior architectures for image recognition remain unexplored (Liu et al., 2022; Dosovitskiy et al., 2020).
## 3 Proposed Method
### Classification Model
Our aim is to learn a probabilistic classifier from pairs of examples that share label-related information. We assume that the data distribution, \(p(x)\), is the result of a generative process, \(c\sim p(c)\) and \(x\sim p(x|c)\), with \(p(c)\) the prior probability that an example belongs to a class \(c\in\{1,..,C\}\). Consequently, the joint distribution, \(p(x,x^{\prime})\) that a pair of examples, \((x,x^{\prime})\), belongs to the same class is given by
\[p(x,x^{\prime})=\sum_{c=1}^{C}p(x|c)p(x^{\prime}|c)p(c). \tag{1}\]
We introduce a parametrized probabilistic classifier, \(q(c|x)\), that distributes examples \(x\sim p(x)\) among classes, with class occupancy given by \(q(c)=\mathbb{E}_{x\sim p(x)}[q(c|x)]\). Using Bayes' theorem \(q(x|c)=q(c|x)p(x)/q(c)\), the joint distribution, \(p(x,x^{\prime})\), can be predicted by
\[q(x,x^{\prime})=\sum_{c=1}^{C}q(x|c)q(x^{\prime}|c)q(c). \tag{2}\]
To estimate the association between \(x\) and \(x^{\prime}\) we introduce the pointwise mutual information, \(\mathrm{pmi}(x,x^{\prime})\) (Church and Hanks, 1990), defined by
\[\mathrm{pmi}(x,x^{\prime}) \coloneqq\log\frac{q(x,x^{\prime})}{p(x)p(x^{\prime})} \tag{3}\] \[=\log\sum_{c=1}^{C}\frac{q(c|x)q(c|x^{\prime})}{q(c)}. \tag{4}\]
**Theorem 1**.: _If (i) each example \(x\sim p(x)\) belongs to one and only one cluster under the generative model \(p(x)=\sum_{c}p(x|c)p(c)\), (ii) the joint distribution \(p(x,x^{\prime})\) is known, and (iii) \(q^{*}(c|x)\) is a probabilistic classifier defined by_
\[q^{*}(c|x)=\arg\max_{q(c|x)}\mathbb{E}_{x,x^{\prime}\sim p(x,x^{\prime})}[ \mathrm{pmi}(x,x^{\prime})], \tag{5}\]
_then \(q^{*}(c|x)\) is equal to the optimal probabilistic classifier, \(p(c|x)=p(x|c)p(c)/p(x)\), up to a permutation of cluster indices._
The proof can be found in Appendix A. Theorem 1 states that under condition (i) the knowledge of pairs of examples belonging to the same class suffices to establish an objective for an optimal classification model.
### Self-distillation Clustering Framework
The starting point is a pretrained feature extractor (backbone) \(g(\cdot)\) that assigns each example \(x\) in the dataset \(D\) a feature vector \(g(x)\). We mine the \(k\) nearest neighbors (\(k\)-NN) of \(x\) in the feature space by computing the cosine similarity between \(g(x)\) and the feature vectors of all other images in \(D\). We denote the set of \(k\)-NN for x by \(S_{x}\). The sets \(\{S_{x}|x\in D\}\) can be precomputed before training. During training, we randomly sample \(x\) from \(D\) along with \(x^{\prime}\) from \(S_{x}\), to generate image pairs that share label information with high probability.
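A minimal sketch of this NN-mining step, assuming the backbone features have been precomputed as a NumPy array; for large datasets one would typically replace the dense similarity matrix with an approximate-NN library.

```python
import numpy as np

def mine_knn(features, k):
    """Return, for each example, the indices of its k nearest neighbours by cosine similarity."""
    Z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = Z @ Z.T
    np.fill_diagonal(sim, -np.inf)             # exclude the example itself
    return np.argsort(-sim, axis=1)[:, :k]     # S_x for every x, precomputed before training

# during training: x is drawn from D and x' is drawn uniformly from S_x
```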
We introduce two clustering heads, a _student head_, \(h_{s}(\cdot)\), and a _teacher head_, \(h_{t}(\cdot)\), that share the same architecture but differ w.r.t. their parameters, \(\theta_{s}\), and \(\theta_{t}\). Each head consists of a three-layer fully connected feed-forward network. Image pairs \(x,x^{\prime}\) are passed through the backbone, and the two resulting feature vectors enter the two heads separately, \(h_{s}(g(x))\) and \(h_{t}(g(x^{\prime}))\). The head outputs are converted to probabilistic classifiers, \(q_{s}(c|x)\) and \(q_{t}(c|x^{\prime})\), using a temperature-scaled softmax function, which for the student head is given by
\[q_{s}(c|x)=\frac{\exp(h_{s}(g(x))_{c}\left/\,\tau\right.)}{\sum_{c^{\prime}} \exp(h_{s}(g(x))_{c^{\prime}}\left/\,\tau\right.)}, \tag{6}\]
where \(\tau\) is the temperature hyperparameter. Unlike previous self-distillation frameworks (Caron et al., 2021), we use the same temperature \(\tau=0.1\) for both heads. We approximate the pointwise mutual information by
\[\widetilde{\mathrm{pmi}}(x,x^{\prime})\coloneqq\log\sum_{c=1}^{C}\frac{q_{s}( c|x)q_{t}(c|x^{\prime})}{\tilde{q}_{t}(c)}. \tag{7}\]
and estimate \(q(c)\) by an exponential moving average (EMA) over batches using the teacher head
\[\tilde{q}_{t}(c)\gets m\,\tilde{q}_{t}(c)+(1-m)\frac{1}{B}\sum_{i=1}^{B} q_{t}(c|x_{i}), \tag{8}\]
with \(B\) the batch size and \(m\in[0,1)\) a momentum parameter. In practice, we symmetrize Equation (7) to compute the loss function
\[\mathcal{L}(x,x^{\prime})\coloneqq-\frac{1}{2}\left(\widetilde{\mathrm{pmi}} (x,x^{\prime})+\widetilde{\mathrm{pmi}}(x^{\prime},x)\right). \tag{9}\]
Note that only the parameters \(\theta_{s}\) of the student head are updated using backpropagation. The parameters of the teacher head, \(\theta_{t}\), are updated by an exponential moving average of the student parameters, \(\theta_{s}\), over past update steps (Caron et al., 2021; Grill et al., 2020). As a result, \(q_{t}(c|x)\) represents a sufficiently stable target distribution for the student head. In contrast to other self-distillation frameworks (Caron et al., 2021), no complicated adaptation of softmax temperatures over training is required. Following previous work (Van Gansbeke et al., 2020), we employ an ensemble of \(H\) clustering heads in training (Figure 2). For the evaluation, we use the teacher head with the lowest training loss.
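The two moving-average updates used here (teacher parameters and the class-occupancy estimate of Eq. (8)) can be sketched with plain NumPy arrays as below; the momentum values are illustrative and not the exact schedule of the paper.

```python
import numpy as np

def ema_teacher_params(theta_t, theta_s, m_param=0.996):
    """theta_t <- m * theta_t + (1 - m) * theta_s, applied parameter-wise over lists of arrays."""
    return [m_param * t + (1.0 - m_param) * s for t, s in zip(theta_t, theta_s)]

def ema_class_occupancy(q_tilde, q_teacher_batch, m=0.9):
    """Eq. (8): running estimate of q(c) from teacher probabilities of the current batch (B x C)."""
    return m * q_tilde + (1.0 - m) * q_teacher_batch.mean(axis=0)
```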
### Balancing class utilization
For a dataset \(D\) that has been generated using balanced classes, \(p(c)=\mathit{const}\), we expect that \(\tilde{q}(c)\approx\mathit{const}\), as a consequence of the optimization process. However, in practice, we observe that classes are typically far from uniformly utilized. We suspect that our self-distillation learning framework leads to over-confident class predictions for a fraction of classes in the early training phase. To allow confidence to grow simultaneously over all classes during training, we introduce a hyperparameter \(\beta\) in Equation (7) to reduce over-confidence without affecting the optimal solution and rewrite it as
\[\widetilde{\mathrm{pmi}}^{i}(x,x^{\prime})=\log\sum_{c=1}^{C}\frac{\left(q^{i }_{s}(c|x)q^{i}_{t}(c|x^{\prime})\right)^{\beta}}{\tilde{q}^{i}_{t}(c)}, \tag{10}\]
where \(\beta\in(0.5,1]\) and \(i\in\{1,\dots,H\}\) is the head index. Note that for \(\beta=0.5\) the loss corresponds to the Bhattacharyya distance (Bhattacharyya, 1946) for \(\tilde{q}^{i}_{t}(c)=const\). The Bhattacharyya distance can be minimal even if \(q^{i}_{t}\) is far from one-hot. Moreover, if utilization of all classes is not required - for example as in the case of overclustering - we set \(\beta=1\). We empirically found \(\beta=0.6\) to work well across different backbones and datasets. We propose an experimental strategy to choose \(\beta\) without access to the ground-truth labels, as explained in Section 4.3. The symmetrized loss from Equation (10) is defined as \(\mathcal{L}^{i}(x,x^{\prime})\) in analogy to Equation (9).
### Teacher-guided Instance Weighting
As discussed in Section 1, the mined \(k\)-NN in the feature space of \(g(\cdot)\) tend to be noisy. For this reason, we introduce an instance weighting term for each head \(i\) given by
\[w_{i}(x,x^{\prime})=\sum_{c=1}^{C}q^{i}_{t}(c|x)q^{i}_{t}(c|x^{\prime}). \tag{11}\]
Intuitively, \(w_{i}(x,x^{\prime})\) acts as a guidance term that assigns a higher weight to true positive pairs compared to false positive ones. Importantly, \(w_{i}(x,x^{\prime})\) relies only on the predictions of the teacher. The rationale behind this is that model averaging over training iterations tends to produce more accurate predictions (Tarvainen and Valpola, 2017; Polyak and Juditsky, 1992). We call this setup teacher-weighted pointwise mutual information (WPMI). The final objective for each separate head \(i\) is given by
\[\mathcal{L}^{i}_{\mathrm{WPMI}}(x,x^{\prime})\coloneqq w_{i}(x,x^{\prime}) \mathcal{L}^{i}(x,x^{\prime}). \tag{12}\]
### Teacher Ensemble Instance Weighting
We further propose to aggregate the information from multiple heads, which results in a single scalar weight for each image pair. For this purpose, we use the mean weight across the heads
\[w(x,x^{\prime})=\frac{1}{H}\sum_{i=1}^{H}w_{i}(x,x^{\prime}). \tag{13}\]
This is conceptually similar to model ensembling. We thus call this setup TEMI (teacher ensemble-weighted pointwise mutual information). The TEMI loss function is defined by
\[\mathcal{L}^{i}_{\mathrm{TEMI}}(x,x^{\prime})\coloneqq w(x,x^{\prime}) \mathcal{L}^{i}(x,x^{\prime}). \tag{14}\]
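To summarise Eqs. (10)-(14), the sketch below computes the per-pair TEMI objective for one head in NumPy; `q_s_*`/`q_t_*` are softmax outputs of shape (B, C) for a batch of mined pairs, `q_tilde` is the EMA class-occupancy estimate, and `eps` and `beta` are illustrative choices. This is a didactic sketch, not the authors' released implementation.

```python
import numpy as np

def temi_pair_loss(q_s_x, q_t_xp, q_s_xp, q_t_x, q_tilde, w, beta=0.6, eps=1e-8):
    """Symmetrized, beta-scaled pointwise-mutual-information loss weighted by w (Eqs. (10), (14)).

    q_s_x, q_t_xp, q_s_xp, q_t_x : (B, C) class probabilities for the pair (x, x')
    q_tilde : (C,) EMA estimate of the class occupancy q(c)
    w       : (B,) teacher-ensemble weights, Eq. (13)
    """
    pmi_a = np.log(((q_s_x * q_t_xp) ** beta / (q_tilde + eps)).sum(axis=1) + eps)
    pmi_b = np.log(((q_s_xp * q_t_x) ** beta / (q_tilde + eps)).sum(axis=1) + eps)
    loss = -(pmi_a + pmi_b) / 2.0
    return (w * loss).mean()

def teacher_agreement(q_t_heads_x, q_t_heads_xp):
    """Eq. (13): w(x, x') = mean over heads of sum_c q_t^i(c|x) q_t^i(c|x').
    Inputs have shape (H, B, C); the output has shape (B,)."""
    return (q_t_heads_x * q_t_heads_xp).sum(axis=-1).mean(axis=0)
```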
## 4 Experimental evaluation
### Datasets and Metrics
The proposed method (TEMI) is evaluated on five common benchmark datasets, namely CIFAR10, CIFAR20, CIFAR100 (Krizhevsky et al., 2009), STL10 (Coates et al., 2011), and ImageNet (Deng et al., 2009). CIFAR10, CIFAR20 and CIFAR100 contain \(50K\) training images of size \(32\times 32\), STL10 contains \(5K\) training samples of size \(96\times 96\), and ImageNet has \(1{,}281{,}167\) training samples. CIFAR20 has the same training data as CIFAR100 with \(20\) superclasses derived from the ground-truth labels. We resize all images to \(224\times 224\). The training set is used during the optimization phase, while the evaluations are carried out on the validation set. Further dataset information and results on smaller subsets of ImageNet (Van Gansbeke et al., 2020) can be found in Appendix B.1.
To quantify the clustering performance, we report the following metrics: a) the clustering accuracy (ACC), b) the normalized mutual information (NMI), and c) the adjusted Rand index (ARI). To estimate the accuracy, the one-to-one mapping between cluster predictions and ground-truth labels is computed by the Hungarian matching algorithm (Kuhn, 1955). For our overclustering experiments, we only report the adjusted mutual information (AMI), similar to Li et al. (2020). Finally, we establish two baselines: a) \(k\)-means and b) the SCAN clustering loss within our self-distillation framework. For a fair comparison, we tune the
Figure 2: **An overview of the proposed self-distillation clustering framework.** The nearest neighbors are mined in the feature space of \(g\). The operation \(\mathrm{stop\,gradient}\) indicates that no gradients are backpropagated. EMA refers to the exponential moving average over the parameters.
entropy regularization hyperparameter of SCAN, \(\lambda\), based on a grid search and use the value \(\lambda=4\).
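For reference, a minimal sketch of the clustering accuracy (ACC) computation via Hungarian matching, using scipy; `y_true` and `y_pred` are assumed to be integer label arrays of equal length.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Match cluster ids one-to-one to ground-truth labels, then compute accuracy."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n_classes = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    rows, cols = linear_sum_assignment(-cost)    # maximise the total number of matches
    mapping = dict(zip(rows, cols))
    return np.mean([mapping[p] == t for t, p in zip(y_true, y_pred)])
```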
### Implementation Details
For a fair comparison with the existing methods, we assume to know in advance the number of ground-truth labels. Concerning the hyperparameters, we set \(H=50\) and \(\beta=0.6\) for all datasets, apart from CIFAR20 where we take \(\beta=0.55\). Since the uniform prior is unnecessary for overclustering, we set \(\beta=1\) in this case. We use \(25\)-NN on ImageNet, \(150\)-NN on CIFAR20, and \(50\)-NN for the remaining datasets. Unless otherwise specified, we use the same setup across different backbones and report the results at the end of training. We trained for \(200\) epochs with a batch size of \(512\) using the AdamW optimizer (Loshchilov and Hutter, 2018) with a learning rate of \(10^{-4}\), \(20\) warmup epochs, and weight decay of \(10^{-4}\). We used a batch size of \(1024\) and \(100\) epochs on ImageNet, while on STL10 we trained for \(800\) epochs to compensate for the reduced amount of training samples. For the pretrained model weights, we either used the official repositories (Liu et al., 2022; Chen et al., 2021; Radford et al., 2021; Caron et al., 2021; Assran et al., 2022) or the _timm_ library (Wightman, 2019).
Unlike previous methods (Van Gansbeke et al., 2020), we found that augmentations (RandAugment Cubuk et al., 2020, and the ones from Chen et al. (2020)) were not improving the clustering metrics when training with \(k\)-NN. Hence, we precomputed the backbone feature representations. This enables us to train the clustering heads significantly faster and with less memory. As a consequence, all clustering experiments can be conducted with a single GPU, with \(12\)GB of VRAM, within \(24\) hours for all models and datasets. Crucially, we found that some pretrained models (e.g. MSN) produce unnormalized features. For that reason, we standardize the features of all models.
For the linear probing experiments, we trained a linear layer without augmentations, using the Adam (Kingma and Ba, 2014) optimizer with a learning rate of \(10^{-3}\) and weight decay of \(10^{-3}\). To enforce reproducibility, the means and standard deviations are reported for all our experiments and metrics, computed over \(3\) independent runs with different seeds.
### Experimental Results
We first present a strategy to choose \(\beta\in(0.5,1]\) for clustering with the number of ground-truth labels known. As depicted in Figure 3, an accurate model \(q_{t}(c|x)\) should be able to maintain a high entropy \(\mathrm{H}(q_{t}(c))\), while maintaining its discriminative power. To quantify the latter we use the conditional entropy \(\mathrm{H}(q_{t}(c|x))\). The lower the value of \(\mathrm{H}(q_{t}(c|x))\), the more discriminative the predictions; the extreme case \(\mathrm{H}(q_{t}(c|x))=0\) corresponds to a one-hot distribution. Thus, we propose to pick the lowest value of \(\beta\) such that \(\mathrm{H}(q_{t}(c|x))\) remains sufficiently low. We experimentally found \(0.6\) to work consistently well across models and datasets. An exception is CIFAR20, where we used \(\beta=0.55\) since superclasses are conceptually a form of underclustering.
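A sketch of the two entropy diagnostics used for this selection, computed from teacher probabilities on a held-out batch; the threshold on the conditional entropy is left to the practitioner.

```python
import numpy as np

def entropy_diagnostics(q_t, eps=1e-12):
    """q_t : (N, C) teacher probabilities. Returns (H[q(c)], mean H[q(c|x)]), both in nats."""
    marginal = q_t.mean(axis=0)
    h_marginal = -(marginal * np.log(marginal + eps)).sum()          # want this close to log C
    h_conditional = -(q_t * np.log(q_t + eps)).sum(axis=1).mean()    # want this close to 0
    return h_marginal, h_conditional

# choose the smallest beta for which h_conditional stays low while h_marginal stays near log(C)
```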
As shown in Table 2, an average accuracy gain of \(5.0\)% over \(k\)-means is found for CIFAR100 and ImageNet, even with the plain PMI setup. Introducing multiple heads in PMI further improves the obtained results by an average gain of \(0.8\)%. Critically, for our best setup (TEMI) we observe an average gain of \(8.1\)% and \(3.7\)% compared to \(k\)-means and the SCAN clustering loss, respectively.
To study the applicability of our method, we then applied our best setup (TEMI) to various publicly available pretrained models, as shown in Fig. 1. Therein, we report an average
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Heads & CIFAR100 & ImageNet \\ \hline \(k\)-means & - & 56.99 & 52.26 \\ SCAN & 50 & 62.6\(\pm\)0.94 & 55.6\(\pm\)0.15 \\ \hline PMI & 1 & 61.6\(\pm\)0.41 & 57.5\(\pm\)0.22 \\ WPMI & 1 & 63.4\(\pm\)1.89 & 56.5\(\pm\)0.41 \\ \hline PMI & 50 & 63.1\(\pm\)0.56 & 57.7\(\pm\)0.06 \\ WPMI & 50 & 65.6\(\pm\)1.04 & 57.0\(\pm\)0.38 \\ TEMI & 50 & **67.1\(\pm\)1.30** & **58.4\(\pm\)0.22** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation study for the TEMI objective**. All the experiments were conducted with \(\beta=0.6\), and DINO ViT-B/16 as the backbone model. The clustering accuracy is reported in %.
Figure 3: **Effect of \(\beta\) on the validation accuracy and on the entropy of \(q_{t}(c|x)\) and \(q_{t}(c)\) on CIFAR100**. The values are computed using MWMI DINO ViT-B/16. The dashed horizontal line illustrates the maximal possible entropy, i.e. \(\log C\). A high entropy of \(q_{t}(c)\) indicates that the clusters are almost uniformly utilized, while a low entropy of \(q_{t}(c|x)\) indicates highly confident predictions (one-hot).
accuracy gain of \(6.1\)% and \(12.1\)% compared to \(k\)-means on ImageNet and CIFAR100 across \(17\) different pretrained models. More specifically, TEMI MSN ViT-L/16 and TEMI DINO ViT-B/16 are the best-performing self-supervised methods on ImageNet (\(61.6\)% ACC) and CIFAR100 (\(67.1\)% ACC). Moreover, CLIP-based backbones have the highest ACC increase over \(k\)-means when trained with TEMI, precisely \(10.7\)% on ImageNet and \(14.1\)% on CIFAR100.
Concerning the supervised pretrained models in Fig. 1, we demonstrate that ConvNext-L outperforms ViT-L on ImageNet, precisely by 2.7% on ACC with TEMI. However, the supervised ViT-L surpasses ConvNext-L by a large margin of \(22.3\)% in ACC, when benchmarked on CIFAR100 with TEMI. Among the architectures investigated, large ViTs learn the most transferable label-related features, even without supervised fine-tuning. Our findings are consistent with Naseer et al. (2021). However, by comparing ConvNets with ViTs (Figure 1) we cannot confirm that clustering accuracy for the pretraining validation set is a strong predictor for out-of-distribution generalization, as stated in Wenzel et al. (2022).
Regarding ImageNet, we compare various self-supervised architectures that were trained without any external data, as depicted in Table 1. Using the same architecture (Resnet50) as current state-of-the-art models (SSCN Amrani et al., 2022), TEMI achieves an improvement of \(4.1\)% in ACC. With MSN ViT-L/16 as the backbone, we push the state-of-the-art ACC on ImageNet to \(61.6\)%, resulting in a substantial gain of \(20.5\)% compared to SSCN. The obtained results strongly indicate that first learning the augmentation-invariant features and then focusing on learning the invariances w.r.t. images that belong to the same class is an effective strategy for image clustering.
Incentivized by the above observation, we investigate the overclustering performance in Table 3, by adopting the setup from Li et al. (2020). More concretely, we use \(25\)K clusters and set \(\beta=1\) without any hyperparameter tuning. We almost match the performance of ProPos (Huang et al., 2022) with TEMI DINO Resnet50 while reaching a considerable gain of \(7.4\)% in AMI with TEMI DINO ViT-B/16. TEMI can hence be easily applied to any number of desired clusters.
In Table 4, the transfer performance on three small-scale datasets is evaluated. TEMI DINO ViT-B backbone has inarguably the best transfer performance, outperforming the ACC of ProPos by 4.6% and TSP by 2.9% on average. It is worth pointing out that TSP (Zhou and Zhang, 2022) uses the same pretrained model and it is thus a fair comparison. Ultimately, we notice the large accuracy gap between clustering with TEMI and probing in CIFAR20, which suggests that the superclass structure cannot be inferred from the visual input. For instance, clocks, lamps, and telephones are grouped into household electrical devices.
### Discussion
**How expressive can an image classifier be by only training with pairs?** We examine the training accuracy in Table 5, by training with the true positive pairs from the computed \(k\)-NN. The 98.55% training accuracy on CIFAR100 with TEMI DINO ViT-B/16 indicates that it is possible to
Figure 4: **In each row, ImageNet samples that are assigned to the same cluster by TEMI MSN ViT-L/16 are shown.** The ground-truth labels are indicated in the text below the images. The first two columns correspond to correctly classified images while the last two are examples of misclassified images. More samples can be found in Appendix C.
\begin{table}
\begin{tabular}{l c} \hline \hline Method & AMI (\%) \\ \hline DeepCluster (Caron et al., 2018) & 28.1 \\ MoCo (He et al., 2020) & 28.5 \\ PCL (Li et al., 2020) & 41.0 \\ ProPos (Huang et al., 2022) & 52.5 \\ \hline TEMI DINO Resnet50 & 51.8\(\pm\)0.11 \\ TEMI DINO ViT-B/16 & **59.9\(\pm\)0.19** \\ TEMI MSN ViT-L/16 & 58.8\(\pm\)0.51 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Overclustering results on the ImageNet validation set.** The adjusted mutual information (AMI) score for 25K clusters is reported, as in (Li et al., 2020). For all experiments, we set \(\beta=1\).
train a powerful unsupervised image classifier by only relying on pairs. In fact, we observe that we almost match the supervised linear probing accuracy on CIFAR100, which was trained with a standard cross-entropy loss (84.09% vs 85.34%). Still, we identify cases where the human-annotated label is ambiguous and cannot be determined solely by the visual signal, as illustrated in Figure 4.
**What is the impact of the instance weighting term?** Apart from the clustering accuracy gains presented in Table 2, we examined the actual value of the instance weighting term \(w(x,x^{\prime})\) after training. To this end, we computed the mean weights for true positives and false positives sampled from \(50\)-NN within the CIFAR100 validation set, which take the values \(0.76\) and \(0.40\), respectively. Furthermore, \(w(x,x^{\prime})\) has a negative impact when only true positive pairs are considered during training (Table 5). This is an expected behavior, as a fraction of true positive pairs will be down-weighted by \(w(x,x^{\prime})\) due to low feature similarity. As an example, digital and analog clocks share the same label in CIFAR100 but have low similarity in feature space.
**How discriminative are the resulting cluster assignments?** Besides Figure 3, we quantify the discriminative power of TEMI by computing the mean and median maximum softmax probability (MSP Hendrycks and Gimpel, 2016). We calculate a mean and median MSP of 88.5% and 98.9% on CIFAR100 and 85.3% and 99.2% on ImageNet. The computed results verify that the introduced framework results in discriminative predictions.
**Contrastive versus non-contrastive pretraining for image clustering.** The performance gap between contrastive (MoCoV3 ViT-B) and non-contrastive (DINO ViT-B) backbones likely originates from a homogeneous distribution of examples in feature space as part of the contrastive learning objective, which attenuates the necessary structure in feature space for image clustering (Wang and Isola, 2020; Huang et al., 2022).
## 5 Conclusion
In this paper, a novel self-distillation framework for image clustering was proposed. In addition, a new objective based on pointwise mutual information was presented. After studying the performance of \(17\) pretrained models, it was shown that TEMI can be used with any type of pretraining with significant improvements over \(k\)-means. Finally, new state-of-the-art results were achieved on ImageNet both for clustering and overclustering, leveraging self-supervised ViTs. To conclude, future works are encouraged to explore the connection between image clustering and representation learning in greater depth.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Datasets & \multicolumn{3}{c}{CIFAR10} & \multicolumn{3}{c}{CIFAR20} & \multicolumn{3}{c}{STL10} \\ \cline{2-10} Methods & NMI(\%) & ACC(\%) & ARI(\%) & NMI(\%) & ACC(\%) & ARI(\%) & NMI(\%) & ACC(\%) & ARI(\%) \\ \hline DAC (Chang et al.) & 39.6 & 52.2 & 30.6 & 18.5 & 23.8 & 8.8 & 36.6 & 47 & 25.7 \\ DCCM (Wu et al.) & 49.6 & 62.3 & 40.8 & 28.5 & 32.7 & 17.3 & 37.6 & 48.2 & 26.2 \\ PICA (Huang et al.) & 59.1 & 69.6 & 51.2 & 31 & 33.7 & 17.1 & 61.1 & 71.3 & 53.1 \\ NNM (Dang et al.) & 74.8 & 84.3 & 70.9 & 48.4 & 47.7 & 31.6 & 69.4 & 80.8 & 65 \\ PCL (Li et al.) & 80.2 & 87.4 & 76.6 & 52.8 & 52.6 & 36.3 & 71.8 & 41.0 & 67.0 \\ SCAN (Van Gansbeke et al.) & 79.7 & 88.3 & 77.2 & 48.6 & 50.7 & 33.3 & 69.8 & 80.9 & 64.6 \\ SPICE (Niu et al.) & 86.5 & 92.6 & 85.2 & 56.7 & 53.8 & 38.7 & 87.2 & 93.8 & 87.0 \\ ProPos (Huang et al.) & 88.6 & 94.3 & 88.4 & 60.6 & 61.4 & 45.1 & 75.8 & 86.7 & 73.7 \\ TSP (Zhou and Zhang) & 88.0 & 94.0 & 87.5 & 61.4 & 55.6 & 43.3 & 95.8 & 97.9 & 95.6 \\ \hline TEMI DINO ViT-B/16\({}^{\dagger}\) & **88.6\(\pm\)0.05** & **94.5\(\pm\)0.03** & **88.5\(\pm\)0.08** & **65.4\(\pm\)0.45** & **63.2\(\pm\)0.38** & **48.9\(\pm\)0.21** & **96.5\(\pm\)0.13** & **98.5\(\pm\)0.04** & **96.8\(\pm\)0.09** \\ TEMI MSN ViT-L/16\({}^{\dagger}\) & 82.9\(\pm\)0.16 & 90.0\(\pm\)0.14 & 80.7\(\pm\)0.22 & 59.8\(\pm\)0.04 & 57.8\(\pm\)0.42 & 42.5\(\pm\)0.08 & 93.6\(\pm\)1.10 & 96.7\(\pm\)0.89 & 93.0\(\pm\)1.74 \\ \hline _(natural language)/supervised pretraining_ & \multicolumn{3}{c}{} \\ \hline TEMI CLIP ViT-L/14\({}^{\dagger}\) & 92.6\(\pm\)0.13 & 96.9\(\pm\)0.07 & 93.2\(\pm\)0.15 & 64.5\(\pm\)0.12 & 61.8\(\pm\)1.47 & 46.8\(\pm\)1.17 & 96.4\(\pm\)0.79 & 97.4\(\pm\)0.69 & 94.9\(\pm\)1.26 \\ TEMI Sup. ViT-L/16\({}^{\dagger}\) & 91.8\(\pm\)0.65 & 96.0\(\pm\)0.53 & 91.6\(\pm\)1.02 & 65.0\(\pm\)0.89 & 58.4\(\pm\)0.98 & 45.4\(\pm\)1.41 & 82.7\(\pm\)2.94 & 84.6\(\pm\)2.37 & 73.9\(\pm\)2.77 \\ \hline _supervised baselines_ & \multicolumn{3}{c}{} \\ \hline Probing DINO ViT-B/16\({}^{\dagger}\) & 92.5 & 96.8 & 93.1 & 82.4 & 89.5 & 79.5 & 97.8 & 99.2 & 98.2 \\ Probing MSN ViT-L/16\({}^{\dagger}\) & 91.5 & 96.4 & 92.3 & 80.7 & 88.2 & 77.0 & 96.8 & 98.8 & 97.4 \\ Probing CLIP ViT-L/14\({}^{\dagger}\) & 95.1 & 98.1 & 95.8 & 85.7 & 91.7 & 83.6 & 99.2 & 99.7 & 99.4 \\ Probing Sup. ViT-L/16\({}^{\dagger}\) & 91.5 & 96.5 & 92.4 & 83.7 & 90.8 & 81.7 & 98.0 & 99.3 & 98.4 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Clustering performance metrics on small-scale benchmark datasets, evaluated on their validation splits.** Probing means training a linear layer on top of the pretrained backbone in a supervised manner. We only highlight the best self-supervised pretrained model as the new state-of-the-art. We clarify that methods with \({}^{\dagger}\) use models pretrained on external data, while \({}^{\star}\) indicates methods that include additional dataset splits during training (i.e. validation data).
\begin{table}
\begin{tabular}{l c c} \hline \hline Loss & Validation ACC (\%) & Train ACC (\%) \\ \hline PMI & **84.1\(\pm\)0.36** & **98.6\(\pm\)0.38** \\ TEMI & 82.6\(\pm\)0.67 & 96.5\(\pm\)0.88 \\ \hline Linear probing & 85.3 & 99.3 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Clustering accuracies on CIFAR100 when training only with the true positive NN pairs using TEMI DINO ViT-B/16.** |
2309.15002 | Scalar field Restricted Boltzmann Machine as an ultraviolet regulator | Restricted Boltzmann Machines (RBMs) are well-known tools used in Machine
Learning to learn probability distribution functions from data. We analyse RBMs
with scalar fields on the nodes from the perspective of lattice field theory.
Starting with the simplest case of Gaussian fields, we show that the RBM acts
as an ultraviolet regulator, with the cutoff determined by either the number of
hidden nodes or a model mass parameter. We verify these ideas in the scalar
field case, where the target distribution is known, and explore implications
for cases where it is not known using the MNIST data set. We also demonstrate
that infrared modes are learnt quickest. | Gert Aarts, Biagio Lucini, Chanju Park | 2023-09-26T15:17:43Z | http://arxiv.org/abs/2309.15002v2 | # Scalar field Restricted Boltzmann Machine as an ultraviolet regulator
###### Abstract
Restricted Boltzmann Machines (RBMs) are well known tools used in Machine Learning to learn probability distribution functions from data. We analyse RBMs with scalar fields on the nodes from the perspective of lattice field theory. Starting with the simplest case of Gaussian fields, we show that the RBM acts as an ultraviolet regulator, with the cutoff determined by either the number of hidden nodes or a model mass parameter. We verify these ideas in the scalar field case, where the target distribution is known, and explore implications for cases where it is not known using the MNIST data set. We also demonstrate that infrared modes are learnt quickest.
###### Contents
* 1 Introduction
* 2 Scalar fields on a bipartite graph
* 3 Training RBM parameters
* 4 Semi-analytical solution
* 4.1 Singular value decomposition
* 4.2 Learning dynamics
* 4.3 Simple examples
* 5 Learning Gaussian distributions
* 5.1 Initialisation with an exact solution
* 5.2 Initialisation with a random coupling matrix
* 5.3 Ultraviolet regularisation by the RBM mass parameter
* 5.4 Ultraviolet regularisation by the number of hidden nodes
* 6 MNIST data set
* 7 Interactions
* 8 Conclusion
* A Details of the algorithm
* B Kullback-Leibler divergence
## 1 Introduction
In recent years Machine Learning (ML) has gained tremendous popularity in the physical sciences [1]. In theoretical nuclear and high-energy physics, ML is applied to a wide range of problems, see e.g. the reviews [2, 3]. In lattice field theory (LFT), there are applications to all aspects of LFT computations [4], with the development of the flow model to generate field configurations a particularly active area of research [5, 6]. From a theoretical perspective, it is of interest to explore synergies between ML on the one hand and statistical physics and LFT on the other hand, as many ML problems can be studied using the tools of the latter, see e.g. Ref. [7]. The connection between neural networks, Markov random fields and (Euclidean) lattice field theory has indeed not gone unnoticed, leading to the notions of quantum field-theoretic machine learning (QFT/ML) [8] and neural network/quantum field theory (NN-QFT) correspondence [9, 10]. Further exploration of this connection may be fruitful in both directions, providing potential insights relevant to both the ML and the LFT/QFT communities.
In this paper, we take a step in this direction by considering one of the simplest generative ML models, the Restricted Boltzmann Machine (RBM) [11, 12]. We analyse the RBM with continuous fields as degrees of freedom from the perspective of an Euclidean LFT and give a complete understanding in the case of Gaussian fields. We verify our analytical insights using simple scalar field theories in one and two dimensions, for which the target distribution is known, and also the MNIST data set, to demonstrate that our findings are indeed relevant for
typical ML data sets without known target distributions. We are in particular interested in the choice of "architecture", which admittedly is quite straightforward for an RBM, namely the number of hidden nodes as well as the choice of certain hyperparameters. Our main conclusion is that the scalar field RBM acts as an ultraviolet regulator, with the cutoff determined by either the number of hidden nodes or a model mass parameter. We will make clear what this implies for the MNIST data set, but note here already that in QFT language the MNIST data set is ultraviolet divergent and infrared safe.
The paper is organised as follows. In Sec. 2 we introduce scalar field RBMs from the perspective of LFT and give some exact solutions for the Gaussian case. The standard equations to train an RBM are summarised in Sec. 3. In Sec. 4 we analyse these equations analytically and work out some simple examples in detail. The findings of this section will be further explored in the two following sections. First, we consider as target theories free scalar fields in one and two dimensions in Sec. 5, for which the target distribution is known. In Sec. 6 we validate our findings for a data set with an unknown distribution, namely the MNIST data set. Options to add interactions are discussed in Sec. 7. A summary is given in the final section. App. A contains some more details on the algorithm employed.
## 2 Scalar fields on a bipartite graph
Restricted Boltzmann Machines (RBMs) are defined on a bipartite graph, consisting of one visible layer (with \(N_{v}\) nodes) and one hidden layer (with \(N_{h}\) nodes), see Fig. 1. Importantly, there are no connections within each layer, only between the two layers. The degrees of freedom living on the nodes can be discrete, as in an Ising model, continuous or mixed; Ref. [13] is a useful review.
In this section, we consider an RBM from the viewpoint of lattice field theory. We consider continuous fields and denote these as \(\phi_{i}\) (\(i=1,\ldots,N_{v}\)) for the visible layer and \(h_{a}\) (\(a=1,\ldots,N_{h}\)) for the hidden layer. The layers are coupled via bilinear terms and involve the \(N_{v}\times N_{h}\) weight matrix \(W\), as
\[\phi^{T}Wh=\sum_{i=1}^{N_{v}}\sum_{a=1}^{N_{h}}\phi_{i}W_{ia}h_{a}. \tag{1}\]
Figure 1: Bipartite graph, with \(N_{v}\) (\(N_{h}\)) nodes in the visible (hidden) layer.

The aim is to describe a probability distribution \(p(\phi)\) on the visible layer, constructed by integrating over the hidden nodes in the joint probability distribution \(p(\phi,h)\), as follows,
\[p(\phi)=\int Dh\,p(\phi,h),\hskip 28.452756ptp(\phi,h)=\frac{\exp(-S(\phi,h))}{Z}, \tag{2}\]
where we have denoted the "energy" in the exponential as an action (following LFT notation) and the partition function reads
\[Z=\int D\phi Dh\exp(-S(\phi,h)). \tag{3}\]
The integrals are over all nodes,
\[\int D\phi=\prod_{i=1}^{N_{v}}\int_{-\infty}^{\infty}d\phi_{i},\hskip 42.679134pt \int Dh=\prod_{a=1}^{N_{h}}\int_{-\infty}^{\infty}dh_{a}. \tag{4}\]
Due to the absence of intralayer connections, the action takes a simple form in general,
\[S(\phi,h)=V_{\phi}(\phi)+V_{h}(h)-\phi^{T}Wh, \tag{5}\]
where the two potentials can be any function (as long as the integrals are well-defined) and be node-dependent, i.e.,
\[V_{\phi}(\phi)=\sum_{i}V_{i}^{(\phi)}(\phi_{i}),\hskip 42.679134ptV_{h}(h)=\sum_ {a}V_{a}^{(h)}(h_{a}). \tag{6}\]
Since there is no coupling between nodes within a layer, there is no "kinetic" or nearest-neighbour term; these are only generated via the coupling to the other layer.
To proceed, a natural starting point is to consider quadratic potentials, i.e. free fields (we discuss interactions in Section 7). We hence consider as action,
\[S(\phi,h) =\sum_{i}\frac{1}{2}\mu^{2}\phi_{i}^{2}+\sum_{a}\frac{1}{2\sigma _{h}^{2}}\left(h_{a}-\eta_{a}\right)^{2}-\sum_{i,a}\phi_{i}W_{ia}h_{a}\] \[=\frac{1}{2}\mu^{2}\phi^{T}\phi+\frac{1}{2\sigma_{h}^{2}}\left(h -\eta\right)^{T}(h-\eta)-\phi^{T}Wh. \tag{7}\]
A few comments are in order. We have denoted the prefactor as a mass term (\(\mu^{2}\)) in the case of \(\phi\) and as a variance (\(1/\sigma_{h}^{2}\)) in the case of \(h\); this is inessential, but emphasises that the model on the visible layer is ultimately the one we are interested in. Both \(\mu^{2}\) and \(\sigma_{h}^{2}\) are independent of the node; this is sufficient, as node dependence can be introduced via the weight matrix \(W\), as we will see shortly. Finally, a source (or bias) \(\eta_{a}\) is introduced in the hidden layer but not in the visible layer; again this is sufficient, as a nonzero bias breaks both symmetries, \(h\rightarrow-h\), \(\phi\rightarrow-\phi\).
Integrating out the hidden nodes then leads to the following distribution on the visible layer,
\[p(\phi)=\int Dh\,p(\phi,h)=\frac{1}{Z}\exp\left(-\frac{1}{2}\phi^{T}K\phi+ \phi^{T}J\right), \tag{8}\]
with
\[K\equiv\mu^{2}\mathds{1}-\sigma_{h}^{2}WW^{T},\hskip 28.452756ptJ\equiv W\eta, \tag{9}\]
and where \(Z\) now reads
\[Z=\int D\phi\,\exp\left(-\frac{1}{2}\phi^{T}K\phi+\phi^{T}J\right). \tag{10}\]
We note therefore that the distribution on the visible layer resembles a generating function for a scalar field theory, with the possibility of all-to-all bilinear interactions between the fields via the non-local kernel \(K\), and the bias resulting in a source term \(J\) coupled to \(\phi\). The connected two-point function or propagator is given by
\[\langle\phi_{i}\phi_{j}\rangle-\langle\phi_{i}\rangle\langle\phi_{j}\rangle=K_{ ij}^{-1}. \tag{11}\]
The hidden layer has provided auxiliary degrees of freedom to establish correlations between the visible nodes.
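As an aside, the induced Gaussian distribution (8) is straightforward to sample directly. The following minimal numpy sketch, with illustrative values for \(N_{v}\), \(N_{h}\), \(\mu^{2}\), \(\sigma_{h}^{2}\) and a random \(W\) (all chosen purely for demonstration), builds the kernel \(K\) of Eq. (9) and checks that the sample covariance reproduces \(K^{-1}\), cf. Eq. (11).

```python
import numpy as np

rng = np.random.default_rng(0)

N_v, N_h = 10, 10            # numbers of visible and hidden nodes (illustrative)
mu2, sigma_h2 = 9.0, 1.0     # RBM mass parameter and hidden-layer variance (illustrative)
W = 0.1 * rng.standard_normal((N_v, N_h))   # random coupling matrix
eta = np.zeros(N_h)                          # zero bias (symmetric case)

# induced kernel and source on the visible layer, Eq. (9)
K = mu2 * np.eye(N_v) - sigma_h2 * W @ W.T
J = W @ eta

# p(phi) is Gaussian with covariance K^{-1} and mean K^{-1} J, Eq. (8)
cov = np.linalg.inv(K)
samples = rng.multivariate_normal(cov @ J, cov, size=100_000)

# the connected two-point function should reproduce K^{-1}, Eq. (11)
print(np.max(np.abs(np.cov(samples, rowvar=False) - cov)))
```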
To continue the discussion we now assume the target probability distribution \(p_{\rm target}(\phi)\) is known and Gaussian, such that we can solve the RBM explicitly, i.e. we give explicit expressions for the weight matrix \(W\) and the bias \(\eta\). We denote the target kernel as \(K^{\phi}\) and consider the symmetric case (\(\phi\to-\phi\), \(\eta=J=0\)) for simplicity. Since \(K^{\phi}\) is a real and symmetric matrix, it can be diagonalised; for the theory to exist, all its eigenvalues are assumed to be semi-positive. The RBM is then solved by equating the two kernels, \(K^{\phi}=K\), i.e.,
\[K^{\phi}=\mu^{2}{\rm 1\mskip-4.5mu l}-\sigma_{h}^{2}WW^{T}\quad\Rightarrow \quad WW^{T}=\frac{1}{\sigma_{h}^{2}}\left(\mu^{2}{\rm 1\mskip-4.5mu l}-K^{\phi} \right)\equiv{\cal K}. \tag{12}\]
Since \(WW^{T}\) is semi-positive, we find conditions on the parameter \(\mu^{2}\), namely
\[\mu^{2}/\sigma_{h}^{2} \geq \max\left[\text{eigenvalues}\left(WW^{T}\right)\right], \tag{13}\] \[\mu^{2} \geq \max\left[\text{eigenvalues}\left(K^{\phi}\right)\right].\]
Consider now the case that \(N_{h}=N_{v}\). It is then easy to find some solutions for \(W\), given that the RHS of Eq. (12) is symmetric and positive:
* The RHS of Eq. (12) can be decomposed in a Cholesky decomposition, \({\cal K}=LL^{T}\), where \(L\) is a lower triangular matrix with real and positive diagonal entries. The solution is then simply \(W=L\). The triangular structure means that hidden node \(a\) connects to visible nodes with \(a\leq i\) only.
* The RHS of Eq. (12) can be diagonalised via an orthogonal transformation, \[{\cal K}=ODO^{T}=O\sqrt{D}O^{T}O\sqrt{D}O^{T},\] (14) yielding the symmetric solution \(W=W^{T}=O\sqrt{D}O^{T}\).
Hence we have found two explicit solutions. Additional solutions are found from either of the above by a right multiplication of \(W\) by an orthogonal transformation, rotating the hidden nodes,
\[W\to WO_{R}^{T},\qquad\qquad h\to O_{R}h,\qquad\qquad O_{R}^{T}O_{R}={\rm 1 \mskip-4.5mu l}, \tag{15}\]
since \(O_{R}\) drops out of the combination \(WW^{T}\).
We conclude therefore that an infinite number of solutions is present. These can be constrained by imposing further conditions on \(W\), as in the first two cases above. We will discuss this degeneracy further below.
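The two explicit constructions above are easily verified numerically. The sketch below uses an illustrative target kernel (the one-dimensional lattice kernel of Sec. 5, chosen here only as an example) and checks that both the Cholesky factor and the symmetric square root reproduce \(K^{\phi}\) via Eq. (12).

```python
import numpy as np

N_v, m2 = 8, 4.0
# illustrative target kernel: 1d lattice Laplacian plus mass term (cf. Sec. 5)
K_phi = (2.0 + m2) * np.eye(N_v) \
        - np.roll(np.eye(N_v), 1, axis=1) - np.roll(np.eye(N_v), -1, axis=1)

mu2, sigma_h2 = 9.0, 1.0                        # chosen to satisfy Eq. (13)
calK = (mu2 * np.eye(N_v) - K_phi) / sigma_h2   # RHS of Eq. (12)

# solution 1: Cholesky factor (lower triangular)
W_chol = np.linalg.cholesky(calK)

# solution 2: symmetric square root from the eigen-decomposition, Eq. (14)
d, O = np.linalg.eigh(calK)
W_sym = O @ np.diag(np.sqrt(d)) @ O.T

for W in (W_chol, W_sym):
    print(np.allclose(mu2 * np.eye(N_v) - sigma_h2 * W @ W.T, K_phi))   # True, True
```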
Next, we may consider the case where \(N_{h}<N_{v}\). From Eq. (12) it is clear that the accuracy of reproducing the target distribution depends on the ranks of the matrices involved. We find
\[\text{rank}\left(WW^{T}\right)\leq\min\left(N_{v},N_{h}\right),\qquad\qquad \text{rank}\left({\cal K}\right)\leq N_{v}. \tag{16}\]
Only when the ranks are equal will the target distribution be reproducible; this is particularly relevant when choosing \(N_{h}\ll N_{v}\). Below we will consider in detail what happens if either of the two conditions found so far, i.e. Eq. (13) and \(\text{rank}\left(WW^{T}\right)=\text{rank}\left({\cal K}\right)\), is not valid.
## 3 Training RBM parameters
The exact solutions above are only useful when the target model is a known Gaussian model and \(N_{h}=N_{v}\). In general, the target distribution is not known and one has to learn from a finite data set. The training of the model is then done by maximising the log-likelihood function \(\mathcal{L}(\theta|\phi)\). The learnable parameters are collectively indicated as \(\theta=\{W,\eta,\mu^{2}\}\). Note that we will consider the case of unbroken symmetry and hence the bias is taken to be zero throughout, \(\eta_{a}=0\). We are hence concerned with determining the weight matrix \(W\) and the mass parameter \(\mu^{2}\).
The model distribution is given by Eq. (8), with \(J=0\). Given data consisting of \(N_{\rm conf}\) configurations, labelled as \(\phi^{(d)},d=1,\ldots,N_{\rm conf}\), the log-likelihood function of the model is written as
\[\mathcal{L}(\phi|\theta)=\frac{1}{N_{\rm conf}}\sum_{d=1}^{N_{\rm conf}}\log p _{\rm model}\left(\phi^{(d)};\theta\right)=-\frac{1}{N_{\rm conf}}\sum_{d=1}^{ N_{\rm conf}}\left(\frac{1}{2}\phi^{(d)T}K\phi^{(d)}+\ln Z\right). \tag{17}\]
This log-likelihood function can be optimised with gradient ascent algorithms, where the gradient is taken with respect to the coupling matrix \(W\) and the mass parameter \(\mu^{2}\). Explicitly,
\[\frac{\partial\mathcal{L}}{\partial W_{ia}} =\frac{1}{N_{\rm conf}}\sum_{d}\sum_{j}\sigma_{h}^{2}\phi_{i}^{(d)}W_{ja}\phi_{j}^{(d)}-\sum_{j}\sigma_{h}^{2}\left\langle\phi_{i}W_{ja}\phi_{j}\right\rangle_{\rm model}\] \[=\sigma_{h}^{2}\sum_{j}\left(\frac{1}{N_{\rm conf}}\sum_{d}\phi_{i}^{(d)}W_{ja}\phi_{j}^{(d)}-\left\langle\phi_{i}W_{ja}\phi_{j}\right\rangle_{\rm model}\right)\] \[=\sigma_{h}^{2}\sum_{j}\left(C_{ij}^{\rm target}-C_{ij}^{\rm model}\right)W_{ja}, \tag{18}\]
where the two-point correlation matrices for the data (i.e. the target) and the model are given respectively by
\[C_{ij}^{\rm target} =\frac{1}{N_{\rm conf}}\sum_{d=1}^{N_{\rm conf}}\phi_{i}^{(d)} \phi_{j}^{(d)}=\left\langle\phi_{i}\phi_{j}\right\rangle_{\rm target}\equiv K _{\phi ij}^{-1}, \tag{19}\] \[C_{ij}^{\rm model} =\left\langle\phi_{i}\phi_{j}\right\rangle_{\rm model}=K_{ij}^{-1}. \tag{20}\]
Similarly, for \(\mu^{2}\) one finds
\[\frac{\partial\mathcal{L}}{\partial\mu^{2}}=-\frac{1}{2}\sum_{i}\left(\langle \phi_{i}^{2}\rangle_{\rm target}-\langle\phi_{i}^{2}\rangle_{\rm model}\right). \tag{21}\]
Alternatively, we may consider the case where the target distribution \(p_{\rm target}(\phi)\) is known and the correlation matrix \(C_{ij}^{\rm target}\) of the target theory is obtainable. In that case, there is no need to use data but one can use the correlation function directly. It should be noted that in general the correlation matrix \(C_{ij}^{\rm target}\) is not directly accessible due to computational complexity, even if the analytical form of the target distribution is known.
If the target distribution is known, the same equations can also be derived by extremising the Kullback-Leibler divergence,
\[KL(p_{\rm target}||p_{\rm model})=\int D\phi\,p_{\rm target}(\phi)\log\frac{p_ {\rm target}(\phi)}{p_{\rm model}(\phi,\theta)}, \tag{22}\]
keeping in mind that only the model distribution depends on the learnable parameters \(\theta\). With the distribution given by Eq. (8) and the \(\theta\) dependence contained in the kernel \(K\) only (recall that \(J=0\)), extremising with respect to \(\theta\) then yields
\[\frac{\partial}{\partial\theta}KL(p_{\text{target}}||p_{\text{model}})=\frac{1} {2}\left\langle\phi^{T}\frac{\partial K}{\partial\theta}\phi\right\rangle_{ \text{target}}-\frac{1}{2}\left\langle\phi^{T}\frac{\partial K}{\partial \theta}\phi\right\rangle_{\text{model}}, \tag{23}\]
which yields the same equations for \(W\) and \(\mu^{2}\) as above, but with the opposite sign, as the KL divergence is minimised.
In actual applications, the gradients are used in a discretised update of the form
\[\theta_{n+1}=\theta_{n}+\eta_{n}\frac{\partial\mathcal{L}}{\partial\theta}, \tag{24}\]
where \(\eta_{n}\) is the, possibly time-dependent, learning rate. Details of the commonly used persistent contrastive divergence algorithm and time-dependent learning rate can be found in App. A.
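For orientation, a minimal sketch of one such update is given below. Since the model is Gaussian, the model correlator is computed here exactly as \(K^{-1}\); in practice it would be estimated with (persistent) contrastive divergence, see App. A. All parameter values are illustrative.

```python
import numpy as np

def training_step(W, mu2, C_target, sigma_h2=1.0, lr=1e-2):
    """One gradient-ascent update of Eqs. (18), (21) and (24).

    For the Gaussian RBM the model correlator is available exactly as K^{-1};
    with data, it would instead be estimated by (persistent) contrastive
    divergence, see App. A.
    """
    N_v = W.shape[0]
    K = mu2 * np.eye(N_v) - sigma_h2 * W @ W.T
    C_model = np.linalg.inv(K)                                   # Eq. (20)
    grad_W = sigma_h2 * (C_target - C_model) @ W                 # Eq. (18)
    grad_mu2 = -0.5 * (np.trace(C_target) - np.trace(C_model))   # Eq. (21)
    return W + lr * grad_W, mu2 + lr * grad_mu2                  # Eq. (24)

# illustrative usage: 1d scalar-field target with m^2 = 4 (see Sec. 5)
rng = np.random.default_rng(1)
N_v = 10
K_phi = 6.0 * np.eye(N_v) \
        - np.roll(np.eye(N_v), 1, axis=1) - np.roll(np.eye(N_v), -1, axis=1)
C_target = np.linalg.inv(K_phi)

W, mu2 = 0.1 * rng.standard_normal((N_v, N_v)), 12.0
for _ in range(50_000):
    W, mu2 = training_step(W, mu2, C_target)

# deviation of the learnt kernel from the target kernel; decreases during training
print(np.max(np.abs(mu2 * np.eye(N_v) - W @ W.T - K_phi)))
```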
## 4 Semi-analytical solution
### Singular value decomposition
Before solving the RBM numerically, we aim to gain analytical insight into the update equations using a singular value decomposition of the weight matrix in the continuous time limit [13].
The update equations for the weight matrix \(W\) and the mass term \(\mu^{2}\), in the continuous time limit, read (see Eqs. (18-21)),
\[\dot{W} =\sigma_{h}^{2}\left[K_{\phi}^{-1}-K^{-1}\right]W \tag{25}\] \[\dot{\mu}^{2} =-\frac{1}{2}\text{tr}\,K_{\phi}^{-1}+\frac{1}{2}\text{tr}\,K^{-1}, \tag{26}\]
with the two-point functions (or propagators)
\[K_{\phi ij}^{-1} =\langle\phi_{i}\phi_{j}\rangle_{\text{target}}, \text{tr}\,K_{\phi}^{-1} =\sum_{i=1}^{N_{v}}\langle\phi_{i}\phi_{i}\rangle_{\text{target}}, \tag{27}\] \[K_{ij}^{-1} =\langle\phi_{i}\phi_{j}\rangle_{\text{model}}, \text{tr}\,K^{-1} =\sum_{i=1}^{N_{v}}\langle\phi_{i}\phi_{i}\rangle_{\text{model}}. \tag{28}\]
Recall that \(\langle\phi_{i}\rangle=0\). The dot denotes the time derivative. We remind the reader that both \(K\) and \(K_{\phi}\) are symmetric \(N_{v}\times N_{v}\) matrices and that the weight matrix \(W\) is of size \(N_{v}\times N_{h}\). We assume \(N_{h}\leq N_{v}\). The RBM (model) kernel is
\[K=\mu^{2}\text{1}\hskip-2.845276pt\text{l}-\sigma_{h}^{2}WW^{T}, \tag{29}\]
where \(\sigma_{h}^{2}\) is the variance of the hidden nodes.
We use the singular value decomposition to write \(W\) as
\[W=U\,\Xi\,V^{T},\hskip 28.452756ptUU^{T}=\text{1}\hskip-2.845276pt\text{l}_{N_ {v}\times N_{v}},\hskip 28.452756ptVV^{T}=\text{1}\hskip-2.845276pt\text{l}_{N_ {h}\times N_{h}}, \tag{30}\]
where \(U\) is an orthogonal \(N_{v}\times N_{v}\) matrix, \(V\) is an orthogonal \(N_{h}\times N_{h}\) matrix, and \(\Xi\) is the rectangular \(N_{v}\times N_{h}\) matrix with the (ordered) singular values \(\xi_{a}\) (\(a=1,\ldots,N_{h}\)) on the diagonal. The RBM kernel then takes the form
\[K=\mu^{2}\text{1}\hskip-2.845276pt\text{l}-\sigma_{h}^{2}U\Xi\Xi^{T}U^{T}=U \left[\mu^{2}\text{1}\hskip-2.845276pt\text{l}-\sigma_{h}^{2}\Xi\Xi^{T}\right] U^{T}\equiv UD_{K}U^{T}, \tag{31}\]
with the diagonal matrix
\[D_{K}=\mathrm{diag}\big{(}\underbrace{\mu^{2}-\sigma_{h}^{2}\xi_{1}^{2},\mu^{2}- \sigma_{h}^{2}\xi_{2}^{2},\ldots,\mu^{2}-\sigma_{h}^{2}\xi_{N_{h}}^{2}}_{N_{h}},\underbrace{\mu^{2},\ldots,\mu^{2}}_{N_{v}-N_{h}}\big{)}. \tag{32}\]
Note that the existence of the model requires that \(\mu^{2}>\sigma_{h}^{2}\xi_{1}^{2}\), with \(\xi_{1}\) the largest singular value of \(W\). Eq. (32) demonstrates that only the first \(N_{h}\) eigenvalues can potentially be learnt, with the remaining \(N_{v}-N_{h}\) eigenvalues frozen at the higher scale \(\mu^{2}\).
The symmetric target kernel \(K_{\phi}\) and the corresponding two-point function \(K_{\phi}^{-1}\) can be diagonalised via an orthogonal transformation as
\[K_{\phi}=O_{\phi}D_{\phi}O_{\phi}^{T},\qquad\quad K_{\phi}^{-1}=O_{\phi}D_{ \phi}^{-1}O_{\phi}^{T},\qquad\quad O_{\phi}O_{\phi}^{T}=\leavevmode \hbox{\small 1\kern-3.8pt\normalsize 1}_{N_{v}\times N_{v}}, \tag{33}\]
where the eigenvalues of \(K_{\phi}\) are assumed to be positive again.
The RHS of Eq. (25) can now be written as
\[\sigma_{h}^{2}\left[K_{\phi}^{-1}-K^{-1}\right]W=U\sigma_{h}^{2}\left[U^{T}O_ {\phi}D_{\phi}^{-1}O_{\phi}^{T}U-D_{K}^{-1}\right]\Xi V^{T}. \tag{34}\]
The term within the brackets will be encountered frequently below and hence we honour it with a new symbol,
\[\Lambda\equiv U^{T}O_{\phi}D_{\phi}^{-1}O_{\phi}^{T}U-D_{K}^{-1}=\Lambda^{T}. \tag{35}\]
The evolution equation for \(W\) can then be compactly written as
\[\dot{W}=\sigma_{h}^{2}U\Lambda\Xi V^{T},\qquad\quad\dot{W}^{T}=\sigma_{h}^{2 }V\Xi^{T}\Lambda U^{T}. \tag{36}\]
We note that \(\Lambda\) drives the evolution in the learning process: it vanishes when the basis on the visible layer is aligned with the basis for the data (\(U\to O_{\phi}\)) and the eigenvalues, or widths of the Gaussians, are correctly determined (\(D_{K}\to D_{\phi}\)). One may note that \(\Lambda\) does not depend on \(V\), which acts on the hidden nodes, resulting in the degeneracy discussed in Sec. 2: any rotation of the hidden nodes leaves the solution on the visible layer invariant and the learning stops when \(\Lambda\to 0\), irrespective of what \(V\) is.
### Learning dynamics
Having defined the needed quantities, we are now in a position to determine the learning dynamics of \(W\) in detail, i.e. the evolution of \(U,V\), and the singular values \(\Xi\). We consider separately
\[WW^{T}=U\Xi\Xi^{T}U^{T},\qquad\qquad W^{T}W=V\Xi^{T}\Xi V^{T}. \tag{37}\]
Taking the derivative of the first product gives
\[\frac{d}{dt}\left(WW^{T}\right)=\dot{U}\,\Xi\Xi^{T}U^{T}+U\Xi\Xi^{T}\dot{U}^{ T}+U\frac{d}{dt}\left(\Xi\Xi^{T}\right)U^{T}. \tag{38}\]
On the other hand, Eq. (36) gives
\[\frac{d}{dt}WW^{T}=\dot{W}W^{T}+W\dot{W}^{T}=\sigma_{h}^{2}U\Lambda\Xi\Xi^{T}U ^{T}+\sigma_{h}^{2}U\Xi\Xi^{T}\Lambda U^{T}. \tag{39}\]
Conjugating both equations with \(U^{T}\) and \(U\) then yields
\[U^{T}\dot{U}\,\Xi\Xi^{T}+\Xi\Xi^{T}\dot{U}^{T}U+\frac{d}{dt}\left(\Xi\Xi^{T} \right)=\sigma_{h}^{2}\Lambda\Xi\Xi^{T}+\sigma_{h}^{2}\Xi\Xi^{T}\Lambda. \tag{40}\]
Since \(U^{T}\dot{U}=-\dot{U}^{T}U\) is skew-symmetric (due to \(U\) being orthogonal) and \(\Xi\Xi^{T}\) is diagonal, it is easy to see that
\[U^{T}\dot{U}\,\Xi\Xi^{T}+\Xi\Xi^{T}\dot{U}^{T}U=U^{T}\dot{U}\,\Xi\Xi^{T}-\Xi\Xi^ {T}U^{T}\dot{U} \tag{41}\]
is a symmetric matrix with zeroes on the diagonal. Eq. (40) then decomposes into one equation for the diagonal elements, determining the singular values, and one for the off-diagonal ones, determining \(U\), namely
\[\frac{d}{dt}\left(\Xi\Xi^{T}\right) =\sigma_{h}^{2}\Lambda_{d}\Xi\Xi^{T}+\sigma_{h}^{2}\Xi\Xi^{T} \Lambda_{d}=2\sigma_{h}^{2}\Lambda_{d}\Xi\Xi^{T}, \tag{42}\] \[U^{T}\dot{U}\,\Xi\Xi^{T}-\Xi\Xi^{T}U^{T}\dot{U} =\sigma_{h}^{2}\left(\Lambda-\Lambda_{d}\right)\Xi\Xi^{T}+\sigma_ {h}^{2}\Xi\Xi^{T}\left(\Lambda-\Lambda_{d}\right), \tag{43}\]
where
\[\Lambda_{d}=\mathrm{diag}\left(\Lambda\right). \tag{44}\]
Repeating the same analysis for \(W^{T}W\) gives nearly identical equations in the \(N_{h}\times N_{h}\) subspace, namely
\[\frac{d}{dt}\left(\Xi^{T}\Xi\right) =2\sigma_{h}^{2}\Xi^{T}\Lambda_{d}\Xi, \tag{45}\] \[V^{T}\dot{V}\,\Xi^{T}\Xi-\Xi^{T}\Xi V^{T}\dot{V} =2\sigma_{h}^{2}\Xi^{T}\left(\Lambda-\Lambda_{d}\right)\Xi. \tag{46}\]
Note that
\[\Xi\Xi^{T} =\mathrm{diag}(\xi_{1}^{2},\xi_{2}^{2},\ldots,\xi_{N_{h}}^{2},0, \ldots,0), \tag{47}\] \[\Xi^{T}\Xi =\mathrm{diag}(\xi_{1}^{2},\xi_{2}^{2},\ldots,\xi_{N_{h}}^{2}). \tag{48}\]
The equations for \(\Xi\Xi^{T}\) and \(\Xi^{T}\Xi\) yield identical equations for the \(N_{h}\) singular values.
The equation for \(\mu^{2}\) finally reads, in this notation,
\[\dot{\mu}^{2}=-\frac{1}{2}\mathrm{tr}\,\Lambda=-\frac{1}{2}\mathrm{tr}\, \Lambda_{d}. \tag{49}\]
Keeping \(\mu^{2}\) fixed, it is easy to see that \(\sigma_{h}^{2}\) can be absorbed in the time parameter (\(\tilde{t}=t\sigma_{h}^{2}\)) and the singular values, see Eq. (32); hence it does not add any freedom to the model. When \(\mu^{2}\) is learnt as well, its time evolution will depend on \(\sigma_{h}^{2}\), after rescaling time as \(t\to\tilde{t}\).
As noted, \(V\) does not appear in the driving term \(\Lambda\). Hence \(V\) simply follows the evolution, until \(\Lambda-\Lambda_{d}\to 0\), see Eq. (46). For square matrices, \(N_{h}=N_{v}\), this redundancy can be removed by choosing \(W\) to be symmetric (\(V=U\)) or by enforcing \(W\) to be of the lower (or upper) triangular form (Cholesky decomposition of \(WW^{T}\)), see Sec. 2.
### Simple examples
#### \(\boldsymbol{N_{v}=N_{h}=2}\)
The simple example of two visible and two hidden nodes can be worked out in detail. We will note a number of characteristics which remain relevant also for larger systems.
First we note that \(U,V\) and \(O_{\phi}\) are all \(2\times 2\) rotation matrices; we denote the angles as \(\theta_{U}\), \(\theta_{V}\) and \(\theta_{0}\) respectively. Then one notes that
\[U^{T}\dot{U}=\dot{\theta}_{U}\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right), \tag{50}\]
and the same for \(V^{T}\dot{V}\), with \(\dot{\theta}_{V}\). Finally, the combination \(O_{\phi}^{T}U\) is a rotation over an angle \(\Delta\theta=\theta_{U}-\theta_{0}\).
We denote the two eigenvalues of the target kernel \(K_{\phi}\) with \(\kappa_{1,2}\) and of the RBM kernel \(K\) with \(\lambda_{1,2}=\mu^{2}-\sigma_{h}^{2}\xi_{1,2}^{2}\). This yields the driving term,
\[\Lambda=\left(\begin{array}{cc}\frac{1}{\kappa_{1}}\cos^{2}\Delta\theta+ \frac{1}{\kappa_{2}}\sin^{2}\Delta\theta-\frac{1}{\lambda_{1}}&\left(\frac{1} {\kappa_{2}}-\frac{1}{\kappa_{1}}\right)\cos\Delta\theta\sin\Delta\theta\\ \left(\frac{1}{\kappa_{2}}-\frac{1}{\kappa_{1}}\right)\cos\Delta\theta\sin \Delta\theta&\frac{1}{\kappa_{2}}\cos^{2}\Delta\theta+\frac{1}{\kappa_{1}} \sin^{2}\Delta\theta-\frac{1}{\lambda_{2}}\end{array}\right), \tag{51}\]
Putting everything together then gives the following equations
\[\dot{\xi}_{1} =\sigma_{h}^{2}\left(\frac{1}{\kappa_{1}}\cos^{2}\Delta\theta+ \frac{1}{\kappa_{2}}\sin^{2}\Delta\theta-\frac{1}{\mu^{2}-\sigma_{h}^{2}\xi_{ 1}^{2}}\right)\xi_{1}, \tag{52}\] \[\dot{\xi}_{2} =\sigma_{h}^{2}\left(\frac{1}{\kappa_{2}}\cos^{2}\Delta\theta+ \frac{1}{\kappa_{1}}\sin^{2}\Delta\theta-\frac{1}{\mu^{2}-\sigma_{h}^{2}\xi_{ 2}^{2}}\right)\xi_{2},\] (53) \[\dot{\Delta\theta} =\sigma_{h}^{2}\frac{\xi_{1}^{2}+\xi_{2}^{2}}{\xi_{1}^{2}-\xi_{2 }^{2}}\rho,\] (54) \[\dot{\theta}_{V} =2\sigma_{h}^{2}\frac{\xi_{1}\xi_{2}}{\xi_{1}^{2}-\xi_{2}^{2}}\rho,\] (55) \[\dot{\mu}^{2} =\frac{1}{2}\left(\frac{1}{\mu^{2}-\sigma_{h}^{2}\xi_{1}^{2}}+ \frac{1}{\mu^{2}-\sigma_{h}^{2}\xi_{2}^{2}}-\frac{1}{\kappa_{1}}-\frac{1}{ \kappa_{2}}\right), \tag{56}\]
where
\[\rho=\left(\frac{1}{\kappa_{2}}-\frac{1}{\kappa_{1}}\right)\cos\Delta\theta \sin\Delta\theta. \tag{57}\]
These equations have several fixed points. The difference of angles is given by \(\Delta\theta=0,\pi/2\). Which of these is selected depends on which eigenvalue \(\kappa_{1,2}\) is smaller. Note that the SVD decomposition orders the singular values, \(\xi_{1}>\xi_{2}\). We consider first the case of fixed \(\mu^{2}\). The actual realisation depends on the ordering of \(\kappa_{1,2}\) and \(\mu^{2}\). We find
\[\mu^{2}>\kappa_{2}>\kappa_{1}: \Delta\theta=0, \mu^{2}-\sigma_{h}^{2}\xi_{1}^{2}=\kappa_{1}, \mu^{2}-\sigma_{h}^{2}\xi_{2}^{2}=\kappa_{2}, \tag{58}\] \[\mu^{2}>\kappa_{1}>\kappa_{2}: \Delta\theta=\pi/2, \mu^{2}-\sigma_{h}^{2}\xi_{1}^{2}=\kappa_{2}, \mu^{2}-\sigma_{h}^{2}\xi_{2}^{2}=\kappa_{1}. \tag{59}\]
This is illustrated in Fig. 2 (top row). In this case, both eigenvalues are learnt correctly. If \(\mu^{2}\) is smaller than an eigenvalue, it cannot be reproduced and is replaced by \(\mu^{2}\),
\[\kappa_{2}>\mu^{2}>\kappa_{1}: \Delta\theta=0, \mu^{2}-\sigma_{h}^{2}\xi_{1}^{2}=\kappa_{1}, \xi_{2}=0, \tag{60}\] \[\kappa_{1}>\mu^{2}>\kappa_{2}: \Delta\theta=\pi/2, \mu^{2}-\sigma_{h}^{2}\xi_{1}^{2}=\kappa_{2}, \xi_{2}=0, \tag{61}\]
see Fig. 2 (middle row). In this case, only the smallest eigenvalue is learnt, while the other one evolves to \(\mu^{2}\) (see also Eq. (32)).
In case \(\mu^{2}\) is smaller than all eigenvalues, \(\mu^{2}<\kappa_{1,2}\), the eigenmodes cannot be reproduced and are replaced by \(\mu^{2}\), with \(\xi_{1}=\xi_{2}=0\). Finally, we remark again that \(\theta_{V}\) simply evolves until \(\rho\to 0\), but it does not influence the learning of the other parameters.
The actual eigenvalues may not be known, and one may choose \(\mu^{2}\) to be too low, as in the second example above. This can be evaded by learning \(\mu^{2}\) itself, using Eq. (56). The system is now over-parameterised, with \(\xi_{1,2}\) and \(\mu^{2}\) being learnt to reproduce \(\kappa_{1,2}\). In this case one finds that the eigenvalues are reproduced, irrespective of the initial value of \(\mu^{2}\), see Fig. 2 (bottom row). Note that one of the singular values decreases since \(\mu^{2}\) itself increases towards the largest eigenvalue.
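The flow equations (52)-(54) are easily integrated numerically; the sketch below uses a simple Euler discretisation with a fixed \(\mu^{2}\) and illustrative values of \(\kappa_{1,2}\), and reproduces the fixed point of Eq. (58). The angle \(\theta_{V}\) and the \(\mu^{2}\) equation are omitted, as they do not affect the remaining parameters in this case.

```python
import numpy as np

kappa1, kappa2 = 2.0, 5.0      # illustrative target eigenvalues
mu2, sigma_h2 = 9.0, 1.0       # fixed RBM mass parameter, mu2 > kappa_{1,2}
xi1, xi2 = 0.5, 0.3            # initial singular values, xi1 > xi2
dtheta = 0.7                   # initial angle mismatch Delta-theta
dt = 1e-3                      # Euler time step

for _ in range(200_000):
    lam1 = mu2 - sigma_h2 * xi1**2
    lam2 = mu2 - sigma_h2 * xi2**2
    rho = (1/kappa2 - 1/kappa1) * np.cos(dtheta) * np.sin(dtheta)          # Eq. (57)
    dxi1 = sigma_h2 * (np.cos(dtheta)**2/kappa1 + np.sin(dtheta)**2/kappa2 - 1/lam1) * xi1
    dxi2 = sigma_h2 * (np.cos(dtheta)**2/kappa2 + np.sin(dtheta)**2/kappa1 - 1/lam2) * xi2
    ddtheta = sigma_h2 * (xi1**2 + xi2**2) / (xi1**2 - xi2**2) * rho
    xi1, xi2, dtheta = xi1 + dt*dxi1, xi2 + dt*dxi2, dtheta + dt*ddtheta

# fixed point of Eq. (58): dtheta -> 0, lambda_1 -> kappa_1, lambda_2 -> kappa_2
print(dtheta, mu2 - sigma_h2 * xi1**2, mu2 - sigma_h2 * xi2**2)
```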
#### Approach to the fixed point
To understand the evolution towards the fixed point, a simple linearisation suffices. We consider the case of fixed \(\mu^{2}\). Taking concretely case (58) above, we expand about the fixed point and write
\[\sigma_{h}^{2}\xi_{i}^{2}=\mu^{2}-\kappa_{i}+x_{i},\hskip 28.452756pt(\Delta \theta)^{2}=0+y. \tag{62}\]
Expanding Eqs. (52, 53, 54) in \(x_{i}\) and \(y\) and absorbing \(\sigma_{h}^{2}\) in the time parameter (denoting the derivative with respect to \(\tilde{t}=\sigma_{h}^{2}t\) with a prime) then yields the linearised equations
\[x_{1}^{\prime}= -2\left(\mu^{2}-\kappa_{1}\right)\left[\frac{x_{1}}{\kappa_{1}^{ 2}}+\left(\frac{1}{\kappa_{1}}-\frac{1}{\kappa_{2}}\right)y\right], \tag{63}\] \[x_{2}^{\prime}= -2\left(\mu^{2}-\kappa_{2}\right)\left[\frac{x_{2}}{\kappa_{2}^{ 2}}+\left(\frac{1}{\kappa_{2}}-\frac{1}{\kappa_{1}}\right)y\right],\] (64) \[y^{\prime}= -2\frac{2\mu^{2}-\kappa_{1}-\kappa_{2}}{\kappa_{1}\kappa_{2}}y. \tag{65}\]
We hence find exponential convergence, controlled by the relaxation rates
\[\gamma_{i}=\frac{\mu^{2}-\kappa_{i}}{\kappa_{i}^{2}},\hskip 42.679134pt\gamma_{ \Delta\theta}=\frac{\kappa_{1}}{\kappa_{2}}\gamma_{1}+\frac{\kappa_{2}}{ \kappa_{1}}\gamma_{2}. \tag{66}\]
The angle \(\Delta\theta(\tilde{t})\) relaxes with rate \(\gamma_{\Delta\theta}\), whereas the singular values \(\xi_{i}(\tilde{t})\) relax with rate \(\min(\gamma_{i},\gamma_{\Delta\theta})\). Interestingly, the relaxation rates are set by the difference between the RBM mass parameter \(\mu^{2}\) and the eigenvalues \(\kappa_{i}\) in the spectrum. Irrespective of the actual values of \(\mu^{2}\) and the eigenvalues \(\kappa_{i}\), the mode corresponding to the higher eigenvalue relaxes the slowest. We hence conclude:
* infrared modes, i.e. those corresponding to the smallest eigenvalues will converge faster, this can indeed be observed in Fig. 2 (top row);
* increasing the value of \(\mu^{2}\) will lead to more rapid convergence for all modes. This will be explored below in more realistic cases.
#### \(\boldsymbol{N_{v}=2,N_{h}=1}\)
The case of one hidden node serves to demonstrate what happens when \(N_{h}<N_{v}\). It is particularly simple as \(V\) is replaced by \(v=1\) and we only need to consider one angle and one singular value, determined by the following equations
\[\dot{\xi}_{1} =\sigma_{h}^{2}\left(\frac{1}{\kappa_{1}}\cos^{2}\Delta\theta+ \frac{1}{\kappa_{2}}\sin^{2}\Delta\theta-\frac{1}{\mu^{2}-\sigma_{h}^{2}\xi_{1 }^{2}}\right)\xi_{1}, \tag{67}\] \[\dot{\Delta\theta} =\sigma_{h}^{2}\rho,\] (68) \[\dot{\mu}^{2} =\frac{1}{2}\left(\frac{1}{\mu^{2}-\sigma_{h}^{2}\xi_{1}^{2}}+ \frac{1}{\mu^{2}}-\frac{1}{\kappa_{1}}-\frac{1}{\kappa_{2}}\right), \tag{69}\]
where
\[\rho=\tilde{\rho}\cos\Delta\theta\sin\Delta\theta,\qquad\quad\tilde{\rho}= \left(\frac{1}{\kappa_{2}}-\frac{1}{\kappa_{1}}\right) \tag{70}\]
The equation for the angle is now decoupled and can be solved, as
\[\tan\left[\Delta\theta(t)\right]=\tan\left[\Delta\theta(0)\right]e^{\sigma_{h }^{2}\tilde{\rho}t}, \tag{71}\]
such that
\[\kappa_{2}>\kappa_{1} \Leftrightarrow \tilde{\rho}<0 \Leftrightarrow \lim_{t\rightarrow\infty}\Delta\theta(t)=0, \tag{72}\] \[\kappa_{2}<\kappa_{1} \Leftrightarrow \tilde{\rho}>0 \Leftrightarrow \lim_{t\rightarrow\infty}\Delta\theta(t)=\frac{\pi}{2}. \tag{73}\]
Using this in Eq. (67) confirms that the smallest eigenvalue of \(K_{\phi}\) is reproduced (for constant \(\mu^{2}\)). If \(\mu^{2}\) is learnt as well, Eq. (69) ensures it becomes equal to the largest of the two eigenvalues.
To summarise, we note the following observations: if either the number of hidden nodes or the mass parameter \(\mu^{2}\) is chosen too small, the infrared part of the spectrum (lowest eigenvalue) is reproduced, while the ultraviolet part (highest eigenvalue) evolves to \(\mu^{2}\); making \(\mu^{2}\) a learnable parameter yields one more degree of freedom to correctly reproduce the next eigenvalue; infrared modes are learnt quicker than ultraviolet modes. These observations for the simple case considered here remain relevant for more interesting systems, as we will demonstrate now.
## 5 Learning Gaussian distributions
We continue with the case for which the target distribution is known and Gaussian, namely free scalar fields discretised on a lattice in one and two dimensions. The continuum action reads,
in \(n\) Euclidean dimensions,
\[S(\phi)=\int d^{n}x\,\frac{1}{2}\left(\partial_{\mu}\phi\partial^{\mu}\phi+m^{2} \phi^{2}\right). \tag{74}\]
The simplest lattice-discretised equivalent is, on a one-dimensional lattice with \(N_{v}\) nodes and with periodic boundary conditions,
\[S(\phi)=\frac{1}{2}\sum_{i,j=1}^{N_{v}}\phi_{i}K^{\phi}_{ij}\phi_{j},\qquad \qquad K^{\phi}_{ij}=(2+m^{2})\delta_{ij}-\delta_{i,j+1}-\delta_{i,j-1}. \tag{75}\]
We use 'lattice units', \(a=1\), throughout. The spectrum of the target kernel \(K^{\phi}\) is easy to compute analytically and reads
\[\kappa_{k}=m^{2}+p_{\text{lat},k}^{2}=m^{2}+2-2\cos\left(\frac{2\pi k}{N_{v}} \right),\qquad\qquad-\frac{N_{v}}{2}<k\leq\frac{N_{v}}{2}. \tag{76}\]
Each eigenvalue is doubly degenerate, except the minimum (\(k=0,\kappa_{\text{min}}=m^{2}\)) and the maximal (\(k=N_{v}/2,\kappa_{\text{max}}=m^{2}+4\)) ones. Referring back to Sec. 2, the exact spectrum can only be learnt when \(N_{h}=N_{v}\) and when the RBM mass parameter
\[\mu^{2}>\kappa_{\text{max}}=m^{2}+4. \tag{77}\]
Since the target theory is known, we can train the model directly from the correlation matrix of the target theory without the need for pre-generated training data. Then each term of the gradient is estimated by persistent contrastive divergence (PCD) to train the RBM, see App. A for details. The scalar field mass parameter is chosen as \(m^{2}=4\) and the variance on the hidden layer equals \(\sigma_{h}^{2}=1\) throughout.
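As a quick consistency check, the lattice kernel of Eq. (75) and its spectrum (76) can be constructed in a few lines; the sketch below uses the parameter values quoted above.

```python
import numpy as np

N_v, m2 = 10, 4.0

# target kernel of Eq. (75), with periodic boundary conditions
K_phi = (2.0 + m2) * np.eye(N_v) \
        - np.roll(np.eye(N_v), 1, axis=1) - np.roll(np.eye(N_v), -1, axis=1)

# exact spectrum of Eq. (76); cos is even, so k = 0, ..., N_v - 1 gives the same set
k = np.arange(N_v)
kappa = m2 + 2.0 - 2.0 * np.cos(2.0 * np.pi * k / N_v)

print(np.allclose(np.sort(np.linalg.eigvalsh(K_phi)), np.sort(kappa)))   # True
```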
### Initialisation with an exact solution
We start with the case of a constant RBM mass parameter \(\mu^{2}=9>\kappa_{\text{max}}=8\), with \(N_{v}=N_{h}=10\). To test the numerical code, we may initialise the weight matrix \(W\) according to one of the exact solutions found in Sec. 2: the Cholesky (lower triangular) solution and the symmetric solution. The results are shown in Figs. 3 and 4. Here and throughout we denote the exact eigenvalues of the target distribution with \(\kappa_{\alpha}\) (\(\alpha=1,\ldots,N_{v}\)) and the eigenvalues of the model kernel \(K\) with \(\lambda_{\alpha}=\mu^{2}-\sigma_{h}^{2}\xi_{\alpha}^{2}\). We will refer to these as the RBM eigenvalues. The latter depend on the training stage, indicated by epochs, see App. A. As can be seen in Figs. 3 and 4 (left), the RBM eigenvalues are correctly initialised for both choices and fluctuate around the correct values during training. To indicate the size of the fluctuations, we do the following. In the Cholesky case, we consider separately the \(L_{2}\) norm of the lower triangular elements, of the upper triangular elements (which are initialised at zero) and of the elements on the diagonal. We then standard normalise these to compare the amplitudes of the fluctuations, see Fig. 3 (right). We observe that the norm of each part fluctuates around its average value during training, whose size is set by the initial value, demonstrating the stability of the PCD updates.
For the symmetric initialisation, we show the \(L_{2}\) norms of the symmetric and asymmetric parts, \(W_{\text{sym}}=(W+W^{T})/2,W_{\text{asym}}=(W-W^{T})/2\). Since the initial coupling matrix \(W\) is symmetric, we expect the norm of the asymmetric part to remain significantly smaller during training. This can indeed be seen in Fig. 4 (right), where we show the evolution after standard
normalisation. The norm of the symmetric part of the coupling matrix is six orders of magnitude larger than that of the asymmetric part. As with the Cholesky initialisation, we observe that the overall structure of the coupling matrix is approximately preserved. Note there is no reason for it to be _exactly_ preserved, as this is neither imposed nor required.
### Initialisation with a random coupling matrix
In practical applications, the coupling matrix \(W\) is not initialised at an exact solution, but with random entries, drawn e.g. from a Gaussian distribution. In Fig. 5 we show the results obtained with elements of \(W\) sampled from a normal distribution \(\mathcal{N}(0,0.1)\). Other parameters are as above with, in particular, \(N_{h}=N_{v}\) and \(\mu^{2}>\kappa_{\max}\); hence there are no obstructions to learning the target distribution exactly. This can indeed be seen in Fig. 5, where both the eigenvalues (left) and the action density (right) are seen to match. For the latter, configurations are generated using the trained RBM; the same number of Monte Carlo (Metropolis) generated configurations are shown, using binning to remove auto-correlations. The analytical result follows from equipartition.

Figure 3: Cholesky initialisation. Left: Evolution of RBM eigenvalues \(\lambda_{\alpha}\) during training. Note that adjacent eigenvalues are coloured alternately. Exact eigenvalues \(\kappa_{\alpha}\) are shown with horizontal dashed lines and the RBM mass parameter \(\mu^{2}\) with the horizontal full line. After the Cholesky initialisation, the RBM eigenvalues fluctuate around the correct values. Right: The \(L_{2}\) norm of each part of the coupling matrix, \(D\)iagonal, \(U\)pper triangular, and \(L\)ower triangular. Values are standardised, with \(\overline{x}\) (\(\sigma\)) the mean value (standard deviation) along the training interval. Each part fluctuates around its average value.

Figure 4: Left: As in Fig. 3, for the symmetric initialisation. Right: Standardised \(L_{2}\) norm of symmetric and asymmetric parts of the coupling matrix. The latter remains small during updates.
Since the elements of \(W\) are initially relatively small, the corresponding singular values \(\xi_{\alpha}\) are small as well and the RBM eigenvalues \(\lambda_{\alpha}=\mu^{2}-\sigma_{h}^{2}\xi_{\alpha}^{2}\) are close to \(\mu^{2}\) initially. They quickly evolve to the target values \(\kappa_{\alpha}\). The order in which the modes are learnt (or thermalised) can be understood easily. Referring back to Sec. 4, we consider Eq. (42) for the singular values and Eq. (35) for the driving term. Assuming we are on the correct eigenbasis, the latter reduces to
\[\Lambda=\Lambda_{d}=D_{\phi}^{-1}-D_{K}^{-1}=\text{diag}\left(1/\kappa_{\alpha }-1/\lambda_{\alpha}\right),\qquad\qquad\lambda_{\alpha}=\mu^{2}-\sigma_{h}^{ 2}\xi_{\alpha}^{2}. \tag{78}\]
Eq. (42) then becomes [13]
\[\frac{d}{dt}\xi_{\alpha}^{2}=2\sigma_{h}^{2}\left(\frac{1}{\kappa_{\alpha}}- \frac{1}{\mu^{2}-\sigma_{h}^{2}\xi_{\alpha}^{2}}\right)\xi_{\alpha}^{2}. \tag{79}\]
Note that this equation was encountered before (in a general basis) for \(N_{v}=N_{h}=2\), see Eqs. (52, 53). During the initial stages, the term within the brackets is positive and largest for the smallest eigenvalues. Hence the corresponding singular values evolve quickest. At late times, one may linearise around the fixed point. In Sec. 4 we demonstrated for \(N_{v}=2\) nodes that the convergence in the linearised regime is exponentially fast and that the rate of convergence is set by \(\gamma_{\alpha}=(\mu^{2}-\kappa_{\alpha})/\kappa_{\alpha}^{2}\). Hence the most infrared modes equilibrate fastest and the ultraviolet modes slowest. These aspects are demonstrated in Fig. 6, where we have shown the evolution of both the singular values (left) and the eigenvalues (right) during the initial stages of the training (the largest singular values correspond to the smallest eigenvalues). We note the similarity with the case of \(N_{v}=2\) nodes in Sec. 4, see in particular Fig. 2 (top row).

Figure 5: Left: Evolution of RBM eigenvalues \(\lambda_{\alpha}\) during training, starting from a random coupling matrix \(W\). Presentation as in Fig. 3 (left). Right: Histogram of the action density from Monte Carlo generated and RBM generated samples.

Figure 6: Convergence of the singular values \(\xi_{\alpha}\) (left) and the eigenvalues \(\lambda_{\alpha}\) (right) for the system of Fig. 5. Infrared modes are learnt the quickest.
So far we have kept the RBM mass parameter \(\mu^{2}\) fixed. However, it can also be treated as a learnable parameter using Eq. (21). This is particularly useful if details of the target spectrum are not known. It then provides an additional degree of freedom. In Fig. 7, the RBM mass parameter is initialised below \(\kappa_{\text{max}}\). It subsequently increases to match the largest eigenvalue, see Fig. 7 (left). Since the system is over-parametrised, one of the singular values remains at the initial value, see Fig. 7 (right). Note the different timescale for equilibration compared to the case with a constant \(\mu^{2}\), as it takes time for \(\mu^{2}\) to find the correct value.
Up to now, we considered a scalar field in one dimension only. The generalisation to higher dimensions is interesting since the RBM does not know about the dimensionality a priori, with the \(N_{v}\) visible nodes only connecting to the hidden nodes. We consider here two dimensions, using an \(N_{x}\times N_{y}\) lattice. The eigenvalues of the kinetic operator are
\[\kappa_{\mathbf{k}}=m^{2}+p_{\text{lat},\mathbf{k}}^{2}=m^{2}+4-2\cos\left( \frac{2\pi k_{x}}{N_{x}}\right)-2\cos\left(\frac{2\pi k_{y}}{N_{y}}\right), \qquad-\frac{N_{x,y}}{2}<k_{x,y}\leq\frac{N_{x,y}}{2}. \tag{80}\]
In this case, there is a larger degeneracy of eigenvalues. The RBM has \(N_{v}=N_{x}\times N_{y}\) visible nodes. The dimensionality has to be learnt and encoded in the coupling matrix \(W\). The (target) kernel and two-point functions are \((N_{x}\times N_{y})\times(N_{x}\times N_{y})\)-dimensional tensors. This two-dimensional structure needs to be flattened in a one-dimensional representation preserving the eigenvalues of the original two-dimensional theory. This can be achieved using the form
\[K_{ij}^{\phi,2d}=(m^{2}+4)\delta_{ij}-(\delta_{i,j+1}+\delta_{i,j-1}+\delta_{i,j+N_{x}}+\delta_{i,j-N_{x}}), \tag{81}\]
where \(1\leq i,j\leq N_{v}\) and \(1\leq n_{x,y}\leq N_{x,y}\), with the identification \(i,j=n_{x}+N_{x}n_{y}\). The periodic boundary conditions are imposed before flattening and the flattened kernel yields the correct eigenvalues (80). The Kronecker deltas above are understood to reflect this periodicity. Fig. 8 shows an example of a flattened \(4\times 4\) scalar field kernel with \(m^{2}=4\). The four nearest neighbours can be found at \(n_{x}\pm 1\) and \(n_{y}\pm 1\) which are mapped to \(i,j\) using the relation above.
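The flattening can be implemented with modular index arithmetic; a minimal sketch (for the illustrative \(4\times 4\) lattice) that also verifies the spectrum of Eq. (80) is given below.

```python
import numpy as np

Nx, Ny, m2 = 4, 4, 4.0
N_v = Nx * Ny

def site(nx, ny):
    """Map 2d lattice coordinates (periodic) to a flattened index."""
    return (nx % Nx) + Nx * (ny % Ny)

# flattened kernel of Eq. (81)
K_phi = (m2 + 4.0) * np.eye(N_v)
for nx in range(Nx):
    for ny in range(Ny):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # four nearest neighbours
            K_phi[site(nx, ny), site(nx + dx, ny + dy)] -= 1.0

# exact spectrum of Eq. (80)
kx, ky = np.meshgrid(np.arange(Nx), np.arange(Ny))
kappa = m2 + 4.0 - 2.0*np.cos(2*np.pi*kx/Nx) - 2.0*np.cos(2*np.pi*ky/Ny)

print(np.allclose(np.sort(np.linalg.eigvalsh(K_phi)), np.sort(kappa.ravel())))   # True
```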
In Fig. 9 (left), we show the evolution of the RBM eigenvalues. The RBM mass parameter is \(\mu^{2}=16>\kappa_{\text{max}}=12\). There should be four degenerate eigenvalues at 6 and 10, and six degenerate ones at 8. Yet it appears the eigenvalues only lie within a band close to the expected value. This is due to the fact that to obtain these results we have used a fixed learning rate (time step), which prevents the system from reaching high precision. This can be remedied by introducing an epoch-dependent learning rate, as explored in App. A. We multiply the learning rate by a factor close to one, \(r=0.99\), after a given number of epochs, \(N_{\text{epoch}}^{\text{rate}}=128\). The virtue of having a diminished learning rate in the later stages is that it allows the model to be finely trained, with smaller statistical fluctuations. The result is shown in Fig. 9 (right), where we indeed observe precise agreement with the target spectrum.

Figure 7: Left: Evolution of the RBM eigenvalues and mass parameter \(\mu^{2}\), with the latter initialised below \(\kappa_{\text{max}}\). Right: Evolution of the singular values. Since the system is over-parametrised, one of the singular values remains at the initial value.
### Ultraviolet regularisation by the RBM mass parameter
Up to now, we have considered the ideal "architecture", namely \(N_{h}=N_{v}\) and \(\mu^{2}>\kappa_{\text{max}}\), for which Gaussian distributions can be learnt exactly, as we have demonstrated. In practice, one often chooses \(N_{h}<N_{v}\) and the maximum eigenvalue may not be known. Here we determine what this implies.
We start with the case where \(N_{h}=N_{v}\), but with \(\mu^{2}\) fixed and less than \(\kappa_{\text{max}}\). We refer to Eq. (79) for the evolution of the singular values in the eigenbasis. Take \(\mu^{2}<\kappa_{\alpha}\). In this case, the term inside the brackets is always negative and the only solution is \(\xi_{\alpha}=0\). The corresponding eigenvalue is then \(\lambda_{\alpha}=\mu^{2}\). When \(\mu^{2}>\kappa_{\alpha}\), the solution is given by the fixed point, \(\sigma_{h}^{2}\xi_{\alpha}^{2}=\mu^{2}-\kappa_{\alpha}\), and \(\lambda_{\alpha}=\mu^{2}-\sigma_{h}^{2}\xi_{\alpha}^{2}\). We hence conclude that the infrared part of the spectrum, with \(\kappa_{\alpha}<\mu^{2}\), can be learnt, whereas the ultraviolet part, with \(\kappa_{\alpha}>\mu^{2}\), cannot be learnt. Instead, the RBM eigenvalues take the value of the cutoff, \(\mu^{2}\) [14].

Figure 8: Flattened kernel for a two-dimensional scalar field theory on a lattice with \(N_{x}\times N_{y}=4\times 4\) sites. Each site has 4 nearest neighbours.

Figure 9: Evolution of the eigenvalues in the two-dimensional case during training, with constant \(\mu^{2}=16\) and \(N_{v}=N_{h}=16\), using a fixed (left) and a diminishing (right) learning rate.
This is demonstrated in Fig. 10 for a one-dimensional scalar field theory with \(N_{v}=N_{h}=10\) nodes. As the condition for exact training is violated, the RBM model can no longer faithfully reproduce the target data and distribution. The impact of this depends on the importance of the ultraviolet modes, as we will see below for the MNIST data set.
### Ultraviolet regularisation by the number of hidden nodes
Next, we consider the case with \(N_{h}<N_{v}\). This is straightforward, as there are only \(N_{h}\) singular values, leading to the RBM eigenvalues
\[\lambda_{\alpha}=\begin{cases}\mu^{2}-\sigma_{h}^{2}\xi_{\alpha}^{2}&\quad 1 \leq\alpha\leq N_{h},\\ \mu^{2}&\quad N_{h}<\alpha\leq N_{v},\end{cases} \tag{82}\]
see e.g. Eq. (32). Again we note that the infrared part of the spectrum can be reproduced, whereas the ultraviolet part is fixed at \(\mu^{2}\), irrespective of the actual value of the target eigenvalue.
This is shown in Fig. 11 for the one-dimensional case with \(N_{v}=10\) and \(N_{h}=9,8,7,6\). Note that all eigenvalues, except the minimal and maximal ones, are doubly degenerate. Hence in the case of \(N_{h}=8\) and \(6\), one of the degenerate eigenvalues remains and one is removed.
Finally, in Fig. 12 we give two examples in the two-dimensional scalar theory, using \(\mu^{2}=9<\kappa_{\text{max}}\) on the left and \(N_{h}=8<N_{v}=16\) on the right.
Figure 10: Regularisation by RBM mass parameter \(\mu^{2}\): Evolution of the eigenvalues in the one-dimensional scalar field theory. Only the infrared part of the spectrum is reproduced.
Figure 11: As above, but with regularisation by the number of hidden nodes \(N_{h}\).
Figure 12: Regularisation by the RBM mass parameter \(\mu^{2}=9\) (left) and by the number of hidden modes \(N_{h}=8\) (right) in the two-dimensional scalar field theory with \(N_{x}\times N_{y}=4\times 4\).
## 6 MNIST data set
It is important to ask whether the considerations above are also relevant for realistic data sets, commonly used in ML. We consider the MNIST data set [15], consisting of 60,000 \(28\times 28\) images of digits. Hence \(N_{v}=784\), substantially larger than what we have considered so far.
Unlike in the case of scalar fields, the probability distribution function is not known. However, we may still obtain the correlation matrix \(\langle\phi_{i}\phi_{j}\rangle\) by summing over the 60,000 realisations. The MNIST (target) kernel is then given by its inverse,
\[K^{\rm MNIST}=\langle\phi_{i}\phi_{j}\rangle_{\rm MNIST}^{-1}\,. \tag{83}\]
The eigenvalues of the correlation matrix are the inverse of the eigenvalues of the kernels discussed so far and we hence denote them as \(1/\kappa_{\alpha}\). The 784 eigenvalues are shown in Fig. 13. Many eigenvalues are close to zero. In the language of the previous sections, these correspond to the ultraviolet part of the spectrum of the quadratic kernel and hence the MNIST data set can be said to be ultraviolet divergent. The values of the ten largest eigenvalues of the correlation matrix are listed explicitly on the right. These correspond to the infrared part of the spectrum of the quadratic kernel. Since these are finite, the MNIST data set is infrared safe.
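For completeness, a sketch of how such a spectrum can be extracted is given below; it assumes the images are available as a flattened array of shape \((N_{\rm conf},784)\) (the loader is left unspecified, and the random stand-in data in the usage example is purely illustrative).

```python
import numpy as np

def correlation_spectrum(images):
    """Eigenvalues (descending) of the data-averaged correlation matrix, Eq. (83).

    `images` is assumed to be an array of shape (N_conf, 784), e.g. the 60,000
    flattened MNIST training images; how they are loaded is left to the reader.
    """
    phi = images.reshape(len(images), -1).astype(float)
    C = phi.T @ phi / len(phi)           # <phi_i phi_j>, averaged over the data
    return np.linalg.eigvalsh(C)[::-1]   # these are the 1/kappa_alpha

# illustrative usage with random stand-in data of the right shape
rng = np.random.default_rng(0)
stand_in = rng.random((1000, 784))
print(correlation_spectrum(stand_in)[:10])   # ten largest eigenvalues
```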
We will now train the scalar field RBM on the MNIST data set, starting with \(N_{h}=N_{v}\) and a fixed RBM mass parameter \(\mu^{2}=100\). The result is shown in Fig. 14 (left). As before, the horizontal dashed black lines are the target eigenvalues, obtained from the MNIST correlation matrix. The blue lines are the RBM eigenvalues. The initial values are close to \(\mu^{2}\) and hence they become smaller during the learning stage. As above, the infrared part of the spectrum is learnt quickest. This is further illustrated in Fig. 14 (right), where the evolution during the final 60,000 epochs is shown (out of one million). The smallest eigenvalues agree with the target values but the larger ones have essentially stopped learning before reaching the correct value, due to the reduced learning rate. We note that the RBM mass parameter \(\mu^{2}=100\) regulates the number of modes here. In fact, there are 289 modes below the cutoff set by \(\mu^{2}\). Hence the number of hidden nodes, \(N_{h}=784\), can be reduced without a loss of quality. We come back to this below.

Figure 13: Eigenvalues of the correlation matrix \(\langle\phi_{i}\phi_{j}\rangle\) of the MNIST data set. Note that many eigenvalues are close to zero. The values of the ten largest eigenvalues are indicated.
Without knowledge of the target spectrum, the (constant) value for \(\mu^{2}\) may be chosen to be on the small side; as is obvious in Fig. 14 (left), there are many eigenvalues above \(\mu^{2}=100\). This can be remedied by promoting \(\mu^{2}\) to a learnable parameter. This is demonstrated in Fig. 15, where \(\mu^{2}\) increases such that the target spectrum can be better learnt. The learning dynamics employs a diminishing learning rate, see App. A, which slows down the increase of \(\mu^{2}\) but also hinders the learning of the spectrum beyond the infrared modes. As stated, larger eigenvalues give smaller contributions to the update equations, leading to slower learning.
As in the scalar field case, we can also regularise the MNIST data by choosing \(N_{h}<N_{v}\). In this case, the number of modes that can be learnt depends on the number of modes in the spectrum with an eigenvalue less than \(\mu^{2}\), and the number of hidden nodes. In Fig. 16 we show the quality of regenerated images after one pass forward and backwards through the trained RBM. Using the fixed RBM mass parameter \(\mu^{2}=100\) limits the maximal number of modes to be included to \(N_{\text{modes}}^{\text{max}}=289\), the number of modes with an eigenvalue less than \(\mu^{2}\). We observe that one needs at least around 64 hidden nodes to obtain an acceptable generation, which is considerably smaller than the maximal possible number. This illustrates that the ultraviolet part of the spectrum can be ignored.

Figure 14: Left: Evolution of the eigenvalues for the MNIST data set with fixed RBM mass parameter \(\mu^{2}=100\), and \(N_{v}=N_{h}=784\). Right: Evolution during the last few training epochs. The lowest eigenvalues have already matched their target values but the higher modes are still being trained, albeit at a very small learning rate.

Figure 15: As above, but with a learnable RBM mass parameter \(\mu^{2}\).
To give a more quantitative measure of the quality of regeneration, we have computed the data-averaged KL divergence for the trained model,
\[KL(p_{\text{data}}||p_{\text{model}})= -\frac{1}{N_{\text{conf}}}\sum_{d=1}^{N_{\text{conf}}}\log p_{ \text{model}}(\phi^{(d)},\theta^{*})+\text{constant}\] \[= \frac{1}{N_{\text{conf}}}\sum_{d=1}^{N_{\text{conf}}}\frac{1}{2} {\phi^{(d)}}^{T}K\phi^{(d)}+\log Z_{\text{model}}+\text{constant}, \tag{84}\]
where the 'constant' term is independent of the model distribution. The result is shown in Fig. 17. We indeed observe that the KL divergence between the target distribution and the model distribution starts to increase as more modes are excluded. Adding modes beyond the cutoff imposed by the choice of \(\mu^{2}\) does not increase the quality, as expected. As concluded 'by eye' above, around 64-100 hidden nodes are required for a reasonable quality of regeneration.
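Since \(\log Z_{\text{model}}\) is known in closed form for the Gaussian model, Eq. (84) can be evaluated directly, up to the model-independent constant; a minimal sketch:

```python
import numpy as np

def kl_to_model(phi_data, K):
    """Data-averaged KL divergence of Eq. (84), up to the model-independent constant.

    phi_data has shape (N_conf, N_v); K is the (positive definite) model kernel,
    for which log Z_model = (N_v/2) log(2 pi) - (1/2) log det K.
    """
    N_v = K.shape[0]
    quad = 0.5 * np.einsum('di,ij,dj->d', phi_data, K, phi_data).mean()
    _, logdet = np.linalg.slogdet(K)
    return quad + 0.5 * N_v * np.log(2.0 * np.pi) - 0.5 * logdet

# illustrative check: data drawn from the model itself
rng = np.random.default_rng(0)
N_v = 5
K = 4.0 * np.eye(N_v)
phi = rng.multivariate_normal(np.zeros(N_v), np.linalg.inv(K), size=10_000)
print(kl_to_model(phi, K))         # matched kernel
print(kl_to_model(phi, 2.0 * K))   # mismatched kernel gives a larger value
```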
## 7 Interactions
Strictly speaking, the Gaussian-Gaussian RBM can only be exact if the target distribution is Gaussian as well. To go beyond this, one needs to introduce interactions. There are (at least) two ways of doing so. Motivated by the notion of QFT-ML [8], one may add local interactions via potentials defined on nodes, see Eqs. (5, 6). A simple choice would be to add a \(\lambda\phi^{4}\) term to each node on the visible layer since such systems are well understood and allow one to study e.g. spontaneous symmetry breaking in the context of the learning process. Of course, sampling the visible layer then requires a more costly sampling method than for the Gaussian case.
Figure 16: Quality of regenerated images with different numbers of hidden nodes. As the number of hidden nodes decreases, the regeneration quality decreases.

One may also change the nature of the hidden nodes from continuous to discrete, taking e.g. \(h_{a}=\pm 1\). This leads to the Gaussian-Bernoulli RBM (see e.g. Ref. [13]), with the distribution
\[p(\phi,h)=\frac{1}{Z}\exp(-S(\phi,h)),\qquad S(\phi,h)=V_{\phi}(\phi)-\sum_{i,a} \phi_{i}W_{ia}h_{a}+\sum_{a}\eta_{a}h_{a}, \tag{85}\]
where
\[Z=\int D\phi\prod_{a=1}^{N_{h}}\sum_{h_{a}=\pm 1}\exp(-S(\phi,h)). \tag{86}\]
This gives the following induced distribution on the visible layer,
\[p(\phi)=\frac{1}{Z}\exp(-S(\phi)),\qquad S(\phi)=V_{\phi}(\phi)-\sum_{a}\ln \left(2\cosh(\psi_{a})\right), \tag{87}\]
where \(\psi_{a}=\sum_{i}\phi_{i}W_{ia}-\eta_{a}\). A formal expansion in \(\psi_{a}\) then yields, up to a constant, the effective action on the visible layer,
\[S(\phi)=V_{\phi}(\phi)-\sum_{a}\ln\left(1+\sum_{n=1}^{\infty}\frac{\psi_{a}^{ 2n}}{(2n)!}\right)=V_{\phi}(\phi)+\sum_{a}\sum_{n=1}^{\infty}(-1)^{n}\frac{c_ {n}}{(2n)!}\psi_{a}^{2n}, \tag{88}\]
with easily determined coefficients \(c_{n}\). This is a highly nonlocal action.
To make the connection with the previous sections, it is straightforward to see that the quadratic (\(n=1,c_{1}=1\)) term yields the same structure as above,
\[-\sum_{a}\frac{1}{2}\psi_{a}^{2}=-\frac{1}{2}\phi^{T}WW^{T}\phi-\frac{1}{2} \eta^{T}\eta+\phi^{T}W\eta, \tag{89}\]
which, when combined with the RBM mass parameter \(\mu^{2}\) on the visible layer, gives the same kernel, \(K=\mu^{2}1\!\!1-WW^{T}\), and source, \(J=W\eta\).
Figure 17: Data-averaged KL divergence for the trained RBM, as a function of the number of hidden nodes \(N_{h}\) (on a logarithmic scale) at fixed \(\mu^{2}=100\). For this value of \(\mu^{2}\), the maximal number of modes included is \(N_{\rm modes}^{\rm max}=289\) and hence increasing \(N_{h}\) above this value does not lead to additional improvement.
Quartic interactions are generated at the next order. Taking for simplicity \(\eta_{a}=0\), such that only even terms in \(\phi_{i}\) are present, one finds the \(n=2\) term (\(c_{2}=2\)),
\[\sum_{a}\frac{1}{12}\psi_{a}^{4}=\frac{1}{12}\sum_{ijkl}\lambda_{ ijkl}\phi_{i}\phi_{j}\phi_{k}\phi_{l}, \lambda_{ijkl}=\sum_{a}W_{ia}W_{ja}W_{ka}W_{la}. \tag{90}\]
This is indeed a quartic term, but with an a priori highly non-local coupling, generated by the all-to-all coupling to the hidden layer. From a QFT perspective, it would be of interest to study such a theory, which we postpone to the future.
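The non-local coupling of Eq. (90) is conveniently obtained as a tensor contraction; the short sketch below (with illustrative sizes and a random \(W\)) also checks the equivalent form \(\frac{1}{12}\sum_{a}\psi_{a}^{4}\).

```python
import numpy as np

rng = np.random.default_rng(0)
N_v, N_h = 6, 4
W = rng.standard_normal((N_v, N_h))

# lambda_{ijkl} = sum_a W_{ia} W_{ja} W_{ka} W_{la}, Eq. (90)
lam = np.einsum('ia,ja,ka,la->ijkl', W, W, W, W)

# n = 2 contribution to the effective action for one configuration (eta = 0)
phi = rng.standard_normal(N_v)
S4 = np.einsum('ijkl,i,j,k,l->', lam, phi, phi, phi, phi) / 12.0

# equivalently (1/12) sum_a psi_a^4 with psi = W^T phi
psi = W.T @ phi
print(np.isclose(S4, np.sum(psi**4) / 12.0))   # True
```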
## 8 Conclusion
In this paper, we have studied scalar field Restricted Boltzmann Machines from the perspective of lattice field theory. The Gaussian-Gaussian case can be understood completely. We have demonstrated, using analytical work and numerical experiments, that the scalar field RBM is an ultraviolet regulator, regulating the ultraviolet part of the spectrum of the quadratic operator of the target theory. This is also the case when the target probability distribution is not known, such as in the MNIST case, but where the spectrum can be extracted from the data-averaged correlation matrix. The cutoff is determined by the choice of the RBM mass parameter or the number of hidden nodes. This provides a clear answer to generally difficult questions on the choice of ML "architecture", namely what are the consequences of choosing a particular setup. At least in this simple case the answer is straightforward and concerns the (in)sensitivity of the generative power of the RBM to the ultraviolet modes compared to the infrared modes.
We have also shown that infrared modes are learnt the quickest. This is of interest for models which suffer from critical slowing down, for which infrared modes are usually hard to handle. Indeed, many ML (inspired) generative approaches have surprisingly short auto-correlation times, which is worth exploring further.
As an outlook, we note that in Sec. 7 we have indicated two ways to go beyond the Gaussian-Gaussian case. The QFT-ML approach, in which local potentials are added to nodes on e.g. the visible layer, is convenient for LFT practitioners since the resulting models are well understood. Replacing the continuous hidden degrees of freedom with binary ones (Gaussian-Bernoulli RBM) yields models of a very different character, involving highly non-local interaction terms to all orders. It would be of interest to understand these constructions further using QFT methods.
**Acknowledgements.** GA and BL are supported by STFC Consolidated Grant ST/T000813/1. CP is supported by the UKRI AIMLAC CDT EP/S023992/1. We thank ECT* and the ExtreMe Matter Institute EMMI at GSI, Darmstadt, for support and the participants of the ECT*/EMMI workshop _Machine learning for lattice field theory and beyond_ in June 2023 for discussion during the preparation of this paper.
## Appendix A Details of the algorithm
The training equations for the RBM parameters \(\theta\) read, schematically,
\[\theta_{n+1}=\theta_{n}+\eta_{n}\frac{\partial\mathcal{L}}{ \partial\theta}, \frac{\partial\mathcal{L}}{\partial\theta}=-\frac{1}{2}\left\langle \phi^{T}\frac{\partial K}{\partial\theta}\phi\right\rangle_{\text{target}}+ \frac{1}{2}\left\langle\phi^{T}\frac{\partial K}{\partial\theta}\phi\right \rangle_{\text{model}}, \tag{91}\]
where \(\eta_{n}\) is the learning rate. The first term on the RHS can be easily computed from the given data or target theory. The second term needs to be sampled from the model distribution, which is non-trivial. In most cases, this term is approximated by generating a Markov chain and truncating it after \(k\) steps, where \(k\) is empirically chosen. This is known as Contrastive Divergence (CD) [16]. For standard CD updates, the Markov chain is initialised from the input data and then the successive chains are sampled by Gibbs sampling. A more efficient update algorithm is Persistent Contrastive Divergence (PCD) [17] and is used in this paper. PCD initialises the Markov chain from the last state of the most recent update. Since this last state of the previous chain is already closer to the representative of the model distribution, the new Markov chain is initialised with a nearly thermalised state and only requires a small number of updates.
Alongside PCD, the gradient for each epoch is estimated by averaging over a minibatch. In the case of MNIST data, the training was done by using an effective correlation matrix obtained from the given dataset. Then 512 parallel PCD Markov chains were generated to form a minibatch. For the scalar field theory case, the training was done by directly using the analytical form of the kernel matrix of the target distribution without predefined training data. Then for each training epoch, 128 parallel PCD Markov chains were generated to be averaged and used to estimate the gradient.
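For concreteness, the following is a minimal numerical sketch of PCD training for the Gaussian-Gaussian RBM described above, using the analytical correlation matrix of a free scalar field on a periodic one-dimensional lattice as the target; the lattice size, number of hidden nodes, number of chains and learning rate are illustrative choices and not the values used for the figures in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
Nv, Nh, mu2 = 10, 6, 4.0              # lattice size, hidden nodes, RBM mass parameter (illustrative)

# Target theory: free scalar field on a periodic 1d lattice, K_target = -Laplacian + m^2
m2 = 0.5
K_target = (m2 + 2.0) * np.eye(Nv)
for i in range(Nv):
    K_target[i, (i + 1) % Nv] -= 1.0
    K_target[i, (i - 1) % Nv] -= 1.0
C_target = np.linalg.inv(K_target)    # exact <phi phi^T> of the target distribution

W = 0.01 * rng.standard_normal((Nv, Nh))
chains = rng.standard_normal((128, Nv))   # persistent Gibbs chains (PCD), kept across epochs
lr = 0.01

for epoch in range(5000):
    # one Gibbs sweep (PCD-1): p(h|phi) = N(W^T phi, 1),  p(phi|h) = N(W h / mu2, 1/mu2)
    h = chains @ W + rng.standard_normal((chains.shape[0], Nh))
    chains = (h @ W.T) / mu2 + rng.standard_normal(chains.shape) / np.sqrt(mu2)
    C_model = chains.T @ chains / chains.shape[0]
    # gradient of the log-likelihood for the RBM kernel K = mu2*1 - W W^T, cf. eq. (91)
    W += lr * (C_target - C_model) @ W

print(np.linalg.eigvalsh(mu2 * np.eye(Nv) - W @ W.T))   # spectrum of the learned kernel
```

In line with the discussion in the main text, modes of the target kernel above the cutoff set by \(\mu^{2}\) cannot be reproduced by such a model.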
The learning rate can be set to change during the training. For instance, one may multiply the learning rate by a factor of \(r\) after every \(N_{\text{epoch}}^{\text{rate}}\) epochs (e.g. \(r=0.99\), \(N_{\text{epoch}}^{\text{rate}}=128\)),
\[\eta_{n}=r\eta_{n-1}\qquad\text{if}\mod(n,N_{\text{epoch}}^{\text{rate}})=0. \tag{92}\]
Hence the learning rate becomes smaller as more epochs have passed. The virtue of having a small learning rate during the later part of the training is that it allows the model to be finely trained and that it reduces statistical fluctuations.
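As a small illustration, the schedule (92) amounts to the following helper, where the default values \(r=0.99\) and \(N_{\text{epoch}}^{\text{rate}}=128\) are the example values quoted above:

```python
def decayed_lr(lr0, epoch, r=0.99, n_rate=128):
    """Learning rate after `epoch` epochs under the multiplicative decay of eq. (92)."""
    return lr0 * r ** (epoch // n_rate)
```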
The effect of learning rate decay is shown in Fig. 18. Two models are trained with the same hyperparameters and initialisation except for the learning rate decay parameters \(r\) and \(N_{\text{epoch}}^{\text{rate}}\). The first model shown in Fig. 18 (left) is trained without learning rate decay. Fluctuation of the eigenvalues due to statistical noise remains. In contrast, the second model, shown in Fig. 18 (right), uses learning rate decay with \(r=0.99,N_{\text{epoch}}^{\text{rate}}=128\). Statistical fluctuations die off in the end, leading to a precise result.
However, the values of \(r\) and \(N_{\text{epoch}}^{\text{rate}}\) should be chosen with some care. If the learning rate decays too quickly, the model freezes before it reaches the target destination. For example, in Fig. 19, the training flow of the scalar field RBM
Figure 18: Scalar field RBM trained without (left) and with (right) learning rate decay, using \(r=0.99,N_{\text{epoch}}^{\text{rate}}=128\), and a fixed RBM mass parameter \(\mu^{2}=9\).
with the trainable mass parameter and \(r=0.99,N_{\rm epoch}^{\rm rate}=128\) is shown (compare with Fig. 7). The model does not suffer when it is learning infrared modes, which are learnt quickest, but it fails to learn the highest mode of the target kernel. The model parameter freezes out before it reaches the target. One can choose the learning rate decay parameters by monitoring the quality of the regenerated samples. Since the ultraviolet modes are less relevant compared to the infrared ones, one can accept a truncation of the higher modes provided a target quality is achieved. One can also employ an adaptive learning rate decay.
We have also looked at employing momentum optimisation and \(L_{2}\) regularisation of the coupling matrix but have found no need for these.
## Appendix B Kullback-Leibler divergence
For completeness, we evaluate here the Kullback-Leibler (KL) divergence in the case that both the target theory and the model are Gaussian, without linear terms. This allows us to compare it with the log-likelihood in the main text. We consider the KL divergence,
\[KL(p||q)=\int D\phi\,p(\phi)\log\frac{p(\phi)}{q(\phi,\theta^{*})}, \tag{93}\]
with \(p(\phi)\) the target distribution and \(q(\phi,\theta^{*})\) the trained distribution (hence the asterisk on \(\theta\)). We assume the learning process has found the correct eigenbasis, such that the distributions are
\[p(\phi) =\frac{1}{Z_{p}}e^{-\frac{1}{2}\sum_{i}a_{i}\phi_{i}^{2}}, Z_{p}=\prod_{i}\int d\phi_{i}\,e^{-\frac{1}{2}a_{i}\phi_{i}^{2}}, \tag{94}\] \[q(\phi,\theta^{*}) =\frac{1}{Z_{q}}e^{-\frac{1}{2}\sum_{i}b_{i}\phi_{i}^{2}}, Z_{q}=\prod_{i}\int d\phi_{i}\,e^{-\frac{1}{2}b_{i}\phi_{i}^{2}}, \tag{95}\]
where all eigenvalues \(a_{i},b_{i}>0\). To make the connection with the scalar theory with \(N_{h}<N_{v}\) in Sec. 5, we note that \(i=1,\ldots,N_{v}\), and that after training,
\[b_{i}=\begin{cases}\kappa_{i}&i\leq N_{h},\\ \mu^{2}&i>N_{h}.\end{cases} \tag{96}\]
Figure 19: Scalar field RBM with trainable RBM mass parameter \(\mu^{2}\) and learning rate decay as above. The model parameters are frozen before the RBM mass parameter reaches the (expected) largest eigenvalue of the target kernel.
It is then straightforward to evaluate the KL divergence. In particular,
\[\log\frac{p(\phi)}{q(\phi,\theta^{*})}=-\frac{1}{2}\sum_{i}\left(a_{i}-b_{i} \right)\phi_{i}^{2}-\log\frac{Z_{p}}{Z_{q}}, \tag{97}\]
with
\[\log\frac{Z_{p}}{Z_{q}}=\frac{1}{2}\sum_{i}\log\frac{b_{i}}{a_{i}}. \tag{98}\]
Putting everything together, one finds
\[KL(p||q)=\frac{1}{2}\sum_{i}\left(-1+\frac{b_{i}}{a_{i}}-\log\frac{b_{i}}{a_{i }}\right)\geq 0. \tag{99}\]
Each term is non-negative, and \(KL(p||q)\geq 0\), as it should be. The equality is achieved only when each eigenvalue is correctly determined. For the scalar theory in Sec. 5, this becomes
\[KL(p||q)=\frac{1}{2}\sum_{i=N_{h}+1}^{N_{v}}\left(-1+\frac{\mu^{2}}{\kappa_{i} }-\log\frac{\mu^{2}}{\kappa_{i}}\right). \tag{100}\]
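As a quick numerical illustration of (99) and (100), the following sketch evaluates the KL divergence of a truncated model against a diagonal Gaussian target; the eigenvalues, \(\mu^{2}\) and \(N_{h}\) below are made-up values used only for illustration.

```python
import numpy as np

def gaussian_kl(a, b):
    """KL(p||q) of eq. (99) for zero-mean Gaussians with eigenvalues a_i (target) and b_i (model)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 0.5 * np.sum(-1.0 + b / a - np.log(b / a))

# eq. (100): the trained RBM reproduces the first N_h eigenvalues and replaces the rest by mu^2
kappa = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # made-up target eigenvalues
mu2, Nh = 9.0, 3
b = np.where(np.arange(kappa.size) < Nh, kappa, mu2)
print(gaussian_kl(kappa, b))                  # only the truncated modes i > N_h contribute
```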
|
2309.06718 | Immersion and Invariance-based Disturbance Observer and Its Application
to Safe Control | When the disturbance input matrix is nonlinear, existing disturbance observer
design methods rely on the solvability of a partial differential equation or
the existence of an output function with a uniformly well-defined disturbance
relative degree, which can pose significant limitations. This note introduces a
systematic approach for designing an Immersion and Invariance-based Disturbance
Observer (IIDOB) that circumvents these strong assumptions. The proposed IIDOB
ensures the disturbance estimation error is globally uniformly ultimately
bounded by approximately solving a partial differential equation while
compensating for the approximation error. Furthermore, by integrating IIDOB
into the framework of control barrier functions, a filter-based safe control
design method for control-affine systems with disturbances is established where
the filter is used to generate an alternative disturbance estimation signal
with a known derivative. Sufficient conditions are established to guarantee the
safety of the disturbed systems. Simulation results demonstrate the
effectiveness of the proposed method. | Yujie Wang, Xiangru Xu | 2023-09-13T04:42:30Z | http://arxiv.org/abs/2309.06718v2 | # Immersion and Invariance-based Disturbance Observer and Its Application to Safe Control
###### Abstract
When the disturbance input matrix is nonlinear, existing disturbance observer design methods rely on the solvability of a partial differential equation or the existence of an output function with a uniformly well-defined disturbance relative degree, which can pose significant limitations. This note introduces a systematic approach for designing an Immersion and Invariance-based Disturbance Observer (IIDOB) that circumvents these strong assumptions. The proposed IIDOB ensures the disturbance estimation error is globally uniformly ultimately bounded by approximately solving a partial differential equation while compensating for the approximation error. Furthermore, by integrating IIDOB into the framework of control barrier functions, a filter-based safe control design method for control-affine systems with disturbances is established where the filter is used to generate an alternative disturbance estimation signal with a known derivative. Sufficient conditions are established to guarantee the safety of the disturbed systems. Simulation results demonstrate the effectiveness of the proposed method.
## I Introduction
Designing feedback controllers that guarantee the safety specification of a system has attracted significant attention in the past decades [1, 2, 3, 4, 5, 6]. Inspired by automotive safety applications, [7, 8, 9] proposed reciprocal and zeroing Control Barrier Functions (CBFs) that generalize previous barrier conditions to only require a single sub-level set to be controlled invariant. By including the CBF condition in a convex Quadratic Program (QP), a CBF-QP-based controller is generated in real time and acts as a safety filter that modifies potentially unsafe control inputs in a minimally invasive fashion. Various robust CBF approaches have been proposed for systems with model uncertainties and external disturbances [10, 11, 12, 13]; however, most of these robust CBF methods consider the worst-case of disturbances, resulting in overly conservative control behaviors.
To reduce the adverse effects of disturbances/uncertainties on system performance, several works integrating disturbance/uncertainty estimation and compensation techniques into the CBF-QP framework have been proposed recently [14, 15, 16, 17]. In our previous work [14], the Disturbance Observer (DOB) presented in [18] was incorporated into the CBF-QP framework for the first time. Compared with other robust control schemes, DOB-based control has two main advantages: (i) the DOB can be designed independently and added to a baseline controller to improve its robustness and disturbance attenuation capability; (ii) in the presence of disturbances/uncertainties, the nominal performance of the baseline controller can be recovered by the DOB-based controller [19, 20, 21].
Nevertheless, the design of DOBs is non-trivial and highly problem-specific. Specifically, designing a DOB requires the existence of two functions that can ensure the asymptotic stability of the error dynamics and the satisfaction of a partial differential equation (PDE) simultaneously (more details will be given in the next section) [19]. Fulfilling these two requirements is challenging, and existing methods rely on relatively strong assumptions, e.g., the disturbance relative degree is uniformly well-defined [18, 22]. A systematic and computationally feasible method for constructing DOBs for generic nonlinear control-affine systems is still lacking.
The contribution of this note is twofold: (i) Inspired by the Immersion and Invariance (I&I) technique [23, 24, 25], we propose a systematic approach for designing I&I-based Disturbance Observer (IIDOB) for general nonlinear control-affine systems without imposing the strong assumptions adopted by existing DOB design methods, such as the solvability of a PDE or the existence of an output function with a uniformly well-defined disturbance relative degree. By approximately solving the PDE and compensating for the approximation error, the proposed IIDOB ensures that the disturbance estimation error is globally Uniformly Ultimately Bounded (UUB). (ii) We propose a filter-based IIDOB-CBF-QP safe control design approach for control-affine systems with disturbances (see Fig. 1). We design a filter to obtain an alternative disturbance estimation signal with a known derivative and provide sufficient conditions that ensure the safety of the disturbed system. The remainder of this note is organized as follows: the background and the problem statement are provided in Section II; the proposed IIDOB is presented in Section III; the IIDOB-CBF-QP-based safe control strategy is provided in Section IV; numerical simulation results are provided in Section V; and finally, the conclusion is drawn in Section VI.
Figure 1: Configuration of the proposed IIDOB-CBF-QP method that consists of three components: (i) an IIDOB used for disturbance estimation, (ii) a filter that can generate an alternative disturbance estimation signal with a known derivative, and (iii) an IIDOB-CBF-QP-based safe controller that can ensure safety of the closed-loop system.
_Notation:_ For a given positive integer \(n\), denote \([n]=\{1,2,\cdots,n\}\). For a column vector \(x\in\mathbb{R}^{n}\) or a row vector \(x\in\mathbb{R}^{1\times n}\), denote \(x_{i}\) as the \(i\)-th entry of \(x\) and \(\|x\|\) as its 2-norm. Denote \(I_{n}\) as an identity matrix with dimension \(n\times n\). For a given matrix \(A\in\mathbb{R}^{n\times m}\), \(A_{ij}\) is the \((i,j)\)-th entry of \(A\), \(A_{j}\) is the \(j\)-th column of \(A\), and \(\|A\|\) is its Frobenius norm. Denote \(\mathrm{diag}[a_{1},a_{2},\cdots,a_{n}]\in\mathbb{R}^{n\times n}\) as a diagonal matrix with diagonal entries \(a_{1},a_{2},\cdots,a_{n}\in\mathbb{R}\). The gradient \(\frac{\partial h}{\partial x}\in\mathbb{R}^{1\times n}\) is considered as a row vector, where \(x\in\mathbb{R}^{n}\) and \(h:\mathbb{R}^{n}\to\mathbb{R}\) is a function with respect to \(x\). For a function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) with respect to \(x\in\mathbb{R}^{n}\), \(\frac{\partial f}{\partial x}\) denotes the Jacobian matrix whose \((i,j)\)-th entry is \(\frac{\partial f_{i}}{\partial x_{j}}\).
## II Background and Problem Statement
### _Background_
Consider a control-affine system \(\dot{x}=f(x)+g(x)u\), where \(x\in\mathbb{R}^{n}\) is the state, \(u\in\mathbb{R}^{m}\) is the control input, and \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) and \(g:\mathbb{R}^{n}\to\mathbb{R}^{n\times m}\) are known and locally Lipschitz continuous functions. Define a _safe set_\(\mathcal{C}\) as
\[\mathcal{C}=\{x\in\mathbb{R}^{n}:h(x)\geq 0\}, \tag{1}\]
where \(h:\mathbb{R}^{n}\to\mathbb{R}\) is a continuously differentiable function. The function \(h\) is called a CBF of (input) relative degree 1 if \(\sup_{u}\left[L_{f}h(x)+L_{g}h(x)u+\gamma h(x)\right]\geq 0\) holds for all \(x\in\mathbb{R}^{n}\), where \(\gamma>0\) is a given positive constant, and \(L_{f}h=\frac{\partial h}{\partial x}f\) and \(L_{g}h=\frac{\partial h}{\partial x}g\) are Lie derivatives [9]. It was proven in [9] that any Lipschitz continuous controller \(u(x)\in\{u:L_{f}h(x)+L_{g}h(x)u+\gamma h(x)\geq 0\}\) will ensure the safety, i.e., the forward invariance of \(\mathcal{C}\), of the closed-loop system.
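The CBF-QP safety filter mentioned above has a single affine constraint in \(u\) and therefore admits a closed-form solution; the sketch below illustrates the minimally invasive modification of a nominal input, with hypothetical argument values that are not taken from any example in this note.

```python
import numpy as np

def cbf_qp_filter(u_nom, Lfh, Lgh, h, gamma=1.0):
    """Minimally modify u_nom so that Lfh + Lgh @ u + gamma * h >= 0 (single affine constraint)."""
    u_nom = np.atleast_1d(np.asarray(u_nom, dtype=float))
    Lgh = np.atleast_1d(np.asarray(Lgh, dtype=float))
    residual = Lfh + Lgh @ u_nom + gamma * h
    if residual >= 0.0 or not np.any(Lgh):
        return u_nom                                 # nominal input already satisfies the CBF condition
    return u_nom - residual * Lgh / (Lgh @ Lgh)      # projection onto the admissible half-space

# hypothetical values at one state
print(cbf_qp_filter(np.array([0.0, 0.0]), -1.0, np.array([0.5, 1.0]), 0.2, gamma=2.0))
```

If the nominal input already satisfies the CBF condition it is returned unchanged; otherwise it is projected onto the boundary of the admissible half-space.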
Now consider the following control-affine system with disturbances:
\[\dot{x}=f(x)+g(x)u+p(x)w(t) \tag{2}\]
where \(x\in\mathbb{R}^{n}\) is the state, \(u\in\mathbb{R}^{m}\) is the control input, \(w:\mathbb{R}_{\geq 0}\to\mathbb{R}^{l}\) is the disturbance, and \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\), \(g:\mathbb{R}^{n}\to\mathbb{R}^{n\times m}\), and \(p:\mathbb{R}^{n}\to\mathbb{R}^{n\times l}\) are known functions. Provided that the disturbance \(w\) is bounded, robust CBF-based methods can be adopted to ensure the safety of system (2) [10, 11]. In existing robust CBF-based methods, safety is achieved by sacrificing the nominal performance, as the worst-case of the disturbances is considered in the safe controller design. Therefore, trajectories of the closed-loop system will stay in a shrunk subset of the original safe set \(\mathcal{C}\), implying that the performance of these controllers is conservative.
DOBs are among the most effective tools for estimating and compensating disturbances/uncertainties in nonlinear control design, and have been extensively applied to numerous systems [19, 20, 21]. Our previous work [14] integrated DOBs into the CBF-QP framework and proposed a DOB-CBF-QP controller with safety guarantees. However, designing DOBs for the control-affine system (2) is non-trivial.
Suppose that \(w\) is slowly time-varying, that is, \(\dot{w}(t)\approx 0,\forall t\geq 0\). As shown in [19, 22], the DOB for system (2) has the following structure:
\[\hat{w} =z+q(x), \tag{3a}\] \[\dot{z} =-l(x)p(x)z-l(x)[f(x)+g(x)u+p(x)q(x)], \tag{3b}\]
where \(\hat{w}\) is the disturbance estimation, \(z\in\mathbb{R}^{l}\) is the internal state of the DOB. The function \(l(x)\), known as the _DOB gain_, and the function \(q(x)\) should be designed such that
\[l(x)=\frac{\partial q(x)}{\partial x}, \tag{4}\]
and the error dynamics is globally asymptotically stable:
\[\dot{e}_{w}+l(x)p(x)e_{w}=0, \tag{5}\]
where \(e_{w}=w-\hat{w}\) is the disturbance estimation error.
Designing \(l(x)\) and \(q(x)\) is a challenging and highly case-specific task in general [19]. Several methods have been proposed based on relatively strong assumptions. If \(p(x)\) has full column rank, then one can select \(q(x)\) by solving the PDE \(\frac{\partial q(x)}{\partial x}=p(x)^{\dagger}\), where \(p(x)^{\dagger}\) denotes the left inverse of \(p(x)\)[26]; however, when \(n>1\), this PDE is generally unsolvable and even when solvable, its closed-form solution is hard to obtain. If the disturbance relative degree is uniformly well-defined with respect to an output function \(s(x)\), an approach for designing \(q(x)\) is proposed in [18, 22]; however selecting such a function \(s(x)\) is challenging and its existence is not guaranteed (e.g., there may exist \(x^{*}\in\mathbb{R}^{n}\) such that \(p(x^{*})=0\)). A practically useful approach involves treating \(p(x)w\) as the total disturbance and assuming that \(\frac{\mathrm{d}}{\mathrm{d}t}(p(x)w)\) is bounded [27, 28]; however, this assumption is rather restrictive because \(\frac{\mathrm{d}}{\mathrm{d}t}(p(x)w)\) explicitly relies on \(u\) and \(x\). As will be shown in Section III, we will provide a systematic approach for designing DOBs that avoids the issues of the aforementioned methods.
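As a simple illustration of the structure (3), consider the special case in which \(p\) is a constant vector, so that the PDE (4) is satisfied by a linear \(q(x)\). The toy oscillator, gains and disturbance in the sketch below are hypothetical choices used only to show how (3a)-(3b) are integrated in practice; they do not correspond to the design developed in the next section.

```python
import numpy as np

# Toy illustration of the classical DOB (3): harmonic oscillator with a matched constant
# disturbance, p = [0, 1]^T.  Choosing q(x) = c*x_2 gives l = dq/dx = [0, c],
# so l(x)p(x) = c > 0 and the error dynamics (5) are exponentially stable.
c, dt, T = 5.0, 1e-3, 4.0
f = lambda x: np.array([x[1], -x[0]])
g = np.array([0.0, 1.0])
p = np.array([0.0, 1.0])
l = np.array([0.0, c])               # DOB gain, l = dq/dx
q = lambda x: c * x[1]

x, z, w = np.zeros(2), 0.0, 0.7      # w: true (slowly varying) disturbance
for _ in range(int(T / dt)):
    u = 0.0                          # open loop, for illustration only
    w_hat = z + q(x)                 # disturbance estimate, eq. (3a)
    z_dot = -(l @ p) * z - l @ (f(x) + g * u + p * q(x))   # eq. (3b)
    x = x + dt * (f(x) + g * u + p * w)
    z = z + dt * z_dot
print(w_hat)                         # approaches w = 0.7 after the transient
```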
### _Problem Statement_
Consider system (2) and the safe set defined in (1), where \(h(x)\) is a sufficiently smooth function. Recall that \(g_{j}\) denotes the \(j\)-th column of \(g\) for \(j\in[m]\), and \(p_{i}\) denotes the \(i\)-th column of \(p\) for \(i\in[l]\). System (2) is said to have a vector _Input Relative Degree_ (IRD) \(\mathcal{I}=(\sigma_{1},\sigma_{2},\cdots,\sigma_{m})\) at a given point \(x_{0}\in\mathbb{R}^{n}\) if \(L_{g_{j}}L_{f}^{k}h(x)=0\) for any \(k\in[\sigma_{j}-2]\), \(j\in[m]\), and for all \(x\) in a neighborhood of \(x_{0}\), and \(L_{g_{j}}L_{f}^{\sigma_{j}-1}h(x_{0})\neq 0\) holds for any \(j\in[m]\)[29, Remark 5.1.1]. Similarly, system (2) is said to have a vector _Disturbance Relative Degree_ (DRD) \(\mathcal{D}=(\nu_{1},\nu_{2},\cdots,\nu_{l})\) at a given point \(x_{0}\in\mathbb{R}^{n}\) if \(L_{p_{j}}L_{f}^{k}h(x)=0\) for any \(k\in[\nu_{j}-2]\), \(j\in[l]\), and for all \(x\) in a neighborhood of \(x_{0}\), and \(L_{p_{j}}L_{f}^{\nu_{j}-1}h(x_{0})\neq 0\) holds for any \(j\in[l]\)[30]. Note that because system (2) is multiple-input-single-output with \(h\) as the output, the definitions of vector IRD and vector DRD above are slight modifications of those given in [29, 30].
In this note, with a slight abuse of notation, we will call \(r_{I}=\min\mathcal{I}\) and \(r_{D}=\min\mathcal{D}\) as the _minimum IRD_ and the _minimum DRD_ of system (2) with respect to function \(h(x)\) at a given point \(x_{0}\in\mathbb{R}^{n}\), respectively; that is, \(r_{I}\) (or \(r_{D}\)) denotes the number of times \(h\) has to be differentiated to have at least one component of \(u\) (or \(w\)) explicitly appearing.
Next, a standard assumption for DOB design is given.
**Assumption 1**: _The disturbance \(w\) and its derivative are bounded as \(\|w\|\leq\omega_{0}\) and \(\|\dot{w}\|\leq\omega_{1}\), where \(\omega_{0}\) and \(\omega_{1}\) are positive constants not necessarily known in DOB design._
The first problem investigated in this note is to design a disturbance estimation law to estimate the total disturbance
\[d(x,t)\triangleq p(x)w(t). \tag{6}\]
_Problem 1:_ Consider system (2) with \(f,g\in C^{1}\) and \(p\in C^{2}\) and suppose Assumption 1 holds. Design a DOB-based estimation law to estimate the total disturbance \(d\) online.
Using the DOB-based estimation of the total disturbance, the second problem investigated in this note is to design a feedback control law such that system (2) is safe.
_Problem 2:_ Consider system (2) with \(f,g\in C^{1}\) and \(p\in C^{2}\) and the safe set \(\mathcal{C}\) defined in (1). Suppose that Assumption 1 holds and \(r_{I}=r_{D}\) for system (2) with respect to \(h(x)\). Given the DOB developed via solving Problem 1, design a feedback control law such that system (2) is safe, i.e., \(h(x(t))\geq 0\) for any \(t>0\) provided \(h(x(0))>0\).
_Remark 1:_ In Problem 2, if \(r_{I}<r_{D}\), the disturbance can be directly decoupled from the system via state feedback control [31]. The case \(r_{I}>r_{D}\) will be explored in our future work. Note that we do not assume the minimum DRD of system (2) is uniformly well-defined as in [18, 22], i.e., there may exist \(x_{0}\in\mathbb{R}^{n}\) such that \(L_{p_{j}}L_{f}^{\nu_{j}-1}h(x_{0})=0\) for any \(j\in[l]\).
## III IIDOB Design
Inspired by the I&I technique [23, 24, 25], we propose an IIDOB design approach to solve Problem 1 in this section.
First, we augment system (2) with an additional integrator:
\[\dot{x} =f(x)+g(x)u+p(x)w, \tag{7a}\] \[\dot{u} =v, \tag{7b}\]
where \(v\) denotes the auxiliary control input to be designed, and \(u\) is considered as a state variable of the augmented system. The relationship between system (2) and the augmented system (7) is illustrated in Fig. 1. As will be shown in this section and Section IV, the auxiliary control input \(v\) will be used in the design of IIDOB, and it will be generated from solving the IIDOB-CBF-QP. The control input \(u\) for the original system (2) will be obtained through integrating \(v\).
Define a time-varying set \(\mathcal{M}(t)=\{(x,\hat{x},u)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times \mathbb{R}^{m}:\xi(t)+\beta(\hat{x},x,u)-p(x)w(t)=0\}\), where \(\hat{x}\) denotes the state estimation and \(\xi\), \(\beta\) are known functions that will all be specified later. Define
\[\hat{d}\triangleq\xi(t)+\beta(\hat{x},x,u)\]
as the estimated total disturbance, and the disturbance estimation error as
\[e_{d}\triangleq\hat{d}-d.\]
It is clear that if the system trajectories are restricted to \(\mathcal{M}(t)\), the disturbance estimation is accurate. We also define
\[z=\frac{\xi(t)+\beta(x,\hat{x},u)-d}{r}, \tag{8}\]
where \(r\) is the scaling factor governed by an adaptive law yet to be designed. It is clear that \(e_{d}=z\cdot r\). Our IIDOB design will render \(e_{d}\) globally UUB [32, Definition 4.6] by guaranteeing that \(z\) is globally UUB and \(r\) remains bounded. Note that \(\dot{z}\), the time derivative of \(z\), can be expressed as
\[\dot{z} =-\frac{\dot{r}}{r}z+\frac{1}{r}\bigg{(}\dot{\xi}+\frac{\partial \beta}{\partial x}(f+gu+pw)+\frac{\partial\beta}{\partial u}v+\frac{\partial \beta}{\partial\hat{x}}\dot{\hat{x}}\] \[\quad-p\dot{w}-\sum_{i=1}^{l}\frac{\partial p_{i}}{\partial x}(f +gu+pw)w_{i}\bigg{)}, \tag{9}\]
where \(p_{i}\) denotes the \(i\)-th column of \(p\), \(i\in[l]\). Define
\[\psi(x,u)=\frac{\eta}{2}\left[\|p\|^{2}+\sum_{i=1}^{l}\left(\left\|\frac{\partial p_{i}}{\partial x}(f+gu)\right\|^{2}+\left\|\frac{\partial p_{i}}{\partial x}p\right\|^{2}\right)\right]+\gamma \tag{10}\]
where \(\gamma,\eta>0\) are tuning parameters and \(\gamma\) denotes the _observer gain_.
If \(\delta(x,u)\in\mathbb{R}^{n}\) is a solution to the following PDE:
\[\frac{\partial\delta(x,u)}{\partial x}=\psi(x,u)I_{n}, \tag{11}\]
then the DOB design becomes straightforward by following [23]. Specifically, one can select \(\beta(x,\hat{x},u)=\delta(x,u)\) and \(r=\hat{x}=1\) for any \(t\geq 0\), and design \(\dot{\xi}=-\frac{\partial\beta}{\partial x}(f+gu+\hat{d})-\frac{\partial\beta}{\partial u}v\) such that \(\dot{z}=-\psi z-p\dot{w}-\sum_{i=1}^{l}\frac{\partial p_{i}}{\partial x}(f+gu+pw)w_{i}\). By selecting a candidate Lyapunov function \(V=\frac{1}{2}z^{\top}z\), one can easily verify that \(\dot{V}\leq-\psi\|z\|^{2}+\omega_{1}\|p\|\|z\|+\sum_{i=1}^{l}\omega_{0}\left\|\frac{\partial p_{i}}{\partial x}(f+gu)\right\|\|z\|+\sum_{i=1}^{l}\omega_{0}^{2}\left\|\frac{\partial p_{i}}{\partial x}p\right\|\|z\|\leq-\gamma\|z\|^{2}+\frac{1}{2\eta}(\omega_{1}^{2}+l\omega_{0}^{2}+l\omega_{0}^{4})\), which indicates that \(z\) is globally UUB. However, when \(n>1\), solving (11) is extremely challenging in principle, and even a solution to (11) may not exist [25]. To tackle this issue, we will follow [25] to first "approximately solve" (11) and then use \(\dot{r}\) to compensate for the approximation error.
Recall that \(f,g\in C^{1}\) and \(p\in C^{2}\). Then, \(\psi(x,u)\in C^{1}\), and it is easy to verify that there exist continuous functions \(\delta_{ij}:\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\), \(i,j\in[n]\), such that [33]:
\[\psi(\hat{x}_{1},\cdots,\hat{x}_{i-1},x_{i},\hat{x}_{i+1},\cdots, \hat{x}_{n},u)-\psi(x,u)\] \[=-\sum_{j=1}^{n}\delta_{ij}(x,\hat{x},e,u)e_{j}, \tag{12}\]
where \(e_{j}\) is the \(j\)-th entry of \(e\triangleq\hat{x}-x\), and \(\delta_{ii}=0\) for \(i,j\in[n]\).
The following theorem shows our IIDOB design ensures the disturbance estimation error \(e_{d}\) is globally UUB.
_Theorem 1:_ Consider system (7) where \(f,g\in C^{1}\) and \(p\in C^{2}\), and suppose Assumption 1 holds. If the disturbance estimation law \(\hat{d}\) is designed as:
\[\hat{d} =\xi+\beta, \tag{13a}\] \[\beta =\left[\begin{array}{l}\int_{0}^{x_{1}}\psi(\tau,\hat{x}_{2}, \hat{x}_{3},\cdots,\hat{x}_{n},u)\mathrm{d}\tau\\ \int_{0}^{x_{2}}\psi(\hat{x}_{1},\tau,\hat{x}_{3},\cdots,\hat{x}_{n},u)\mathrm{d} \tau\\ \vdots\\ \int_{0}^{x_{n}}\psi(\hat{x}_{1},\hat{x}_{2},\cdots,\hat{x}_{n-1},\tau,u) \mathrm{d}\tau\end{array}\right],\] (13b) \[\dot{\hat{x}} =f+gu+\hat{d}-K(x,u,\hat{x},r,e)e,\] (13c) \[\dot{\xi} =-\frac{\partial\beta}{\partial x}(f+gu+\hat{d})-\frac{\partial \beta}{\partial u}v-\frac{\partial\beta}{\partial\hat{x}}\dot{\hat{x}},\] (13d) \[\dot{r} =-\theta(r\!-\!1)\!+\!\frac{cr}{2}\!\sum_{j=1}^{n}\!e_{j}^{2}\| \Delta_{j}\|^{2},\ r(0)>1,\] (13e) \[K =(k_{1}\!+\!k_{2}r^{2})I_{n}+\frac{cr^{2}}{2}\mathrm{diag}[\| \Delta_{1}\|^{2},\cdots,\|\Delta_{n}\|^{2}], \tag{13f}\]
where \(\psi\) is defined in (10), \(e\) is defined in (12), \(\gamma,c,\theta>0\) are positive constants satisfying \(\gamma>\frac{n}{2c}+\theta\), \(\Delta_{j}=\mathrm{diag}[\delta_{1j},\delta_{2j},\cdots,\delta_{nj}]\in\mathbb{R}^{n \times n}\) with \(\delta_{ij}\) defined in (12) for \(i,j\in[n]\), and \(k_{1},k_{2}>0\) are positive constants satisfying \(k_{2}>\frac{1}{4\gamma-2n/c-4\theta}\), then \(e_{d}\) is globally UUB.
Recall that \(e_{d}=z\cdot r\). To prove that \(e_{d}\) is globally UUB, we will first show \(z\) is globally UUB, and then \(r\) is bounded.
Substituting (13d) into (9) yields
\[\dot{z}=-\frac{\dot{r}}{r}z-\frac{\partial\beta}{\partial x}z-\frac{1}{r}\left(p\dot{w}+\sum_{i=1}^{l}\frac{\partial p_{i}}{\partial x}(f+gu+pw)w_{i}\right). \tag{14}\]
Recall that \(\psi\in C^{1}\). According to the fundamental theorem of calculus, one can see that
\[\frac{\partial\beta}{\partial x}=\mathrm{diag}[\psi(x_{1},\!\hat{x}_{2}, \!\cdots,\!\hat{x}_{n},\!u),\psi(\hat{x}_{1},\!x_{2},\!\hat{x}_{3},\!\cdots,\! \hat{x}_{n},\!u),\] \[\cdots,\,\psi(\hat{x}_{1},\!\hat{x}_{2},\!\cdots,\!\hat{x}_{n-1}, \!x_{n},\!u)]. \tag{15}\]
Define \(e_{\psi}\triangleq\left\|\psi(x,u)I_{n}-\frac{\partial\beta}{\partial x}\right\|\) as the "approximation error" induced by approximately solving (11) using \(\beta\) designed in (13b). Intuitively, from (15) one can see that if \(\hat{x}\) is very close to \(x\), \(e_{\psi}\) would be negligible. Note that the influence of \(e_{\psi}\) will be eliminated by \(\dot{r}\) as shown in the following analysis.
Then, substituting (12) into (15) yields
\[\frac{\partial\beta}{\partial x}=\psi(x,u)I_{n}-\sum_{j=1}^{n}\Delta_{j}e_{j}, \tag{16}\]
and substituting (10) and (16) into (14) yields
\[\dot{z}=-\frac{\dot{r}}{r}z\!-\!\gamma z\!-\!\frac{\eta}{2}\! \left[\sum_{i=1}^{l}\!\left(\!\left\|\frac{\partial p_{i}}{\partial x}\!(\!f\! +\!gu\!)\right\|^{2}\!\!\!+\left\|\frac{\partial p_{i}}{\partial x}p\right\|^ {2}\!\right)\!\!+\!\|p\|^{2}\right]\!z\] \[+\!\sum_{j=1}^{n}\!\Delta_{j}e_{j}z\!+\!\frac{1}{r}\left(\!-\!p \dot{w}-\!\sum_{i=1}^{l}\frac{\partial p_{i}}{\partial x}(f\!+\!gu\!+\!pw)w_{ i}\!\right)\!\!. \tag{17}\]
From (13e) one can easily verify that \(r\geq 1\) for any \(t>0\) because the set \(\{r:r\geq 1\}\) is invariant. Substituting (13e) into (17) gives
\[\dot{z}=\theta\frac{r-1}{r}z\!-\!\frac{c}{2}\sum_{j=1}^{n}e_{j}^{ 2}\|\Delta_{j}\|^{2}z\!-\!\left[\frac{\eta}{2}\sum_{i=1}^{l}\left(\left\| \frac{\partial p_{i}}{\partial x}(f+gu)\right\|^{2}\right.\right.\] \[\left.\left.+\left\|\frac{\partial p_{i}}{\partial x}p\right\|^{ 2}\right)+\frac{\eta}{2}\|p\|^{2}-\sum_{j=1}^{n}\Delta_{j}e_{j}\right]\!z+ \frac{1}{r}\bigg{(}-p\dot{w}\] \[-\sum_{i=1}^{l}\frac{\partial p_{i}}{\partial x}(f+gu)w_{i}-\sum _{i=1}^{l}\frac{\partial p_{i}}{\partial x}pww_{i}\bigg{)}-\gamma z. \tag{18}\]
Meanwhile, subtracting (7) from (13c) yields
\[\dot{e}=\hat{d}-d-K(x,u,\hat{x},r,e)e=rz-K(x,u,\hat{x},r,e)e. \tag{19}\]
Next, we prove that \(z\) is globally UUB. Define a candidate Lyapunov function as \(V=\frac{1}{2}z^{\top}z\), whose time derivative is
\[\dot{V} \stackrel{{\eqref{eq:UUB}}}{{=}}\theta\frac{r-1}{r} \|z\|^{2}-\frac{c}{2}\sum_{j=1}^{n}e_{j}^{2}\|\Delta_{j}\|^{2}\|z\|^{2}+z^{ \top}\sum_{j=1}^{n}\Delta_{j}e_{j}z\] \[-\frac{\eta}{2}\left[\sum_{i=1}^{l}\left(\left\|\frac{\partial p _{i}}{\partial x}(f\!+\!gu\!)\right\|^{2}\!\!+\!\left\|\frac{\partial p_{i}}{ \partial x}p\right\|^{2}\right)\!\!+\!\|p\|^{2}\right]\|z\|^{2}\] \[+\frac{z^{\top}}{r}\left(-p\dot{w}-\sum_{i=1}^{l}\frac{\partial p _{i}}{\partial x}(f+gu+pw)w_{i}\right)-\gamma\|z\|^{2}\] \[\leq\,\theta\frac{r\!-\!1}{r}\|z\|^{2}\!-\!\frac{c}{2}\!\sum_{j=1}^ {n}e_{j}^{2}\|\Delta_{j}\|^{2}\|z\|^{2}\!+\!\sum_{j=1}^{n}\|\Delta_{j}\|e_{j}\| \|z\|^{2}\] \[-\frac{\eta}{2}\left[\sum_{i=1}^{l}\!\left(\left\|\frac{\partial p _{i}}{\partial x}(f\!+\!gu\!)\right\|^{2}\!\!+\!\left\|\frac{\partial p_{i}}{ \partial x}p\right\|^{2}\right)\!\!+\!\|p\|^{2}\right]\!\|z\|^{2}\] \[-\gamma\|z\|^{2}+\frac{\|z\|}{r}\bigg{(}\|p\|\omega_{1}+\sum_{i=1} ^{l}\!\left\|\frac{\partial p_{i}}{\partial x}(f+gu)\right\|\omega_{0}\] \[+\sum_{i=1}^{l}\left\|\frac{\partial p_{i}}{\partial x}p\right\| \omega_{0}^{2}\bigg{)}\] \[\leq\,-\left(\gamma-\theta-\frac{n}{2c}\right)\|z\|^{2}+\frac{1}{2 \eta r^{2}}(\omega_{1}^{2}+\omega_{0}^{2}+\omega_{0}^{4})\] \[\leq\,-\kappa\|z\|^{2}+\omega, \tag{20}\]
where
\[\kappa=\gamma-\frac{n}{2c}-\theta>0, \tag{21a}\] \[\omega=\frac{1}{2\eta}(\omega_{1}^{2}+\omega_{0}^{2}+L\omega_{0}^ {4})>0, \tag{21b}\]
the first and second inequality arise from Cauchy-Schwarz inequality, and the last inequality comes from the fact that \(r\geq 1\). Therefore, one can see that
\[\|z\|\leq\sqrt{\|z(0)\|^{2}e^{-2\kappa t}+\frac{\omega}{\kappa}}\triangleq \varrho_{z}(t), \tag{22}\]
which indicates that \(z\) is UUB. Note that selecting a larger \(\kappa\) will result in a smaller final bound of \(\|z\|\). However, the convergence of \(\|z\|\) does not imply the convergence of \(e_{d}\) unless \(r\) is bounded. To show the boundedness of \(r\), we construct an augmented candidate Lyapunov function \(W\) as \(W=V+\frac{1}{2}e^{\top}e+\frac{1}{2}r^{2}\), whose time derivative satisfies
\[\dot{W} \stackrel{{\eqref{eq:UUB}}}{{\leq}}-\kappa\|z\|^{2}+ \omega-e^{\top}K(x,u,\hat{x},r,e)e+e^{\top}rz\] \[-\theta r(r-1)+\frac{cr^{2}}{2}\sum_{j=1}^{n}e_{j}^{2}\|\Delta_{j}\| ^{2}\] \[\stackrel{{\eqref{eq:UUB}}}{{=}}-\kappa\|z\|^{2}+ \omega\!-\!k_{1}\|e\|^{2}-k_{2}r^{2}\|e\|^{2}\!-\!\theta r(r\!-\!1)\!+\!e^{\top}rz\] \[\leq\,-\kappa\|z\|^{2}+\omega-k_{1}\|e\|^{2}-k_{2}r^{2}\|e\|^{2}+k _{2}r^{2}\|e\|^{2}\] \[\quad+\frac{1}{4k_{2}}\|z\|^{2}-\frac{\theta}{2}r^{2}+\frac{\theta} {2}\] \[=\,-\left(\kappa-\frac{1}{4k_{2}}\right)\|z\|^{2}-k_{1}\|e\|^{2}- \frac{\theta}{2}r^{2}+\left(\frac{\theta}{2}+\omega\right)\] \[\leq\,-\chi W+\left(\frac{\theta}{2}+\omega\right), \tag{23}\]
where \(\chi=\min\left\{2\kappa-\frac{1}{2k_{2}},2k_{1},\theta\right\}\). From (23) we have
\[r\leq\sqrt{2W(0)e^{-\chi t}+\frac{\theta+2\omega}{\chi}}\triangleq\varrho_{r}(t). \tag{24}\]
Recall that \(e_{d}=z\cdot r\). From (22) and (24), it is easy to conclude that \(e_{d}\) is globally UUB. Furthermore, the ultimate bound can be made arbitrarily small by selecting large \(\theta\), \(k_{1}>\theta\), \(k_{2}>1\), and \(\gamma\gg\theta\). This completes the proof.
**Remark 2**: _From (13b) one can see that \(\beta\) is obtained via calculating an (indefinite) integral, whose explicit form is hard to obtain in general. In practice, a numerical integration can be adopted to compute \(\beta\). Moreover, since \(\psi\in C^{1}\), \(\frac{\partial\beta}{\partial x}\) can be computed using the Leibniz integral rule [34] as \(\left(\frac{\partial\beta}{\partial x}\right)_{ij}=\int_{0}^{x_{i}}\frac{ \partial}{\partial x_{j}}\
rank, the proposed method can be extended to directly estimate \(w\), which will be discussed in our future work.
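As noted in Remark 2, the components of \(\beta\) in (13b) can be evaluated online by one-dimensional numerical quadrature; a minimal sketch using the trapezoidal rule is given below, where `psi` is an assumed callable implementing (10).

```python
import numpy as np

def beta_component(i, x, x_hat, u, psi, n_quad=64):
    """Approximate beta_i = int_0^{x_i} psi(x_hat_1,..,tau,..,x_hat_n, u) dtau from (13b)
    with the trapezoidal rule; `psi` is a callable psi(x, u) implementing (10)."""
    taus = np.linspace(0.0, x[i], n_quad)
    pts = np.tile(np.asarray(x_hat, dtype=float), (n_quad, 1))
    pts[:, i] = taus                                  # the i-th slot runs over the integration variable
    vals = np.array([psi(pt, u) for pt in pts])
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(taus)))
```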
Before the end of this section, we show the design of an IIDOB-based tracking controller, which could be used as a nominal controller in the IIDOB-CBF-QP in Section IV. Note that the time derivative of \(\hat{d}\) can be expressed as
\[\dot{\hat{d}}=\dot{\xi}+\frac{\partial\beta}{\partial x}\dot{x}+\frac{\partial\beta}{\partial\hat{x}}\dot{\hat{x}}+\frac{\partial\beta}{\partial u}v=-r\frac{\partial\beta}{\partial x}z. \tag{25}\]
The following proposition presents an IIDOB-based tracking control law provided the right inverse of \(g\) exists.
**Proposition 1**: _Consider system (7) and suppose that all conditions of Theorem 1 hold such that the IIDOB shown in (13) exists. Suppose that \(\kappa\) defined in (21a) is greater than \(1\), and the right inverse of \(g(x)\) exists, \(\forall x\in\mathbb{R}^{n}\). Given a reference trajectory \(x_{d}(t)\) where \(x_{d}(t)\) and \(\dot{x}_{d}(t)\) are bounded, \(\forall t\geq 0\), if the control law is designed as_
\[u_{d}=-g^{\dagger}\left(f+\left(\alpha_{1}+\frac{1}{2}r^{2}\right)e_{x}+\hat {d}-\dot{x}_{d}\right), \tag{26a}\] \[v=-\alpha_{2}e_{u}+\mathcal{G}_{1}-g^{\top}e_{x}-\frac{\|\mathcal{G}_{2}\|^{2}}{ 2}e_{u}, \tag{26b}\]
_where \(e_{x}=x-x_{d}\), \(e_{u}=u-u_{d}\), \(g^{\dagger}\) is the right inverse of \(g\), \(\mathcal{G}_{1}=\frac{\partial u_{d}}{\partial t}+\frac{\partial u_{d}}{\partial x}(f+gu+\hat{d})+\frac{\partial u_{d}}{\partial r}\dot{r},\alpha_{1},\alpha_{2}>0\) are positive constants, and \(\mathcal{G}_{2}=r\left(\frac{\partial u_{d}}{\partial\hat{d}}\frac{\partial \beta}{\partial x}+\frac{\partial u_{d}}{\partial x}\right)\), then the tracking error \(e_{x}\) is globally UUB._
Define \(V_{1}=\frac{1}{2}e_{x}^{\top}e_{x}+\frac{1}{2}z^{\top}z\) as a candidate Lyapunov function, where \(z\) is defined in (8). Using (20) and (26a), \(\dot{V}_{1}\leq e_{x}^{\top}(f+gu+pw-\dot{x}_{d})-\kappa\|z\|^{2}+\omega\leq-\left(\alpha_{1}+\frac{1}{2}r^{2}\right)\|e_{x}\|^{2}-re_{x}^{\top}z+e_{x}^{\top}ge_{u}-\kappa\|z\|^{2}+\omega\leq-\alpha_{1}\|e_{x}\|^{2}-\left(\kappa-\frac{1}{2}\right)\|z\|^{2}+e_{x}^{\top}ge_{u}+\omega\), where \(\omega\) is defined in (21b). Since \(u_{d}\) is a function of \(x,r,\hat{d}\) and \(t\), its derivative is \(\dot{u}_{d}=\frac{\partial u_{d}}{\partial t}+\frac{\partial u_{d}}{\partial x}(f+gu+\hat{d})+\frac{\partial u_{d}}{\partial r}\dot{r}-r\left(\frac{\partial u_{d}}{\partial\hat{d}}\frac{\partial\beta}{\partial x}+\frac{\partial u_{d}}{\partial x}\right)z=\mathcal{G}_{1}-\mathcal{G}_{2}z\). Then, we define \(V_{2}=V_{1}+\frac{1}{2}e_{u}^{\top}e_{u}\) as an augmented Lyapunov function candidate, whose derivative satisfies \(\dot{V}_{2}\leq\dot{V}_{1}+e_{u}^{\top}(v-\mathcal{G}_{1}+\mathcal{G}_{2}z)\leq-\alpha_{1}\|e_{x}\|^{2}-(\kappa-1)\|z\|^{2}+e_{u}^{\top}g^{\top}e_{x}+e_{u}^{\top}(v-\mathcal{G}_{1})+\frac{\|\mathcal{G}_{2}\|^{2}\|e_{u}\|^{2}}{2}+\omega\), which, after substituting (26b), gives \(\dot{V}_{2}\leq-\alpha_{1}\|e_{x}\|^{2}-(\kappa-1)\|z\|^{2}-\alpha_{2}\|e_{u}\|^{2}+\omega\). Thus, \(\dot{V}_{2}\leq-\vartheta V_{2}+\omega\), where \(\vartheta=\min\{2\alpha_{1},2\kappa-2,2\alpha_{2}\}\). Hence, one can see that \(\|e_{x}\|\leq\sqrt{2V_{2}(0)e^{-\vartheta t}+\frac{2\omega}{\vartheta}}\), indicating \(e_{x}\) is globally UUB. This completes the proof. \(\square\)
When \(g\) has no full row rank, an IIDOB-based tracking controller similar to Proposition 1 can still be designed by following the backstepping technique [35], provided some control Lyapunov function conditions hold. The details are omitted due to space limitation.
**Remark 5**: _The dynamic surface control [36] or command filter [37] technique can be adopted to bypass the tedious calculation of partial derivatives of \(u_{d}\). For example, the idea of the dynamic surface control is to let \(u_{d}\) defined in (26a) pass a low-pass filter \(\epsilon\dot{u}_{d}^{f}=-u_{d}^{f}+u_{d}\), where \(u_{d}^{f}\) is the filtered signal and \(\epsilon\) is a small time constant. Then, one can replace \(u_{d}\) with \(u_{d}^{f}\) and use \(\dot{u}_{d}^{f}\) directly in the design of \(v\), instead of computing partial derivatives of \(u_{d}\)._
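For illustration, one explicit Euler step of the first-order filter from Remark 5 can be written as follows; the time constant and step size are hypothetical values.

```python
def dsc_filter_step(u_f, u_d, eps=0.01, dt=1e-3):
    """One Euler step of eps * du_f/dt = -u_f + u_d (Remark 5); returns the filtered
    command and its derivative, so derivatives of u_d need not be computed analytically."""
    u_f_dot = (u_d - u_f) / eps
    return u_f + dt * u_f_dot, u_f_dot
```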
## IV IIDOB-CBF-QP-based Safe Controller
In this section, we will present an IIDOB-CBF-QP-based safe control design method to solve Problem 2.
We will design a safe controller \(v\) based on the augmented system shown in (7) that is used for the IIDOB design in the preceding section. Two issues need to be addressed in this design: (i) The time derivative of \(\hat{d}\) is indispensable in control design and it depends on \(z\), which is unknown since \(z\) relies on \(w\), as shown in (25); however, considering the worst-case of \(\hat{d}\) may lead to unnecessary conservatism. (ii) The minimum DRD of system (7) is lower than its minimum IRD (i.e., \(d\) appears prior to \(v\) when one differentiates \(h\)), which makes the direct decoupling of the disturbance from the system difficult even if the disturbance is precisely estimated [31].
We address the first challenge by designing a filter to obtain an alternative disturbance estimation signal whose derivative is known. Specifically, given an IIDOB shown in (13), we design the following filter:
\[\dot{\hat{d}}_{f}=-\left(T_{1}+T_{2}r^{2}\left\|\frac{\partial\beta}{ \partial x}\right\|^{2}\right)(\hat{d}_{f}-\hat{d}), \tag{27}\]
where \(\hat{d}_{f}\) denotes the filtered disturbance estimation with \(\hat{d}_{f}(0)=\hat{d}(0)\), \(r\) is governed by (13e), \(\beta\) is given in (13b), and \(T_{1},T_{2}>0\) are tuning parameters. From (27) one can see that the derivative of \(\hat{d}_{f}\) is completely known. Define the filtering error \(e_{f}\) as
\[e_{f}\triangleq\hat{d}_{f}-\hat{d}. \tag{28}\]
The following result shows that \(\hat{d}_{f}\) is close to \(\hat{d}\) in the sense that \(e_{f}\) is bounded by a known time-varying function whose ultimate bound can be arbitrarily small by choosing appropriate parameters.
**Lemma 1**: _Consider the augmented system (7), the IIDOB as shown in (13), and the filter given in (27). If Assumption 1 holds and \(T_{2}>\frac{1}{4\kappa}\), where \(\kappa\) is defined in (21a), then the filtering error \(e_{f}\) satisfies_
\[\|e_{f}(t)\|\leq\sqrt{\left(\|z(0)\|^{2}-\frac{2\omega}{\zeta}\right)e^{-\zeta t }+\frac{2\omega}{\zeta}}\triangleq E_{f}(t) \tag{29}\]
_for any \(t\geq 0\), where \(\zeta=\min\{2T_{1},2\kappa-\frac{1}{2T_{2}}\}\) and \(\omega\) is defined in (21b)._
Substituting (25) into (27) gives
\[\dot{e}_{f}=-T_{1}e_{f}-T_{2}r^{2}\left\|\frac{\partial\beta}{\partial x}\right\|^{2 }e_{f}+r\frac{\partial\beta}{\partial x}z. \tag{30}\]
Construct a candidate Lyapunov function \(V_{f}\) as
\[V_{f}=\frac{1}{2}e_{f}^{\top}e_{f}+\frac{1}{2}z^{\top}z, \tag{31}\]
whose derivative satisfies
\[\dot{V}_{f}\leq-T_{1}\|e_{f}\|^{2}-T_{2}r^{2}\left\|\frac{\partial\beta}{\partial x}\right\|^{2}\|e_{f}\|^{2}+re_{f}^{\top}\frac{\partial\beta}{\partial x}z-\kappa\|z\|^{2}+\omega \tag{32}\] \[\leq-T_{1}\|e_{f}\|^{2}-\left(\kappa-\frac{1}{4T_{2}}\right)\|z\|^{2}+\omega\leq-\zeta V_{f}+\omega,\]
where the first inequality uses (20) and (30), and the second follows from Young's inequality \(re_{f}^{\top}\frac{\partial\beta}{\partial x}z\leq T_{2}r^{2}\|\frac{\partial\beta}{\partial x}\|^{2}\|e_{f}\|^{2}+\frac{1}{4T_{2}}\|z\|^{2}\). Since \(e_{f}(0)=0\), we have \(V_{f}(0)=\frac{1}{2}\|z(0)\|^{2}\), and the comparison lemma yields \(\|e_{f}\|^{2}\leq 2V_{f}\leq\left(\|z(0)\|^{2}-\frac{2\omega}{\zeta}\right)e^{-\zeta t}+\frac{2\omega}{\zeta}\), which gives (29). This completes the proof.
Based on the filtered disturbance estimation \(\hat{d}_{f}\), we can now design a safe controller \(v\) for the augmented system (7). To address the second issue above, we define a modified CBF \(\bar{h}\) as follows:
\[\bar{h}(x,u,r,\hat{d}_{f})=L_{f}h+L_{g}hu+\frac{\partial h}{\partial x}\hat{d}_{f}-\frac{(1+r^{2})\left\|\frac{\partial h}{\partial x}\right\|^{2}}{2\tilde{\rho}(\zeta-\tilde{\lambda})}-\tilde{\rho}\omega+\tilde{\lambda}h, \tag{33}\]
where \(\tilde{\rho}>0,\tilde{\lambda}>0\) are both tuning parameters, and \(\omega>0\) is the constant defined in (21b). Based on the IIDOB shown in (13), the filter given in (27), and the notations above, the following result provides a safe controller \(v\) that ensures the forward invariance of \(\mathcal{C}\) when \(r_{I}=r_{D}=1\) for system (2).
_Theorem 2:_ Consider system (2) and the safe set \(\mathcal{C}\) defined in (1). Suppose that all conditions of Theorem 1 hold such that the IIDOB shown in (13) exists. Suppose that \(r_{I}=r_{D}=1\) for system (2), \(h(x(0))>0\) and there exist positive constants \(\lambda,\tilde{\lambda},\rho,\tilde{\rho}>0\) such that \(\lambda<2\kappa\), \(\tilde{\lambda}<\zeta\), \(h(x(0))-\tilde{\rho}V_{f}(0)>0\), and \(\bar{h}(x(0),u(0),r(0),\hat{d}_{f}(0))-\frac{\rho}{2}\|z(0)\|^{2}>0\), where \(\kappa\), \(\zeta\), and \(V_{f}\) are defined in (21a), (29), and (31), respectively. If \(\sup_{\mathfrak{v}\in\mathbb{R}^{m}}[\psi_{0}+\psi_{1}\mathfrak{v}]\geq 0\) holds for any \(u\in\mathbb{R}^{m}\), \(r\in[1,\varrho_{r}(t)]\), \(\|\hat{d}\|\leq\|p(x)\|\omega_{0}+\varrho_{d}(t)+E_{f}(t)\), \(\|e_{f}\|\leq E_{f}(t)\), \(x\in\mathcal{C}\), and \(t\geq 0\), where \(\varrho_{d}(t)=\varrho_{z}(t)\varrho_{r}(t)\) with \(\varrho_{z}(t)\), \(\varrho_{r}(t)\), and \(E_{f}(t)\) defined in (22), (24), and (29), respectively, and
\[\psi_{0} =\frac{\partial\bar{h}}{\partial x}(f+gu+\hat{d})+\frac{\partial\bar{h}}{\partial r}\dot{r}+\frac{\partial h}{\partial x}\dot{\hat{d}}_{f}-\frac{r^{2}\|\frac{\partial\bar{h}}{\partial x}\|^{2}}{\rho(4\kappa-2\lambda)}-\rho\omega+\lambda\bar{h}, \tag{34a}\] \[\psi_{1} =L_{g}h, \tag{34b}\]
with \(\dot{r}\), \(\dot{\hat{d}}_{f}\), and \(\bar{h}\) defined in (13e), (27), and (33), respectively, then any Lipschitz controller \(v\in K_{BF}(x,u,r,\hat{d},\hat{d}_{f})\triangleq\{\mathfrak{v}\in\mathbb{R}^{m}:\psi_{0}+\psi_{1}\mathfrak{v}\geq 0\}\) will ensure \(h(x(t))\geq 0\) for all \(t\geq 0\).
_Proof:_ Define \(H_{1}(x,u,r,\hat{d}_{f},z)=\bar{h}-\frac{\rho}{2}z^{\top}z\), where \(\bar{h}\) is given in (33). Since
\[\dot{H}_{1} \geq\frac{\partial\bar{h}}{\partial x}(f+gu+pw)+L_{g}hv+\frac{\partial\bar{h}}{\partial r}\dot{r}+\frac{\partial h}{\partial x}\dot{\hat{d}}_{f}+\rho\kappa\|z\|^{2}-\rho\omega \tag{35}\] \[=\frac{\partial\bar{h}}{\partial x}(f+gu+\hat{d})+L_{g}hv+\frac{\partial\bar{h}}{\partial r}\dot{r}+\frac{1}{2}\rho\lambda\|z\|^{2}-\rho\omega+\frac{\partial h}{\partial x}\dot{\hat{d}}_{f}-r\frac{\partial\bar{h}}{\partial x}z+\rho\left(\kappa-\frac{\lambda}{2}\right)\|z\|^{2}\] \[\geq\frac{\partial\bar{h}}{\partial x}(f+gu+\hat{d})+\frac{\partial\bar{h}}{\partial r}\dot{r}+\frac{\partial h}{\partial x}\dot{\hat{d}}_{f}-\frac{r^{2}\|\frac{\partial\bar{h}}{\partial x}\|^{2}}{\rho(4\kappa-2\lambda)}+\frac{1}{2}\rho\lambda\|z\|^{2}+L_{g}hv-\rho\omega\] \[=\psi_{0}+\psi_{1}v-\lambda H_{1},\]
any \(v\in K_{BF}\) will result in \(\dot{H}_{1}\geq-\lambda H_{1}\). Noting that \(\bar{h}(x(0),u(0),r(0),\hat{d}_{f}(0))-\frac{\rho}{2}\|z(0)\|^{2}>0\implies H_{1}(x(0),u(0),r(0),\hat{d}_{f}(0),z(0))>0\), we can conclude that \(H_{1}(t)\geq 0\), which implies that \(\bar{h}(t)\geq 0\), for any \(t\geq 0\).
Define another function as \(H_{2}(x,z,e_{f})=h(x)-\tilde{\rho}V_{f}\) where \(V_{f}\) is given in (31). Note that
\[\hat{H}_{2}+\tilde{\lambda}H_{2} \tag{36}\] \[\stackrel{{\eqref{eq:20}}}{{\geq}}L_{f}h+L_{g}hu+ \frac{\partial h}{\partial x}\hat{d}_{f}-r\frac{\partial h}{\partial x}z-\frac{ \partial h}{\partial x}e_{f}-\tilde{\rho}\omega+\tilde{\lambda}h\] \[\quad+\frac{(\zeta-\tilde{\lambda})\tilde{\rho}}{2}(\|z\|^{2}+\|e_ {f}\|^{2})\] \[\geq L_{f}h\!+\!L_{g}hu\!+\!\frac{\partial h}{\partial x}\hat{d}_{f}- \frac{(1+r^{2})\left\|\frac{\partial h}{\partial x}\right\|^{2}}{2\tilde{ \rho}(\zeta-\tilde{\lambda})}-\tilde{\rho}\omega+\tilde{\lambda}h\] \[=\bar{h}(x,u,r,\hat{d}_{f})\geq 0.\]
Since \(h(x(0))-\tilde{\rho}V_{f}(0)>0\implies H_{2}(x(0),z(0),e_{f}(0))>0\), we have \(H_{2}(t)\geq 0\), which implies that \(h(x(t))\geq 0\) for all \(t\geq 0\). This completes the proof.
The safe controller \(v\) proposed in Theorem 2 can be obtained by solving the following convex IIDOB-CBF-QP:
\[\min_{v} \|v-v_{nom}\|^{2}\] (37) s.t. \[\psi_{0}+\psi_{1}v\geq 0,\] IIDOB given in (13),
where \(\psi_{0},\psi_{1}\) are given in (34) and \(v_{nom}\) is any given nominal control law that is potentially unsafe (e.g., the IIDOB-based tracking controller given in Proposition 1).
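At every sampling instant, (37) is a small convex QP with a single affine constraint. A sketch of one solver call using the generic modeling package cvxpy is shown below; the values of \(\psi_{0}\), \(\psi_{1}\) and \(v_{nom}\) are assumed to have been evaluated at the current state, and the numbers in the example are hypothetical.

```python
import numpy as np
import cvxpy as cp

def iidob_cbf_qp_step(v_nom, psi0, psi1):
    """Solve min_v ||v - v_nom||^2  s.t.  psi0 + psi1 @ v >= 0, cf. (37)."""
    v = cp.Variable(len(v_nom))
    problem = cp.Problem(cp.Minimize(cp.sum_squares(v - v_nom)),
                         [psi0 + psi1 @ v >= 0])
    problem.solve()
    return v.value

# hypothetical values for a single time step
print(iidob_cbf_qp_step(np.array([1.0, -0.5]), -2.0, np.array([1.0, 2.0])))
```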
The safe control design method described above can be readily extended to system (2) with \(r_{I}=r_{D}=\rho>1\). Specifically, define functions \(h_{0},h_{1},\ldots,h_{\rho-1}\) as
\[h_{0}(x)=h(x), \tag{38a}\] \[h_{i}(x)=\left(\frac{\mathrm{d}}{\mathrm{d}t}+\lambda_{i-1}\right)\circ h_{i-1},\ i\in[\rho-1], \tag{38b}\]
where \(\lambda_{0},\lambda_{1},\cdots,\lambda_{\rho-2}>0\) are positive constants. Assume that \(h(x(0))>0\) and \(\lambda_{i}\) are selected such that \(h_{i}(x(0))>0\) for any \(i\in[\rho-1]\). It is easy to see that \(h_{i}\geq 0\implies h_{i-1}\geq 0\), for any \(i\in[\rho-1]\). Therefore, \(h\geq 0\) as long as \(h_{\rho-1}\geq 0\)[38]. One can easily verify that \(\dot{h}_{\rho-1}=\frac{\partial h_{\rho-1}}{\partial x}f+A(x)u+B(x)w\), where \(A(x)=[L_{g_{1}}L_{f}^{\rho-1}h,L_{g_{2}}L_{f}^{\rho-1}h,\ldots,L_{g_{m}}L_{f}^{\rho-1}h]\) and \(B(x)=[L_{p_{1}}L_{f}^{\rho-1}h,L_{p_{2}}L_{f}^{\rho-1}h,\ldots,L_{p_{l}}L_{f}^{\rho-1}h]\). Hence, it is obvious that \(r_{I}=r_{D}=1\) for system (2) with respect to the function \(h_{\rho-1}\).
\(x_{1}\geq-1\) and \(x_{2}\leq 1\). Define \(h_{1}=x_{1}+1\) and \(h_{2}=1-x_{2}\). One can easily verify that the minimum IRD and the minimum DRD of system (39) with respect to \(h_{1},h_{2}\) are both 1, i.e., \(r_{I}=r_{D}=1\). We choose parameters \(\rho=\tilde{\rho}=1\), \(\lambda=\tilde{\lambda}=50\), \(T_{1}=100\), \(T_{2}=80\) in Theorem 2, and keep the other parameters the same as above. We also choose the nominal controller as the tracking controller designed above. One can verify that all conditions of Theorem 2 are satisfied, so that the IIDOB-CBF-QP-based controller (37) can ensure \(h_{1}(t)\geq 0\) and \(h_{2}(t)\geq 0\) for all \(t\geq 0\). Indeed, as shown in Fig. 3, trajectories of \(x_{1}\) (or \(x_{2}\)) always remain within the safety set \(\mathcal{C}_{1}\) (or \(\mathcal{C}_{2}\)).
We also compare the tracking performance of the proposed controller (37) with the robust CBF approach proposed in [10]. The robust CBF condition is given as \(L_{f}h+L_{g}hu-\|L_{p}h\|\omega_{0}+\gamma_{h}h\geq 0\), where \(\omega_{0}\) is the bound of the disturbance defined in Assumption 1 and \(\gamma_{h}\) is a tuning parameter. To ensure the fairness of the comparison, we select different values for \(\gamma_{h}\) as \(\gamma_{h}=10,50,100,500,1000\). As shown in Fig. 3, although the robust CBF controller can always ensure safety, its tracking performance of the reference trajectories inside the safe region is not as good as that of the proposed controller (37).
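For reference, the worst-case robust CBF filter used in this comparison also reduces to a single-constraint QP and can be sketched in closed form; the function below mirrors the projection used earlier, with all arguments assumed to be evaluated along the trajectory.

```python
import numpy as np

def robust_cbf_filter(u_nom, Lfh, Lgh, Lph, h, omega0, gamma_h):
    """Minimally modify u_nom so that Lfh + Lgh@u - ||Lph||*omega0 + gamma_h*h >= 0."""
    u_nom = np.atleast_1d(np.asarray(u_nom, dtype=float))
    Lgh = np.atleast_1d(np.asarray(Lgh, dtype=float))
    residual = Lfh + Lgh @ u_nom - np.linalg.norm(Lph) * omega0 + gamma_h * h
    if residual >= 0.0 or not np.any(Lgh):
        return u_nom
    return u_nom - residual * Lgh / (Lgh @ Lgh)
```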
**Example 2:** Consider a two-linked planar robot manipulator:
\[M(q)\ddot{q}+C(q,\dot{q})\dot{q}+G(q)=\tau+J^{\top}(q)F_{d}, \tag{40}\]
where \(q=[q_{1}\ q_{2}]^{\top}\) is the joint angle, \(\dot{q}=[\dot{q}_{1}\ \dot{q}_{2}]^{\top}\) is the joint angular velocity, \(\tau\in\mathbb{R}^{2}\) is the control input, \(F_{d}\) is the external disturbance satisfying Assumption 1, and \(M(q)\in\mathbb{R}^{2\times 2},C(q,\dot{q})\in\mathbb{R}^{2\times 2},G(q)\in \mathbb{R}^{2}\), and \(J(q)\in\mathbb{R}^{2\times 2}\) denote the inertia matrix, the Coriolis/centripetal matrix, the gravity term, and the Jacobian, respectively. It can be seen that (40) can be expressed in the form of (2) with \(x=[q_{1}\ q_{2}\ \dot{q}_{1}\ \dot{q}_{2}]^{\top}\), \(f=[\dot{q}^{\top}\ -(C\dot{q}+G)^{\top}M^{-\top}]^{\top}\), \(g=[0_{2\times 2}\ M^{-\top}]^{\top}\), and \(p=[0_{2\times 2}\ JM^{-\top}]^{\top}\). It is obvious that \(f,g,p\) are smooth functions. The expression of \(M,C,G\) and physical parameters are chosen the same as those in [39]. Note that the Jacobian \(J(q)\) is singular when \(q_{2}=0\) such that it is impossible to uniquely recover \(F_{d}\) even if \(\ddot{q}\) is available. The reference trajectories are \(q_{1d}(t)=q_{2d}(t)=2\sin(t)\); the nominal controller is designed by following Proposition 1; the disturbance \(F_{d}\) is selected as \(F_{d}=[d_{1}\ d_{2}]^{\top}\) with \(d_{1}=d_{2}=5\sin(t)+2\cos(2t)+4\sin(3t)+3\cos(4t)\) such that Assumption 1 holds with \(\omega_{0}=11,\omega_{1}=37\); four CBFs are selected as \(h_{1}=q_{1}+1\), \(h_{2}=-q_{1}+1.5\), \(h_{3}=q_{2}+1.2\), and \(h_{4}=-q_{2}+1\), which aim to ensure \(-1\leq q_{1}\leq 1.5\) and \(-1.2\leq q_{2}\leq 1\). It can be verified that the minimum IRD and the minimum DRD of system (40) with respect to \(h_{i}\), \(i\in[4]\), are 2, i.e., \(r_{I}=r_{D}=2\). We select the control parameters as \(\gamma=100,c=5,\theta=50,k_{1}=k_{2}=20,\alpha_{1}=\alpha_{2}=50,\rho=1,\lambda =30\) in Theorems 1 and 2. The simulation results are presented in Fig. 4. One can observe that the disturbance is precisely estimated by the proposed IIDOB (13), and the IIDOB-CBF-QP-based controller (37) can ensure the safety because the trajectories of \(q_{1},q_{2}\) remain within the safety sets.
## VI Conclusion
This note introduced a systematic approach for designing IIDOB for general nonlinear control-affine systems without imposing restrictive assumptions employed by existing DOB design strategies. Based on that, a filter-based IIDOB-CBF-QP safe control design method was proposed.
Figure 3: Simulation results of the IIDOB-CBF-QP-based controller (37) and the robust CBF-based controller proposed in [10] with different tuning parameters for system (39). Both controllers can ensure safety, but the proposed controller (37) has better tracking performance of the reference trajectories inside the safe region, regardless of the values of \(\gamma_{h}\).
Figure 2: Simulation results of the proposed IIDOB-based tracking controller for system (39). (top) Trajectories of the total disturbances \(d_{1},d_{2}\), and the estimated total disturbances \(\hat{d}_{1},\hat{d}_{2}\); (bottom) trajectories of the state \(x_{1},x_{2}\), and the reference signals
The numerical simulation results demonstrated the estimation accuracy achieved by the IIDOB and the superior performance of the proposed safe controller compared to the robust CBF-QP-based methods. Future studies include extending the results to systems with lower minimum DRD and conducting experimental studies on the proposed methods.
|
2309.08735 | On groups with Schottky set boundary | We study relatively hyperbolic group pairs whose boundaries are Schottky
sets. We characterize the groups that have boundaries where the Schottky sets
have incidence graphs with 1 or 2 components. | Peter Haïssinsky, Luisa Paoluzzi, Genevieve Walsh | 2023-09-15T19:52:52Z | http://arxiv.org/abs/2309.08735v1 | # On groups with Schottky set boundary
###### Abstract
We study relatively hyperbolic group pairs whose boundaries are Schottky sets. We characterize the groups that have boundaries where the Schottky sets have incidence graphs with \(1\) or \(2\) components.
_MSC 2020:_ Primary 20F67; Secondary 20F65, 20H10, 20E99.
_Keywords:_ Relatively hyperbolic groups, Bowditch boundaries, Kleinian groups, Schottky sets.
## 1 Introduction
Convergence group actions on the \(2\)-sphere were introduced by Gehring and Martin in [12] and have been studied extensively since then. It was conjectured in [21] that every faithful convergence group action \(G\) on \(S^{2}\) by orientation preserving homeomorphisms is _covered_ by the induced action of a discrete group of \(\mathrm{Isom}(\mathbb{H}^{3})\) on \(S^{2}\), i.e., there exist a Kleinian group \(K\), an isomorphism \(\rho:K\to G\) and a degree \(1\) map \(\phi:\widehat{\mathbb{C}}\to S^{2}\) such that the following diagram commutes :
\[\begin{array}{ccccc}K&\curvearrow&\widehat{\mathbb{C}}&\longrightarrow& \widehat{\mathbb{C}}\\ \Big{\downarrow}\rho&&\Big{\downarrow}\phi&&\Big{\downarrow}\phi\\ G&\curvearrow&S^{2}&\longrightarrow&S^{2}\end{array}\]
This remains open. This conjecture is closely related to Cannon's conjecture [8], which asserts that a hyperbolic group with \(2\)-sphere boundary is virtually a discrete group of \(\mathrm{Isom}(\mathbb{H}^{3})\). Cannon's conjecture is a particular case of the previous one when the action is faithful on its boundary and orientation preserving. Here we are dealing with the case of relatively hyperbolic groups, and specifically those whose boundaries are _topological Schottky sets_. These are defined and described in Section 5. Some familiar examples are the Sierpinski carpet and the Apollonian gasket. Our motivation for studying those groups is essentially twofold. Firstly, there are many examples of groups that admit a peripheral structure for which their boundary is a topological Schottky set, cf. Theorem D. Secondly, the relatively hyperbolic groups with these boundaries are all conjectured to be virtually discrete subgroups of \(\mathrm{Isom}(\mathbb{H}^{3})\), see [19]. Here we show that many relatively hyperbolic groups with boundaries that are topological Schottky
sets are virtually discrete subgroups of \(\operatorname{Isom}(\mathbb{H}^{3})\). Furthermore, we say which Kleinian groups arise when the boundaries are certain types of topological Schottky sets.
We say that \((G,\mathcal{P})\) is a _relatively hyperbolic group pair_ if \((G,\mathcal{P})\) acts as a geometrically finite convergence group on a hyperbolic space \(X\). See section 2 for the detailed definition. In this case, we say that the Gromov boundary of \(X\), \(\partial X\), is the Bowditch boundary of \((G,\mathcal{P})\), denoted \(\partial(G,\mathcal{P}).\) We also call \(\partial(G,\mathcal{P})\) the _relatively hyperbolic boundary_ or sometimes just "the boundary". Throughout, \((G,\mathcal{P})\) is a non-elementary relatively hyperbolic group pair (besides Proposition 3.3 where the classification of elementary convergence groups acting on \(S^{2}\) is provided), which means that \(\partial(G,\mathcal{P})\) has more than two points.
Following [1], we define a _Schottky set_ as the complement of at least three disjoint open round balls in the \(n\)-sphere \(S^{n}\), where \(S^{n}\) is equipped with the standard metric as a subset of Euclidean space. Throughout this paper, we will restrict ourselves to \(n=2\), so all our Schottky sets are planar. We actually work with the non-metric analog, which we call _topological_ Schottky sets.
Due to the properties of a topological Schottky set \(\mathcal{S}\) in Definition 5.1, every \(\mathcal{S}\) produces an _incidence graph_\(\Gamma(\mathcal{S})\), the simplicial graph whose vertices correspond to the open disks \(\{D_{i}\}_{i\in I}\) of its complement in \(S^{2}\), and whose edges correspond to (1-point) incidences between closures of the disks \(D_{i}\).
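For instance, the simplest Schottky sets are complements of three pairwise disjoint open round disks. For the Apollonian gasket, tangencies between the complementary disks give the edges and the recursive construction makes \(\Gamma(\mathcal{S})\) connected, while for a round Sierpinski carpet the closures of the complementary disks are pairwise disjoint, so \(\Gamma(\mathcal{S})\) has no edges and hence infinitely many components.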
Our main results are as follows:
**Theorem A**.: _Let \(\mathcal{S}\) be a topological Schottky set with \(\mathcal{S}=\partial(G,\mathcal{P})\). Then the incidence graph \(\Gamma(\mathcal{S})\) has 1, 2 or infinitely many components. Their stabilizers are virtual surface groups._
**Theorem B**.: _Let \(\mathcal{S}\) be a topological Schottky set with \(\mathcal{S}=\partial(G,\mathcal{P}).\)_
_When the incidence graph \(\Gamma(\mathcal{S})\) has one component, \(G\) is virtually a free product of a free group \(F_{n}\) of rank \(n\geq 0\) and some finite index subgroups of groups in \(\mathcal{P}\). Moreover, if \(G\) is finitely generated and its action is faithful and orientation-preserving, then \(G\) is covered by a geometrically finite Kleinian group \(K\)._
From a topological viewpoint, \(K\) contains a finite-index torsion-free subgroup that uniformizes a 3-manifold obtained by gluing a handlebody and \(I\)-bundles over surfaces along compression disks.
**Theorem C**.: _Let \(\mathcal{S}\) be a topological Schottky set with \(\mathcal{S}=\partial(G,\mathcal{P})\). When the incidence graph \(\Gamma(\mathcal{S})\) has exactly 2 components, \(G\) is virtually a closed surface group._
In contrast, when the incidence graph has infinitely many components, the group is covered by a geometrically finite convergence group that may have a Sierpinski carpet boundary. Showing that these are essentially Kleinian is still a wide open question, even in the word hyperbolic case, cf. [20]. Note that Theorem D below enables us to construct examples of Schottky limit sets that have infinitely many components of their incidence graphs but that do not come from a Sierpinski carpet limit set. For example, apply the theorem to a geometrically finite Kleinian group that contains a rank-2 accidental parabolic fixed point (see for instance the first example in [6], illustrated by Figure 6 therein). So far, all the examples we know of with Sierpinski carpet boundary are virtually fundamental groups of hyperbolic 3-manifolds with totally geodesic boundary (which may have cusps), and this is consistent with conjectures in [20] and [19].
**Theorem D**.: _Let \(K\) be a geometrically finite Kleinian group with non-empty domain of discontinuity. Then there is a peripheral structure \(\mathcal{P}_{K^{\prime}}\) on a finite index subgroup
\(K^{\prime}\) of \(K\), such that \((K^{\prime},{\cal P}_{K^{\prime}})\) is a relatively hyperbolic group pair and \(\partial(K^{\prime},{\cal P}_{K^{\prime}})\) is a topological Schottky set. Moreover, \({\cal P}_{K^{\prime}}\) contains the natural peripheral structure of the Kleinian group \(K^{\prime}\subset K\)._
In Section 2 we prove some general facts about relatively hyperbolic groups, generalizing some theorems about hyperbolic boundaries to relatively hyperbolic boundaries. In Section 3 we restrict to relatively hyperbolic groups that are geometrically finite convergence groups acting on \(S^{2}\). Although the results in this section will be used later in the context of topological Schottky sets, they are not restricted to that specific setting. In Section 4 we describe how to "blow up" 2-ended peripheral subgroups in geometrically finite groups acting on \(S^{2}\). This will change the peripheral structure, but not the group; moreover, the group with its new peripheral structure is shown to again admit a convergence action on the 2-sphere. In Sections 5 and 6 we introduce and discuss topological Schottky sets and their incidence graphs, and prove Theorem A. Finally, in Section 7 we prove Theorem B, and in Section 8 we prove Theorems C and D.
### Acknowledgements
We thank the CIRM (Centre International de Rencontres Mathématiques) in Luminy, Marseille, where this work began. The third author was partially supported by the NSF through DMS-2005353.
## 2 Relative hyperbolicity and relative quasiconvexity
Here we provide some results about general relatively hyperbolic groups and their boundaries. References on metric spaces hyperbolic in the sense of Gromov include [13, 5]. Let \(G\) be a finitely generated group and \({\cal P}\) a family of subgroups consisting of finitely many conjugacy classes.
Let us first recall that a _convergence group_\(G\) is a group of homeomorphisms of a compact metric space \(Z\) such that any sequence \((g_{n})_{n}\) of distinct elements contains a convergent subsequence, i.e., up to a subsequence, there are two points \(a\) and \(b\) in \(Z\) so that \((g_{n})\) tends uniformly to the constant map \(a\) on compact subsets of \(Z\setminus\{b\}\). One may then define the limit set \(\Lambda_{G}\) as the set of limit points \(a\) of all convergence sequences in \(G\). It is a compact invariant subset of \(Z\). Its complement, \(\Omega_{G}\), is the ordinary set: the action of \(G\) on \(\Omega_{G}\) is properly discontinuous, see [12] for more properties. Note that any discrete group of isometries on a geodesic, proper, hyperbolic space \(X\) admits a convergence action on \(X\cup\partial X\).
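As a simple illustration, the cyclic group generated by the Möbius transformation \(z\mapsto 2z\) acts on \(\widehat{\mathbb{C}}\) as a convergence group: the iterates \(z\mapsto 2^{n}z\) tend uniformly to the constant map \(\infty\) on compact subsets of \(\widehat{\mathbb{C}}\setminus\{0\}\), and their inverses tend to \(0\) away from \(\infty\). Here the limit set is \(\{0,\infty\}\) and the ordinary set is \(\widehat{\mathbb{C}}\setminus\{0,\infty\}\).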
**Definition 2.1** ([4]).: The pair \((G,{\cal P})\) is _relatively hyperbolic_ if \(G\) acts on \(X\) properly discontinuously and by isometries, where \(X\) is a proper hyperbolic geodesic metric space such that:
1. each point of \(\partial X\) is either a _conical limit point_ or a _bounded parabolic point_.
2. \({\cal P}\) is exactly the collection of maximal _parabolic subgroups_.
A conical limit point is a point \(y\in\partial X\) such that there exists a sequence \((g_{i})\) in \(G\) and distinct points \(a,b\in\partial X\), such that \(g_{i}(y)\to a\) and \(g_{i}(z)\to b\), for all \(z\in\partial X\setminus\{y\}\). A parabolic point \(y_{P}\) is a point with an infinite stabilizer that fixes no other point, i.e., the fixed point of a parabolic subgroup \(P\). It is bounded if \((\partial X\setminus\{y_{P}\})/P\) is compact. Whenever we have a properly discontinuous action by isometries and these two conditions are satisfied, we say \((G,{\cal P})\) acts _geometrically finitely_ on \(X\). If \((G,{\cal P})\) is
a relatively hyperbolic pair, then \(\partial(G,\mathcal{P})\ =\partial X\) is its _Bowditch boundary_, or _relatively hyperbolic boundary_. This depends on \(\mathcal{P}\), but is well-defined for the pair \((G,\mathcal{P})\).
As we will be using topological properties of Bowditch boundaries, we recall two topological notions that will be used several times.
**Definition 2.2** (Null sequences and \(E\)-sets).: Given a compact metric space \(Z\), a _null-sequence_ is a collection of subsets \(\mathcal{C}\) such that, for any \(\delta>0\), the collection \(\mathcal{C}\) contains at most finitely many elements of diameter at least \(\delta\).
An \(E\)_-set_ is a connected compact subset of the sphere \(S^{2}\) such that the collection of connected components of its complement is a null-sequence.
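For example, the standard Sierpinski carpet is an \(E\)-set: it is connected and compact, and the components of its complement (the removed open squares, together with the complement of the initial square) come in generations of \(8^{k}\) squares of side \(3^{-(k+1)}\), \(k\geq 0\), so only finitely many of them have diameter larger than any given \(\delta>0\).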
**Proposition 2.3**.: _Let \((G,\mathcal{P})\) be relatively hyperbolic._
1. _If_ \(K\) _is the limit set of a relatively quasiconvex subgroup, then the set of elements in the orbit_ \(GK\) _forms a null-sequence._
2. _Let_ \(\mathcal{C}\) _be a_ \(G\)_-invariant collection of compact subsets of_ \(\partial(G,\mathcal{P})\) _which defines a null-sequence, where each element of_ \(\mathcal{C}\) _contains more than one point. Then_ \(\mathcal{C}/G\) _is finite and, for any perfect set_ \(K\in\mathcal{C}\)_,_ \(\text{Stab}(K)\) _is a relatively quasiconvex subgroup with limit set_ \(K\)_._
Proof.: Let us first consider a geometrically finite action of the group \(G\) on a proper geodesic hyperbolic metric space \(X\) so that the stabilizers of the parabolic points are the elements of \(\mathcal{P}\). We may then identify \(\partial X\) with \(\partial(G,\mathcal{P})\) and endow it with a visual distance seen from a base point \(o\in X\).
Let \(H\) be a relatively quasiconvex subgroup of the relatively hyperbolic group \((G,\mathcal{P})\). We will prove that the orbit of its limit set \(\Lambda_{H}=K\) forms a null sequence. See [14, Corollary 2.5] for the hyperbolic case.
Fix \(\delta>0\) and let \(R>0\) denote the upper bound on the distances from the origin \(o\) to any geodesic joining points \(\delta\)-apart in the boundary. Let us pick a \(G\)-invariant collection of horoballs \(\mathcal{H}\) in \(X\) centered at the set of parabolic points in such a way that they are pairwise disjoint and that their distance to \(o\) is at least \(R+1\) (by shrinking if necessary). By abuse of notation, we will also let \(\mathcal{H}\) denote the union of the horoballs of the collection. Let \(\mathcal{C}\) denote the set of translates \(g(K)\) of diameter at least \(\delta\) and assume that \(K\in\mathcal{C}\). Since \(H\) is relatively quasiconvex, there is some \(q>0\) so that, for any geodesic \(\gamma\) joining two points in \(K\), \(\gamma\cap(X\setminus\mathcal{H})\) is contained in the \(q\)-neighborhood of \(Ho\), [18, Definition 6.6]. If \(L=g(K)\in\mathcal{C}\), then we may find a geodesic \(\gamma\) at distance at most \(R\) from \(o\) and such that \(g^{-1}(\gamma)\) is in the \(q\)-neighborhood of \(Ho\) outside \(\mathcal{H}\). Since the horoballs are at distance at least \(R+1\) from \(o\), we may find a point of \(\gamma\) at distance at most \(R\) from the origin and at distance at most \(q\) from \(gHo\). Thus, there exists \(g_{L}\in gH\) such that \(g_{L}(o)\in B(o,R+q)\). Since the action of \(G\) is properly discontinuous, there are finitely many elements \(g\in G\) with \(g(o)\in B(o,R+q)\), hence finitely many \(L\in\mathcal{C}\). This shows that \(GK\) is a null-sequence.
We now establish point 2. Let \(m>0\) be such that any distinct pair of points of \(\partial(G,\mathcal{P})\) can be \(m\)-separated by an element of \(G\), i.e., for any \(x,y\in\partial(G,\mathcal{P})\), \(x\neq y\), there is some \(g\in G\) such that \(d(g(x),g(y))\geq m\). Such \(m\) exists since the action on the set of distinct pairs is co-compact, see [31]. Given \(\delta>0\), we let \(\mathcal{C}_{\delta}\) denote the subset of elements \(K\) of \(\mathcal{C}\) such that \(\operatorname{diam}K\geq\delta\); this set is finite since \(\mathcal{C}\) is a null-sequence, and it is non-empty for small enough \(0<\delta\leq m\).
For all \(K\in\mathcal{C}\), we can find two points \(x_{1},x_{2}\in K\) and a group element \(g\in G\) such that \(\{g(x_{1}),g(x_{2})\}\) is \(m\)-separated: this implies that \(g(K)\in\mathcal{C}_{m}\), so that \(\mathcal{C}\) is composed of finitely many orbits.
Let \(K\in\mathcal{C}\) be a perfect compact set. Since \(G_{K}=\mathrm{Stab}\ (K)\) is a subgroup of \(G\), its action on the set of distinct triples of \(K\) is automatically properly discontinuous. Let us prove that it is also geometrically finite.
Let \(x,y\in K\), \(x\neq y\), and assume that \(x\) is conical for \(G\). Let \((g_{n})\) be a sequence of \(G\) such that \((g_{n}(x))_{n}\) tends to \(a\) and \((g_{n}(y))_{n}\) tends to \(b\neq a\). This means that for all \(n\) large enough \(\mathrm{diam}\,g_{n}(K)\) is larger than some constant \(\delta>0\) (for instance \(\delta=d(a,b)/2\)) so belongs to a finite subcollection of \(\mathcal{C}\). Extracting a subsequence if necessary, we may assume that \(g_{n}(K)=L\) for some \(L\in\mathcal{C}\). It follows that \(h_{n}=g_{1}^{-1}g_{n}\) defines a sequence of \(G_{K}\) such that \((h_{n}(x))\) tends to \(g_{1}^{-1}(a)\) and \((h_{n}(y))\) tends to \(g_{1}^{-1}(b)\) for all other points \(y\). This means that \(x\) is conical for \(G_{K}\).
If \(x\in K\) is parabolic, denote by \(G_{x}\) its stabilizer and let \(L\) be a compact fundamental domain for the action of \(G_{x}\) on \(\partial(G,\mathcal{P})\setminus\{x\}\). We first prove that \(G_{x}\cap G_{K}\) is infinite, establishing that \(x\) is also a parabolic point for \(G_{K}\). Since \(x\) is non-isolated in \(K\), we may find a sequence \((x_{n})_{n}\) in \(K\) which tends to \(x\) and a sequence \((g_{n})\) in \(G_{x}\) so that \(g_{n}(x_{n})\in L\). The collection \((g_{n})_{n}\) is infinite and \(\mathrm{diam}\,g_{n}(K)\) is at least \(d(x,L)>0\), so \(g_{n}(K)\) belongs to a finite subcollection \(\mathcal{C}_{L}\). Extracting a subsequence if necessary, we may assume that \(g_{n}(K)\) is a fixed compact subset so that \((g_{1}^{-1}g_{n})_{n}\) is an infinite sequence in \(G_{x}\cap G_{K}\). We will now prove that \(x\) is also bounded as a parabolic point for \(G_{K}\). Let us label the elements of \(\mathcal{C}_{L}\) by \(\{K_{1},\ldots,K_{N}\}\) and let us fix, for each index \(j\in\{1,\ldots,N\}\), an element \(h_{j}\in G_{x}\) such that \(h_{j}(K)=K_{j}\). Set \(L_{K}=\cup_{1\leq j\leq N}h_{j}^{-1}(L)\), which is compact in \(\partial(G,\mathcal{P})\setminus\{x\}\). For any \(y\in K\setminus\{x\}\), we may find \(g\in G_{x}\) so that \(g(y)\in L\); note that \(g(y)\in K_{j}\) for some \(j\in\{1,\ldots,N\}\), implying that \(h_{j}^{-1}g(y)\in L_{K}\). This shows that \(x\) is a bounded parabolic point. Thus, any point in \(K\) is either conical or bounded parabolic for \(G_{K}\), so that \(G_{K}\) is geometrically finite with limit set \(K\).
We now observe that a collection of compact sets forms a null-sequence if it is finite, so in particular if it contains a single element. If the Bowditch boundary of a relatively hyperbolic group consists of more than one component, then Bowditch showed that the group must split. More precisely, the following holds.
**Theorem 2.4**.: _[_4_, Theorem 10.1]_ _The boundary \(\partial\Gamma\) of a relatively hyperbolic group, \(\Gamma\), is connected if and only if \(\Gamma\) does not split non-trivially over any finite subgroup relative to peripheral subgroups._
In the case where the group splits, we have the following description which is again due to Bowditch.
**Theorem 2.5**.: _[_4_, Theorem 10.3]_ _Suppose a relatively hyperbolic group pair splits as a graph of groups with finite edge groups and relative to the peripheral subgroups. Then each vertex group is hyperbolic relative to the peripheral subgroups that it contains and its boundary is naturally identified as a closed subset of the boundary of the whole group._
The following proposition is an immediate consequence of our Proposition 2.3 and the above discussion and results by Bowditch.
**Proposition 2.6**.: _The set of components of the Bowditch boundary of a relatively hyperbolic group \((G,\mathcal{P})\) forms a null-sequence. Moreover, for each non-trivial component, the stabilizer is hyperbolic relative to conjugates of the original peripheral subgroups \(\mathcal{P}\)._
While the boundary of a relatively hyperbolic group is not always connected and sometimes contains cut points, the structure of cut points allows us to rule out a dendrite boundary:
**Lemma 2.7**.: _Let \((G,\mathcal{P})\) be a geometrically finite convergence group. Then \(\partial(G,\mathcal{P})\) is not a dendrite._
Recall that a _dendrite_ is a connected, locally connected, compact metric space that contains at least two points and no simple closed curve.
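For instance, a closed arc and, more generally, any finite topological tree is a dendrite; in an arc every interior point is a cut point, and this abundance of cut points is what the proof below exploits.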
Proof.: According to [10, Theorem 1.1], every cut point of \(\partial(G,\mathcal{P})\) is a parabolic point. This readily implies that there are at most countably many cut points in \(\partial(G,\mathcal{P})\). To reach the desired conclusion, we shall show that a dendrite contains an open path of cut points. To see this, let \(L\) be a dendrite. Then \(L\) is path connected according to [33, II.5.1], since it is a locally connected complete metric space. Let \(x\) and \(y\) be distinct points in \(L\) and \(p\) a path between them. Remove a point \(z\) on \(p\). If \(z\in L\) is not a cut point, \(L\setminus\{z\}\) is connected. Since \(L\setminus\{z\}\) is connected and locally compact, i.e., a _generalized continuum_, the fact that it is locally connected implies that it is path-wise connected [33, II.5.2]. Thus there is another path \(p^{\prime}\) from \(x\) to \(y\) that misses \(z\). Then the set \(\{r\in[0,1]\mid p^{\prime}(r)\in p([0,1])\}\) is closed in \([0,1]\) and not all of \([0,1]\), so its complement contains an open interval, and this gives us a loop in \(L\), which is absurd.
Lemma 2.7 can also be derived using Theorem 1.2 of [10].
The statement of the next proposition is due to Susskind and Swarup for geometrically finite Kleinian groups [29, Thm 3]. The same argument applies verbatim to relatively hyperbolic groups.
**Proposition 2.8** (Susskind and Swarup).: _Let \((G,\mathcal{P})\) be relatively hyperbolic and \(H,K\) be two relatively quasiconvex subgroups. Then \(H\cap K\) is relatively quasiconvex and \(\Lambda_{H}\cap\Lambda_{K}=\Lambda_{H\cap K}\cup P\) where \(P\) is a (possibly empty) discrete set of common parabolic points._
Together with Theorem 2.5 above, we will rely on one more result regarding splittings, again by Bowditch.
**Theorem 2.9**.: _[_4_, Theorem 10.2]_ _Any relatively hyperbolic group pair can be expressed as the fundamental group of a finite graph of groups with finite edge groups and with every peripheral subgroup conjugate into a vertex group, with the property that no vertex group splits non-trivially over any finite subgroup relative to the peripheral subgroups._
We obtain in this way
**Corollary 2.10**.: _Suppose \((G,\mathcal{P})\) is a relatively hyperbolic group pair and \(\partial(G,\mathcal{P})\) is a Cantor set. The group \(G\) is the fundamental group of a finite graph of groups where all the edge groups are finite, and each vertex group is either finite or a peripheral group._
Proof.: We apply Theorem 2.9 to express \(G\) as the fundamental group of a finite graph of groups with finite edge groups and with every peripheral subgroup conjugate into a vertex group, with the property that no vertex group splits non-trivially over any finite subgroup relative to the peripheral subgroups. Theorem 2.4 tells us that this graph of groups is non-trivial since the boundary is disconnected. Since the boundary is totally disconnected, Theorem 2.5 implies that a vertex group is the stabilizer of a point or trivial, so each vertex group is either conjugate to a peripheral subgroup or finite.
**Theorem 2.11**.: _Let \((G,\mathcal{P})\) be a relatively hyperbolic group pair with \(\partial(G,\mathcal{P})\) homeomorphic to a Cantor set. Assume that for all \(P\in\mathcal{P}\), \(P\) is residually finite. Then \(G\) is virtually \(F\ast(*_{1}^{n}P_{i})\) where \(F\) is free (possibly of rank 0) and each \(P_{i}\) is a finite index subgroup of some \(P\in\mathcal{P}\)._
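For instance, if every \(P\in\mathcal{P}\) is isomorphic to \(\mathbb{Z}^{2}\) (which is residually finite, so the hypothesis is satisfied), then, since every finite index subgroup of \(\mathbb{Z}^{2}\) is again isomorphic to \(\mathbb{Z}^{2}\), the conclusion reads: \(G\) is virtually \(F_{n}*\mathbb{Z}^{2}*\cdots*\mathbb{Z}^{2}\).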
Theorem 2.11 follows from Corollary 2.10 and
**Theorem 2.12**.: _Let \((G,\mathcal{P})\) be a relatively hyperbolic group pair, such that \(G\) can be written as a finite graph of groups, where every edge group is finite and each vertex group is either finite or a peripheral group. Assume that each peripheral group is residually finite. Then \(G\) is virtually the free product of a free group and finite index subgroups of peripheral groups._
Before proving Theorem 2.12 we set some notation.
We will express a splitting of a group in terms of an action of the group on a simplicial tree with finite edge stabilizers and without edge inversions. A splitting is said to be _relative_ to a certain collection of subgroups if every subgroup in this collection fixes a vertex of the tree. It is _non-trivial_ if no vertex of the tree is fixed by the whole group.
Given a group \(G\) with an action on a simplicial tree \(T\) with no edge inversions, we let \(\Gamma=T/G\) be the orbit space. For each vertex \(v\) of \(\Gamma\) we may consider a vertex group \(G_{v}\) defined as the stabilizer of a representative of the vertex in \(v\). In the same manner, we define edge groups \(G_{e}\) for edges. The action of \(G\) on \(T\) provides us with injective maps \(\phi_{0,e}:G_{e}\to G_{v}\), \(\phi_{1,e}:G_{e}\to G_{v}\) defined whenever \(e(0)\) or \(e(1)\) is \(v\).
We say the tuple \(\mathcal{G}=(\Gamma,\{G_{v}\},\{G_{e}\},\{\phi_{\epsilon,e}\})\) is a graph of groups, and \(G\) is the _fundamental group of the graph of groups \(\mathcal{G}\)_. The set of generators of \(G\) is the union of the sets of generators for all the \(G_{e}\) and the \(G_{v}\), together with a set containing a generator \(t_{e}\) for each edge of \(\Gamma\). The relations are all the relations in each \(G_{e}\) and \(G_{v}\), \(t_{e}=1\) if \(e\) is in a fixed maximal tree, \(t_{e}^{-1}=t_{\bar{e}}\) and \(t_{e}\phi_{0,e}(x)t_{e}^{-1}=\phi_{1,e}(x)\) for all \(x\in G_{e}\).
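For example, if \(\Gamma\) consists of a single edge \(e\) with two distinct endpoints \(v_{1}\) and \(v_{2}\), then \(G\) is the amalgamated free product \(G_{v_{1}}*_{G_{e}}G_{v_{2}}\); if \(\Gamma\) is a single loop \(e\) at a vertex \(v\), then \(G\) is the HNN extension \(\langle G_{v},t_{e}\mid t_{e}\phi_{0,e}(x)t_{e}^{-1}=\phi_{1,e}(x),\ x\in G_{e}\rangle\).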
Let \(G\) be the fundamental group of a graph of groups \(\mathcal{G}\) with underlying graph \(\Gamma\). Suppose further, as in the hypotheses of Theorem 2.12, that \(\mathcal{P}\) is a collection of subgroups of \(G\) where each subgroup is residually finite, each edge group is finite, and each vertex group is either finite or a subgroup in \(\mathcal{P}\).
Proof of Theorem 2.12.: We will map \(G\) to the fundamental group of a graph of groups \(G^{\prime}\), over the same graph \(\Gamma\) but where the vertex groups and edge groups are all finite. For each infinite vertex group \(G_{v}\), conjugate to some \(P\in\mathcal{P}\), there are finitely many edges meeting the vertex \(v\). Since \(P\) is residually finite, there is a map \(\psi_{v}:G_{v}\to C_{v}\) onto a finite group \(C_{v}\) which is injective on the union of the images \(\phi_{\epsilon,e}(G_{e})\) where \(e(\epsilon)=v\). We will define \(G^{\prime}\) as the fundamental group of \(\mathcal{G}^{\prime}=(\Gamma,\{G^{\prime}_{v}\},\{G_{e}\},\{f_{\epsilon,e}\})\) where
* \(G^{\prime}_{v}=G_{v}\) if \(G_{v}\) is finite, and \(G^{\prime}_{v}=C_{v}\) if \(G_{v}\) is infinite.
* \(f_{\epsilon,e}=\psi_{v}\circ\phi_{\epsilon,e}:G_{e}\to C_{v}\) if \(G_{e(\epsilon)}\) is infinite, and \(f_{\epsilon,e}=\phi_{\epsilon,e}\) if \(G_{e(\epsilon)}\) is finite.
The group \(G\) admits a natural surjection to \(G^{\prime}\). Furthermore, \(G^{\prime}\) admits a surjection to a finite group which is injective on every edge and vertex group of \(G^{\prime}\), by Scott and Wall [28, Chapter 7]. Then the composition of these two maps is a map from \(G\) to a finite group which is injective on every finite vertex group and every edge group. The kernel \(H\) of this composition is a finite index subgroup of \(G\) which acts on the same tree as \(G\) but with trivial edge groups. Thus, we see \(H\) as the fundamental group of a finite graph of groups where the edge groups are trivial and the vertex groups are either trivial or finite index subgroups of conjugates of peripheral subgroups of \(G\). This implies that \(H\) is the free product of a free group and finite index subgroups of peripheral groups.
## 3 Geometrically finite convergence groups acting on \(S^{2}\)
A relatively hyperbolic group pair \((G,\mathcal{P})\) can have a planar boundary where the action does not extend to \(S^{2}\); see, for example [20, Section 9], where the group \(G\) is hyperbolic and virtually Kleinian. The group \(G\) need not be virtually Kleinian for \(\partial(G,\mathcal{P})\) to be planar, though, and its peripheral subgroups can be arbitrary [19]. Here we collect some general results on geometrically finite convergence groups on \(S^{2}\), which will be used for the more specific case of Schottky sets which we study here.
Let \(G\) be a convergence group acting on \(S^{2}\) with limit set \(\Lambda=\Lambda_{G}\subset S^{2}\). A relatively hyperbolic group pair \((G,\mathcal{P})\) is a _geometrically finite convergence group on \(S^{2}\)_ if every point of \(\Lambda\) is either a bounded parabolic point (with maximal parabolic group in \(\mathcal{P}\)) or a conical limit point. We are not in general assuming that the action is faithful: there could be a finite normal subgroup of \(G\) which acts as the identity on \(S^{2}\). When we know that the quotient by this finite normal subgroup is virtually a \(2\) or \(3\)-manifold group, there is a finite index subgroup of \(G\) which acts as a subgroup of \(Homeo(S^{2})\), by [16, Theorem 1.3]. In what follows we will be analyzing the quotients by the finite normal subgroup, and the results in general will be virtual.
**Lemma 3.1**.: _An infinite-order, orientation-preserving parabolic element of a geometrically finite convergence group on \(S^{2}\) is conjugate to a translation._
Proof.: Let \(g\in G\) be parabolic with fixed point \(p\in S^{2}\). Its restriction to \(S^{2}\setminus\{p\}\) is fixed-point free and its action is properly discontinuous. Hence, \((S^{2}\setminus\{p\})/\langle g\rangle\) is a surface with cyclic fundamental group and so homeomorphic to a cylinder. This implies that the action of \(g\) is conjugate to that of a translation.
**Proposition 3.2**.: _Let \((G,\mathcal{P})\) be a geometrically finite convergence group on \(S^{2}\) with \(G\) finitely generated. Then each \(P\in\mathcal{P}\) is a virtually finite type surface group, that is, virtually free of rank at least \(1\) or virtually a closed surface group._
Proof.: Any maximal parabolic subgroup \(P\) is finitely generated, since we are assuming that \(G\) is finitely generated by [26, Prop. 2.29]. Since \(P\) also acts properly on \(\mathbb{R}^{2}\), this is exactly [19, Cor. 3.2]. The proof uses [16, Thm 1.3] in the case that there is a finite normal subgroup.
**Elementary action.--** A convergence action on a compact metrizable space is _elementary_ if its limit set is finite, i.e., contains at most two points. Such actions are classified on the sphere, cf. [21, Theorem 3.4, Lemma 4.2].
**Proposition 3.3**.: _Let \(G\) be a finitely generated subgroup of \(Homeo^{+}(S^{2})\) (the orientation preserving homeomorphisms of \(S^{2}\)) which is an elementary convergence group._
_Then its action is conjugate to that of a group of Möbius transformations. More precisely,_
1. _If_ \(\Lambda_{G}=\emptyset\)_, then its action is conjugate to that of a finite subgroup of_ \(SO_{3}(\mathbb{R})\)
2. _If_ \(\Lambda_{G}\) _is a singleton, then_ \(G\) _contains a finite index subgroup whose action is conjugate to that of a Fuchsian group that defines a surface of finite type. In particular, if_ \(G\) _is two-ended, then_ \(G\) _is either isomorphic to_ \(\mathbb{Z}\) _or to_ \((\mathbb{Z}/2\mathbb{Z})*(\mathbb{Z}/2\mathbb{Z})\)_._
3. _If_ \(\Lambda_{G}\) _is a pair of points, then_ \(G\) _has an index 1 or 2 subgroup_ \(G^{\prime}\) _which is Abelian, and the action of_ \(G^{\prime}\) _is conjugate to that of_ \(\langle z\mapsto 2z,z\mapsto\zeta z\rangle\)_, with_ \(\zeta^{n}=1\) _for some_ \(n\in\mathbb{N}\)_._
Proof.: If the limit set is empty, then the action of \(G\) on \(S^{2}\) is properly discontinuous and cocompact so that \(S^{2}/G\) is naturally equipped with a good spherical orbifold structure. In other words, its action is conjugate to that of a finite subgroup of \(SO_{3}(\mathbb{R})\).
Let us now assume that the limit set consists of a single point \(p\). It must be parabolic so Proposition 3.2 implies that it is virtually a surface group of finite type.
Let us now assume furthermore that \(G\) is two-ended. Following [28, Theorem 5.12], we consider a finite normal subgroup \(F\) of \(G\). Note that, as \(F\) is normal and finite, we can find an infinite order element \(g\) in \(G\) that centralizes it. This implies that \(F\) also fixes the point \(p\), and, by the previous case, \(F\) has to be a finite cyclic group. Let us assume that \(F\) is non-trivial and let \(q\) be the other fixed point under \(F\). Since \(g\) has infinite order, it acts as a translation by Lemma 3.1. Thus we should have \(g^{n}fg^{-n}(q)=f(q)=q\) for all \(n\in\mathbb{Z}\) and \(f\in F\). However, this shows that \(f\) fixes infinitely many points (namely the points \(g^{-n}(q)\), \(n\in\mathbb{Z}\), which are pairwise distinct since \(g\) acts as a translation and \(q\neq p\)) and cannot be non-trivial, so that we may conclude that \(F\) is trivial and that \(G\) is isomorphic to \(\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})*(\mathbb{Z}/2\mathbb{Z})\) by [28, Theorem 5.12 (iii)].
We now assume that \(\Lambda_{G}\) has two points, so \(G\) is two-ended. Now take the subgroup \(G^{\prime}\) of \(G\) which fixes pointwise \(\Lambda_{G}\). This is a subgroup of index at most two. As above, we consider a finite normal subgroup \(F\) of \(G^{\prime}\). Since \(F<G^{\prime}\) fixes \(\Lambda_{G}=\{p,q\}\) pointwise and is finite, \(F\) has to be a rotation group, i.e., a finite (cyclic) subgroup of \(SO_{2}(\mathbb{R})\). Since the action is properly discontinuous, cocompact and free on \(S^{2}\setminus\{p,q\}\), the quotient by \(G^{\prime}\) is a torus. If \(G^{\prime}\neq G\), \(G=G^{\prime}\rtimes\mathbb{Z}/2\mathbb{Z}\), where \(\mathbb{Z}/2\mathbb{Z}\) acts dihedrally on \(G^{\prime}\). The result follows.
The following is immediate from [10]; in the case when the peripheral groups are tame, it also follows from previous work [2, Thm 0.1] and [3, Thm 0.2].
**Corollary 3.4**.: _Let \((G,\mathcal{P})\) be a geometrically finite convergence group acting on \(S^{2}\). Then every cut point of a component is a parabolic point. Furthermore, the components of the limit set are all locally connected._
**Lemma 3.5**.: _Let \((G,\mathcal{P})\) be a geometrically finite convergence group on \(S^{2}\) with connected Bowditch boundary \(\partial(G,\mathcal{P})=\Lambda\). The components of the ordinary set \(S^{2}\setminus\Lambda\) form finitely many orbits, and the stabilizer of each component is a 2-orbifold group. When \(G\) is finitely generated, these are finite-type surface groups._
Proof.: We start by observing that the compact connected set \(\Lambda\) is also locally connected by Corollary 3.4. According to [33, Theorem VI.4.4], local connectivity of \(\Lambda\) assures that the components of the ordinary set \(S^{2}\setminus\Lambda\) form a null-sequence (\(\Lambda\) is an \(E\)-set) and that the boundary of each component is locally connected. Moreover, each component \(\Omega\) is simply connected. This follows from the fact that every simple closed curve contained in the surface \(\Omega\) separates the sphere into two disks, one containing the connected set \(\Lambda\supset\partial\Omega\) and the other contained in \(\Omega\).
If \(S^{2}\setminus\Lambda\) has only finitely many components, then \(G\) contains a finite index subgroup that stabilizes each component, i.e., a relatively quasiconvex subgroup. Of course, in this case \(S^{2}\setminus\Lambda\) is made of finitely many orbits.
If \(S^{2}\setminus\Lambda\) has infinitely many components forming a null sequence, we may apply part 2 of Proposition 2.3 to conclude that \(S^{2}\setminus\Lambda\) is made of finitely many orbits and their boundaries are stabilized by relatively quasiconvex subgroups. We claim that the stabilizer \(H\) of a component \(\Omega\) of \(S^{2}\setminus\Lambda\) is of finite index (at most 2) in the stabilizer of its boundary \(\partial\Omega\). This shows that \(H\) is relatively quasiconvex. The claim follows by observing that the elements of the stabilizer of \(\partial\Omega\) that do not leave \(\Omega\) invariant must permute the components of \(S^{2}\setminus\Lambda\) that have the same boundary as \(\Omega\). If \(\Omega\) is a Jordan domain, that is, if its closure is an embedded disk, then its boundary is a Jordan curve and either bounds one or two components of \(S^{2}\setminus\Lambda\). If \(\Omega\) is not a Jordan domain, the boundary of \(\Omega\) is not an embedded circle. However, since it is locally connected, the Caratheodory-Torhorst theorem applies: we can find a homeomorphism of the open disk onto \(\Omega\) which extends continuously to the boundaries, \(f:D^{2}\to\bar{\Omega}\). Since \(\partial\Omega\) is not a circle, such a map cannot be injective. We can thus find a simple closed curve \(\gamma\) which is contained in the closure of \(\Omega\) and meets \(\partial\Omega\) in a single cut point. The curve \(\gamma\) is the image of a simple arc joining two points of the boundary of the closed disk which are mapped to the same point in \(\partial\Omega\). Since every other component of \(S^{2}\setminus\Lambda\) must sit on either side of \(\gamma\), another component cannot have the same boundary as \(\Omega\). We thus see that \(H=Stab(\Omega)\) coincides with the stabilizer of \(\partial\Omega\) in this case. Since \(\Omega\) is an open disc and \(H\) acts properly discontinuously on \(\Omega\), \(H\) is an orbifold group by [19, Cor. 3.2].
When \(G\) is a finitely generated convergence group acting on \(S^{2}\), the peripheral subgroups are finite-type orbifold subgroups by Proposition 3.2. We claim that the peripheral subgroups of \(H\) are finitely generated, hence \(H\) is finitely generated, since it is relatively quasiconvex, and hence hyperbolic relative to the induced peripheral subgroups. Any peripheral subgroup \(Q\) of \(H\) is \(P\cap H\), where \(P\) is a peripheral subgroup of \((G,\mathcal{P})\). This is exactly the subgroup of \(P\) that takes \(\Omega\) to itself. Then \(\Omega/Q\) embeds in the finite-type orbifold \((S^{2}\setminus\{p\})/P\), so is an embedded sub-orbifold of a finite-type orbifold, and hence of finite type. Since the peripheral subgroups are finitely generated, so is \(H\).
**Lemma 3.6**.: _Let \((G,\mathcal{P})\) be a geometrically finite, non elementary, convergence group on \(S^{2}\) with limit set \(\Lambda\). Let \(\Omega\) be a simply connected component of \(S^{2}\setminus\Lambda\) and \(h:\overline{\mathbb{D}}\to\overline{\Omega}\) the extension of the homeomorphism conjugating the action of the stabilizer \(H\) as above. Let \(p\in\partial\Omega\) be a parabolic point with stabilizer \(P\), and set \(Q=h^{-1}\circ(P\cap H)\circ h\). Then the limit set \(\Lambda_{Q}\) is exactly the non-empty set \(h^{-1}(\{p\})\)._
Proof.: Let \(K\subset S^{2}\setminus\{p\}\) be a compact subset containing a fundamental domain for the action of \(P\) on \(\Lambda\setminus\{p\}\). We may find \(g\in P\) such that \(g(\partial\Omega)\cap K\neq\emptyset\). Thus, \(g(\Omega)\) contains in its closure the point \(p\) and at least one point of \(K\). Such components form a finite set since \(\Lambda\) is an \(E\)-set (Def. 2.2). By considering a sequence of points in \(\partial\Omega\) tending to \(p\), we may pick an infinite sequence \((g_{n})\) in \(P\) that maps \(\Omega\) to components whose closures intersect both \(\{p\}\) and \(K\). As there are only finitely many of them, we may assume that \(g_{n}(\Omega)=V\) for a fixed component \(V\), and all \(n\geq 1\). Therefore, \((g_{1}^{-1}g_{n})_{n}\) is an infinite collection of elements of \(H\cap P\) which proves that \(\Lambda_{Q}\) is not empty.
Since \(hQ\subset Ph\), it follows that \(h^{-1}(\{p\})\) is \(Q\)-invariant and compact, hence it contains \(\Lambda_{Q}\) which by definition is the minimal compact invariant subset under the action of \(Q\).
The equality will follow from the fact that \(p\) is a bounded parabolic point. We first rule out the case that \(h^{-1}(\{p\})\) contains an interval. If this were the case, then it would be the whole circle by invariance, so that we would have \(H=P\); this contradicts that \(\Lambda_{P}=\{p\}\) and \(\Lambda_{H}=\partial\Omega\). Therefore, \(h^{-1}(\{p\})\) is nowhere dense in \(\mathbb{S}^{1}\).
Let \(\Omega_{1},\ldots\Omega_{k}\), be the \(P\)-translates of \(\Omega\) whose closures intersect both \(\{p\}\) and \(K\) and let us fix \(g_{1},\ldots g_{k}\in P\) such that \(g_{j}(\Omega_{j})=\Omega\). Set \(L=h^{-1}(\cup_{1\leq j\leq k}g_{j}(K))\). This is a compact subset of \(\overline{\mathbb{D}}\) disjoint from \(h^{-1}(\{p\})\), hence from \(\Lambda_{Q}\). Let \(x\in h^{-1}(p)\). We want to prove that the action of \(Q\) is not equicontinuous at \(x\). With that in mind, pick a point \(y\in\mathbb{S}^{1}\setminus h^{-1}(\{p\})\) arbitrarily close to \(x\). Note that \(h(y)\in\partial\Omega\setminus\{p\}\) and since \(K\) is a fundamental domain, we may find \(g\in P\) and \(j\in\{1,\ldots,k\}\) so that \(g(h(y))\in K\cap\partial\Omega_{j}\). It follows that \(g_{j}g\in(H\cap P)\) so that we may find \(q=h^{-1}g_{j}gh\in Q\) with \(q(y)\in L\). This implies that \(x\in\Lambda_{Q}\). Indeed, considering now a sequence \((y_{n})\) in \(\mathbb{S}^{1}\setminus h^{-1}(\{p\})\) tending to \(x\), we obtain in this way a sequence \((q_{n})\) in \(Q\) such that \((q_{n}(x),q_{n}(y_{n}))\in h^{-1}(\{p\})\times L\): as \(L\) and \(h^{-1}(\{p\})\) are disjoint compact subsets, \((q_{n})_{n}\) cannot be equicontinuous at \(x\).
As already observed in the proof of Lemma 3.5, if \(h\) is not injective, i.e., if \(h(x)=h(y)\) for some pair of points \(x,y\) in \(\mathbb{S}^{1}\), then \(h(x)\) is a cut point of \(\Lambda_{G}\) (we may build a Jordan arc in \(\overline{\mathbb{D}}\), a crosscut, that maps under \(h\) to a separating Jordan curve), and so \(h(x)\) is parabolic.
Given a parabolic point \(p\) with stabilizer \(P\) and a component \(\Omega\) of the ordinary set which contains \(p\) in its boundary, we will say that \(p\) is _uniquely accessible from \(\Omega\)_ if the above map \(h:\overline{\mathbb{D}}\to\overline{\Omega}\) is injective over \(p\), i.e., \(h^{-1}(\{p\})\) is a singleton. Likewise, we say that \(p\) is _doubly accessible from \(\Omega\)_ if \(h^{-1}(\{p\})\) consists of two points. We expect that in general \(h^{-1}(p)\) will be a Cantor set if \(p\) is not uniquely or doubly accessible.
**Corollary 3.7**.: _Let \(p\) be a parabolic point with stabilizer \(P\) of a geometrically finite convergence group acting on \(S^{2}\), \((G,\mathcal{P})\). Assume that the component of its Bowditch boundary containing \(p\) is not a singleton. Let \(\Omega\) be any component such that \(\partial\Omega\) contains \(p\). If \(P\) is two-ended, then \(p\) is either uniquely or doubly accessible from \(\Omega\)._
Proof.: Recall the notation of Lemma 3.6: \(Q\) is defined as \(h^{-1}\circ(P\cap H)\circ h\), where \(h\) is the extension of the homeomorphism conjugating the action of the stabilizer \(H\) of \(\Omega\). The accesses to \(p\) from \(\Omega\) are in bijection with the points of \(\Lambda_{Q}\). By Lemma 3.6, the limit set is non-empty, so \(Q\) is infinite.
Since we assume that \(P\) is two-ended, this is also the case of \(Q\). Hence there is a finite index cyclic subgroup in \(Q\) that is generated either by a loxodromic element, implying the point \(p\) is doubly accessible, or by a parabolic element, implying the point \(p\) is uniquely accessible.
We note that the converse of Corollary 3.7 does not hold in full generality. Here is a counter-example: pick a convex-cocompact Kleinian group \(G\) that uniformizes a hyperbolic \(3\)-manifold with totally geodesic boundary; consider one component \(F\) of its boundary and choose a compact \(\pi_{1}\)-injective proper subsurface \(S\) in \(F\), with a non-Abelian free fundamental group \(P\), such that each component of the complement of \(S\) has also non-Abelian free fundamental group. The pair \((G,\mathcal{P})\), where \(\mathcal{P}\) consists of the conjugates of \(P\), is a planar relatively hyperbolic group pair. To see this, \(P\) stabilises a component \(\Omega_{F}\) of the ordinary set, hence the hyperbolic convex hull \(K\) of \(\Lambda_{P}\) in
\(\Omega_{F}\) is connected and simply connected in \(\Omega_{F}\), and precisely invariant under \(P\), i.e., if \(g(K)\cap K\neq\emptyset\) for some \(g\in G\), then \(g\in P\). Therefore, as \(\Lambda_{G}\) is a Sierpinski carpet, \(G(K)\) is a null sequence that satisfies the assumptions of Moore's Theorem 4.3: by collapsing each component of \(G(K)\) we obtain a geometrically finite convergence group action on \(S^{2}\) for which \(P\) is parabolic with fixed point \(p\). Moreover, the parabolic point \(p\) is on the boundary of countably many components \(\Omega\) such that \(\operatorname{Stab}(\Omega)\cap P\) is cyclic but \(P\) is not, and \(p\) is uniquely accessible from each component.
**Proposition 3.8**.: _Let \(p\) be a parabolic point with stabilizer \(P\) of a geometrically finite convergence group on \(S^{2}\), \((G,\mathcal{P})\). We assume that the component of \(\partial(G,\mathcal{P})\) containing \(p\) is not a singleton. Let \(\Omega_{p}\) denote the union of the ordinary components which contain \(p\) on their boundary. The action of \(P\) on \(S^{2}\setminus(\{p\}\cup\Omega_{p})\) is cocompact and the set of components of \(\Omega_{p}\) forms finitely many orbits._
_In particular, if \(p\) is in the boundary of no ordinary component, then \(P\) acts cocompactly on \(S^{2}\setminus\{p\}\)._
Proof.: We may assume that \(\Lambda_{G}\) is connected according to Proposition 2.6. Let \(K\subset S^{2}\setminus\{p\}\) be a compact subset containing a fundamental domain for the action of \(P\) on \(\Lambda_{G}\setminus\{p\}\).
Since \(\Lambda_{G}\) is an \(E\)-set, it follows that the closure of the union of ordinary components \(\Omega\in\pi_{0}(\Omega_{G}\setminus\Omega_{p})\) with \(K\cap\partial\Omega\neq\emptyset\) is a compact subset \(L\) of \(S^{2}\setminus\{p\}\). For any component \(\Omega\) disjoint from \(\Omega_{p}\), we may find \(g\in P\) such that \(g(\partial\Omega)\cap K\neq\emptyset\), so that \(g(\Omega)\subset L\). It follows that the action is cocompact on \(S^{2}\setminus(\{p\}\cup\Omega_{p})\).
We now consider components \(\Omega\) which contain \(p\) on their boundary. As above, we may find \(g\in P\) such that \(g(\partial\Omega)\cap K\neq\emptyset\). Thus, \(g(\Omega)\) contains in its closure the point \(p\) and at least one point of \(K\). Such components form a finite set since \(\Lambda_{G}\) is an \(E\)-set.
We conclude with some general properties of the ordinary set, which are proved as for Kleinian groups. The next proposition was already known, but we were unable to find a formal proof in the literature.
**Proposition 3.9**.: _Let \(G\) be a convergence group acting on \(S^{2}\). Then the ordinary set has zero, one, two or infinitely many components._
Proof.: The conclusion is obvious if the limit set \(\Lambda_{G}\) is empty: in this case the ordinary set is connected and the action is elementary.
Let us first consider the case when \(\Lambda_{G}\neq\emptyset\) is not connected. If all of its components are points, in particular if \(\Lambda_{G}\neq\emptyset\) is finite and the action of \(G\) is elementary, then \(\Omega_{G}\) is connected. Otherwise, there are infinitely many components of \(\Lambda_{G}\) which are non-trivial, so there are infinitely many components of the ordinary set by Lemma 2.7.
We may now assume that \(\Lambda_{G}\) is an infinite connected compact set. Let us assume furthermore that the ordinary set has at least two but finitely many components.
Considering a finite-index subgroup if necessary, one may assume that the group \(G\) fixes each component. Therefore, \(\Lambda_{G}\) is the boundary of each component of the ordinary set. This is the main point and follows from the fact that the boundary of each component is closed, contained in \(\Lambda_{G}\), and \(G\)-invariant.
By density of loxodromic fixed points, the group \(G\) contains a loxodromic element \(g\) with fixed points \(a\) and \(b\) in \(\Lambda_{G}\).
Consider a component \(\Omega\) of the ordinary set and a point \(x\in\Omega\). We may find a path \(c_{0}\) in \(\Omega\) that joins \(x\) to \(g(x)\). The \(g\)-orbit \(c_{n}=g^{n}(c_{0})\), \(n\in\mathbb{Z}\), defines a path which joins
\(a\) and \(b\) in \(\Omega\) by the convergence property. Its image contains an arc \(c\) also joining \(a\) and \(b\), i.e. a path without self-intersections.
Since, as remarked above, \(\Lambda_{G}\) is the boundary of every component, we can proceed similarly with a second component \(\Omega^{\prime}\) and denote by \(c^{\prime}\) an arc in \(\Omega^{\prime}\) which joins \(a\) and \(b\). Then \(\{a,b\}\cup c\cup c^{\prime}\) is a Jordan curve that separates \(\Lambda_{G}\), for there are points of both \(\Omega\) and \(\Omega^{\prime}\) on each side of the Jordan curve. If there were a third component in the complement of \(\Lambda_{G}\), it would sit on one side of this Jordan curve \(\{a,b\}\cup c\cup c^{\prime}\). Because of this, the boundary of this new component could not be \(\Lambda_{G}\) as it should be.
**Corollary 3.10**.: _Let \((G,\mathcal{P})\) be a geometrically finite convergence group acting on \(S^{2}\). If \(\Omega_{G}\) is non-empty and connected, then \(\Lambda_{G}\) is totally disconnected. If furthermore \(G\) is finitely generated, then \(G\) is covered by a Kleinian group. If \(\Omega_{G}\) has exactly two components, then \(\Lambda_{G}\) is a circle and \(G\) is either isomorphic to a Fuchsian group of finite coarea, or to a degree 2 extension of such a group._
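For instance, a Fuchsian group of finite coarea, viewed as a group of Möbius transformations of \(\widehat{\mathbb{C}}\), has limit set the round circle \(\widehat{\mathbb{R}}\) and ordinary set consisting of the two complementary open disks, illustrating the last case.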
Proof.: Let us assume that \(\Omega_{G}\) is connected and let us assume for contradiction that \(\Lambda\) is a component of the limit set with at least two points. Then \(\Lambda\) is the limit set of its stabilizer \(H\) which is also hyperbolic relative to virtual surface groups, cf. Proposition 2.6. Since \(\Lambda\) does not separate the plane and does not contain an open disk, it is simply connected. It follows that \(\Lambda\) cannot contain a simple closed curve and, since it is locally connected by Corollary 3.4, it is a dendrite, which is impossible by Lemma 2.7. So \(\Lambda_{G}\) is totally disconnected. Now, when \(G\) is finitely generated, it follows from [21, Corollary 5.4] that \(G\) is covered by a Kleinian group.
Assume \(\Omega_{G}\) has two components \(\Omega_{\pm}\). Taking an index 2 subgroup if necessary, we may assume that both components are invariant under \(G\) so that \(\overline{\Omega_{+}}\cap\overline{\Omega_{-}}=\Lambda_{G}\), the minimal \(G\)-invariant set. This implies that \(\Lambda_{G}\) is their common boundary and is connected by [33, Cor. VI.2.11], hence locally connected by Corollary 3.4, and that \(\Omega_{\pm}\) are simply connected. Thus, by the Caratheodory-Torhorst theorem, there are continuous onto maps \(\varphi_{\pm}:\overline{\mathbb{D}}\to\overline{\Omega_{\pm}}\) from the closed unit disk that restrict to homeomorphisms between their interiors. Since both images of the unit circle coincide, reasoning as in the proof of Lemma 3.5, we may conclude that \(\Lambda_{G}\) is a Jordan curve. Finally Lemma 3.5 enables us to conclude in this case. For more general results, see [23].
## 4 Blowing-up rank-one parabolic points
**Definition 4.1**.: Let \((G,\mathcal{P})\) be a geometrically finite convergence group on \(S^{2}\). We write \(\mathcal{P}=\mathcal{P}_{1}\cup\mathcal{P}_{2}\) where \(\mathcal{P}_{1}\) consists of all stabilizers of rank 1 parabolic points.
**Theorem 4.2**.: _Let \((G,\mathcal{P})\) be a geometrically finite convergence group on \(S^{2}\), and \(\mathcal{P}=\mathcal{P}_{1}\cup\mathcal{P}_{2}\) as in Definition 4.1. Then \((G,\mathcal{P}_{2})\) is a geometrically finite convergence group on \(S^{2}\) and there is an equivariant degree \(1\) continuous map \(\phi:S^{2}\to S^{2}\) mapping the Bowditch boundary of \((G,\mathcal{P}_{2})\) onto that of \((G,\mathcal{P})\)._
Before giving the proof, we pause for some topological facts, starting with the following particular case of Moore's theorem [24].
**Theorem 4.3** (Moore).: _Let \(\mathcal{C}\) be a pairwise disjoint collection of compact and connected subsets of the sphere \(S^{2}\) such that each \(K\in\mathcal{C}\) is not a point. Assume that each
element has a connected complement and the set \(\mathcal{C}\) forms a null-sequence. Let \(\sim\) be the equivalence relation generated by \(x\sim y\) if there is some \(K\in\mathcal{C}\) that contains \(\{x,y\}\). Then \(Z=S^{2}/\sim\) is a topological sphere when endowed with the quotient topology._
We add some further properties that will be used in the proof of Theorem 4.2.
**Proposition 4.4**.: _Under the assumptions of Theorem 4.3, set \(Y=S^{2}\setminus\cup_{K\in\mathcal{C}}K\). For any connected open subset \(U\) of \(S^{2}\) such that \(\partial U\subset Y\), the set \(Y\cap U\) is arcwise connected. In particular \(Y\) is arcwise connected, and every point of \(Y\) admits a basis of neighborhoods such that the boundaries of these neighborhoods are disjoint from \(S^{2}\setminus Y=\cup_{K\in\mathcal{C}}K\)._
Proof.: Denote by \(\pi:S^{2}\to Z\) the canonical projection and note that for all \(y\in Y\), \(\pi^{-1}(\pi(\{y\}))=\{y\}\), in particular the restriction of \(\pi\) to \(Y\) is injective. We first justify that if \(A\) is a compact arc or a Jordan curve in \(\pi(Y)\), then so is \(B=\pi^{-1}(A)\). To see this, note that \(B\) is compact and that \(\pi:B\to A\) is bijective and continuous since \(B\subset Y\).
Since the projection \(\pi:S^{2}\to Z\) maps \(Y\) to the complement of a countable set, we may find arcs joining any two points in \(\pi(Y)\) and then lift them back to \(Y\). This proves that \(Y\) is arcwise connected, as well as \(U\cap Y\) for any connected open set with \(\partial U\subset Y\), for in this case \(U\) is saturated and \(\pi(U)\) is open. Similarly, if \(x\in Y\), then we may construct a basis of disk-neighborhoods of \(\pi(x)\) in \(Z\) with their boundaries contained in \(\pi(Y)\). They lift as disk neighborhoods of \(x\) in \(S^{2}\). Since \(\mathcal{C}\) is a null sequence and \(x\) is disjoint from the collection \(\mathcal{C}\), these disk-neighborhoods form a basis.
Proof of Theorem 4.2.: The proof goes as follows. We first define a set \(\widehat{Y}\) that plays the role of a blow-up of \(S^{2}\) over the parabolic fixed points coming from \(\mathcal{P}_{1}\). This is a planar compact set bounded by Jordan curves. Then we prove that the action of the group \(G\) induces a geometrically finite convergence group action whose maximal parabolic subgroups are exactly those of \(\mathcal{P}_{2}\), and we extend the action to the whole sphere.
Let \(\mathbb{P}_{1}\) denote a set of representatives of each conjugacy class in \(\mathcal{P}_{1}\).
**Definition of the set \(\widehat{Y}\).** Fix \(P\in\mathbb{P}_{1}\) with parabolic point \(p\) and let us define two disjoint horoballs in \(\Omega_{G}\) attached to \(p\) as follows. Since \(P\) is two-ended, \(P\) is either isomorphic to \(\mathbb{Z}\) or to \((\mathbb{Z}/2\mathbb{Z})*(\mathbb{Z}/2\mathbb{Z})\) according to Proposition 3.2, and there is an element \(\gamma\) that acts as a translation on \(S^{2}\setminus\{p\}\) and that generates a subgroup of minimal index in \(P\), cf. Lemma 3.1. Let us consider a chart that identifies \(S^{2}\setminus\{p\}\) with \(\mathbb{C}\) and \(\gamma\) with the translation by \(1\). Note that the action of \(\gamma\) on \(\Lambda_{G}\setminus\{p\}\) is cocompact since \(\gamma\) generates a finite index subgroup of \(P\) and that \(p\) is a bounded parabolic point. Therefore, we may enclose \(\Lambda_{G}\setminus\{p\}\) into a horizontal open strip of bounded width. The complement of the strip in \(\mathbb{C}\) is the union of two half-planes contained in \(\Omega_{G}\), each of which defines a closed horoball attached to \(p\). Let \(H_{P}\) denote their union, and note that the fact that \(P\) is the stabilizer of \(p\) implies that one can choose the two half planes so that the stabilizer of \(H_{P}\) is exactly \(P\). Set \(\mathcal{C}=\cup_{P\in\mathbb{P}_{1}}GH_{P}\), the collection of all translates by \(G\) of the finite collection \(\{H_{P}\mid P\in\mathbb{P}_{1}\}\).
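For instance, assuming for simplicity that \(P=\langle\gamma\rangle\) and that, in the chart above, \(\Lambda_{G}\setminus\{p\}\) lies in the strip \(\{|\mathrm{Im}\,z|<M\}\), one may take \(H_{P}\) to be the union of the two closed half-planes \(\{\mathrm{Im}\,z\geq M+1\}\) and \(\{\mathrm{Im}\,z\leq-(M+1)\}\); both are invariant under the translation \(\gamma\), and their closures in \(S^{2}\) meet \(\Lambda_{G}\) only at \(p\).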
Let us check that \(\mathcal{C}\) forms a null sequence, by contradiction: we consider a sequence \((g_{n})_{n\in\mathbb{N}}\) of \(G\) such that \(\operatorname{diam}g_{n}(H_{P})\geq\delta\) for some \(\delta>0\) and some fixed \(P\in\mathbb{P}_{1}\) and associated point \(p\). Up to taking a subsequence, by the convergence property, we may assume that \((g_{n})\) tends uniformly towards the constant map with image \(b\in\Lambda_{G}\) on the compact subsets of \(S^{2}\setminus\{b^{\prime}\}\), where \(b,b^{\prime}\in\Lambda_{G}\). We now remark that we must have
\(b^{\prime}=p\). Indeed, if that were not the case, the closure of \(H_{P}\) would be a compact set in the complement of \(b^{\prime}\), intersecting \(\Lambda_{G}\) only in \(p\). By the convergence property, its images by the elements of the sequence would shrink to \(\{b\}\), contradicting the hypothesis that their diameter is bounded from below.
Pick \(c\in\Lambda_{G}\setminus(\{b\}\cup Gp)\). Since \(c\neq b\), it follows that \((g_{n}^{-1}(c))\) tends to \(p\). As \(p\) is a bounded parabolic point and \(c\notin Gp\), up to passing to a subsequence, we may find a sequence \((h_{n})\) in \(P\) such that \((h_{n}(g_{n}^{-1}(c)))\) tends to a point \(a\in\Lambda_{G}\setminus\{p\}\). Pick a neighborhood \(V\) of \(a\) that is disjoint from \(H_{P}\). It follows that \((g_{n}\circ h_{n}^{-1})_{n}\) tends uniformly to the constant map \(b\) on \(S^{2}\setminus V\). Since \(h_{n}(H_{P})=H_{P}\), it follows that we have uniform convergence of \((g_{n}|_{H_{P}})_{n}\) to the constant map \(b\), contradicting our assumptions.
Choose the horoballs small enough so that the collection is pairwise disjoint in \(\Omega_{G}\). This is possible since the action of \(G\) is properly discontinuous on \(\Omega_{G}\) and \(\mathbb{P}_{1}\) is finite.
Set \(Y=S^{2}\setminus\cup_{K\in\mathcal{C}}K\) and observe that we are under the assumptions of Proposition 4.4. In particular, \(Y\) is arcwise connected.
It will be convenient to endow \(S^{2}\) with a distance \(d_{S}\) compatible with its topology. We define, on \(Y\), \(d_{Y}(x,y)=\inf\operatorname{diam}_{S}L\) where \(L\) runs over all continua of \(Y\) which contain \(\{x,y\}\). This defines a metric. Let us denote by \(\widehat{Y}\) its completion.
**Properties of the set \(\widehat{Y}\).--** We claim that \(\widehat{Y}\) is a planar, locally connected and arcwise connected compact set with open disks as complementary components. To see this, we define a notion of _regular neighborhoods_ for points in \(\overline{Y}\subset S^{2}\).
* By Proposition 4.4, every point \(y\in Y\) admits a basis of neighborhoods in \(S^{2}\) whose boundaries are disjoint from the elements of \(\mathcal{C}\). We call such neighborhoods regular of type \((Y)\) for \(y\). Let \(K\in\mathcal{C}\) be the element associated to a rank \(1\) parabolic point \(p\). Note that \(K\) is a union of two closed disks attached at \(p\) and that \(K\setminus\{p\}\) is contained in \(\Omega_{G}\), so isolated from the other components.
* It follows from the fact that the components in \(\mathcal{C}\) are disjoint in \(\Omega_{G}\) that any point \(x\in\partial K\setminus\{p\}\) admits a basis of neighborhoods which are discs in \(\overline{Y}\) bounded by the union of an arc in \(\partial K\) and an arc in \(\Omega_{G}\setminus K\). We call such neighborhoods regular of type \((K)\) for \(x\).
* For the point \(p\), we may consider a basis of Jordan disks that is regular for the collection \(\mathcal{C}\setminus\{K\}\) and that intersects \(K\) in exactly two arcs, one in each horoball. We call such neighborhoods regular of type \((P)\) for \(p\).
By construction, if \(V\) is a regular neighborhood of type \((Y)\) or \((K)\), then \(Y\cap V\) is arcwise connected, whereas if \(V\) is of type \((P)\), then \(Y\cap V\) has exactly two arcwise connected components.
Recall that \(d_{S}\) is the metric on \(S^{2}\) used to define \(d_{Y}\) above so that \(d_{S}\leq d_{Y}\). Thus every Cauchy sequence for \(d_{Y}\) is a Cauchy sequence for \(d_{S}\). This ensures the existence of a canonical continuous map \(\pi:\widehat{Y}\to(\overline{Y},d_{S})\).
Let \((x_{n})_{n}\) be a Cauchy sequence in \((Y,d_{S})\) with limit \(x\in\overline{Y}\). If \(x\) is not a rank-one parabolic point, then it admits a basis of regular neighborhoods in \(\overline{Y}\) of types \((Y)\) or \((K)\) that intersect \(Y\) in an arcwise connected set, so that \((x_{n})_{n}\) is also a Cauchy sequence in \((Y,d_{Y})\) that defines a unique limit point in \(\widehat{Y}\). If \(x\) is a rank-one parabolic point with stabilizer \(P\), then \(\overline{Y}\setminus\{x\}\) has two ends associated to the arcwise connected components of its regular neighborhoods of type \((P)\). It follows that \(\pi^{-1}(\{x\})\) consists of exactly two points, one corresponding to each end. Thus, \(\pi\) is also surjective and a point has two
preimages if it is a parabolic point of rank one and one preimage otherwise. Moreover, we may define regular neighborhoods \(\widehat{V}\) for points in \(\widehat{Y}\) that will be connected by lifting regular neighborhoods of types \((Y)\) and \((K)\) and half neighborhoods of type \((P)\).
All these observations enable us to conclude that \(\widehat{Y}\) is arcwise connected, locally connected, compact, with no local cut points and that each component of \(\widehat{Y}\setminus Y\) is a Jordan curve.
It remains to check that \(\widehat{Y}\) is planar. Claytor's theorem [7] asserts that a continuum without local cut points is embeddable in the sphere if and only if it contains neither a copy of the complete graph on five vertices \(K_{5}\) nor of the complete bipartite graph with six vertices \(K_{3,3}\).
Let us consider a finite connected graph \(L\) and an embedding \(j:L\hookrightarrow\widehat{Y}\). We will modify the embedding \(j\) so that \(\pi\circ j\) is also injective, implying that \(L\) cannot be one of the forbidden graphs.
Let \(T\subset L\) denote the closure of the set of points \(z\in L\) for which we may find \(w\neq z\) in \(L\) such that \((\pi\circ j)(z)=(\pi\circ j)(w)\). Note that \(T\) is a compact subset of \(L\). If \(T\) is empty, then there is nothing to be done. Let us assume it is not empty. Let \(z\in T\). If \(z\) belongs to an edge, we consider an open interval neighborhood \(J_{z}\subset L\) contained in the same edge; if \(z\) is a vertex, then we consider a star-shaped open neighborhood \(J_{z}\) contained in the union of the edges incident to \(z\). Since \(L\setminus J_{z}\) is compact, we have \(d_{Y}(j(z),j(L\setminus J_{z}))>0\) so that we may find a regular neighborhood \(\widehat{V}_{z}\subset\widehat{Y}\) of \(j(z)\) such that \(j^{-1}(\widehat{V}_{z})\subset J_{z}\); we let \(V_{z}\subset\overline{Y}\) be the corresponding regular neighborhood of \((\pi\circ j)(z)\).
We now extract a finite subcover of \((\pi\circ j)(T)\) given by the above regular neighborhoods that we order \(V_{1},\dots,V_{n}\). Each \(V_{k}\) comes with a point \(z_{k}\in L\), a neighborhood \(J_{k}\subset L\) and a regular neighborhood \(\widehat{V}_{k}\) of \(j(z_{k})\). We modify the embedding \(j\) inductively on the neighborhoods \(V_{k}\). Let us fix \(1\leq k\leq n\), and let us assume that \(\pi\circ j\) is injective on \(L\setminus(\cup_{k\leq i\leq n}J_{z_{i}})\). We note that \(j(L)\cap\overline{\widehat{V}_{k}}\subset j(J_{k})\) and \(\widehat{V_{k}}\cap Y\) is homeomorphic to the complement of a countable subset of a Jordan domain. Therefore, we may modify \(j|_{J_{k}}\) so that its image in \(\widehat{V}_{k}\) is contained in \(Y\). As \(\pi|_{Y}\) is injective, the map \(\pi\circ j\) is now injective on \(L\setminus(\cup_{k<i\leq n}J_{z_{i}})\).
In conclusion, given any embedding of a finite graph \(L\) in \(\widehat{Y}\), there is an embedding of \(L\) in \(\overline{Y}\), hence in \(S^{2}\). As the latter space is planar, we may conclude that \(L\) is not isomorphic to \(K_{5}\) nor \(K_{3,3}\), and so \(\widehat{Y}\) is planar.
**Extension of the action of \(G\) to \(\widehat{Y}\).--** Let us now consider the action of \(G\) on \(\widehat{Y}\): regular neighborhoods enable us to conclude that the action on \((Y,d_{Y})\) extends continuously to \(\widehat{Y}\).
Let us check that the action remains a geometrically finite convergence action. For this, we pick a sequence of distinct elements \((g_{n})\). We may as well assume that there are two points \(a\) and \(b\) in \(\overline{Y}\) such that the sequence \((g_{n})\) of homeomorphisms of the sphere tends uniformly to the constant map \(a\) on the compact subsets of \(S^{2}\setminus\{b\}\). When both \(a\) and \(b\) are distinct from the rank \(1\) parabolic points, then this property lifts to \(\widehat{Y}\).
Let us assume that \(a\) is a rank \(1\) parabolic point and write \(\{x,x^{\prime}\}=\pi^{-1}(a)\). Let us consider a regular neighborhood of type \((P)\). It defines two disjoint connected neighborhoods \(W\) and \(W^{\prime}\) in \(\widehat{Y}\) of \(x\) and \(x^{\prime}\) respectively. We may pick a point \(z\in\widehat{Y}\) and assume that \((g_{n}(z))\) tends to \(x\) for instance, implying that \(g_{n}(z)\in W\) for all \(n\) large enough. Note that we may exhaust \(\widehat{Y}\setminus\pi^{-1}(\{b\})\) by connected compact subsets. Since \(\pi^{-1}(a)\) is discrete, for any connected compact subset \(K\subset\overline{Y}\setminus\{b\}\) containing
\(\pi(z)\), the convergence property implies that \(g_{n}(K)\) has to be contained in \(\pi(W)\) for \(n\) large enough. This implies that \((g_{n})\) tends to the constant \(x\) in \(\widehat{Y}\setminus\pi^{-1}(b)\). If \(b\) is not parabolic, then we are done.
On the other hand, if \(\pi^{-1}(b)=\{y,y^{\prime}\}\), then the same reasoning for \((g_{n}^{-1})_{n}\) shows that we may also assume that all compact subsets disjoint from \(\{x,x^{\prime}\}\) tend to \(y\) under \((g_{n}^{-1})\). Let \(V\subset\widehat{Y}\) be a disk-neighborhood of \(y\) disjoint from \(W\cup W^{\prime}\) and \(K=\widehat{Y}\setminus(W\cup W^{\prime})\). Note that, for any \(n\) large enough, \(g_{n}^{-1}(K)\) is contained in \(V\) so that the connected set \(\widehat{Y}\setminus V\) is covered by the two disjoint open sets \(g_{n}^{-1}(W)\) and \(g_{n}^{-1}(W^{\prime})\). The connectedness of \(\widehat{Y}\setminus V\) implies that \(g_{n}^{-1}(W^{\prime})\subset V\) since \((g_{n})\) pushes points into \(W\subset(\widehat{Y}\setminus V)\). Therefore, we have uniform convergence of \((g_{n}^{-1})\) on \(W^{\prime}\) to the constant map \(y\). By symmetry, we get uniform convergence of \((g_{n})\) to the constant map \(x\) on compact subsets disjoint from \(y\). This shows that \(G\) has also a convergence action on \(\widehat{Y}\).
Let us note that since the action of \(G\) on \(\Lambda_{G}\cap Y\) is invariant and minimal, its closure \(\widehat{\Lambda}\) in \(\widehat{Y}\) will be a minimal invariant subset, hence the limit set of this new action.
We may check that the action on it is geometrically finite with maximal parabolic subgroups in \(\mathcal{P}_{2}\). Since \(\widehat{Y}\) is planar, we may now consider it as a subset of \(S^{2}\) and extend the action to the whole sphere using [15, Thm. 5.8].
## 5 Topological Schottky sets
**Definition 5.1**.: A _topological Schottky set_\(\mathcal{S}\) is a proper compact subspace of \(S^{2}\) defined by the following topological properties enjoyed by Schottky sets.
1. (S1) the set of components \(\{D_{i}\}_{i\in I}\) of \(S^{2}\setminus\mathcal{S}\) is countable and not empty;
2. (S2) for each \(i\), \(\bar{D}_{i}=D_{i}\cup\partial D_{i}\) is a closed disc; that is, \(D_{i}\) is a Jordan domain;
3. (S3) for each pair \(i\neq j\in I\), \(\bar{D}_{i}\) and \(\bar{D}_{j}\) meet in at most one point;
4. (S4) for each triple of distinct indices \(i,j,k\in I\), \(\bar{D}_{i}\cap\bar{D}_{j}\cap\bar{D}_{k}=\emptyset\);
5. (S5) for every open cover \(\mathcal{U}\) of \(S^{2}\) and for all but finitely many \(i\in I\), there is a \(U_{i}\in\mathcal{U}\) such that \(D_{i}\subset U_{i}\).
**Remark 5.2**.: If the 2-sphere is endowed with a metric, the purely topological condition (S5) is equivalent to asking that \(\mathcal{S}\) is an \(E\)-set (Def. 2.2). This is an easy consequence of the Lebesgue number lemma.
The most well-known topological Schottky sets are the Sierpinski carpet and the Apollonian Gasket. These both occur as the limit sets of geometrically finite Kleinian groups, [20], [17]. Hence they are also the (Bowditch) boundaries of relatively hyperbolic groups. Observe that in contrast to the definition of a Schottky set, the cardinality of \(I\) is not required to be at least 3. However, if \(|I|\leq 2\) then \(\mathcal{S}\) has non empty interior and cannot be the boundary of a relatively hyperbolic group.
**Proposition 5.3**.: _A topological Schottky set is connected, locally connected, hence arcwise connected, with no cut points and no cut pairs._
**Lemma 5.4**.: _Let \(\mathcal{S}\) be a topological Schottky set and \(\Omega\) a non-empty open connected subset of \(S^{2}\) such that, for each \(i\in I\), \(\partial D_{i}\cap\Omega\) is connected. The set \(X=\mathcal{S}\cap\Omega\) is connected._
Note that the proof of this lemma does not use the \(E\)-set condition (S5).
Proof.: Let us consider two open subsets of \(\Omega\), \(R\) and \(B\), for red and blue, such that \(X\subset R\cup B\), \(X\cap R\) and \(X\cap B\) are not empty, but \(X\cap R\cap B=\emptyset\). We may assume that each component of \(R\) and \(B\) intersects \(X\), by removing any components that do not intersect \(X\) (note that the sphere is locally connected so every component of an open subset is itself open).
We will increase these sets (by adding in disks associated to the two components) into two open and disjoint subsets that cover \(\Omega\): this will prove that one of them has to be empty, hence that \(X\) is connected. With this in view, we split the set of components \(\{D_{i}\}_{i\in I}\) into three sets \(I=I_{0}\sqcup I_{R}\sqcup I_{B}\).
Let \(i\in I\), and let us write \(C_{i}=\partial D_{i}\). If \(C_{i}\cap X=\emptyset\), since \(\Omega\) is a connected set intersecting \(\mathcal{S}\) and \(C_{i}\) is a Jordan curve, then \(\Omega\cap D_{i}=\emptyset\) and we let \(i\) belong to \(I_{0}\). If not, \((C_{i}\cap X)\) is connected by assumption, and covered by \(R\) and \(B\). Hence, \(R\cap(C_{i}\cap X)=\emptyset\) or \(B\cap(C_{i}\cap X)=\emptyset\). In the former case, \(D_{i}\cap R=\emptyset\) as each component of \(R\) intersects \(X\), but not \(C_{i}\), so we let \(i\) belong to \(I_{B}\); in the latter, we let \(i\) belong to \(I_{R}\). Thus \(i\in I_{R}\) if and only if \((C_{i}\cap X)\subset R\) and \(i\in I_{B}\) if and only if \((C_{i}\cap X)\subset B\).
We let
\[R^{\prime}=R\cup(\cup_{i\in I_{R}}D_{i}\cap\Omega)\quad\text{and}\quad B^{ \prime}=B\cup(\cup_{i\in I_{B}}D_{i}\cap\Omega)\,.\]
We obtain in this way a cover of \(\Omega\) by two disjoint open sets, so that one of them has to be empty. Therefore, one of \(R\) or \(B\) has to be empty as well, establishing the connectedness of \(X\).
Proof of Proposition 5.3.: To show that \(\mathcal{S}\) is connected and with no cut points, we apply Lemma 5.4 twice: with \(\Omega=S^{2}\) first and then with \(\Omega=S^{2}\setminus\{x\}\), for any \(x\in\mathcal{S}\). As each boundary component of \(\mathcal{S}\) is a closed simple curve, it cannot be disconnected by removing at most one point and it follows that \(\mathcal{S}\) and \(\mathcal{S}\setminus\{x\}\) are both connected.
Now let \(x,y\in\mathcal{S}\) be two points and consider \(\Omega=S^{2}\setminus\{x,y\}\). If no boundary component of \(\mathcal{S}\) contains both \(x\) and \(y\), the previous argument applies and we see that \(x,y\) cannot form a cut pair. We can thus assume that there is an \(i\in I\) such that \(x,y\in C_{i}\). Let \(\gamma\) be a properly embedded arc in \(\bar{D}_{i}\) connecting \(x\) to \(y\). \(D_{i}\setminus\gamma\) is the union of two open disks, \(D\) and \(D^{\prime}\), each adjacent to precisely one connected component of \(C_{i}\setminus\{x,y\}\). We can now repeat the same strategy used in the proof of Lemma 5.4 with \(\Omega=S^{2}\setminus\gamma\) to conclude that \(\mathcal{S}\setminus\{x,y\}\) must be connected. This shows that \(\mathcal{S}\) has no cut pairs.
As \(\mathcal{S}\) is an \(E\)-set, we deduce from [33, Theorem VI.4.4] that it is also locally connected, hence arcwise connected [33, Theorem II.5.1].
Our first key result is that the boundaries of the \(D_{i}\) are topologically distinguished, generalizing the case of a Sierpinski carpet.
**Proposition 5.5**.: _Let \(\mathcal{S}\) be a topological Schottky set with \(\mathcal{S}\simeq S^{2}\setminus\cup(D_{i})\), where each \(D_{i}\) is open. Then the non-separating embedded circles of \(\mathcal{S}\) are exactly the \(C_{i}=\bar{D}_{i}\cap\mathcal{S}\)._
Proof.: Let \(C\) be an embedded circle in \(\mathcal{S}\subset S^{2}\). By the Jordan curve theorem, the complement of \(C\) consists of two open discs \(O\) and \(O^{\prime}\). Assume that \(C\) is contained in \(\mathcal{S}\). By construction \(C\) is a \(C_{i}\) if and only if either \(O\) or \(O^{\prime}\) coincides with \(D_{i}\). If this is not the case, both \(O\) and \(O^{\prime}\) contain points of \(\mathcal{S}\) and \(C\) separates \(\mathcal{S}\).
We want to show that if \(C=C_{i}\) then \(C\) does not separate \(\mathcal{S}\). Let us consider \(\Omega=S^{2}\setminus\bar{D}_{i}\). Condition (S3) ensures that Lemma 5.4 applies to prove the connectedness of \(\mathcal{S}\setminus C_{i}\).
**Corollary 5.6**.: _Any homeomorphism \(h:\mathcal{S}_{1}\to\mathcal{S}_{2}\) between two topological Schottky sets is the restriction of a self-homeomorphism \(H:S^{2}\to S^{2}\) of the sphere._
This implies that we may define a topological Schottky set abstractly, as a compact set homeomorphic to an embedded topological Schottky set as above.
Proof.: By Proposition 5.5, \(h\) maps boundary components \(\{C_{i}^{1}\}\) to boundary components \(\{C_{i}^{2}\}\). As these components are Jordan curves, one may extend \(h:C_{i}^{1}\to C_{i}^{2}\) as a homeomorphism \(H_{i}:D_{i}^{1}\to D_{i}^{2}\) for each \(i\in I\). Since topological Schottky sets are \(E\)-sets, these local homeomorphisms induce a global homeomorphism \(H:S^{2}\to S^{2}\).
**Proposition 5.7**.: _Let \((G,\mathcal{P})\) be a relatively hyperbolic pair. If its Bowditch boundary is homeomorphic to a topological Schottky set, then \((G,\mathcal{P})\) is a geometrically finite convergence group on \(S^{2}\)._
Proof.: We may assume that \(G\) acts as a convergence group on a topological Schottky set \(\mathcal{S}\subset S^{2}\). Proposition 5.5 implies that \(G\) preserves the collection of boundary circles. Therefore, we may apply [15, Thm. 5.8] and extend in this way the action to a global convergence action on the sphere.
**Corollary 5.8**.: _Let \((G,\mathcal{P})\) be a relatively hyperbolic group pair with Bowditch boundary a topological Schottky set. The set \(\cup_{i\neq j\in I}(\bar{D}_{i}\cap\bar{D}_{j})\) corresponds to the set of parabolic points whose stabilizers are 2-ended._
Proof.: Let \(p\) be a parabolic point. Let \(\Omega_{p}\) denote the union of components of the ordinary set that contain \(p\) on their boundaries: according to the definition of a topological Schottky set, \(\Omega_{p}\) is either empty, or has one or two components. By Proposition 3.8, the action on \(S^{2}\setminus(\{p\}\cup\Omega_{p})\) is cocompact.
If \(\Omega_{p}=\emptyset\), then \((S^{2}\setminus\{p\})/\mathrm{Stab}(p)\) is a compact surface orbifold. If \(\Omega_{p}\) has a single component, then \(\mathrm{Stab}(p)\) is cyclic since it preserves \(\partial\Omega_{p}\), but this prevents the quotient of the complement from being compact, as the action of the cyclic group is generated by a translation by Lemma 3.1. Therefore, if \(\Omega_{p}\) is non-empty, then it is the union of two discs. Conversely, if two boundary components intersect, then Proposition 2.8 implies that their intersection point is a parabolic point \(p\). Up to index 2, \(\mathrm{Stab}(p)\) fixes each component, hence \(p\) is a rank 1 parabolic point.
## 6 Incidence graphs for topological Schottky sets
We recall Definition 5.1. A _topological Schottky set_\(\mathcal{S}\) is a connected, locally connected, 1-dimensional subset of the sphere such that the complement is a union of pairwise disjoint Jordan domains. The closure of each component of the complement is homeomorphic to a disc \(\bar{D}_{i}\). The intersection \(\bar{D}_{i}\cap\bar{D}_{j}\) is at most one point and a point of \(\mathcal{S}\) belongs to at most two \(\bar{D}_{i}\). A topological Schottky set has no cut points nor cut pairs.
In this situation, we can draw more conclusions from the above construction in Section 4.
**Definition 6.1**.: We define the _incidence graph \(\Gamma(\mathcal{S})\) of the topological Schottky set \(\mathcal{S}\)_. Let \(\Gamma\) be the bipartite graph with vertex set the union of vertices \(\{v_{i}\}_{i\in I}\), associated to the components \(\{D_{i}\}_{i\in I}\) or, equivalently by Proposition 5.5, to the embedded non separating circles in \(\mathcal{S}\), and vertices \(v_{p}\), associated to intersections \(\bar{D}_{i}\cap\bar{D}_{j}\), such that
there is a non oriented edge between \(v_{i}\) and \(v_{p}\) if and only if \(p\in\partial D_{i}\). Since we are working with a topological Schottky set, \(\Gamma\) embeds into \(S^{2}\). To see this we pick for each component \(D_{i}\) a base-point \(v_{i}\in D_{i}\) and join \(v_{i}\) to each \(p\in\partial D_{i}\cap\partial D_{j}\) with an arc in \(\bar{D}_{i}\).
If \((G,\mathcal{P})\) is a relatively hyperbolic group pair whose boundary is a topological Schottky set, we will often denote this graph by \(\Gamma(G)\) or \(\Gamma(G,\mathcal{P})\). As observed in Lemma 6.3, each edge corresponds to a rank-1 parabolic point. Also, we may ignore the vertices corresponding to the rank-1 parabolic points since this will not change the topology of the graph.
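As a purely illustrative sketch (our own naming, not part of the paper), the combinatorics of \(\Gamma(\mathcal{S})\) can be encoded from the list of intersection points, each of which lies on exactly two disk closures:

```python
def incidence_graph(tangencies):
    """Bipartite incidence graph of a topological Schottky set: 'tangencies'
    maps each intersection point p to the pair (i, j) of components whose
    closures meet at p; v_i and v_p are joined exactly when p lies on the
    boundary of D_i."""
    edges = set()
    for p, (i, j) in tangencies.items():
        edges.add((("disk", i), ("point", p)))
        edges.add((("disk", j), ("point", p)))
    return edges

# Toy example: three pairwise tangent disks, giving a connected incidence graph.
print(sorted(incidence_graph({"p01": (0, 1), "p12": (1, 2), "p02": (0, 2)})))
```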
The following is a consequence of Proposition 5.5.
**Lemma 6.2**.: _If the Bowditch boundary \(\partial(G,\mathcal{P})\) is a topological Schottky set, then \(G\) acts on \(\Gamma(G)\)._
The following was established in the proof of Corollary 5.8.
**Lemma 6.3**.: _Let \((G,\mathcal{P})\) be a relatively hyperbolic group pair with Bowditch boundary a topological Schottky set. The intersection of two \(\partial D_{i}\), which corresponds to an edge in the incidence graph, is a parabolic point with a 2-ended stabiliser. All other parabolic points have stabilisers isomorphic to a compact surface orbifold group._
**Corollary 6.4**.: _Let \(\mathcal{P}=\mathcal{P}_{1}\cup\mathcal{P}_{2}\) be as in Definition 4.1. The components of the ordinary set of \(\partial(G,\mathcal{P}_{2})\) are in bijection with the components of \(\Gamma\). Each cycle in \(\Gamma\) separates the Bowditch boundary of \((G,\mathcal{P}_{2})\)._
Proof.: We will use the same notation introduced in Section 4 for the proof of Theorem 4.2: \(Y\) is the complement of the union of pairs \(K\) of closed horoballs attached to each rank-1 parabolic point \(p\) and \(\widehat{Y}\) its completion after blowing-up, so that there is a natural quotient map \(\pi:\widehat{Y}\longrightarrow\overline{Y}\). Let \(\Gamma_{T}=\Gamma\cap Y\) and let us consider its closure \(\Gamma^{\prime}_{T}\) in \(\widehat{Y}\). This graph is disjoint from the limit set, and each edge is cut into two pieces by a Jordan domain \(D=\pi^{-1}(K)\), \(K\in\mathcal{C}\). We may then connect both sides of the edge in \(D\) to reconstruct a graph \(\Gamma^{\prime}\) isomorphic to \(\Gamma\) which will now be disjoint from the limit set.
Let us observe that this edge separates in \(\overline{D}\) the preimages of the parabolic point, so that any cycle in \(\Gamma^{\prime}\) separates the limit set. By construction each connected component of the new ordinary set contains a component of \(\Gamma^{\prime}\) (which might be reduced to a single point). To see that there is at most one, we may proceed by contradiction as follows: if two components of \(\Gamma^{\prime}\) belonged to the same component of \(\widehat{\Omega}_{G}\), we could consider a curve joining them in \(Y\): a contradiction.
**Theorem A**.: _Let \(\mathcal{S}\) be a topological Schottky set with \(\mathcal{S}=\partial(G,\mathcal{P})\). Then the incidence graph \(\Gamma(\mathcal{S})\) has 1, 2 or infinitely many components. Their stabilizers are virtual surface groups._
Proof.: According to Proposition 5.5, boundary components do not separate a topological Schottky set, so the group \(G\) maps boundary components to boundary components. Therefore, [15, Thm. 5.8] enables us to extend the action onto the whole sphere. The parabolic points either have stabilizers that are surface groups, not accessible from any component, or are rank 1 parabolic points, which correspond to two intersecting disks, as observed in Lemma 6.3. According to Corollary 6.4, the components of the graph are thus in bijection with those of the blown-up ordinary set. Since the action is geometrically finite, there are 1, 2 or infinitely many components as seen in Proposition 3.9. By Proposition 2.3 we deduce
that the stabilizers of components are relatively quasiconvex subgroups. In addition they have no parabolics. Since these stabilizers stabilize disks, they are virtually closed surface groups.
## 7 One component in the incidence graph
Here we prove Theorem B:
**Theorem B**.: _Let \(\mathcal{S}\) be a topological Schottky set with \(\mathcal{S}=\partial(G,\mathcal{P})\)._
_When the incidence graph \(\Gamma(\mathcal{S})\) has one component, then \(G\) is virtually a free product of a free group \(F_{n}\) of rank \(n\geq 0\) and some finite index subgroups of groups in \(\mathcal{P}\). Moreover, if \(G\) is finitely generated and its action is faithful and orientation preserving, then \(G\) is covered by a geometrically finite Kleinian group \(K\)._
Recall from Theorem 4.2 that if \((G,\mathcal{P})\) is a relatively hyperbolic group pair and \(\mathcal{P}^{\prime}\) is the set of non 2-ended subgroups of \(\mathcal{P}\), then \((G,\mathcal{P}^{\prime})\) is a relatively hyperbolic group pair and the Bowditch boundary \(\partial(G,\mathcal{P}^{\prime})\) is obtained from \(\partial(G,\mathcal{P})\) by unpinching the parabolic points of \(\partial(G,\mathcal{P})\) with two-ended stabilizers. Furthermore in our situation (in fact whenever \(\partial(G,\mathcal{P})\) is planar) the unpinched boundary \(\partial(G,\mathcal{P}^{\prime})\) is also planar.
There are three cases to consider for relatively hyperbolic group pairs with Schottky set boundary. The first is when the incidence graph has one component.
**Theorem 7.1**.: _Let \((G,\mathcal{P})\) be a relatively hyperbolic group pair such that \(\partial(G,\mathcal{P})\) is a Schottky set with connected incidence graph. Let \((G,\mathcal{P}^{\prime})\) be the relatively hyperbolic group pair where \(\mathcal{P}^{\prime}\) consists of the subgroups in \(\mathcal{P}\) that are not two-ended. Then \(\partial(G,\mathcal{P}^{\prime})\) is a Cantor set._
Proof.: We will prove that if \(\partial(G,\mathcal{P}^{\prime})\) has a non-trivial component, it is a dendrite. However, this is impossible according to Lemma 2.7. The theorem will then follow.
Take a component \(L\) of \(\partial(G,\mathcal{P}^{\prime})\). Suppose that \(L\) contains at least two points \(x\) and \(y\).
* \(L\) is a connected, locally connected compact metrizable space. The component is connected by definition. A component \(L\) is itself the boundary of a relatively hyperbolic subgroup pair: the subgroup stabilizing \(L\) along with the peripheral subgroups whose fixed points belong to \(L\)[4]. Thus \(L\) is compact and a metric space. Furthermore, the set of peripheral subgroups is a subset of \(\mathcal{P}^{\prime}\), each of whose elements is a closed surface group. Therefore by Bowditch [3] the boundary \(L\) is locally connected.
* The component \(L\) contains no simple closed curve. Any simple closed curve bounds two discs in \(S^{2}\) which are either contained in \(L\) or not. At least one must be contained in \(L\) as the complementary region would correspond to an additional component of the incidence graph, which is connected. If the simple closed curve bounds a disk in \(L\), then the boundary has non-empty interior but then it must be all of \(S^{2}\), since it is the boundary of a relatively hyperbolic group (limit points of loxodromic elements are dense).
Thus, \(L\) would be a dendrite, so we may now conclude that there are no non-trivial components.
Proof of Theorem B.: Let \((G,\mathcal{P}^{\prime})\) be the relatively hyperbolic group pair where \(\mathcal{P}^{\prime}\) consists of the subgroups in \(\mathcal{P}\) that are not two-ended. According to Theorem 7.1, the Bowditch boundary of the group pair \((G,\mathcal{P}^{\prime})\) is a Cantor set, hence its ordinary set on \(S^{2}\) is connected. We are now in a position to apply Theorem 2.11 and conclude that, in this case, the group \(G\) is virtually a free product of infinite cyclic groups and finite index subgroups of peripheral groups, which are virtual surface groups. It follows from Corollary 3.10 that \(G\) is covered by a Kleinian group if it is finitely generated. The conclusion follows.
**Remark 7.2**.: In the same circle of ideas, Otal proves that if \((\mathbb{F},\mathcal{P})\) is a free relatively hyperbolic group pair such that its Bowditch boundary is a topological Schottky set, then there exists a handlebody with fundamental group \(\mathbb{F}\) and disjoint homotopy classes of simple curves on its boundary that represent the peripheral structure \(\mathcal{P}\)[27].
## 8 More components in the incidence graph
In the previous section, under the hypothesis that \(\partial(G,\mathcal{P})\) is a topological Schottky set with connected incidence graph, we determined the structure of the group \(G\). Since the incidence graph has \(1\), \(2\), or infinitely many components, we now analyze what happens in the latter two cases.
**Theorem C**.: _Let \(\mathcal{S}\) be a topological Schottky set with \(\mathcal{S}=\partial(G,\mathcal{P})\). When the incidence graph \(\Gamma(\mathcal{S})\) has exactly 2 components, \(G\) is virtually a closed surface group._
Proof.: We recall that the rank 1 parabolic points in \(\mathcal{P}\) correspond to the edges of the incidence graph by Lemma 6.3 and the very definition of the incidence graph.
Then, we unpinch the rank-1 parabolic points as in Theorem 4.2. This results in a different geometrically finite action of the group \(G\). For every parabolic point removed, the two components of the domain of discontinuity that corresponded to the endpoints of the edge are contained in the same component. So when there are no more rank-1 parabolic points, there are two components of the domain of discontinuity. Then by Corollary 3.10, \(G\) is virtually Fuchsian with limit set \(S^{1}\).
Since we already removed all of the rank-1 parabolic points, \(G\) is virtually a closed surface group and \(\mathcal{P}_{2}=\emptyset\).
When the incidence graph has infinitely many components, the topology of the blown-up limit set can be extremely varied, so there is no hope of getting a meaningful description of the underlying group. Indeed, the next theorem shows in particular that the limit set of any finitely generated Kleinian group with infinitely many components in its regular set and no two-ended parabolic subgroups is homeomorphic to the boundary of some \((G,\mathcal{P}_{2})\) obtained by blowing up all the rank-one parabolics of a relatively hyperbolic group \((G,\mathcal{P}_{1}\cup\mathcal{P}_{2})\) where \(\partial(G,\mathcal{P}_{1}\cup\mathcal{P}_{2})\) is a topological Schottky set.
**Theorem D**.: _Let \(K\) be a geometrically finite Kleinian group with non-empty domain of discontinuity. Then there is a peripheral structure \(\mathcal{P}_{K^{\prime}}\) on a finite index subgroup \(K^{\prime}\) of \(K\), such that \((K^{\prime},\mathcal{P}_{K^{\prime}})\) is a relatively hyperbolic group pair and \(\partial(K^{\prime},\mathcal{P}_{K^{\prime}})\) is a topological Schottky set. Moreover, \(\mathcal{P}_{K^{\prime}}\) contains the natural peripheral structure of the Kleinian group \(K^{\prime}\subset K\)._
Proof.: We choose \(K^{\prime}\) to be a torsion-free finite-index subgroup of \(K\) contained in \(PSL(2,\mathbb{C})\). Below we will define a peripheral structure \(\mathcal{P}_{K^{\prime}}\) of \(K^{\prime}\) that will contain all the parabolic subgroups of \(K^{\prime}<PSL(2,\mathbb{C})\) but will in general be larger.
In this situation, there is an irreducible and orientable manifold with boundary \(M_{K^{\prime}}\) obtained as the quotient of the \(1\)-neighborhood of the convex hull of \(\Lambda_{K^{\prime}}\), the limit set of \(K^{\prime}\), by the action of \(K^{\prime}\). There is at least one geometrically finite end, as the group is geometrically finite and its limit set is not all of \(S^{2}\). This manifold comes equipped with a natural pared structure, given by the parabolic structure on \(K^{\prime}\). This realizes the boundary of \(M_{K^{\prime}}\) as a union of connected surfaces with boundary, which corresponds to the rank-\(1\) cusps in the hyperbolic structure. We will add curves to the peripheral structure so that the resulting pared manifold contains no essential annuli or disks, and thus admits a hyperbolic structure with totally geodesic boundary [25, Theorem B' page 70].
We will first consider the case when these surfaces are incompressible. Now, in this situation, \(M_{K^{\prime}}\) admits a JSJ-decomposition along a finite family of pairwise disjoint and non parallel incompressible annuli \(A_{i}\) into "geometric pieces" (see [32] for a description): \(I\)-bundles over surfaces (Seifert fibered pieces) and anannular manifolds with boundary (hyperbolic pieces). By taking a further cover if necessary, that is by taking a further finite-index subgroup, we assume no twisted \(I\)-bundle appears in the decomposition. Note that a piece can have different structures. For instance, a solid torus can be seen as a circle bundle over a disk, an interval times an annulus, as well as a twisted \(I\)-bundle over a Mobius band. We only require each piece to admit some product structure.
The characteristic submanifold \(C_{K^{\prime}}\) in \(M_{K^{\prime}}\) consists of all the surface-times-interval components together with small neighborhoods of the JSJ annuli \(A_{i}\), which are solid tori \(T_{i}\). Note that if \(C_{K^{\prime}}\) is empty, \(M_{K^{\prime}}\) with its natural pared structure admits a hyperbolic metric with totally geodesic boundary so that \(\partial(K^{\prime},\mathcal{P}_{K^{\prime}})\) is a Sierpinski carpet and hence a topological Schottky set.
Otherwise, we observe that the boundary of each solid torus \(T_{i}\) is partitioned into four annuli: two of them contained in \(\partial M_{K^{\prime}}\) and two others properly embedded in \(M_{K^{\prime}}\) and parallel to \(A_{i}\). For each \(T_{i}\), we mark two points on each of the four circles that delimit the four annuli in its boundary. We then connect these pairs of points with two arcs in \(\partial T_{i}\cap\partial M_{K^{\prime}}\) running from one circle to the other.
Remark that \(\partial C_{K^{\prime}}\setminus\partial M_{K^{\prime}}\) consists of properly embedded annuli contained in the boundary of some tori \(T_{i}\). The rest of the boundary \(\partial C_{K^{\prime}}\) in \(\partial M_{K^{\prime}}\) is a union of subsurfaces, possibly with boundary or cusps.
For each complementary piece of \(\partial M_{K^{\prime}}\setminus\partial C_{K^{\prime}}\), we connect all the marked points on its boundary components with an embedded collection of essential (pairwise non-parallel) arcs. Next, if some component of \(\partial M_{K^{\prime}}\setminus\cup_{i}T_{i}\) is an annulus, (for instance, if a piece is a solid torus) we connect the pair of points on one boundary component directly with the pair of points on the other boundary component. Each remaining component of \(C_{K^{\prime}}\setminus\cup_{i}T_{i}\) is a surface times an interval \(S\times I\). In this case again we first connect the marked points on the boundary circles along an embedded collection of essential arcs in \(S\times I\cap\partial M_{K^{\prime}}\) (as was done in the complementary components). Then we take a pair of pants decomposition of each remaining component after cutting along these arcs. The pair of pants decomposition for the pieces of \(S\times\{0\}\) should be different from the decomposition for \(S\times\{1\}\), in particular, the curves of the pants decomposition for \(S\times\{0\}\) should be transverse to curves going through \(S\times\{1\}\). Since there are two arcs meeting at each marked point, the union of these arcs and curves is a collection of curves so that any essential annulus in \(\partial C_{K^{\prime}}\) is transverse to some
curve in this collection. Since these curves are essential and non-parallel, we can make this collection peripheral. The resulting pared manifold with this peripheral structure will admit a Kleinian representation where the quotient of the 1-neighborhood of the convex hull of the limit set is a hyperbolic manifold with totally geodesic boundary. Therefore its limit set can be realized as a Schottky set.
Assume now that \(\partial M_{K^{\prime}}\) is compressible. In this case, the limit set of \(K^{\prime}\) is not connected. We can choose a finite family \(\mathcal{D}\) of properly embedded pairwise disjoint essential disks such that (the closure of) each component of the complement of the disks, \(M_{K^{\prime}}\setminus\cup_{D\in\mathcal{D}}D\), has incompressible boundary and the family \(\mathcal{D}\) is minimal with respect to this property. As we did with the JSJ-annuli in the previous case, for each disk \(D\) we remove a small cylindrical neighborhood \(C_{D}\) and mark two points on each of the circles delimiting the two disks on the boundary of \(C_{D}\). We then connect the two pairs of points by two arcs in the annulus contained in \(\partial C_{D}\). For each component \(N\) of \(M_{K^{\prime}}\setminus\cup_{D\in\mathcal{D}}C_{D}\), let us denote by \(C_{N}\) the characteristic submanifold of \(N\). Note that we can assume that the annuli of the JSJ-decomposition of \(N\) are disjoint from the disks of the family \(\mathcal{D}\). We can now repeat the previous argument, keeping in mind that this time we also need to connect the marked points on the boundary of the disks.
2309.07406 | Secure and Scalable Circuit-based Protocol for Multi-Party Private Set Intersection | Jiuheng Su, Zhili Chen | 2023-09-14T03:20:33Z | http://arxiv.org/abs/2309.07406v1

# Secure and Scalable Circuit-based Protocol for Multi-Party Private Set Intersection
###### Abstract
We propose a novel protocol for computing a circuit which implements the multi-party private set intersection functionality (PSI). The circuit-based approach has advantages over using custom protocols to achieve this task, since many applications of PSI do not require the computation of the intersection itself, but rather specific functional computations over the items in the intersection.
Our protocol represents the pioneering circuit-based multi-party PSI protocol, which builds upon and optimizes the two-party SCS [9] protocol. By using secure computation between two parties, our protocol sidesteps the complexities associated with multi-party interactions and demonstrates good scalability.
In order to mitigate the high overhead associated with circuit-based constructions, we have further enhanced our protocol by utilizing a simple hashing scheme and permutation-based hash functions. These tricks have enabled us to minimize circuit size by employing bucketing techniques while simultaneously attaining noteworthy reductions in both computation and communication expenses.
## I Introduction
Two-party Private Set Intersection (PSI) enables two parties, denoted as \(P_{1}\) and \(P_{2}\) with respective input sets \(X\) and \(Y\), to compute the intersection \(I=X\cap Y\) without revealing any other information about the items outside the intersection. Currently, there are numerous constructions of protocols for computing two-party PSI with concretely efficient and secure implementations. The problem of multi-party PSI (mPSI) naturally extends the concept of two-party PSI, i.e. \(n\) parties collaborate to securely compute the intersection of their private input sets \(S_{1},S_{2},...,S_{n}\) while ensuring the confidentiality of all other information.
Secure protocols for computing PSI, applicable to both two-party and multi-party scenarios, can be broadly categorized into two classes. The first category encompasses constructions specifically designed to address the PSI problem, yielding highly efficient protocols tailored to this particular task. On the other hand, the second category involves the utilization of generic secure multi-party computation (MPC) techniques, employing circuit representations of the desired functionality. This allows for the integration of PSI protocols into larger, composite secure computations. In this study, our focus is primarily directed towards the latter category of protocol constructions. These constructions offer the advantage of maintaining the secrecy of the intersection itself from the participating parties, while securely evaluating a symmetric function \(f(S_{1}\cap S_{2}\cap...\cap S_{n})\), which could include operations such as set intersection sum or cardinality computation. For clarity, the functionality \(F_{mPSI,f}\) is formally depicted in Figure 1.
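For reference, the ideal functionality can also be phrased as a plain, non-secure computation. The sketch below, with illustrative naming of our own, simply collects one set per party and outputs \(f\) applied to the intersection, which is exactly what \(F_{mPSI,f}\) reveals and nothing more:

```python
from functools import reduce

def ideal_mpsi_f(input_sets, f):
    """Ideal functionality F_{mPSI,f}: receive one set per party and
    output only f applied to the intersection of all sets."""
    intersection = reduce(lambda a, b: a & b, input_sets)
    return f(intersection)

# Example: three parties, f = cardinality of the intersection.
print(ideal_mpsi_f([{1, 2, 3, 4}, {2, 3, 5}, {2, 3, 9}], len))  # -> 2
```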
Fig. 1: Functionality \(F_{mPSI,f}\)

In the context of two-party circuit-based PSI, suppose that each party possesses an input set containing \(n\) items. The most naive circuit requires \(O(n^{2})\) pairwise comparisons between the items. However, optimizations have been proposed by leveraging local computation capabilities of the parties. For instance, in the work of [9], several two-party circuit-based PSI protocols were introduced. For small universes, the parties can represent their input sets as bit-vectors and compute the intersection using bit-wise AND operations (referred to as the Bit-Wise And (BWA) protocol). On the other hand, for larger universes, [9] presented the Sort-Compare-Shuffle (SCS) design. This design involves local sorting of the respective sets by each party, followed by the computation of the sorted list of the union of the two sets using the bitonic sorting network. Consequently, items in the intersection will appear twice adjacently in the sorted list, allowing for the identification of the intersection by comparing adjacent items. To preserve privacy, the sorted result of the intersection is then shuffled using a Waksman permutation network, effectively concealing the positional information of the items. The overall circuit computation requires only \(O(n\log n)\) comparisons, primarily stemming from the initial stage of merge sort. The two-party circuit-based PSI protocols proposed by [9] serve as the foundation and starting point of our work.
### _Overview of our Protocol_
Our protocols are built upon the foundation of the two-party circuit-based PSI protocols proposed by [9]. One notable difference is that, in order to overcome the challenges of complex interactions and scalability in the multi-party setting, our protocol adopts a generic two-party secure computation protocol. As a result, prior to conducting circuit computations, the private inputs of the parties need to be securely distributed between the two parties engaged in the secure computation process. These two parties then reconstruct the items and perform secure computation of multi-party PSI (mPSI) within the circuit. The construction of our protocols uses generic secure multi-party computation (MPC) protocols, such as Yao's garbled-circuit protocol and the GMW protocol, to evaluate boolean circuits that compute the desired functionality. By relying on these established MPC protocols, which possess proven security properties, we can focus on designing circuits that effectively implement the desired functionality. In our study, we consider the multi-party setting involving \(m\) parties denoted as \(P_{1},...,P_{m}\). Each party possesses an input set consisting of \(n\) items, which are represented using \(\sigma\) bits.
In our first protocol, namely multi-party Bitwise-AND (mBWA), the input sets are represented as bit-vectors of length \(2^{\sigma}\). The protocol incorporates a secret sharing scheme, where each party securely distributes their respective bit-vectors to two designated parties, denoted as \(P_{1}\) and \(P_{2}\). Subsequently, \(P_{1}\) and \(P_{2}\) reconstruct the bit-vectors within the circuit, enabling the computation of the intersection by performing bit-wise AND operations on the corresponding bit-vectors.
It is important to highlight that the practical applicability of this protocol is limited to scenarios involving smaller universes. The exponential growth of AND gates within the circuit imposes constraints on its scalability when dealing with larger datasets.
Our second protocol, multi-party Sort-Compare-Shuffle (mSCS), follows the Sort-Compare-Shuffle (SCS) paradigm while extending it to the multi-party setting. In this protocol, each party independently performs a local sorting operation on their respective input sets. The sorted sets are then distributed among the designated parties, \(P_{1}\) and \(P_{2}\), ensuring that the order of the secret shares aligns with the order of the items in the sorted sets.
Upon distribution, \(P_{1}\) and \(P_{2}\) reconstruct the sets within the circuit and merge them securely using a \(k\)-bitonic sorting network based on the \(k\)-Bitonic sort algorithm [7]. The intersection of the sets can be subsequently determined by identifying the elements that occur \(m\) times consecutively in the merged list. This identification process involves comparing adjacent elements. Finally, before revealing the sorted intersection, a shuffling procedure is applied to conceal the positional information of the items, thus preserving privacy and confidentiality.
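To make the compare step concrete, the following plaintext sketch (our own naming; in the actual protocol these steps are evaluated inside a Boolean circuit on secret-shared, \(k\)-bitonically merged inputs, and the shuffle step is omitted here) keeps exactly the elements that occur \(m\) times consecutively in the merged list, assuming each party's set is duplicate-free:

```python
import heapq

def mscs_plaintext(sorted_sets):
    """Plaintext analogue of the Sort-Compare steps of mSCS: merge the
    locally sorted, duplicate-free sets and keep elements occurring m
    times in a row, i.e., the elements common to all m sets."""
    m = len(sorted_sets)
    merged = list(heapq.merge(*sorted_sets))  # stand-in for the k-bitonic merging network
    intersection, run = [], 1
    for i in range(1, len(merged)):
        run = run + 1 if merged[i] == merged[i - 1] else 1
        if run == m:
            intersection.append(merged[i])
    return intersection

# Example with m = 3 parties.
print(mscs_plaintext([[1, 4, 7], [2, 4, 7], [4, 6, 7]]))  # -> [4, 7]
```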
Our research is also motivated by the work presented in [13], which introduced a method for comparing items mapped to each bin. In light of this, we propose a novel approach that employs a simple hashing scheme for each party to distribute their elements into distinct bins. This approach aims to reduce communication overhead by utilizing multi-party circuit-based PSI protocols to compare the elements within each bin. Through our exploration, we have discovered substantial advantages in employing individual instances of the aforementioned protocol within each bin, as opposed to directly utilizing a single, large circuit for computation. These advantages extend beyond the realm of communication overhead and also encompass the facilitation of parallel computation. By leveraging this approach, our protocols exhibit improved efficiency and scalability, making them well-suited for practical deployment in multi-party settings.
### _Motivation for multi-party Circuit PSI_
#### Ii-B1 Circuit
Currently, the predominant focus of research efforts lies in addressing the PSI problem itself, which aims to reveal the intersection to the involved parties. These protocols have demonstrated high levels of efficiency, even achieving linear communication costs. However, in many practical applications, PSI functions as a module, and it is crucial to maintain the privacy of the intersection. In fact, these PSI applications often require the ability to compute any function based on the intersection.
Moreover, modifying or altering a custom MPC protocol has been shown to be prohibitively expensive and sometimes even infeasible. In contrast, generic MPC protocols offer greater flexibility in supporting additional computations through circuit expansion. They can leverage existing code bases and software packages, allowing users to focus on circuit design rather than developing an entirely new protocol. Clearly, the latter option is more challenging.
#### Ii-B2 Multi-party
The multi-party PSI problem constitutes a more general case of the two-party PSI problem and presents greater potential in the context of massive data sharing. The multi-party scenario is better suited for various applications, such as identifying a target audience for an advertising campaign that involves several companies sharing data on their common users. It should be noted that generic Multi-Party Computation (MPC) protocols tend to be computationally expensive in the multi-party setting. Consequently, there exists a research gap in the context of multi-party circuit-based PSI. Nevertheless, as discussed earlier, there is significant motivation to address this gap and develop efficient solutions in this context.
### _Related Work_
The problem of PSI has always been a hot issue in the field of MPC. We focus on the discussion of the state-of-the-art of semi-honest PSI protocols and simply classify previous works into two-party PSI and multi-party PSI.
#### I-C1 Two-party PSI
The earliest Private Set Intersection protocols were built upon public-key cryptography, specifically relying on the Diffie-Hellman assumptions, which can be traced back to the 1980s [19]. Subsequently, more efficient PSI protocols were developed based on oblivious transfer (OT) extension, which require minimal public-key cryptography computation and can be efficiently instantiated with symmetric key cryptography. Circuit-based PSI protocols use generic MPC protocols to perform the necessary computations.
A basic PSI circuit computes \(O(n^{2})\) element comparisons, resulting in \(O(\sigma n^{2})\) gates, where \(\sigma\) represents the bit-length of the elements. The number of comparisons performed by the circuit is a crucial factor that impacts the overhead, as it directly affects the communication volume in the protocol. The Sort-Compare-Shuffle (SCS) PSI circuit introduced by [9] reduces the number of element comparisons to \(O(n\log n)\). The Circuit-Phasing PSI protocol proposed by [13] hashes input items into \(O(n)\) bins using Cuckoo hashing and simple hashing, enabling independent operations on each bin. Each bin typically contains at most \(O(\log n/\log\log n)\) elements. Consequently, the Circuit-Phasing PSI circuit computes \(O(n\log n/\log\log n)\) comparisons.
The first circuit-based PSI protocol to achieve linear communication complexity is presented in [14]. This protocol relies on the use of oblivious programmable pseudo-random functions (OPPRF). Parties need to evaluate a circuit per bin to compare the programmed value with the output of the OPRF. As a result, this circuit only needs to compute one single comparison per bin.
#### I-C2 Multi-party PSI
The first multi-party PSI protocol was introduced by [6], which utilized oblivious polynomial evaluation (OPE) techniques like additively homomorphic encryption. Subsequent works by [4, 10, 17] focused on optimizing the computation and communication overhead of these protocols. The mPSI protocol proposed in [11] was the first implementation of multi-party PSI and introduced a novel primitive called Oblivious Programmable Pseudo-Random Functions (OPPRF). This protocol successfully avoids computationally expensive public-key operations.
To the best of our knowledge, the exploration of multi-party circuit-based PSI remains largely unexplored in the existing literature.
### _Our Contributions_
In summary, in this paper we present the following contributions:
* We provide the first multi-party circuit-based PSI achieving \(O(mn\log^{2}(mn))\) asymptotic communication overhead. Our protocol is a natural generalization of the two-party circuit-based PSI protocol, which simplifies the complexity of multi-party interactions and can be easily expanded.
* We integrate a simple hashing scheme into our multi-party circuit-based protocols. By using the permutation-based hashing function, the elements can be represented with fewer bits within the bins. Grouping data into bins results in a reduction in circuit size and enables parallel computation. This approach achieves the goal of simultaneously decreasing both communication and time overheads.
## II Preliminaries
### _Setting_
There are \(m\) parties, which we denote as \(P_{1}\),..., \(P_{m}\), where \(P_{1}\) and \(P_{2}\) are typically the two parties performing the secure computation. Each of these parties is in possession of respective input sets, \(S_{1},S_{2},\ldots,S_{m}\), each of which contains \(n\) items represented by \(\sigma\) bits. It is assumed that \(P_{1}\) and \(P_{2}\) agree on a circuit \(C\) that receives the secret shares of the input sets and computes the intersection of \(\hat{n}\) elements. They also agree on a symmetric function \(f\) and can compute \(f(S_{1}\cap S_{2}\cap...\cap S_{m})\) securely. We denote the computational and statistical security parameters by \(\kappa\) and \(\lambda\). We use \(\gamma\) to denote a parameter that determines the probability of hashing failure, which is employed in optimization schemes. We use \(S_{i}[j]\) to denote the \(j\)-th item in the set \(S_{i}\). We also denote the set \(\{1,...,c\}\) as \([c]\).
### _Security Model_
In this work, similar to most protocols for private set intersection, we focus on the semi-honest model, also known as the honest-but-curious model, which assumes that all parties will follow the protocol, but adversaries may attempt to extract as much information as possible from the protocol execution. This is different from the malicious adversary model, where adversaries can deviate from the protocol steps arbitrarily. While protocols designed for malicious adversaries offer more security, they tend to be less efficient than those designed for the semi-honest setting. In most scenarios, semi-honest security is sufficient, since software attestation or business restrictions make it difficult for adversaries to modify the deployed software. However, for recent optimizations of circuit-based PSI protocols that rely on Cuckoo hashing, it is still difficult to ensure that the hashing operation is secure and correct.
The SCS protocol of [9] is a unique circuit-based PSI protocol that can be easily modified to provide security against malicious adversaries by expanding the circuit to verify that the elements are sorted, while maintaining an overall complexity of \(O(n\log n)\). It is worth noting that this advantage is also present in our protocol.
### _Secure Two-Party Computation_
In contemporary MPC research, there exist two primary methods for the secure computation of boolean circuits: Yao's garbled circuit (GC) protocol [20] and the GMW protocol [8].
Yao's garbled circuit protocol presents a constant round complexity and implements the free XOR gates technique [12]. Through the optimization techniques developed in [21], the protocol requires at least two ciphertext transmissions to evaluate an AND gate. Similarly, the GMW protocol also implements the free XOR technique and necessitates two ciphertext transmissions to evaluate each AND gate using
OT extension [2]. However, the GMW protocol offers an additional benefit in the form of its ability to perform symmetric cryptographic computations in advance during the pre-computation phase, thereby improving the efficiency of the online phase.
The main advantage of generic protocols is that they can easily extend the functionality of the protocol without having to change its security. As such, we use a generic secure two-party computation protocol to implement our protocols.
### _Secret Sharing_
In cryptography, an \((n,t)\)-secret sharing scheme has been proposed for distributing a secret \(s\) among \(n\) parties in such a way that any \(t+1\) parties can collectively reconstruct the secret \(s\) from their shares, while preventing any collusion of \(t\) parties from learning any information about \(s\)[3, 18]. This scheme provides a secure and efficient way to distribute secret information among multiple parties without compromising its confidentiality.
In this context, our protocols employ an additive \((n,n-1)\)-secret sharing scheme. In our protocols, all parties have to distribute their input sets among two designated parties, \(P_{1}\) and \(P_{2}\). This distribution ensures that neither \(P_{1}\) nor \(P_{2}\) can obtain any information about the input sets of other parties except their own data. During the computation phase of the circuit, \(P_{1}\) and \(P_{2}\) reconstruct the inputs of the parties in the circuit and calculate the intersection of the input sets.
The use of the additive secret sharing scheme in our protocols provides an additional layer of security to the distribution of secret information among multiple parties. The data pre-processing phase ensures that the input sets of parties are kept confidential, and the computation phase guarantees that the secret information is reconstructed securely without revealing any information to unauthorized parties. This approach is beneficial for applications that require the distribution of confidential information among multiple parties, such as secure multi-party computation, privacy-preserving data analysis, and secure cloud computing.
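As an illustration, a 2-out-of-2 instance of this additive sharing over GF(2) (i.e., XOR sharing, the form that matches Boolean-circuit evaluation) can be sketched as follows; the helper names are ours and not tied to any particular library:

```python
import secrets

def share(x: int, sigma: int):
    """Split a sigma-bit value x into two XOR shares; each share on its
    own is uniformly random and reveals nothing about x."""
    r = secrets.randbits(sigma)
    return r, x ^ r  # share for P1, share for P2

def reconstruct(s1: int, s2: int) -> int:
    """P1 and P2 recombine their shares (inside the circuit) with XOR."""
    return s1 ^ s2

s1, s2 = share(0b101101, sigma=6)
assert reconstruct(s1, s2) == 0b101101
```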
### _Simple Hashing_
We have incorporated a hashing scheme in our protocols to optimize them. The literature on hashing schemes is extensive and covers a range of methods for handling collisions, complexities associated with insertion, deletion, and look-up operations, as well as utilization of storage space. Previous works such as [5, 13, 15] have used hashing to improve the number of comparisons performed in Private Set Intersection (PSI) protocols. Similarly, our protocols allow the use of simple hashing schemes to split the computation by mapping input items to bins.
The simple hashing scheme typically utilizes a table \(T\) containing \(\beta\) bins. We make the assumption that the number of bins \(\beta\) is a power of 2. In cases where \(\beta\) is not a power of 2, [1] proposes a method to handle this situation. Using a hash function \(H\) which maps an element \(e\) to an address \(a=H(e)\) within the range \([0,\beta-1]\), the element \(e\) is then placed into the corresponding bin \(T[a]\). The simple hashing approach allows for multiple elements to be stored in each bin, with the maximum number of elements that can be stored in each bin depending on the total number of elements and the number of bins. This problem has been analyzed in detail in [16], which showed that when randomly mapping \(n\) items to \(n\) bins using \(H\), the most populated bin would have at most \(\frac{\ln n}{\ln\ln n}(1+o(1))\) items with high probability.
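A minimal sketch of this binning step (the SHA-256-based hash and the dummy encoding below are illustrative assumptions rather than the paper's concrete instantiation):

```python
import hashlib
import secrets

def simple_hash_to_bins(items, beta, B, sigma):
    """Map each sigma-bit item into one of beta bins with a public hash,
    then pad every bin with dummies up to the agreed bound B so that the
    bin loads reveal nothing about the input set."""
    bins = [[] for _ in range(beta)]
    for x in items:
        digest = hashlib.sha256(x.to_bytes((sigma + 7) // 8, "big")).digest()
        bins[int.from_bytes(digest, "big") % beta].append(x)
    for b in bins:
        if len(b) > B:
            raise ValueError("bin overflow: increase B or beta")
        while len(b) < B:
            b.append(-(secrets.randbits(sigma) + 1))  # dummies outside the item domain
    return bins
```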
In summary, by adopting a simple hashing scheme, our protocols allow for efficient computation by reducing the number of comparisons while enabling parallel computation.
### _Permutation-based Hashing_
When dividing elements into bins, it is possible to reduce the bit-length of stored items through permutation-based hashing, a technique introduced in prior literature [1, 13]. In this work, we apply permutation-based hashing to improve the memory usage of our simple hashing scheme. Specifically, by utilizing permutation-based hashing, the elements stored in bins can be represented using fewer bits, resulting in a reduction of the number of gates required during the circuit computation stage. This reduction can lead to significant efficiency improvements in terms of communication costs and computation time. Notably, the permutation-based hashing technique can be applied in all hashing-based privacy-preserving set intersection (PSI) protocols, including the one proposed in this study.
In general, a traditional hash function \(h:\{0,1\}^{\sigma}\rightarrow\{0,1\}^{\log\beta}\) maps an element \(x\) to bin \(h(x)\), where \(|x|\) represents the bit-length of \(x\). We notice that the bin index \(h(x)\) is able to carry \(\log\beta\) bits of information. So, it is possible to reduce the information stored in the bins from \(\sigma\) to \(\sigma-\log\beta\) bits, which means that secure computations are done on elements with smaller representations. If we choose the hash function carefully, we can ensure that if two items have the same representation stored in the bin and are in the same bin, they must be equal. Permutation-based hashing uses a Feistel-style trick to achieve this.
In detail, we split the bit representation of an input item \(x\) into \(x_{L}||x_{R}\), where \(x_{L}\) has bit-length \(\log\beta\), equal to the bit-length of the bin index in the hash table, and \(x_{R}\) has bit-length \(\sigma-\log\beta\). We then let \(f(\cdot)\) be a random function whose range is \([0,\beta-1]\), represented by \(\log\beta\) bits, and define \(h(x)=x_{L}\oplus f(x_{R})\). The input item \(x\) is stored in the bin \(x_{L}\oplus f(x_{R})\), and the value stored in the bin is \(x_{R}\), which has only \(\sigma-\log\beta\) bits. Observe that if two elements \(x\) and \(y\) are stored in the same bin and the stored values \(x_{R}\) and \(y_{R}\) are equal, then \(f(x_{R})=f(y_{R})\); since the bin indices satisfy \(h(x)=h(y)\), it follows that \(x_{L}=y_{L}\), and hence \(x=y\). Note that when \(|x|\) is not much longer than \(\log\beta\), the saving is substantial.
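A small plaintext sketch of this splitting (the parameters \(\sigma=32\), \(\beta=2^{10}\), the seeded random function \(f\), and the sampling check are our own illustrative choices) shows that only the \(\sigma-\log\beta\) low-order bits need to be stored, yet distinct items can never share both a bin index and a stored value:

```
import random

SIGMA, BETA = 32, 2 ** 10              # sigma-bit items, beta = 2^10 bins
LOG_BETA = BETA.bit_length() - 1       # log(beta) = 10

random.seed(0)
f_table = {}
def f(x_r):
    # Public random function from the low (sigma - log beta) bits to [0, beta - 1].
    if x_r not in f_table:
        f_table[x_r] = random.randrange(BETA)
    return f_table[x_r]

def perm_hash(x):
    x_l = x >> (SIGMA - LOG_BETA)                  # high log(beta) bits
    x_r = x & ((1 << (SIGMA - LOG_BETA)) - 1)      # low sigma - log(beta) bits
    return x_l ^ f(x_r), x_r                       # (bin index, value stored in the bin)

# Equal stored values force equal f(x_R); equal bins then force equal x_L, so x = y.
seen = {}
for x in random.sample(range(1 << SIGMA), 10000):
    key = perm_hash(x)
    assert key not in seen or seen[key] == x
    seen[key] = x
```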
### _Circuit PSI based on Hashing_
The first circuit-based PSI protocol was introduced in [9]. Prior to performing computations in the circuit, the parties are required to sort their input sets and input them to the circuit.
In the protocol proposed in [13], the parties must also utilize hash schemes, such as Cuckoo hashing and simple hashing, to map their input items into bins. Generally, these circuit-based PSI protocols necessitate that the parties conduct operations (e.g., reordering and hashing) on their input sets beforehand.
It was shown in [6] that if the parties map their input items into bins, then they only need to compare the items that are stored in the same bins. Nevertheless, the number of items mapped into each bin may reveal information about their input sets. Thus, to ensure that this information is concealed from other parties, it is necessary to pad all bins with random dummy values, without revealing the number of items in each bin. It is assumed that the parties agree on the maximum number of items that can be mapped into a bin, denoted as \(B\). Following the mapping of all items into bins, the parties must pad each bin with dummy values until it contains \(B\) items. If the two parties compare the items in the bins using pairwise comparisons, the total number of comparisons is reduced from \(O(n^{2})\) to \(O(\beta\cdot B^{2})\), where \(n\) is the number of items in each input set and \(\beta\) is the number of bins.
In our proposed protocols, we combine the simple hashing scheme with the Sort-Compare-Shuffle protocol to compare items in each bin. By carefully selecting the number of bins \(\beta\) and the upper bound \(B\) on the number of items that can be mapped to a bin, this approach can offer computational and communication advantages over previous schemes. In this scenario, the parties need to map their input items into bins using a hash function and locally sort the items in each bin for subsequent merge sorting.
## III Multi-party Bitwise-AND Protocol
As illustrated in Figure 2, the mBWA protocol is only viable for small universes. In such circumstances, the input sets can be succinctly represented as bit-vectors of length \(2^{\sigma}\). In a two-party context, the intersection can be computed by applying a simple bit-wise AND operation between the bit-vectors of the two parties. In the multi-party setting, each party must first distribute their respective input sets among two designated parties (typically denoted as \(P_{1}\) and \(P_{2}\)) through the use of a secret-sharing scheme prior to the computation phase. Subsequently, \(P_{1}\) and \(P_{2}\) must reconstruct the respective bit-vectors of each party. The intersection can then be computed by performing a bit-wise AND operation between the bit-vectors.
The structure of the circuit is straightforward: it is obtained by instantiating a binary XOR gate \(m2^{\sigma}\) times and a binary AND gate \((m-1)2^{\sigma}\) times. The free-XOR technique permits the XOR gates in the circuit to be evaluated without incurring any communication or cryptographic operations, so the bit-vector reconstruction stage is free of such operations. Despite the exponential number of AND gates, the small constant factor leads to favorable performance, particularly after integrating the optimization of the hash scheme. Detailed information about the experimental results can be found in the "Experimental Results" section.
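In plaintext, the functional effect of the circuit is a chain of bit-wise ANDs over the parties' characteristic vectors. The sketch below (with an illustrative \(\sigma=4\) and example sets of our choosing; in the real protocol the vectors are secret-shared and the AND is evaluated inside the circuit) shows this structure:

```
SIGMA = 4                      # universe {0, ..., 2^sigma - 1}
U = 1 << SIGMA

def to_bitvector(s):
    # Characteristic bit-vector of length 2^sigma, packed into an integer.
    v = 0
    for e in s:
        v |= 1 << e
    return v

sets = [{1, 3, 5, 7, 9}, {3, 5, 6, 9}, {2, 3, 9, 15}]   # m = 3 parties
v = to_bitvector(sets[0])
for s in sets[1:]:
    v &= to_bitvector(s)       # the (m - 1) * 2^sigma AND gates, done word-wise here
print(sorted(e for e in range(U) if v >> e & 1))         # [3, 9]
```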
## IV Multi-party Sort-Compare-Shuffle Protocol
It is worth noting that while the cost of the preceding protocol increases exponentially with \(\sigma\), the constant factor is relatively small, leading to satisfactory performance in small universes. However, for larger universes, it is necessary to further reduce the protocol overhead. To this end, we propose the mSCS protocol, following the SCS paradigm. The protocol leverages the local computing capabilities of each party to minimize overhead.
Fig. 2: Multi-party Bitwise-AND Protocol

The mSCS protocol comprises three parts as shown in Figure 3. Firstly, each participant, including \(P_{1}\) and \(P_{2}\), performs local sorting on the input set and then distributes the sorted set to \(P_{1}\) and \(P_{2}\) via an additive secret sharing scheme. Secondly, \(P_{1}\) and \(P_{2}\) utilize generic secure two-party computation to reconstruct the input sets and sort the union of the input sets with an oblivious merging network. As the sequence is already sorted, it is not necessary to compare every adjacent element, as was done in the two-party scenario. Rather, comparing elements at specified positions in the sorted sequence suffices for finding the intersection elements. For example, in a scenario with three participating parties, if the first and third elements are equal, then the element must be one of the elements in the intersection. Nonetheless, direct output of the intersection elements is not viable, as this may reveal positional information. Further details can be found in [9]. Thirdly, therefore, the matched elements must be shuffled to conceal any positional information in the resulting order.
### _Sort_
Referring to the first stage of the protocol, in which the input sets are distributed and reconstructed, we assume that the input set of each participant has already been reconstructed in the circuit. Following this, participants \(P_{1}\) and \(P_{2}\) are required to implement an oblivious merging network to sort the union of the input sets, making use of the fact that the input sets are already sorted. The term "oblivious" implies that regardless of the order of the input elements, the circuit remains fixed, i.e., the sequence of comparisons between elements is predetermined. In order to merge the \(m\) sorted lists into a fully sorted sequence, a merge-sort network is designed based on the \(k\)-Bitonic sort algorithm [7]. The resulting number of comparisons is \(O(mn\log^{2}(mn))\).
The concatenation of a sequence sorted in ascending order with a sequence sorted in descending order yields a bitonic sequence, which can be obtained after local sorting. In the context of sorting networks, the 2-Sorter module serves as a fundamental component. As shown in Figure 4, a design for the 2-Sorter is presented in [9]; it comprises a \(\sigma\)-bit Comparator and a \(\sigma\)-bit CondSwap, so only \(2\sigma\) non-free gates are required to compare two \(\sigma\)-bit elements. We could recursively apply Batcher's bitonic sorting network, as described in [9], in a tree-like manner. However, the resulting complexity, caused by excessive redundant computation, is unacceptably high.
In our work, we introduce a \(k\)-Bitonic sort algorithm, denoted as Algorithm 1, which is a generalized version of the bitonic sort introduced by Batcher in 1968. We assume that \(k=\lceil\frac{m}{2}\rceil\) to indicate that there are \(k\) bitonic sequences in total in the case of \(m\) parties. It is important to note that \(k\)-Bitonic sort reduces to Batcher's bitonic sort when \(k\) is equal to 1. Specifically, the code executed until the 10th line of the algorithm is identical to Batcher's bitonic sorting algorithm.
We assume that the input \(V=V[0:N-1]\) is a \(k\)-bitonic sequence, where \(N=mn\) is the length of the sequence, and the output \(U[0:N-1]\) is an incremental sequence. By performing odd-even splitting on a sequence, the resulting sub-sequences can still exhibit the characteristic of a \(k\)-bitonic sequence. Thus, the \(k\)-bitonic sort algorithm KBS(\(V,N,U\)) is a recursive procedure. We denote by \(Q(a_{0},a_{1},...,a_{n-1})\) the permutation of the sequence \(A=\{a_{0},a_{1},...,a_{n-1}\}\) ordered in an ascending fashion. The fundamental operation in the execution process of the KBS algorithm is to compare and exchange two elements.
```
1:if N = 1 then
2: RETURN;
3:endif
4:\(Y[0:\frac{N}{2}-1,0:1]=V[0:N-1]\); (\(Y[i,j]=V[2i+j]\))
5:for\(j=0\) to 1 do
6:KBS(\(Y[*,j],\frac{N}{2},B[*,j]\));
7:endfor
8:for\(i=0\) to \(\frac{N}{2}-1\)do
9:\(C_{0}[i,*]=Q(B[i,*])\);
10:endfor
11:\(d=\lceil\log_{2}(N)\rceil\)
12:if\(N\leq 2k\)then
13:\(d=d-1\);
14:endif
15:for\(t=1\) to \(d\)do
16:\(\delta=2^{d-t}\);
17:for\(i=0\) to \(\frac{N}{2}-1-\delta\)do
18:\((C_{t}[i,1],C_{t}[i+\delta,0])=Q(C_{t-1}[i,1],C_{t-1}[i+\delta,0])\);
19:endfor
20:endfor
21:\(U=C=C_{d}\); (\(U[2i+j]=C[i,j]\))
```
**Algorithm 1**\(k\)-_Bitonic Sort KBS(\(V,N,U\))_
Fig. 3: Example of multiparty Sort-Compare-Shuffle protocol where \(m=3\) (any stage labeled as "Oblivious" in the figure indicates the parts that require two-party secure computation).

Based on the complexity analysis of the recursive function, we deduce that the time complexity of the KBS algorithm is \(O(mn\log^{2}(mn))\). To provide a more detailed analysis, we use the recurrence formula to work out the number of comparisons performed by the merge-sort circuit. The resulting expression is \(\frac{mn}{4}\log(mn)\log(\frac{mn}{2})+mn-1\). Thus, we can construct a circuit that merges \(m\) sequences of \(n\) elements, each consisting of \(\sigma\) bits, into a single sorted sequence of \(mn\) elements. The circuit requires \(2\sigma(\frac{mn}{4}\log(mn)\log(\frac{mn}{2})+mn-1)\) non-free gates.
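As a plaintext illustration of the compare-exchange pattern that underlies the merging network, the sketch below merges two locally sorted lists by reversing one of them (which yields a bitonic sequence) and then applying Batcher's bitonic merge; the fixed, data-independent index pattern is what makes the network oblivious. This is only a sketch of the basic \(k=1\) case, not of the full \(k\)-Bitonic Algorithm 1.

```
def compare_exchange(a, i, j):
    # The 2-Sorter: order a[i] <= a[j]; the index pattern is data-independent.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def bitonic_merge(a, lo, n):
    # Sorts the bitonic subsequence a[lo:lo+n] (n a power of two) in ascending order.
    if n > 1:
        half = n // 2
        for i in range(lo, lo + half):
            compare_exchange(a, i, i + half)
        bitonic_merge(a, lo, half)
        bitonic_merge(a, lo + half, half)

def merge_two_sorted(xs, ys):
    # An ascending run followed by a reversed (descending) run is bitonic.
    a = xs + ys[::-1]
    bitonic_merge(a, 0, len(a))
    return a

print(merge_two_sorted([1, 4, 6, 9], [2, 3, 5, 7]))   # [1, 2, 3, 4, 5, 6, 7, 9]
```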
### _Compare_
After sorting the \(mn\) elements in a list, the next step in the secure set intersection protocol involves comparing groups of \(m\) adjacent elements to identify the elements in the intersection.
In this phase, we employ a duplicate-selection circuit to identify all elements in the intersection. Specifically, the circuit selects the elements in the intersection by computing whether a consecutive set of \(m\) elements are all equal. If they are equal, it outputs the value of the element; otherwise, it outputs a dummy value. Our investigation focused on the properties of sorted lists and their relevance to the protocol at hand, with the aim of identifying characteristics that can be leveraged to optimize its execution. After thorough analysis, we identified two key properties that are particularly significant:
#### IV-B1 Non-adjacent element comparison
By exploiting the inherent order of the sorted list, we discovered that it is unnecessary to compare adjacent elements during the computation. Instead, we can employ a larger stride when comparing elements. In a scenario involving three parties, we denote the sorted list as \(l\). So we can compare elements \((l_{i},l_{i+2})\) rather than evaluating pairs \((l_{i},l_{i+1})\) and \((l_{i+1},l_{i+2})\). This strategic adjustment effectively reduces the computational complexity of the circuit, leading to improved efficiency.
#### IV-B2 Obliviousness of matched elements
Through our investigation, we observed an intriguing pattern within the sorted sequence. In any consecutive group of \(2m-1\) elements, there can be at most one matched element, and it will always be positioned in the middle of the group. This critical characteristic aligns with the requirement of the circuit's obliviousness, ensuring that only relevant and meaningful elements are considered in the intersection computation.
Based on these two properties, we are able to streamline the circuit design and enhance its overall performance. To leverage them, as shown in Figure 6, we design a \(5\)-duplicate-selection circuit to acquire the matched elements in the intersection, if any. Specifically, the circuit takes as input \(5\) consecutive elements and outputs the matched element or a dummy value \(0^{\sigma}\). The combination of the \(5\)-duplicate-selection circuits ensures that every group of \(5\) consecutive elements of the sequence in the circuit will be compared.
The \(5\)-duplicate-selection circuit is constructed using \(4\sigma-1\) non-free gates. The combination of duplicate-selection circuits employs \(n-1\) \(5\)-duplicate-selection circuits and one \(3\)-duplicate-selection circuit, which can be constructed using \(2\sigma-1\) non-free gates. Consequently, the total number of non-free gates required for this phase is \((4n-2)\sigma-n\). By leveraging the property of non-adjacent element comparison, in a scenario involving three participants, we can achieve a reduction in the circuit's size by approximately 25% for this particular stage. Furthermore, the effectiveness of this optimization becomes more pronounced when \(m\) is a relatively large value (more than 10). Moreover, this particular circuit design is capable of producing \(n\) elements rather than the \(3n-2\) which would be produced using adjacent element comparison. Therefore, this optimization also results in a reduction of approximately 60% in the size of the circuit for the next stage (Shuffle).
This design can be generalized to \(m\) participants, where the total number of non-free gates required for the comparison phase is \([(m+1)\sigma-1](n-1)+2\sigma-1\). Therefore, we have devised and employed a combination of duplicate-selection circuits, which compare each of the \(m\) consecutive elements in the sorted sequence, in order to figure out the elements in the intersection. This process involves the use of \(O(mn)\) non-free binary gates.
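In plaintext, the duplicate-selection stage reduces to the following sketch (the dummy value 0 stands in for \(0^{\sigma}\), and the example sets are ours): because each party's set contains no repeated elements, an element of the sorted union belongs to the intersection exactly when positions \(i\) and \(i+m-1\) agree.

```
def duplicate_select(sorted_union, m, dummy=0):
    # Element at position i is in the intersection iff it also appears at i + m - 1.
    out = []
    for i in range(len(sorted_union) - m + 1):
        match = sorted_union[i] == sorted_union[i + m - 1]
        out.append(sorted_union[i] if match else dummy)
    return out

# m = 3 parties; 9 is the only common element.
merged = sorted([1, 4, 9] + [2, 4, 9] + [3, 9, 11])
print(duplicate_select(merged, m=3))   # [0, 0, 0, 0, 0, 9, 0]
```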
### _Shuffle_
Fig. 4: Basic modules of a sorting network.

Fig. 5: Example of comparing three consecutive elements.

Upon figuring out the intersection, the matched \(n\) elements (and dummy values) are arranged in a specific positional order in the circuit. This ordering has the potential to compromise the confidentiality of parties' input sets. However, if we only need to perform subsequent computations based on the elements in the intersection, this step can be omitted. Therefore, to preserve privacy, it is essential to shuffle the elements before their disclosure. The shuffling process is crucial to ensuring privacy in secure multi-party computation, especially when dealing with sensitive data. In the absence of a proper shuffling algorithm, the positional information of the elements in the circuit may allow an adversary to deduce the corresponding parties' input sets. Specific examples can be obtained in [9]. In line with [9], we also utilize an oblivious shuffling network, which is constructed using \(O(mn\log(mn))\) non-free gates, to implement the random permutation needed to obliterate the positional information.
An oblivious shuffling network employs a set of gates that do not reveal any information about the input values while transforming the input sequence into an output sequence. The network achieves this by iteratively swapping pairs of elements in the sequence according to a predetermined permutation. Since the permutation is randomly generated, the resulting sequence is uniformly distributed over all possible permutations. Consequently, the positional order of the elements in the circuit is destroyed, and parties' input sets remain private.
The computational complexity of the shuffling process is a critical consideration, as it determines the scalability of the multi-party computation protocol. The \(O(mn\log(mn))\) complexity of the oblivious shuffling network used in this protocol can be expensive for large inputs. Nonetheless, it remains a practical solution for most use cases, and ongoing research aims to develop more efficient shuffling algorithms.
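Functionally, the shuffle stage applies a uniformly random permutation to the selected elements so that their positions no longer reveal where in the sorted union they originated; a trivial plaintext sketch is below. In the protocol itself this permutation is realized inside the circuit by a network of conditional-swap gates with secret control bits, not by a party-visible shuffle as here.

```
import random

selected = [0, 0, 0, 0, 0, 9, 0]    # output of the duplicate-selection stage
shuffled = selected[:]
random.shuffle(shuffled)            # a uniformly random permutation destroys positions
print(shuffled)
```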
## V Hashing to Bins
In this section, we present an enhanced iteration of the previously proposed protocol. Our approach combines the simple hashing scheme with circuit-based multi-party private set intersection to minimize communication overhead and enable parallel computing. We consider the earlier proposed protocol as a sub-protocol within our optimization framework. Specifically, we employ the simple hashing scheme to partition the data into multiple bins, and subsequently employ the circuit-based sub-protocol within each bin to compute the set intersection efficiently. This optimized approach aims to improve the overall performance and scalability of the protocol while maintaining the privacy guarantees provided by the original scheme.
To further improve the efficiency of the protocol, we also use permutation-based hashing. This allows for a more compact representation of the elements stored in each bin, which can further reduce communication costs, as the communication cost mainly depends on the number of non-free gates in the circuit.
We provide a detailed description of the main ideas behind our optimization approach and the overall protocol flow. To prove the effectiveness of our optimization approach, we also conduct extensive calculations and analyses on both the efficiency and security of our protocol after incorporating the hashing scheme.
### _Construction_
As shown in Figure 7, each participant follows the simple hashing scheme, where each bin can store more than one element, using a public permutation-based hash function \(h\) to hash elements from the input set to their respective bins. We assume that each participant possesses \(\beta\) bins, with each bin having a capacity of \(B\), including both dummy values and actual elements.
Once the elements are hashed into the bins, the generic circuit-based multi-party protocol is executed in each corresponding bin to perform the set intersection operation. The circuits in each bin are independent of each other, allowing for simple and efficient parallel computation.
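The overall flow, a shared public hash, padding of every bin to the agreed capacity \(B\), and an independent run of the sub-protocol per bin, can be sketched in plaintext as follows. The modulo hash, the fixed dummy value, and the per-bin set intersection are illustrative stand-ins of ours; in the real protocol the dummies are random values and the per-bin intersection is computed by the circuit-based sub-protocol.

```
DUMMY = -1   # stand-in; real dummies are random so they cannot create spurious matches

def hash_to_bins(items, beta, B):
    bins = [[] for _ in range(beta)]
    for e in items:
        bins[e % beta].append(e)       # stand-in for the public permutation-based hash
    assert all(len(b) <= B for b in bins), "hashing failure: a bin overflowed"
    return [b + [DUMMY] * (B - len(b)) for b in bins]   # pad every bin to exactly B items

def per_bin_intersection(parties, beta, B):
    hashed = [hash_to_bins(p, beta, B) for p in parties]
    result = set()
    for j in range(beta):              # bins are independent, so this loop parallelizes
        common = set(hashed[0][j])
        for h in hashed[1:]:
            common &= set(h[j])
        result |= common - {DUMMY}
    return result

parties = [{1, 4, 9, 17}, {2, 4, 9, 33}, {4, 9, 11, 65}]
print(sorted(per_bin_intersection(parties, beta=4, B=4)))   # [4, 9]
```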
### _Security_
In our proposed scheme, each party employs the simple hashing scheme to map their respective input set into multiple bins. Since the bins have constant size, a hashing failure occurs if the number of items mapped to a bin exceeds its capacity.
When hashing fails, the party responsible for the hashing operation has two possible options. The first option is to ignore the unmapped element and remove it from its input set, which may result in an incorrect final computed result, albeit rarely. The second option is to attempt to use an alternative hash function to remap the unmapped element. However, this approach requires informing the other parties involved of the use of the new hash function, which introduces a potential privacy leak. For instance, the other party could infer whether the input set \(S\) of the first party could be equal to a set \(S^{\prime}\) by checking if \(S^{\prime}\) did not encounter hash failure, indicating that \(S^{\prime}\) and \(S\) are not identical. Thus, it is essential to carefully set the capacity of each bin to minimize the probability of hashing failures to be negligibly small. However, a large bin capacity may inevitably result in an excessive number of dummy values in each bucket, leading to a reduction in the overall efficiency of the protocol.

Fig. 6: Design and use of 5-duplicate-selection circuits.
The most desirable scenario is when all elements are uniformly mapped to bins and each bin is fully occupied. In this case, we denote the ideal capacity of each bin as \(b=\frac{n}{\beta}\), where \(n\) is the total number of elements and \(\beta\) is the number of bins. Next, the actual capacity \(B\) should be as close as possible to \(b\) while maintaining a negligible probability of hashing failures. We assume that \(h\) is a random uniform hash function, and \(X_{i},i\in[n]\) are independent random variables. From the perspective of a fixed party, observing a fixed bin, \(X_{i}=1\) means that the \(i\)-th element is mapped to the bin by \(h\), otherwise \(X_{i}=0\). Then, \(X=\sum_{i=1}^{n}X_{i}\) represents the number of the participant's elements mapped to that bin. Thus, we can obtain \(E(X)=b\).
In probability theory, the _Chernoff bound_ controls the probability that a sum of independent random variables deviates from its expected value. According to the Chernoff bound, for any \(\delta\in[0,1]\) it holds that:
\[Pr[X\geq(1+\delta)b]\leq e^{-\frac{\delta^{2}b}{3}}\]
We define the event \(A_{i}\) as \(X\geq(1+\delta)b\), which represents the occurrence of a hashing failure in a particular bin. Using Boole's inequality (the union bound) \(P(\bigcup_{i}A_{i})\leq\sum_{i}P(A_{i})\), given that there are \(m\) parties and each party has \(\beta\) bins, the probability of the whole protocol experiencing a hashing failure satisfies:
\[Pr[Failure]\leq m\beta e^{-\frac{\delta^{2}b}{3}}\] \[=m\frac{n}{b}e^{-\frac{\delta^{2}b}{3}}\] \[=2^{\log_{2}e^{-\frac{\delta^{2}b}{3}}+\log_{2}\frac{mn}{b}}\] \[<2^{-\frac{\delta^{2}b}{3}+\log n}\]
Therefore, if \(B\) is set to \((1+\delta)b\) and the right-hand side of the inequality is a negligible function, then the proposed scheme results in a negligible probability of hashing failure. So, if we want to keep the overall failure probability below \(2^{-\gamma}\), we require:
\[2^{-\frac{\delta^{2}b}{3}+\log n}<2^{-\gamma}\]
We can get:
\[b>\frac{3(\log n+\gamma)}{\delta^{2}}\]
and
\[\beta=\frac{n}{b}<\frac{n\delta^{2}}{3(\log n+\gamma)}\]
### _Complexity Analysis_
In fact, we can set \(\delta=1\) and \(b=\log^{2}n\) to make the probability of overall hashing failure a negligible function of the input set size \(n\). If we consider the mSCS protocol and use it to handle the elements in each bin, we need to perform \(O(mb\log^{2}(mb))\) comparisons between elements. Thus, the total number of comparisons for \(\beta\) bins is \(O(\beta mb\log^{2}(mb))\), which can be simplified as \(O(mn\log^{2}(m\log n))\).
Fig. 7: Combining simple hashing and circuit-based multi-party private set intersection protocol.

We assume that all the elements can be represented using \(\sigma\) bits. By using a permutation-based hash function to map elements to corresponding bins, the elements stored in the bin can be represented with shorter bit-strings. Specifically, with \(\beta\) bins, the corresponding element can be stored using \(l=\sigma-\log\beta\) bits, and it can be guaranteed that if two elements have the same value stored in the same bin, then these two elements must be equal. Therefore, the asymptotic communication complexity of our final protocol reduces from \(O(\sigma mn\log^{2}(mn))\) to \(O(lmn\log^{2}(m\log n))\). By summing up, simplifying, and scaling down the number of non-free gates across the three stages, we obtain an upper bound on the number of non-free gates required for computations within each bin:
\[\sigma\left[\frac{mn}{2}\log^{2}(mn)+\frac{8mn}{3}+n\right]\]
In our proposed optimization scheme, each bin stores \(B=(1+\delta)b\) elements, and each element has a length of \(\sigma-\log\beta\) bits. Therefore, we can obtain an upper bound on the number of non-free gates required for each bin after incorporating the hashing scheme using the above equation, with \(n=(1+\delta)b\) and \(\sigma\) replaced by \(\sigma-\log\beta\). That is, for given \(m\), \(n\), \(\sigma\), and \(\gamma\), we can minimize the number of non-free gates required by tuning the values of \(\delta\) and \(b\).
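A rough numerical sketch of this trade-off (our own illustration: it ignores ceilings, the power-of-two constraint on \(\beta\), and constant factors, assumes base-2 logarithms, and uses sample parameters that are not taken from the text) is:

```
import math

def nonfree_gates(n_bin, sigma_bits, m):
    # Per-bin upper bound: sigma * [ (m*n/2) * log^2(m*n) + 8*m*n/3 + n ].
    mn = m * n_bin
    return sigma_bits * ((mn / 2) * math.log2(mn) ** 2 + 8 * mn / 3 + n_bin)

def total_gates(m, n, sigma, gamma, delta):
    b = 3 * (math.log2(n) + gamma) / delta ** 2   # bin size from the failure bound
    beta = n / b                                  # number of bins
    B = (1 + delta) * b                           # padded bin capacity
    sigma_bin = sigma - math.log2(beta)           # permutation-based hashing shortens items
    return beta * nonfree_gates(B, sigma_bin, m)

m, n, sigma, gamma = 3, 2 ** 16, 32, 40
for delta in (0.25, 0.5, 0.75, 1.0):
    print(delta, f"{total_gates(m, n, sigma, gamma, delta):.3e}")
```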
## VI Experimental Results
In this section, we provide a comprehensive evaluation of the performance and costs associated with our protocols, considering specific values for the security parameters. Specifically, we set the computational security parameter to \(\kappa=128\), and the statistical security parameter to \(\lambda=80\). As the work of [9] has demonstrated, and as our own experiments confirm, the BWA protocol is the optimal choice when the size of the element space is small (up to approximately \(\sigma=20\)). Therefore, to accommodate more general scenarios, we have exclusively implemented the mSCS protocol, and measured its performance on a range of inputs.
To carry out our experiments, we utilized two standard desktop computers equipped with 12th Gen Intel(R) Core(TM) i5-12400 2.50GHz processors and 16GB RAM. These computers were connected via a local area network (LAN) with a bandwidth capacity of 100 Mbps. In our experiments, all the elements were randomly generated from some fixed universe, and each party's input set was guaranteed to have no duplicate elements. The time taken by the protocol includes both the execution of oblivious transfer (OT) and the execution phase of the garbled circuit.
### _Plain mSCS_
Table 1 presents the results of an experimental evaluation conducted to assess the performance of the plain mSCS protocol in computing the intersection of large-scale input sets. Our findings reveal that the plain mSCS protocol incurs a significantly higher communication overhead compared to a custom multi-party private set intersection protocol, such as the one proposed by [11]. Specifically, the plain mSCS protocol necessitates between 100 and 1000 times more communication than [11], which discloses the intersection in plaintext to the participating parties. Despite the increased communication overhead, the results obtained highlight the feasibility of employing the plain mSCS protocol for privacy-preserving set intersection in large-scale scenarios. Furthermore, the protocol demonstrates potential utility in non-real-time applications where private set intersection serves as a submodule.
### _Hashing-mSCS_
Table 2 presents a comprehensive analysis of the minimum numbers of non-free gates required for each element in the Hashing-mSCS protocol, with \(\delta\) and \(b\) chosen for \(m=3\) and \(\gamma=40\). The number of non-free gates serves as a crucial metric for evaluating the protocol's complexity, as it significantly influences the performance of circuit-based Multi-Party Computation protocols. Notably, the number of non-free gates remains independent of the specific implementation details of the MPC framework, distinguishing it from other benchmarks such as communication overhead or runtime.
By conducting a meticulous analysis and comparing the theoretical computational results of our proposed Hashing-mSCS protocol with the empirical findings from the naive mSCS protocol, we have discovered a substantial reduction of approximately 20% in communication overhead when employing the Hashing-mSCS protocol. This reduction can be attributed to the utilization of hashing techniques, which partition the elements into distinct bins. Importantly, each bin operates independently, and the intersection of computation results within each bin forms a subset of the final intersection. As a consequence, the Hashing-mSCS protocol facilitates parallel computation, effectively mitigating the challenge of excessive memory consumption associated with large-scale circuits.
This improvement in communication overhead underscores the advantages offered by the Hashing-mSCS protocol in terms of efficiency and scalability. The parallel computation capability, achieved through bin-based partitioning and independent processing, enables more efficient resource utilization. By avoiding the need for storing and processing the entire intersection in a single circuit, the protocol alleviates the burden on memory resources, making it particularly well-suited for scenarios involving large circuit scales. These findings contribute to the growing body of knowledge in the field of secure multi-party computation, paving the way for enhanced privacy-preserving protocols in large-scale settings.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline m & & 3 & & & & & & & & & \\ \hline n & \(2^{8}\) & \(2^{12}\) & \(2^{16}\) & \(2^{8}\) & \(2^{12}\) & \(2^{16}\) & \(2^{8}\) & \(2^{12}\) & \(2^{16}\) & \(2^{8}\) & \(2^{12}\) & \(2^{16}\) \\ \hline Time & 0.37 & 7.07 & 160.39 & 0.52 & 12.30 & 272.80 & 0.73 & 19.78 & 357.26 & 0.97 & 32.90 & 593.54 \\ \hline Comm. & 13.68 & 430.46 & 11457.65 & 48.97 & 1539.40 & 41018.387 & 100.14 & 3151.91 & 83869.97 & 165.53 & 5208.56 & 138637.57 \\ \hline \end{tabular}
\end{table} TABLE I: Total runtime (in seconds) and communication (in MB) of the Plain mSCS protocol. All the parties have n 32-bit elements as input. |
2308.00040 | Wheeler DeWitt States of a Charged AdS$_4$ Black Hole | We solve the Wheeler DeWitt equation for the planar Reissner-Nordstr\"om-AdS
black hole in a minisuperspace approximation. We construct semiclassical
Wheeler DeWitt states from Gaussian wavepackets that are peaked on classical
black hole interior solutions. By using the metric component $g_{xx}$ as a
clock, these states are evolved through both the exterior and interior
horizons. Close to the singularity, we show that quantum fluctuations in the
wavepacket become important, and therefore the classicality of the
minisuperspace approximation breaks down. Towards the AdS boundary, the Wheeler
DeWitt states are used to recover the Lorentzian partition function of the dual
theory living on this boundary. This partition function is specified by an
energy and a charge. Finally, we show that the Wheeler DeWitt states know about
the black hole thermodynamics, recovering the grand canonical thermodynamic
potential after an appropriate averaging at the black hole horizon. | Matthew J. Blacker, Sirui Ning | 2023-07-31T18:00:27Z | http://arxiv.org/abs/2308.00040v2 | # Wheeler DeWitt States of a Charged AdS\({}_{4}\) Black Hole
###### Abstract
We solve the Wheeler DeWitt equation for the planar Reissner-Nordstrom-AdS black hole in a minisuperspace approximation. We construct semiclassical Wheeler DeWitt states from Gaussian wavepackets that are peaked on classical black hole interior solutions. By using the metric component \(g_{xx}\) as a clock, these states are evolved through both the exterior and interior horizons. Close to the singularity, we show that quantum fluctuations in the wavepacket become important, and therefore the classicality of the minisuperspace approximation breaks down. Towards the AdS boundary, the Wheeler DeWitt states are used to recover the Lorentzian partition function of the dual theory living on this boundary. This partition function is specified by an energy and a charge. Finally, we show that the Wheeler DeWitt states know about the black hole thermodynamics, recovering the grand canonical thermodynamic potential after an appropriate averaging at the black hole horizon.
## 1 Introduction
In recent decades, the holographic principle has made it possible to study the bulk dynamics of gravitational theories via processes in a dual quantum theory. This dual quantum theory lives on some slice, usually the boundary, of the spacetime. For asymptotically AdS (anti de Sitter) spacetimes, an important part of the holographic toolbox has been the holographic renormalization group flow. This tells us that events further from the boundary correspond to lower energy processes in the dual theory [1; 2]. The location in the bulk can be quantified by the metric function \(g_{tt}\). Specifically, the AdS boundary lives at \(g_{tt}\rightarrow-\infty\) and corresponds to the ultraviolet limit of the dual theory, and moving to lower energy processes corresponds to increasing \(g_{tt}\). Reaching a horizon in the spacetime corresponds to \(g_{tt}\to 0\), or equivalently the far infrared of the renormalization group. At this point, the usual interpretation from the dual theory perspective is that there are no more modes to integrate out.
However, it was recently emphasised in [3] that it is natural to extend the renormalization group flow in the bulk through the horizon, including as far as the singularity.
Subsequent work has further explored extending the renormalization group flow through the horizon in various setups [4; 5; 6; 7; 8; 9], including when the black hole has charge [10; 11; 12]. The upshot of these developments is that the renormalization group flow might be added to our current list of tools [13; 14; 15; 16] for probing black hole interiors. What remains to be achieved is using the flow to explicitly construct the interior from the exterior.
A hint on how to do this is offered by the observation that the natural language for describing the renormalization group flow is via the Hamilton-Jacobi equation of classical mechanics [17; 18; 19]. The Hamilton-Jacobi equation arises in the context of quantum cosmology as the classical limit of the Wheeler DeWitt equation [20; 21]. Furthermore, the Wheeler DeWitt equation arises from the canonical quantization of the Einstein-Hilbert action so is a natural tool for studying quantum aspects of spacetime. This point has been emphasised recently in [22; 23]. Therefore, Wheeler DeWitt states constructed from Hamilton-Jacobi functions are a probe of semiclassical aspects of spacetime that naturally connect with the holographic renormalization group flow. They may therefore offer a window into the black hole interior.
This was indeed the approach of [24], which studied how the (semiclassical) quantum state of the interior of an AdS-Schwarzschild black hole is prepared by the AdS boundary. In that work, the interior wavefunction was identified with the boundary partition function extended to \(g_{tt}>0\). This was achieved by constructing Wheeler DeWitt states from solutions to the Hamilton-Jacobi equation for a number of different clocks. The most convenient of these solutions was linear in \(g_{tt}\), so it was natural to extend from \(g_{tt}<0\) to \(g_{tt}>0\) when passing through the horizon. Indeed, in the setup of [24]\(g_{tt}\) was monotonic and a well-defined clock from the boundary to the singularity. When a non-monotonic clock such as volume is used, solutions to the Wheeler-DeWitt equation involve, for example, superpositions of spacetimes close to the singularity and close to the horizon - both of which have small spatial volumes - and are hence harder to interpret. Non-monotonic clocks are therefore a less convenient choice for probing the full space time.
What happens, then, if \(g_{tt}\) is not monotonic? This arises in spacetimes with more than one horizon. It is natural to ask what choice of clock would be best for applying the framework of [24] to these spacetimes. One such case is the Reissner-Nordstrom-AdS (RN AdS) spacetime, where the charge of the black hole introduces a second (Cauchy) horizon in the black hole interior.
Exploring the procedure of [24] in the (planar) RN AdS spacetime is the aim of this work. As is emphasised in Figure 1, in the RN AdS spacetime \(g_{tt}\) is not monotonic from the AdS boundary to the singularity, whereas \(g_{xx}\) is. We therefore solve the Wheeler DeWitt equation in the black hole interior, and use \(g_{xx}\) as a clock to extend these solutions to the Cauchy interior and the exterior. In the Cauchy interior, we study whether these states can probe the singularity. In the exterior, we identify the Wheeler DeWitt states with the partition function of the dual theory living on the AdS boundary via a renormalization group flow in \(g_{xx}\).
Figure 1: The regions of the Penrose diagram of the RN-AdS spacetime that will play a role in our discussion. The black dotted lines indicate the black hole and Cauchy horizons, where \(g_{xx}=g_{xx,+}\) and \(g_{xx}=g_{xx,-}\) respectively. At the AdS Boundary \(g_{xx}=\infty\) and at the singularity \(g_{xx}=0\). The value of \(g_{tt}\) at each of these points is indicated to emphasise that it is not monotonic. We solve the Wheeler DeWitt equation in the black hole interior on constant \(g_{xx}\) slices, and extend these to the Cauchy interior in Section 4 and the exterior in Section 5.
This paper is organised as follows. In Section 2, we recover the RN-AdS black hole from the Hamilton-Jacobi equation. Using the corresponding Hamilton-Jacobi function, we construct semiclassical Wheeler DeWitt states in Section 3. We then show three main results. Firstly, in Section 4 we use \(g_{xx}\) as a clock to show that the variance in \(\langle g_{tt}\rangle\) blows up as the singularity is approached. This result indicates that minisuperspace classicality breaks down. Secondly, in Section 5 we show that the Wheeler DeWitt states are the continuation of the boundary partition function along \(g_{xx}\) slices. That is, the Wheeler DeWitt states are part of the renormalization group flow of a dual theory living on the AdS boundary. The dual theory is specified by an energy and a charge. Finally, in Section 6 we show that the Wheeler DeWitt states know about the thermodynamics of the black hole horizon. We do so by averaging over states of fixed extrinsic curvature at the black hole horizon in the spirit of [25], to recover the grand canonical thermodynamic potential.
## 2 Reissner-Nordstrom-AdS from the Hamilton-Jacobi equation
As will be elaborated upon in Section 3, the classical limit of the Wheeler-DeWitt equation is the Hamilton-Jacobi equation for the Einstein-Hilbert action. These classical solutions can then be used to construct semi-classical wavepackets. We will thus begin by recovering the RN-AdS black hole using the Hamilton-Jacobi formulation of gravity, following the approach of [24; 25].
The action for four dimensional gravity with a negative cosmological constant and a Maxwell field is
\[I\left[g,A\right]=\int d^{4}x\sqrt{-g}(R+6)-\frac{1}{4}\int d^{4}x\sqrt{-g}F^{2}+2\int d^{3}x\sqrt{h}K. \tag{2.1}\]
Here, we work in units \(16\pi G_{N}=1\). The second term in (2.1) is the Gibbons-Hawking boundary term, and the final term is the action of a Maxwell field of strength \(F=dA\) for a vector potential \(A\). Throughout this paper, we will consider the ansatze
\[ds^{2}=-N^{2}dr^{2}+\frac{g_{tt}}{\left(\Delta t\right)^{2}}dt^{2}+\frac{g_{xx}}{\left(\Delta x\right)^{2}}\left(dx^{2}+dy^{2}\right),\text{ and }A=\frac{\phi_{t}}{\Delta t}dt, \tag{2.2}\]
where \(N\), \(g_{tt}\), \(g_{xx}\) and \(\phi_{t}\) are functions of \(r\) only. As in [25], we explicitly include in the metric factors of \(\Delta t\) and \(\Delta x\), the extent of the \(t\) and \(\{x,y\}\) coordinates respectively. These factors simply re-scale our metric functions in a way that is convenient for the rest of this work.
We are interested classically in the black hole interior, as depicted in Figure 1. On the ansatz (2.2), the action is
\[I[N,g_{tt},g_{xx},\phi_{t}]=\int dr\mathcal{L}, \tag{2.3}\]
where the Lagrangian density is
\[\mathcal{L}=\frac{g_{xx}^{2}\left(\left(\partial_{r}\phi_{t}\right)^{2}+(12/L^{2})g_{tt}N^{2}\right)-2g_{xx}\partial_{r}g_{tt}\partial_{r}g_{xx}-g_{tt}\left(\partial_{r}g_{xx}\right)^{2}}{2\sqrt{g_{tt}}g_{xx}N}. \tag{2.4}\]
From this action we define the momenta \(\pi_{i}=\{\pi_{tt},\pi_{xx},\pi_{\phi}\}\) as conjugate to \(g_{i}=\{g_{tt},g_{xx},\phi_{t}\}\), and construct a Hamiltonian in the usual way. \(N\) plays the role of a Lagrange multiplier, and imposes the Hamiltonian constraint
\[-\frac{1}{2}g_{tt}\pi_{tt}^{2}+g_{xx}\pi_{tt}\pi_{xx}+\frac{6}{L^{2}}g_{xx}^{2}-\frac{\pi_{\phi}^{2}}{2}=0. \tag{2.5}\]
To reconstruct the Hamilton-Jacobi equation, we introduce the Hamilton-Jacobi function \(S(g_{i})\) such that \(\pi_{i}=\partial_{g_{i}}S(g_{i})\). From (2.5) we have
\[g_{xx}\left(\partial_{g_{xx}}S\right)\left(\partial_{g_{tt}}S\right)-\frac{1}{2}\left(\partial_{\phi_{t}}S\right)^{2}-\frac{1}{2}g_{tt}\left(\partial_{g_{tt}}S\right)^{2}+\frac{6}{L^{2}}g_{xx}^{2}=0. \tag{2.6}\]
The classical equations of motion are obtained from the Hamilton-Jacobi equation by first finding one member of the family of solutions to (2.6). As there are three variables \(g_{i}\), we expect the solution to have three constants of integration. Two of these are non-trivial, and the third leads to an overall shift in \(S\) which does not contribute to the equations of motion and we therefore do not consider. The first such solution with which we will be concerned is
\[S_{1}\left(g_{tt},g_{xx},\phi_{t};k_{0},c_{0}\right)=-\frac{e^{-k_{0}}\left(c_{0}^{2}+4g_{xx}^{2}/L^{2}\right)}{2\sqrt{g_{xx}}}+c_{0}\phi_{t}+2g_{tt}\sqrt{g_{xx}}e^{k_{0}}, \tag{2.7}\]
where \(\{k_{0},c_{0}\}\) are the non-trivial constants of integration. The classical solution (i.e. the solution to Euler-Lagrange equations corresponding to (2.4)) is obtained by introducing another pair of constants \(\{\epsilon,\mu\}\) such that
\[\partial_{k_{0}}S_{1}=\epsilon,\text{ and }\partial_{c_{0}}S_{1}=\mu. \tag{2.8}\]
Solving this pair of equations we can rewrite two of our coordinates \(\{g_{tt},\phi_{t}\}\) in terms of the other and the constants of integration, as
\[g_{tt}=\frac{e^{-2k_{0}}\left(-c_{0}^{2}L^{2}-4g_{xx}^{2}+2\sqrt{g_{xx}}L^{2}e^{k_{0}}\epsilon\right)}{4L^{2}g_{xx}},\text{ and }\phi_{t}-\mu=\frac{c_{0}e^{-k_{0}}}{\sqrt{g_{xx}}}. \tag{2.9}\]
The upshot of (2.9) is that we can now recover the RN-AdS solution. Substituting (2.9) into the equation of motion for \(N\) from (2.4), we obtain
\[N^{2}dr^{2}=-\frac{dg_{xx}^{2}}{c_{0}^{2}-2e^{k_{0}}\epsilon\sqrt{g_{ xx}}+4g_{xx}^{2}/L^{2}}. \tag{2.10}\]
Finally, it will be convenient to define \(z=1/\sqrt{g_{xx}}\), so that when substituting (2.10) into (2.2) we obtain
\[ds^{2}=\frac{1}{z^{2}}\left(-f(z)e^{-2k_{0}}\frac{dt^{2}}{(\Delta t )^{2}}+\frac{dz^{2}}{f(z)}+\frac{1}{(\Delta x)^{2}}\left(dx^{2}+dy^{2}\right) \right),\text{ and }A=\frac{\Phi(z)}{\Delta t}dt, \tag{2.11}\]
where
\[f(z)=\left(\frac{1}{L^{2}}-\frac{z^{3}}{2}\epsilon e^{k_{0}}+ \frac{c_{0}^{2}}{4}z^{4}\right),\text{ and }\Phi(z)=\mu+c_{0}e^{-k_{0}}z. \tag{2.12}\]
We recognise (2.11) as the RN-AdS solution. The solution has a Cauchy and a black hole horizon where \(f(z_{h})=0\). The function \(f(z)>0\) in the black hole exterior where \(t\) is the timelike direction, changes sign to \(f(z)<0\) after crossing the black hole horizon, and changes sign again to \(f(z)>0\) in the interior of the Cauchy horizon. In the coordinate \(z\), we have that (on-shell)
\[g_{tt}=-\frac{f(z)}{z^{2}}e^{-2k_{0}},\text{ and }\phi_{t}- \mu=c_{0}e^{-k_{0}}z. \tag{2.13}\]
Therefore, a consequence of having two horizons is that \(g_{tt}\) is only monotonic when restricted to either \(z>z^{*}\) or \(z<z^{*}\), where \(z^{*}=3\epsilon e^{k_{0}}/2c_{0}^{2}\). This means that \(g_{tt}\) is not a suitable candidate for a clock to define a relational notion of time. However, \(g_{xx}\) is trivially monotonic, and from (2.13) we can see that \(\phi_{t}\) is also monotonic, and so are both alternative choices of clock. This point will be discussed further in Section 4.
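For illustration, the two horizons can be located numerically as the positive real roots of \(f(z)=0\). The parameter values below (\(L=1\), \(\epsilon e^{k_{0}}=4\), \(c_{0}^{2}=2\)) are an arbitrary choice of ours for which both horizons exist; they are not taken from the text.

```
import numpy as np

L, eps_ek0, c0sq = 1.0, 4.0, 2.0
# f(z) = 1/L^2 - (eps e^{k_0}/2) z^3 + (c_0^2/4) z^4, cf. (2.12).
coeffs = [c0sq / 4, -eps_ek0 / 2, 0.0, 0.0, 1.0 / L**2]
roots = np.roots(coeffs)
horizons = sorted(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)
print(horizons)   # [event horizon z_+, Cauchy horizon]: f changes sign at each
```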
The radial functions should satisfy the well-known asymptotic properties of charged black hole solutions to Einstein-Maxwell-AdS theory [26]. To compare our solution with these, it is instructive for a moment to consider the re-scaled time coordinate \(t^{\prime}=e^{-k_{0}}t/\Delta t\) where
\[ds^{2}=\frac{1}{z^{2}}\left(-f(z)dt^{\prime 2}+\frac{dz^{2}}{f(z)}+ \frac{1}{(\Delta x)^{2}}\left(dx^{2}+dy^{2}\right)\right),\text{ and }A=\Phi(z)e^{k_{0}}dt^{\prime}. \tag{2.14}\]
We therefore expect to recover from the electromagnetic potential the boundary chemical potential \(\mu_{B}\)
\[\lim_{z\to 0}\Phi(z)e^{k_{0}}=\mu_{B}, \tag{2.15}\]
and the boundary charge density \(\rho\)
\[\rho=-\lim_{z\to 0}\partial_{z}\Phi(z)e^{k_{0}}. \tag{2.16}\]
In particular, for the classical solution (2.13) we recover
\[\mu_{B}=\mu e^{k_{0}}\text{ and }\rho=-c_{0}. \tag{2.17}\]
We can compute the associated charge \(Q\) via the four-current \(J_{\mu}\) and associated four vector \(n^{\mu}\) on an \(r\) slice
\[Q=\int d^{3}x\sqrt{-g}J_{a}n^{a}=\rho e^{-k}=-c_{0}e^{-k_{0}}. \tag{2.18}\]
We can additionally fix the value of \(\mu\) (or equivalently \(\mu_{B}\)). In the RN-AdS geometry, if one rotates to Euclidean signature only the outer horizon remains and the thermal circle shrinks to zero there [26]. Thus, for a Wilson loop of the Maxwell field to be regular, we require \(\Phi(z_{+})=0\), where \(z_{+}\) is the outer horizon. This regularity condition in the bulk fixes the chemical potential of the boundary theory to
\[\mu_{B}=-c_{0}z_{+}, \tag{2.19}\]
where \(z_{+}\) is the value of \(z=1/\sqrt{g_{xx}}\) at the black hole horizon.
So far, we have only considered one member of the family of solutions to (2.6). We could have, of course, constructed a classically equivalent solution where \(\{\epsilon,\mu\}\) are the constants of integration. In fact, provided that we do not use both elements of a conjugate pair \(\{k_{0},\epsilon\}\) or \(\{c_{0},\mu\}\), we could construct a solution using any combination of these constants. In particular, we could construct the three following solutions
\[S_{\epsilon,c_{0}}=c_{0}\phi_{t}+\frac{F_{1}-\epsilon^{2}+\epsilon\sqrt{\epsilon^{2}-F_{1}}}{\sqrt{\epsilon^{2}-F_{1}}-\epsilon}-\epsilon\log\frac{\epsilon-\sqrt{\epsilon^{2}-F_{1}}}{4g_{tt}\sqrt{g_{xx}}}, \tag{2.20a}\] \[S_{\epsilon,\mu}=-F_{2}-\epsilon\log\frac{4g_{xx}^{3/2}}{L^{2}(F_{2}+\epsilon)}, \tag{2.20b}\] \[S_{k_{0},\mu}=\frac{e^{-k_{0}}\sqrt{g_{xx}}}{2}\left(e^{2k_{0}}\left(\left(\mu-\phi_{t}\right)^{2}+4g_{tt}\right)-\frac{4g_{xx}}{L^{2}}\right). \tag{2.20c}\]
Here, it has been convenient to define
\[F_{1}\left(g_{tt},g_{xx};c\right)=4g_{tt}\left(c^{2}+4g_{xx}^{2}/L^{2}\right), \tag{2.21a}\] \[F_{2}\left(g_{tt},g_{xx},\phi_{t};\epsilon,\mu\right)=\sqrt{\epsilon^{2}-4\left(\left(\mu-\phi_{t}\right)^{2}+4g_{tt}\right)g_{xx}^{2}/L^{2}}. \tag{2.21b}\]
We have labelled each solution by the constants of integration it depends on. We wish to emphasise again here that each of (2.20a), (2.20b), and (2.20c) is classically equivalent to \(S_{1}\), in the sense that they lead to the same general solutions to the equations of motion. We have chosen to first focus on \(S_{1}\) because in Section 3 it can be used to construct exact solutions to the Wheeler DeWitt equation. However, the relevance of each of (2.20) will become apparent as we proceed.
## 3 Semiclassical Wheeler-DeWitt States
### From classical to semiclassical
We can use these classical results to construct semiclassical quantum states, following a procedure we briefly review below. For a more detailed introduction to the procedure which follows, the interested reader should consult [20; 21].
For the gravitational degrees of freedom, we have an action of the form
\[I[g]=\int dr\mathcal{L}=\int drN\left[\frac{1}{2N^{2}}G_{ab}\dot{g}^{a}\dot{g}^{b}-V(g)\right], \tag{3.1}\]
where \(G_{ab}(g)\) are the minisuperspace metric and \(V(g)\) an effective potential respectively, and are functions of some metric functions \(g\). In the quantum theory, the Hamiltonian constraint is promoted to the Wheeler-DeWitt equation for a wavefunction \(\Psi(g)\)
\[\left(-\frac{1}{2}\nabla^{2}+V(g)\right)\Psi\left(g\right)=0, \tag{3.2}\]
where \(\nabla^{2}\) is the Laplacian on the inverse DeWitt metric \(\sqrt{-G}\nabla^{2}=\partial_{g^{a}}\left(\sqrt{-G}G^{ab}\partial_{g^{b}}\right)\). Strictly, there is an ordering ambiguity in quantizing the Hamiltonian constraint, which we discuss momentarily. If we restore units and consider solutions of the form \(\Psi=\exp\left(iS(q)/\hbar\right)\), we find
\[\mathcal{O}\left(\hbar^{0}\right):\left(\frac{1}{2}\left(\nabla S\right)^{2}+V\right)\Psi=0, \tag{3.3a}\] \[\mathcal{O}\left(\hbar^{1}\right):\nabla^{2}S\Psi=0. \tag{3.3b}\]
Observe that the \(\mathcal{O}\left(\hbar^{0}\right)\) constraint is nothing more than the Hamilton-Jacobi equation. Moreover, the ordering ambiguity in defining the Laplacian only arises at order \(\mathcal{O}\left(\hbar^{1}\right)\). Thus, if we are only concerned with the leading order semiclassical physics, we can form a basis of states by exponentiating the Hamilton-Jacobi function.
In addition to the gravitational degrees of freedom, we also have electromagnetic terms, which amount to an additional term proportional to \(\left(\partial_{r}\phi_{t}\right)^{2}\) in our action. In this case the above reasoning still holds, with the \(\mathcal{O}\left(\hbar^{0}\right)\) equation of motion still being the Hamilton-Jacobi equation and the \(\mathcal{O}\left(\hbar^{1}\right)\) equation containing an additional \(\partial_{\phi_{t}}^{2}S\) term.
### Constructing states of the charged black hole
With the above discussion in mind, in our setup the Wheeler-DeWitt equation becomes
\[\frac{\partial}{\partial g_{tt}}\left(g_{tt}\frac{\partial\Psi}{\partial g_{tt }}-2g_{xx}\frac{\partial\Psi}{\partial g_{xx}}\right)+\frac{\partial^{2}\Psi} {\partial\phi_{t}^{2}}+\frac{12}{L^{2}}g_{xx}^{2}\Psi=0. \tag{3.4}\]
From section 3.1, we know that \(e^{\pm iS}\) form a basis of semiclassical solutions to (3.4) if \(S\) is a solution to (2.6). In fact, for \(S_{1}\) (2.7), the solution
\[\Psi\left(g_{tt},g_{xx},\phi_{t};k,c\right)=e^{iS_{1}\left(g_{tt},g_{xx},\phi_ {t};k,c\right)}, \tag{3.5}\]
is an exact solution to the Wheeler DeWitt equation (3.4). Note that for the other Hamilton-Jacobi functions in (2.20), \(e^{\pm iS}\) are only solutions to leading order. We will thus first consider the basis of exact solutions to (3.4) constructed from \(S_{1}\). From the basis (3.5), we construct a general solution
\[\Psi\left(g_{tt},g_{xx},\phi_{t}\right)=\int_{-\infty}^{\infty}\frac{dk}{2\pi }\int_{-\infty}^{\infty}\frac{dc}{2\pi}\beta\left(k,c\right)e^{iS_{1}\left(g_ {tt},g_{xx},\phi_{t};k,c\right)}. \tag{3.6}\]
Here \(\beta\left(k,c\right)\) is an arbitrary function. By letting our constants of integration \(\left\{k,c\right\}\) have either positive or negative sign, we only need to consider the \(+iS_{1}\) solution.
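The statement that \(e^{iS_{1}}\) solves (3.4) exactly, rather than only to leading order in \(\hbar\), can be verified symbolically. The following SymPy sketch is our own consistency check and is not part of the derivation:

```
import sympy as sp

gtt, phi, k0, c0 = sp.symbols('g_tt phi_t k_0 c_0', real=True)
gxx, L = sp.symbols('g_xx L', positive=True)

# Hamilton-Jacobi function S_1 of (2.7) and the corresponding state (3.5).
S1 = (-sp.exp(-k0) * (c0**2 + 4 * gxx**2 / L**2) / (2 * sp.sqrt(gxx))
      + c0 * phi + 2 * gtt * sp.sqrt(gxx) * sp.exp(k0))
Psi = sp.exp(sp.I * S1)

# Left-hand side of the Wheeler-DeWitt equation (3.4) acting on Psi.
wdw = (sp.diff(gtt * sp.diff(Psi, gtt) - 2 * gxx * sp.diff(Psi, gxx), gtt)
       + sp.diff(Psi, phi, 2)
       + 12 * gxx**2 / L**2 * Psi)

print(sp.simplify(wdw / Psi))   # prints 0, so Psi = exp(i S_1) solves (3.4) exactly
```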
As in [24; 25], we can obtain another set of semiclassical solutions by fourier transforming to the basis corresponding to the classical constants \(\left\{\epsilon,\mu\right\}\). We define
\[\beta\left(k,c\right)=\int\int\frac{d\epsilon}{\sqrt{2\pi}}\frac{d\mu}{\sqrt{ 2\pi}}\alpha\left(\epsilon,\mu\right)e^{-i\epsilon k}e^{-i\mu c}, \tag{3.7}\]
where \(\alpha\left(\epsilon,\mu\right)\) is an arbitrary function. The solution (3.6) then becomes
\[\begin{split}\Psi\left(g_{tt},g_{xx},\phi_{t}\right)=\int\frac{ d\epsilon}{2\pi}\int\frac{d\mu}{2\pi}\alpha\left(\epsilon,\mu\right)& \frac{2\left(-g_{xx}/L^{2}\right)^{1/2+i\epsilon/2}}{\sqrt{2 \pi}}\left(\frac{\left(\mu+\phi_{t}\right)^{2}+4g_{tt}}{4}\right)^{-\frac{1}{ 2}\left(\frac{1}{2}+i\epsilon\right)}\\ &\times K_{\frac{1}{2}+i\epsilon}\left(\frac{2\sqrt{(\mu+\phi_{t })^{2}+4g_{tt}}g_{xx}}{L}\right),\end{split} \tag{3.8}\]
where \(K_{1/2+i\epsilon}\) is a modified bessel function of the second kind. To make use of this Fourier transform in the semiclassical limit, we want to relate it to a solution of the Hamilton-Jacobi equation. To do so, we evaluate the Fourier transform using a stationary phase approximation and obtain the sum of two solutions
\[\Psi\left(g_{tt},g_{xx},\phi_{t}\right)=\int\frac{d\epsilon}{2\pi}\int\frac{d\mu}{2\pi}\alpha\left(\epsilon,\mu\right)\left[\psi_{+}\left(g_{tt},g_{xx},\phi_{t};\epsilon,\mu\right)+\psi_{-}\left(g_{tt},g_{xx},\phi_{t};\epsilon,\mu\right)\right]. \tag{3.9}\]
These solutions are
\[\psi_{\pm}\left(g_{tt},g_{xx},\phi_{t};\epsilon,\mu\right)=\frac{2g_{xx}}{L \sqrt{F_{2}^{2}\pm\epsilon F_{2}}}\exp\left\{i\left(\pm S_{\pm\epsilon,\mu}+ \frac{\pi}{2}\right)\right\}, \tag{3.10}\]
where \(S_{\pm\epsilon,\mu}\) is the solution to the Hamilton-Jacobi equation introduced in (2.20b). As expected, taking the limit of no charge (i.e. the AdS-Schwarzschild solution), one finds that \(\psi_{\pm}\) recover the solutions obtained in [24]. As in [24; 25], we only consider \(e^{iS_{+\epsilon,\mu}}\) to ensure we have positive norm solutions in the interior. That is, in the semiclassical regime, we will find it useful to consider states
\[\Psi_{+}\left(g_{tt},g_{xx},\phi_{t}\right)=\int\frac{d\epsilon}{2\pi}\int \frac{d\mu}{2\pi}\alpha\left(\epsilon,\mu\right)e^{iS_{+\epsilon,\mu}}. \tag{3.11}\]
### Gaussian wavepackets
We are yet to consider a particular form for \(\beta\left(k,c\right)\). As in [24; 25], it is natural to consider Gaussian wavepackets, because these are strongly supported on the classical solution. In this work, we consider a Gaussian wavepacket
\[\begin{split}\beta\left(k,c\right)=& N_{\beta}\exp \left\{-i\epsilon_{0}\left(k-k_{0}\right)-\frac{\Delta_{k}^{2}}{2}\left(k-k_{0 }\right)^{2}\right\}\\ &\times\exp\left\{-i\mu_{0}\left(c-c_{0}\right)-\frac{\Delta_{c} ^{2}}{2}\left(c-c_{0}\right)^{2}\right\},\end{split} \tag{3.12}\]
where \(N_{\beta}=\Delta_{c}\Delta_{k}/4\pi\). These wavepackets are strongly peaked around \(\left\{k=k_{0},c=c_{0}\right\}\) if \(\Delta_{c},\Delta_{k}\gg 1\). In that case, we can then perform the \(k\) and \(c\) integrals in (3.6) by a 2D stationary phase approximation. The wavefunction is strongly peaked on values of the metric function where
\[\left.\frac{\partial S_{1}}{\partial k}\right|_{k=k_{0}}=\epsilon_{0},\text{ and }\left.\frac{\partial S_{1}}{\partial c}\right|_{c=c_{0}}=\mu_{0}. \tag{3.13}\]
That is, the wavefunction is strongly supported on the classical solution (2.8) with \(\epsilon=\epsilon_{0}\) and \(\mu=\mu_{0}\). As when computing the fourier transform, we have a contribution from two stationary points (corresponding to \(S_{\pm\epsilon,\mu}\)). As noted above, we only consider
the branch corresponding to \(S_{+\epsilon,\mu}\) to ensure positivity of the norm. Therefore, to leading semiclassical order the wavefunction (3.6) becomes
\[\begin{split}\Psi\left(g_{tt},z,\phi_{t};k_{0},c_{0},\epsilon_{0}, \mu_{0}\right)=&\delta\left(g_{tt}=-\frac{f(z)}{z^{2}}e^{-2k_{0}} \right)\delta\left(\phi_{t}-\mu_{0}=c_{0}e^{-k_{0}}z\right)\\ &\times\exp\left\{-i\left(\epsilon_{0}-4e^{-k_{0}}/L^{2}z^{3} \right)+ic_{0}\mu_{0}\right\},\end{split} \tag{3.14}\]
where
\[f(z)=\frac{1}{L^{2}}-\frac{z^{3}}{2}\epsilon_{0}e^{k_{0}}+\frac{c_{0}^{2}}{4}z ^{4}. \tag{3.15}\]
Here, as in [25], we have taken the limit in which the strongly peaked Gaussian that arises after integration becomes a delta function, localising our state on the classical solution. We have used \(z=1/\sqrt{g_{xx}}\) to make explicit the connection with (2.9). The phase in (3.14) is equal to the onshell action, up to an additional \(c_{0}\mu_{0}\) term which could be incorporated into our choice of \(\beta(c)\) without affecting the classical dynamics.
One could complete a similar computation in the basis \(\{\epsilon,\mu\}\), by evaluating (3.8) on the Fourier transform of (3.12)
\[\alpha\left(\epsilon,\mu\right)=N_{\alpha}\exp\left\{ik_{0}\epsilon-\frac{\left(\epsilon-\epsilon_{0}\right)^{2}}{2\Delta_{k}^{2}}\right\}\exp\left\{ic_{0}\mu-\frac{\left(\mu-\mu_{0}\right)^{2}}{2\Delta_{c}^{2}}\right\}, \tag{3.16}\]
where \(N_{\alpha}=1/\left(4\pi\Delta_{c}\Delta_{k}\right)\). To obtain the classical solution (3.14), we require these \(\alpha\left(\epsilon,\mu\right)\) wavepackets to be strongly peaked on \(\{\epsilon=\epsilon_{0},\mu=\mu_{0}\}\), and thus that \(\epsilon_{0}\gg\Delta_{k}\) and \(\mu_{0}\gg\Delta_{c}\). That is, the semiclassical regime is obtained by requiring
\[\epsilon_{0}\gg\Delta_{k}\gg 1,\text{ and }\mu_{0}\gg\Delta_{c}\gg 1. \tag{3.17}\]
Physically, this is because if the wavepacket is too strongly localised on any coordinate (i.e. \(k\) or \(c\)) the variance in its conjugate momentum (i.e. \(\epsilon\) or \(\mu\)) will become large. The constraint (3.17) is therefore necessary to preserve the classicality of the minisuperspace, as will be discussed in Section 4.
## 4 Clocks and Expectation Values
The Wheeler DeWitt equation is timeless in the sense that it contains no explicit time parameter, but provides a relation between metric functions on any slicing of spacetime. To specify our slicing is to treat some coordinate (or some combination of coordinates)
as constant on each slice. Moving between slices is equivalent to evolving that coordinate. That is, that coordinate acts as a clock, and the Wheeler DeWitt equation can then be used to compute probabilities which are conditional upon the choice of clock. These probabilities can be used to compute expectation values to validate the stability of our semiclassical theory.
A number of different possible clocks were considered for the AdS-Schwarzschild black hole in [24]. The \(g_{tt}\) clock turned out to be convenient because a Hamilton-Jacobi function linear in \(g_{tt}\) was obtained. In that work, evolving \(g_{tt}\) from \(-\infty\) to \(\infty\) was equivalent to evolving the wavefunction from the AdS boundary (\(g_{tt}=-\infty\)), through the horizon (\(g_{tt}=0\)), and to the singularity (\(g_{tt}=\infty\)). However, as discussed in Section 2, when the black hole is imbued with a charge, \(g_{tt}\) is no longer monotonic from the boundary to the singularity. Therefore, although (2.7) is also linear in \(g_{tt}\), \(g_{tt}\) may not be the most convenient choice of clock. The fact that \(g_{tt}\) is not monotonic corresponds to the presence of two horizons, which also occurs in the dS-Schwarzschild black hole studied in [25]. In that work, constant \(R\) slices were used to move from the cosmological to the black hole horizons.
The analogous choice here, as advertised in Section 2, is to choose \(g_{xx}\) as our clock. This is because \(g_{xx}\) is monotonic from the AdS Boundary (\(g_{xx}\to\infty\)) to the singularity (\(g_{xx}\to 0\)). It is straightforward to show that \(g_{xx}\) is a null direction in the minisuperspace we are considering here. This means to take \(g_{xx}\) as a clock is to define the conserved norm via a limiting sequence of spacelike slices. That is, we can compute conditional probabilities for any given \(g_{xx}\), by defining a conserved norm as in [27]. For \(g_{xx}\), this norm is
\[\left|\Psi\right|^{2}_{g_{xx}}=-\frac{i}{2}\int d\phi_{t}\int dg_{tt}\left(\Psi^{*}\partial_{g_{tt}}\Psi-\Psi\partial_{g_{tt}}\Psi^{*}\right). \tag{4.1}\]
One can show using the Wheeler DeWitt equation (3.4) that \(\partial_{g_{xx}}\left|\Psi\right|^{2}_{g_{xx}}=0\), provided \(\Psi\) decays at large and small \(\{\phi_{t},g_{tt}\}\) on each \(g_{xx}\) slice. That is, the norm is conserved under evolution between different slices of fixed \(g_{xx}\). Evaluating this norm on a general semiclassical state of the form (3.6), we find
\[\left|\Psi\right|^{2}_{g_{xx}}=\int\frac{dkdc}{(2\pi)^{2}}\left|\beta(k,c)\right|^{2}. \tag{4.2}\]
One could alternatively compute the norm on the Fourier transformed state (3.8); however, the \(\{k,c\}\) basis is a more computationally convenient choice.
An upshot of having defined a norm is that we can now compute expectation values, which can be used to examine the stability of minisuperspace classicality in the semiclassical regime. The simplest, non-trivial expectation values are those of the unfixed metric functions \(\langle\phi_{t}\rangle\) and \(\langle g_{tt}\rangle\), which we evaluate on (3.6) to obtain respectively
\[\begin{split}\langle\phi_{t}\rangle=&-\frac{i}{2} \int d\phi_{t}\phi_{t}\int dg_{tt}\left(\Psi^{*}\partial_{g_{tt}}\Psi-\Psi \partial_{g_{tt}}\Psi^{*}\right)\\ =&\frac{i}{2}\int\frac{dkdc}{(2\pi)^{2}}\left[\beta ^{*}\partial_{c}\beta-\beta\partial_{c}\beta^{*}\right]+\int\frac{dkdc}{(2 \pi)^{2}}\frac{e^{-k}c}{\sqrt{g_{xx}}}\left|\beta\right|^{2},\end{split} \tag{4.3}\]
and
\[\begin{split}\langle g_{tt}\rangle=&-\frac{i}{2} \int d\phi_{t}\int dg_{tt}g_{tt}\left(\Psi^{*}\partial_{g_{tt}}\Psi-\Psi \partial_{g_{tt}}\Psi^{*}\right)\\ =&-\int\frac{dkdc}{(2\pi)^{2}}\left(\frac{i}{2} \left[\beta\partial_{k}\beta^{*}-\beta^{*}\partial_{k}\beta\right]\frac{e^{-k }}{2\sqrt{g_{xx}}}+\left|\beta\right|^{2}\frac{c^{2}+4g_{xx}^{2}/L^{2}}{4g_{ xx}}e^{-2k}\right).\end{split} \tag{4.4}\]
Note here we have left the \(\{k,c\}\) dependence implicit. If we evaluate (4.3) on the Gaussian wavepacket (3.12), we obtain
\[\langle\phi_{t}\rangle-\mu_{0}=\frac{c_{0}e^{-k_{0}}}{\sqrt{g_{xx}}}\exp{ \left(\frac{1}{4\Delta_{k}^{2}}\right)}=\frac{c_{0}e^{-k_{0}}}{\sqrt{g_{xx}}} +O\left(1/\Delta_{k}^{2}\right), \tag{4.5}\]
where expanding in \(\Delta_{k}\gg 1\) we obtain the classical solution (2.9) as the leading contribution. Here, we have included the full quantum correction, to make explicit that the \(\mu_{0}\) contribution is independent of the quantum fluctuations. Thus, the freedom to redefine \(\phi_{t}\rightarrow\phi_{t}-\mu_{0}\) is preserved. We can similarly evaluate (4.4) on the Gaussian wavepacket (3.12), expanding in \(\Delta_{k},\Delta_{c}\gg 1\) to obtain
\[\langle g_{tt}\rangle=-\frac{e^{-2k_{0}}}{g_{xx}}\left(\frac{g_{xx}^{2}}{L^{2}}-\frac{1}{2}e^{k_{0}}\sqrt{g_{xx}}\epsilon_{0}+\frac{c_{0}^{2}}{4}\right)+O\left(\frac{1}{\Delta_{c}^{2}},\frac{1}{\Delta_{k}^{2}}\right), \tag{4.6}\]
again obtaining to leading order the classical solution (2.9).
We can also compute the expectation values of momenta. For example, if \(g_{xx}\) is our clock defining our notion of time, \(\pi_{xx}=-i\partial_{g_{xx}}\) generates 'time translations' and therefore defines the 'Hamiltonian' for this clock. We compute the expectation value on the state (3.6) to obtain
\[\begin{split}\langle\pi_{xx}\rangle=&-\frac{1}{2}\int d\phi_{t}\int dg_{tt}\left(\Psi^{*}\partial_{g_{xx}}\left(\partial_{g_{tt}}\Psi\right)-\partial_{g_{xx}}\Psi\partial_{g_{tt}}\Psi^{*}\right)\\ =&\int d\phi_{t}\int dg_{tt}\left[\frac{g_{tt}}{2g_{xx}}\left|\frac{\partial\Psi}{\partial g_{tt}}\right|^{2}+\frac{1}{2g_{xx}}\left|\frac{\partial\Psi}{\partial\phi_{t}}\right|^{2}-\frac{6}{L^{2}}g_{xx}\left|\Psi\right|^{2}\right]\\ =&\int\frac{dkdc}{(2\pi)^{2}}\left(\frac{i}{4g_{xx}}\left(\beta^{*}\partial_{k}\beta-\beta\partial_{k}\beta^{*}\right)-4e^{-k}\sqrt{g_{xx}}/L^{2}\left|\beta\right|^{2}\right).\end{split} \tag{4.7}\]
To obtain the second line we used the Wheeler DeWitt equation (3.4). Evaluating (4.7) on the Gaussian wavepacket we obtain
\[\langle\pi_{xx}\rangle=\frac{\epsilon_{0}}{2g_{xx}}-4e^{-k_{0}}\sqrt{g_{xx}}/L^{2 }\exp\left(\frac{1}{\Delta_{k}^{2}}\right)=\frac{\epsilon_{0}}{2g_{xx}}-4e^{-k _{0}}\sqrt{g_{xx}}/L^{2}+O\left(1/\Delta_{k}^{2}\right), \tag{4.8}\]
where expanding in \(\Delta_{k}\gg 1\) we obtain the classical solution as the leading contribution, as from (2.7) \(\pi_{xx}=\partial S_{1}/\partial g_{xx}=\epsilon_{0}/(2g_{xx})-4e^{-k_{0}} \sqrt{g_{xx}}/L^{2}\). For \(\epsilon_{0}>0\), \(\langle\pi_{xx}\rangle\) is monotonic. That the expansion does not break down anywhere indicates we have picked a good clock for extending our Wheeler DeWitt solutions through the spacetime.
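As a quick consistency check (an aside using only the classical expression above), differentiating gives
\[\frac{\partial}{\partial g_{xx}}\left(\frac{\epsilon_{0}}{2g_{xx}}-\frac{4e^{-k_{0}}\sqrt{g_{xx}}}{L^{2}}\right)=-\frac{\epsilon_{0}}{2g_{xx}^{2}}-\frac{2e^{-k_{0}}}{L^{2}\sqrt{g_{xx}}}<0\quad\text{for }\epsilon_{0}>0,\;g_{xx}>0,\]
so the leading-order \(\langle\pi_{xx}\rangle\) is strictly decreasing in \(g_{xx}\) everywhere between the boundary and the singularity.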
Recall that the motivation for introducing these expectation values was to study the stability of minisuperspace classicality as the singularity is approached (\(g_{xx}\to 0\)). A natural quantity with which to test this is the fluctuation of the metric function \(g_{tt}\). Computing the variance of \(g_{tt}\) we obtain
\[\langle g_{tt}^{2}\rangle-\langle g_{tt}\rangle^{2}=\frac{\Delta_{k}^{2}}{e^{2 k_{0}}8g_{xx}}+O\left(\frac{1}{\Delta_{k}^{2}},\frac{1}{\Delta_{c}^{2}}\right), \tag{4.9}\]
where we have used \(\Delta_{k},\Delta_{c}\gg 1\). One may recognise the denominator as (to leading order) \(\langle\pi_{tt}\rangle^{2}=4e^{2k_{0}}g_{xx}\), and indeed it is straightforward to recover the uncertainty relation
\[\left(\langle\pi_{tt}^{2}\rangle-\langle\pi_{tt}\rangle^{2}\right)\left( \langle g_{tt}^{2}\rangle-\langle g_{tt}\rangle^{2}\right)=\frac{1}{4}+O\left( \frac{1}{\Delta_{k}^{2}},\frac{1}{\Delta_{c}^{2}}\right). \tag{4.10}\]
To interpret (4.9) near the singularity it will be useful to remove the \(g_{xx}\) dependence. The leading order dependence of \(g_{tt}\) on \(g_{xx}\) depends on whether or not charge is present. In the neutral (\(c_{0}=0\)) case
\[\lim_{g_{xx}\to 0}\langle g_{tt}\rangle\bigg{|}_{c_{0}=0}=\frac{\epsilon_{0}e^ {-k_{0}}}{2\sqrt{g_{xx}}}+O\left(g_{xx},\frac{1}{\Delta_{k}^{2}},\frac{1}{ \Delta_{c}^{2}}\right), \tag{4.11}\]
so the singularity is spacelike as \(\langle g_{tt}\rangle>0\). Using (4.11) we find that in the neutral case
\[\lim_{g_{xx}\to 0}\langle g_{tt}^{2}\rangle-\langle g_{tt}\rangle^{2}\bigg{|}_{c_{0}=0}=\frac{\Delta_{k}^{2}}{2\epsilon_{0}^{2}}\langle g_{tt}\rangle^{2}+O\left(\frac{1}{\Delta_{k}^{2}},\frac{1}{\Delta_{c}^{2}}\right). \tag{4.12}\]
Recall that in the semiclassical regime (3.17) \(\epsilon_{0}\gg\Delta_{k}\), so the variance is small compared to the expectation value and the Gaussian wavepacket is able to probe the singularity, as noted in [24].
When the black hole has a charge (i.e. \(c_{0}\neq 0\)), the leading order dependence of \(g_{tt}\) on \(g_{xx}\) becomes
\[\lim_{g_{xx}\to 0}\langle g_{tt}\rangle=-\frac{c_{0}^{2}e^{-2k_{0}}}{4g_{xx}}+O \left(\frac{1}{\sqrt{g_{xx}}},\frac{1}{\Delta_{k}^{2}},\frac{1}{\Delta_{c}^{2} }\right). \tag{4.13}\]
As is to be expected from the classical solution (2.13), we see that in the charged case the singularity is timelike as \(\langle g_{tt}\rangle<0\). Note that by taking the \(g_{xx}\to 0\) limit, we have continued our wavepacket through the inner Cauchy horizon. This inner horizon is believed to be unstable [28; 29; 30]; however, this instability does not arise in any of the quantities we have computed in our minisuperspace description. Hence, it appears safe to continue the wavepacket through the Cauchy horizon.
Using (4.13), we compute
\[\lim_{g_{xx}\to 0}\langle g_{tt}^{2}\rangle-\langle g_{tt}\rangle^{2}=-\frac{ \Delta_{k}^{2}}{2c_{0}^{2}}\langle g_{tt}\rangle+O\left(\frac{1}{\Delta_{k}^{ 2}},\frac{1}{\Delta_{c}^{2}}\right). \tag{4.14}\]
In (4.14) the sign of the variance is still positive, as from (4.13) we have that \(\langle g_{tt}\rangle<0\) in the limit \(g_{xx}\to 0\). The significance of (4.14) is that the variance in \(g_{tt}\) is linear in \(\langle g_{tt}\rangle\) rather than quadratic when \(\Delta_{k},\Delta_{c}\gg 1\). If we divide by \(\langle g_{tt}\rangle^{2}\), we see that for any choice of \(\Delta_{k}\) and \(c_{0}\) the resulting ratio will go to zero at the singularity. That means the quantum fluctuations are not enough to prevent this direction from collapsing as \(g_{xx}\to 0\), and the classicality of the minisuperspace approximation breaks down. Therefore, the Wheeler DeWitt state is not able to resolve the singularity.
One possible interpretation is that this result is because the singularity is timelike. With that in mind, it is useful to recall the strong cosmic censorship conjecture: that from physically reasonable generic initial conditions, only spacelike or null singularities can form classically [31]. There has been some evidence [28] via holography that quantum effects enforce strong cosmic censorship in charged AdS black holes. The result (4.14) may therefore indicate the quantum corrections accounted for by our semiclassical picture are similarly enforcing strong cosmic censorship. It would be interesting to perform computations analogous to (4.14) for Wheeler DeWitt states in other spacetimes which contain a timelike singularity, to see if a similar result is obtained.
Throughout these calculations, we have considered \(g_{xx}\) as a monotonic function from which to define a clock. As noted in Section 2, \(\phi_{t}\propto 1/\sqrt{g_{xx}}\) is also monotonic from the AdS boundary (\(\phi_{t}\to 0\)) to the singularity (\(\phi_{t}\to\infty\)). Indeed, one may have instead
chosen to define \(\phi_{t}\) as a clock, with an associated conserved norm
\[\left|\Psi\right|^{2}_{\phi_{t}}=\frac{i}{2}\int dg_{xx}\int dg_{tt}\frac{1}{g_{ xx}}\left(\Psi^{*}\partial_{\phi_{t}}\Psi-\Psi\partial_{\phi_{t}}\Psi^{*}\right). \tag{4.15}\]
In that case, one can show that on states of the form (3.6)
\[\left|\Psi\right|^{2}_{\phi_{t}}=\int\frac{dkdc}{(2\pi)^{2}}\left|\beta(k,c) \right|^{2}. \tag{4.16}\]
That is, the \(g_{xx}\) and \(\phi_{t}\) clocks lead to the same norm. This is to be expected, given the inverse proportionality of the functions. However, the integrals used to compute expectation values in the \(\phi_{t}\) norm are not as easy to evaluate as when using the \(g_{xx}\) clock, hence our preference for that choice in this work.
## 5 The Boundary Partition Function
From the AdS/CFT correspondence, we expect our theory of the RN-AdS bulk to be dual to some quantum theory living on the AdS boundary. In particular, the semi-classical bulk wavefunctions (3.6) constructed from (2.7) should be related to some partition function living on the AdS boundary \(g_{xx}\rightarrow\infty\). As noted in the introduction, the Hamilton-Jacobi function \(S\) should control the holographic renormalization group flow, with the arguments of \(S\) playing the role of coupling in the dual quantum theory. Here, we make that link explicit by studying the energy flow of the dual theory, and show that by running the coupling \(g_{xx}\) we move our partition function into the bulk and obtain a Wheeler DeWitt state. Equivalently, by evolving the Wheeler DeWitt state along \(g_{xx}\) slices to the boundary, we recover the partition function.
The holographic framework we consider is analogous to [24]. Specifically, the standard holographic relation tells us that the partition function of the Lorentzian quantum field theory (QFT) on the AdS boundary is
\[Z_{\rm QFT}\left[\gamma,A_{\gamma}\right]=\int\mathcal{D}g\mathcal{D}Ae^{iI[g, A]+iS_{\rm ct}\left[\gamma\right]}, \tag{5.1}\]
where the path integrals are over bulk metrics \(g\) that are asymptotically AdS with conformal boundary metric \(\gamma\), and over vector potentials \(A\). \(A_{\gamma}\) is the value of the Maxwell field at the boundary. In our minisuperspace approximation, \(\gamma_{tt}=g_{tt}\) and \(\gamma_{xx}=\gamma_{yy}=g_{xx}\). Here, \(I[g,A]\) is the bulk action and \(S_{\rm ct}[\gamma]\) a boundary counterterm action to cancel the divergence in the action at the boundary [32; 33]
\[S_{\rm ct}[\gamma]=4\int dx^{3}\frac{\sqrt{-\gamma}}{\Delta t\left(\Delta x \right)^{2}L}=\frac{4\sqrt{-g_{tt}}g_{xx}}{L}. \tag{5.2}\]
A difference between our setup and that of [24] is that, as motivated in Section 4, we will use \(g_{xx}\) as our clock. The AdS boundary lives at \(g_{xx}\to\infty\), and we will consider the holographic renormalization group flow that evolves as a function of \(g_{xx}\). That is, we start with a QFT for a finite \(g_{xx}\), and then obtain a conformal field theory (CFT) in the limit that \(g_{xx}\to\infty\). As in [24], there is no need to remove a conformal factor from the metric, as conformal invariance at the AdS boundary constrains observables here.
If we wish to make more explicit our definition of the boundary partition function in terms of a trace over states in the boundary theory, we should first understand how imbuing the black hole with charge affects the dual QFT. The leading order contribution to (5.1) is the classical solution, upon which \(I[g,A]\) is precisely the Hamilton-Jacobi function. In Section 2, we found several classically equivalent Hamilton-Jacobi solutions, which are each described by a different choice of constants from each pair \(\{k_{0},\epsilon\}\) and \(\{c_{0},\mu\}\). To determine which of these will be most convenient for describing our dual theory, we will take advantage of the AdS/CFT correspondence and draw some intuition from the bulk. Classically, the RN-AdS bulk is described by its mass and charge, which would suggest that \(\epsilon\) and \(c_{0}\) should specify the dual quantum theory. We therefore want their conjugate variables \(k_{0}\) and \(\mu\) to be variables in our dual partition function, so (2.20c) is a useful form of the Hamilton-Jacobi function for us to use. Thus, with the addition of the counterterms (5.2), we obtain
\[\begin{split}\log Z_{\rm QFT}\left[g_{tt},g_{xx},\phi_{t};k_{0}, \mu_{0}\right]=&-2ie^{-k_{0}}\sqrt{g_{xx}}\left(e^{k_{0}}\sqrt{-g _{tt}}-\sqrt{g_{xx}}/L\right)^{2}\\ &-\frac{i}{2}e^{k_{0}}\left(\mu_{0}^{2}-\phi_{t}^{2}\right)\sqrt {g_{xx}}.\end{split} \tag{5.3}\]
We see explicitly that this partition function depends not only on the gauge field \(\phi_{t}\) and the boundary data \(g_{tt}\) and \(g_{xx}\), but also on the classical parameters \(k_{0}\) and \(\mu_{0}\). It obeys the scale invariance
\[\log Z_{\rm QFT}[g_{tt}e^{-\frac{4}{3}k},g_{xx}e^{\frac{2}{3}k},\phi_{t}e^{- \frac{2}{3}k};\mu_{0}e^{-\frac{2}{3}k},k_{0}+k]=\log Z_{\rm QFT}[g_{tt},g_{xx},\phi_{t};\mu_{0},k_{0}]. \tag{5.4}\]
The energy density of the dual field theory is defined in the usual way, by the momentum conjugate to \(g_{tt}\)
\[\sqrt{-\gamma}\langle T_{t}^{t}\rangle_{\rm QFT}=-2i\gamma_{tt}\frac{\partial \log Z_{\rm QFT}}{\partial\gamma_{tt}}=4\sqrt{-g_{tt}g_{xx}}(\sqrt{g_{xx}}/L-e ^{k_{0}}\sqrt{-g_{tt}}). \tag{5.5}\]
To obtain the CFT limit, we eliminate \(g_{tt}\) using (2.9), and expand in \(g_{xx}\to\infty\) to obtain
\[\lim_{g_{xx}\to\infty}\sqrt{-\gamma}\langle T_{t}^{t}\rangle_{\rm QFT}=\epsilon. \tag{5.6}\]
We recover, as in [24], that the energy density of the black hole \(\epsilon\) is the energy density of the dual theory. Indeed, from [24] we expect the conjugate variable \(k_{0}\) to play a role analogous to time in the CFT limit. To learn more about our dual theory, it will be instructive to study the dependence of \(\log Z_{\rm QFT}\) on \(k_{0}\). In particular, observe that
\[-i\frac{\partial\log Z_{\rm QFT}}{\partial k_{0}}= \frac{1}{2}\sqrt{g_{xx}}\left(e^{k_{0}}\left(-f_{0}^{2}+\phi_{t}^ {2}+4g_{tt}\right)+4g_{xx}e^{-k_{0}}/L^{2}\right) \tag{5.7}\] \[= \epsilon+c_{0}\mu_{0}\] \[= \epsilon-\mu_{B}Q.\]
In the second line we have used the classical solutions for \(g_{tt}\) and \(\phi_{t}\). We recognise from (5.6) that \(\epsilon\) is the energy density in the \(g_{xx}\to\infty\) limit. In the third line we replaced \(c_{0}\mu_{0}\) with the boundary charge and chemical potential from (2.17) and (2.18). From (5.7) it appears that our dual theory is characterised by an energy and a charge. That is, beyond the classical regime we expect
\[-i\frac{\partial\log Z_{\rm QFT}}{\partial k_{0}}=H_{\rm QFT}-\mu_{B}Q_{\rm QFT}. \tag{5.8}\]
Here, \(H_{QFT}=\epsilon\) and \(Q_{\rm QFT}=Q\) in the classical limit. We therefore expect
\[Z_{\rm QFT}[g_{tt},g_{xx},\phi_{t};k_{0},f_{0}]={\rm Tr}\left(e^{ik_{0}\left( H_{QFT}[g_{tt},g_{xx},\phi_{t}]-\mu_{B}Q_{QFT}\right)}\right), \tag{5.9}\]
to be a suitable partition function of the boundary theory.
All of our discussion so far has only considered the classical limit of (5.1). To further justify our claim (5.9), we should consider (5.1) beyond the classical regime. Recall that the path integral over \(e^{iI[g,A]}\) is usually associated with the wavefunction of the gravitational bulk [34]. Therefore, when the boundary counterterms are subtracted out, in the semiclassical regime the partition function (5.1) should obey the Wheeler DeWitt equation (3.4). In particular, this means that in the semiclassical regime we can approximate the partition function as
\[Z_{\rm QFT}[g_{tt},g_{xx},\phi_{t};\beta]=e^{4i\sqrt{-g_{tt}}g_{xx}/L}\Psi \left(g_{tt},g_{xx},\phi_{t};\beta\right), \tag{5.10}\]
where \(\Psi\) is a solution to (3.4). Note that if we rearrange (5.10) for \(\Psi\), that is
\[\Psi\left(g_{tt},g_{xx},\phi_{t};\beta\right)=e^{-4i\sqrt{-g_{tt}}g_{xx}/L}Z_ {\rm QFT}[g_{tt},g_{xx},\phi_{t};\beta], \tag{5.11}\]
we could reinterpret the Wheeler DeWitt solution as a deformation of a QFT partition function. Or indeed, treating \(g_{xx}\) as a clock, as a deformation of a QFT living on the
AdS boundary as we evolve away from the \(g_{xx}\to\infty\) limit. This interpretation could be linked to studies of \(T^{2}\) deformations of the boundary theory in AdS/CFT [35; 36; 37], or indeed the explicit specification of a bulk state in terms of a boundary theory in [38].
To explicitly compute (5.10), we need to use a Wheeler DeWitt state \(\Psi\). In Section 3.2, we discussed how to construct a Wheeler DeWitt state (3.6) from a basis of solutions (3.5) weighted by some function \(\beta(k,c)\). What we want to do here is slightly different; as discussed earlier, (2.20c) is a more natural representation of the Hamilton-Jacobi function to identify with our partition function. We will therefore perform a Fourier transform on the \(c\) integral in (3.6) to recast our states in the semiclassical basis \(e^{iS_{k,\mu}}\). To do so, we will assume that our wavepackets are separable. That is, \(\beta(k,c)=\beta_{k}(k)\beta_{c}(c)\) and \(\alpha(\epsilon,\mu)=\alpha_{\epsilon}(\epsilon)\alpha_{\mu}(\mu)\). Performing the Fourier transform by stationary phase, we recover
\[\Psi\left(g_{tt},g_{xx},\phi_{t}\right)\sim\int dk\,d\mu\,\beta_{k}(k)\alpha_{\mu}(\mu)e^{iS_{g_{tt},g_{xx},\phi_{t};k,\mu}}. \tag{5.12}\]
Here, \(\sim\) means up to some prefactor. Therefore, in the semiclassical regime, from (5.10) and (5.12) our claim (5.9) is that the partition function becomes
\[Z_{\rm QFT}\left[g_{tt},g_{xx},\phi_{t};\beta\right]\sim\int dkdf\beta_{k}(k) \alpha_{\mu}(\mu){\rm Tr}\left(e^{ik\left(H_{\rm QFT}-\mu\rho_{\rm QFT} \right)}\right). \tag{5.13}\]
Note that here we use the charge density \(\rho_{\rm QFT}=e^{k_{0}}Q_{\rm QFT}\). This is because it is more natural to build our wavepackets in \(f\) than \(\mu\), given the arguments of the Hamilton-Jacobi function. For the Gaussian wavepacket (3.12), (5.13) becomes
\[Z_{\rm QFT}\sim{\rm Tr}_{\epsilon,c}\left(e^{-(H_{\rm QFT}-\epsilon_{0})^{2 }/(2\Delta_{k}^{2})-\Delta_{c}^{2}(c-c_{0})^{2}/2}e^{ik_{0}\left(H_{QFT}-\mu_{ 0}\rho_{QFT}\right)}\right). \tag{5.14}\]
We can now assess the validity of the claim (5.9) by explicitly evaluating (5.10) and comparing it to (5.14). We evaluate (5.10) by making a stationary phase approximation and taking the \(g_{xx}\to\infty\) limit to obtain
\[\begin{split}\lim_{g_{xx}\to\infty}Z_{\rm QFT}\sim\int d\epsilon\,dc&\exp\left\{-\frac{\Delta_{c}^{2}}{2}\left(c-c_{0}\right)^{2}-\frac{\left(\epsilon-\epsilon_{0}\right)^{2}}{2\Delta_{k}^{2}}+i\epsilon\log\left(e^{k_{0}}L\frac{\sqrt{-g_{tt}}}{\sqrt{g_{xx}}}\right)\right\}\\ &\times\exp\left\{i\left(c\phi_{t}+\frac{\epsilon^{2}L}{8\sqrt{-g_{tt}}g_{xx}}+\frac{c^{2}L\sqrt{-g_{tt}}}{2g_{xx}}\right)\right\},\end{split} \tag{5.15}\]
up to a prefactor which is not important here. The integrals in (5.15) are identified with the trace in (5.14), and the Gaussian terms are identical. The third term in the first line of (5.15) is due to the smearing of \(k\) around \(k_{0}\) and changes the sign of the \(\epsilon^{2}\) term, as noted in the neutral case [24]. The term linear in \(\phi_{t}\) arises from an analogous
blurring of \(\mu\) around \(\mu_{0}\), and also changes the sign of the \(c^{2}\) term. As in the neutral case, the \(\epsilon^{2}\) term is the scaling for the density of states in a \(2+1\) dimensional CFT. To interpret the \(c^{2}\) term, we compute the current associated with the electromagnetic potential \(A\). Classically, in the \(dt\) direction this is
\[J_{t}=\frac{\pi_{\phi}^{2}}{2g_{xx}^{2}}, \tag{5.16}\]
so the \(c^{2}\) term in (5.15) is identified with the energy density \(\sqrt{-\gamma}\langle J_{t}\rangle\) of the Maxwell field. Finally, to interpret the term linear in \(\phi_{t}\), we compute
\[-i\lim_{g_{xx}\rightarrow\infty}\frac{\partial\log Z_{\text{QFT}}}{\partial \phi_{t}}=\langle c\rangle=-\langle\rho\rangle. \tag{5.17}\]
Here, we have an expectation value because we are taking a trace of the boundary theory ensemble. The implication of (5.17) is that the bulk gauge field is dual to the charge density in the boundary theory. This recovers the standard result of the holographic dictionary [26]. We have therefore identified each term in (5.15) with those in (5.14), validating our proposition for the dual theory ensemble.
To summarise, what we have done here is to demonstrate, at a classical and semiclassical level, that Wheeler DeWitt states of the RN-AdS interior can be constructed from the holographic renormalization group flow in \(g_{xx}\) of a partition function that lives on the AdS boundary. The Gaussian wavepacket localises to a fixed energy and charge, suggesting that the partition function of the dual theory is specified by those quantities.
## 6 Black hole thermodynamics
Having identified the dual theory as being specified by a fixed energy and charge, we expect to be able to construct a similar thermodynamic picture for the bulk theory. In particular, we expect that at the black hole horizon, Wheeler DeWitt states should know about the black hole thermodynamics. That quantum states of a gravitational bulk should know about horizon thermodynamics has long been appreciated [39]; what we do here is explicitly recover these thermodynamics from our semiclassical states.
To recover the bulk thermodynamics, we follow a similar procedure to [25]. There, it was observed that it is natural to average over a Wheeler DeWitt state such as (3.14) by integrating over a metric function. This averaging has the practical benefit of removing a delta function. Physically, this averaging corresponds to fixing the gravitational gauge redundancy represented by the Wheeler DeWitt equation. In particular,
in [25] the redundancy was fixed by fixing the trace of the extrinsic curvature \(K\), motivated by the observations of [23; 40; 41]. This is achieved by Fourier transforming the wavefunction to a basis of extrinsic curvatures rather than metric functions, and then averaging. The trace \(K\) is proportional to \(\pi_{v}\), the momentum conjugate to the volume \(v=\sqrt{-g_{tt}}g_{xx}\). The Fourier transform is therefore taking us from wavefunctions \(\Psi\left(v,[h],\phi_{t}\right)\) to \(\tilde{\Psi}\left(\pi_{v},[h],\phi_{t}\right)\), where \([h]=g_{tt}/g_{xx}\) is the conformal class of the induced metric. The transform is implemented by adding a term proportional to \(iv\pi_{v}\) in the exponent of (3.14). As in [25], we will add the Gibbons-Hawking boundary term
\[2\int d^{3}x\sqrt{h}K=g_{tt}\pi_{tt}+g_{xx}\pi_{xx}, \tag{6.1}\]
and consider the partial integration
\[\tilde{\Psi}\left(g_{xx},\phi_{t}\right)=\int dg_{tt}e^{-i\left(g_{tt}\left< \pi_{tt}\right>+g_{xx}\left<\pi_{xx}\right>\right)}\Psi\left(g_{tt},g_{xx}, \phi_{t}\right). \tag{6.2}\]
We integrate over \(g_{tt}\) because \(g_{xx}\) is monotonic from the AdS boundary to the singularity. We will in particular be interested in evaluating (6.2) at the black hole horizon \(g_{xx}(z_{h})\). It is worth noting that in this case, (6.2) is equivalent to averaging over a state in a gravitational theory with
\[I_{\text{GR}}\left[g,A\right]=\int d^{4}x\sqrt{-g}(R+6/L^{2})+2\int d^{3}x \sqrt{h}\left(K_{I}-K_{h}\right), \tag{6.3}\]
where \(K_{I}\) and \(K_{h}\) are the trace of the extrinsic curvature at the infinite boundary and the horizon respectively. This is in the spirit of the original Gibbons-Hawking procedure [39].
Integrating over \(g_{tt}\) only removes one of the delta functions from (3.14). That is, we still have a remaining distribution in \(\phi_{t}\). We proceed by integrating out \(\phi_{t}\), obtaining an effective field theory on a \(g_{xx}\) slice:
\[\tilde{\Psi}\left(g_{xx}\right)=\int d\phi_{t}\int dg_{tt}e^{-i\left(g_{tt} \left<\pi_{tt}\right>+g_{xx}\left<\pi_{xx}\right>\right)}\Psi\left(g_{tt},g_{ xx},\phi_{t}\right). \tag{6.4}\]
Note that we are interested in \(\tilde{\Psi}\) at the black hole horizon \(g_{xx}=g_{xx,+}\), where \(\phi_{t}=0\) as per (2.19). We therefore don't lose any information by performing the integral over \(\phi_{t}\). Evaluating (6.4) on the solution (3.14) we obtain
\[\begin{split}\tilde{\Psi}\left(g_{xx,+}\right)=&\exp\left\{i\left[e^{-k_{0}}S_{h}T_{h}-(\epsilon+c_{0}\mu_{0})\right]\right\}\\ =&\exp\left\{i\left[e^{-k_{0}}S_{h}T_{h}-\epsilon+\mu_{B}Q\right]\right\}.\end{split} \tag{6.5}\]
Here we defined the Bekenstein-Hawking entropy
\[S_{h}=A_{h},\text{ where }A_{h}=\frac{4\pi}{z_{h}^{2}}=4\pi g_{xx,+}, \tag{6.6}\]
and the black hole temperature
\[T_{h}=\frac{1}{4\pi}\left|\frac{\partial f(z)}{\partial z}\right|_{z=z_{+}}= \frac{1}{4\pi}e^{2k_{0}}\left|2\sqrt{g_{xx}}\frac{\partial g_{tt}}{\partial g _{xx}}\right|_{g_{xx}=g_{xx,+}}. \tag{6.7}\]
Note that, as in Section 5, we identified \(c_{0}\mu_{0}=-\mu_{B}Q\) as before. To proceed further, firstly observe from the metric (2.12) that \(\epsilon=Me^{-k_{0}}\) where \(M\) is the mass of the black hole. Additionally, whilst the charge \(Q\) in the bulk and boundary is the same, the chemical potential is redshifted when moving from the boundary to the horizon. The redshift is explicitly computed following [42]
\[\mu_{B}=\sqrt{-n_{a}n^{a}}\mu_{h}=e^{-k_{0}}\mu_{h}, \tag{6.8}\]
where \(n^{\mu}\) is the four-vector introduced above (2.18). Our averaged wavefunction (6.5) can therefore be expressed as
\[\tilde{\Psi}\left(g_{xx,+}\right)=\exp\left\{ie^{-k_{0}}\left[S_{h}T_{h}-M+\mu_{h}Q\right]\right\}=\exp\left\{-ie^{-k_{0}}W\right\}, \tag{6.9}\]
where \(W\) is the usual grand canonical thermodynamic potential. In the spirit of [25], if we set \(e^{-k_{0}}=-i/T_{h}\), the argument of the exponential reproduces the usual thermodynamic result of Euclidean quantum gravity [39]. Setting \(e^{-k_{0}}=-i/T_{h}\) is equivalent to performing a Wick rotation of the metric (2.12) to Euclidean signature. In Euclidean signature, the radial coordinate runs from the AdS boundary to the black hole horizon, where the Euclidean time circle shrinks to zero [26].
It is worth comparing the result (6.9) to that in [25], in which a de Sitter-Schwarzschild black hole was considered. There, the averaged and Fourier transformed wavepacket recovered exactly the horizon entropy \(\tilde{\Psi}=e^{S_{h}}\) under an analogous Wick rotation. This is because for a closed universe, such as an asymptotically de Sitter one, \(W=-TS\), and so \(-WT^{-1}\) is the entropy \(S\) exactly. We make this comparison to emphasise that it is the thermodynamic potential \(W\) which the Wheeler DeWitt state knows about.
## 7 Discussion
In this work, we have studied Wheeler DeWitt solutions as a probe of the interior of charged black holes. In particular, we have applied the framework of [24] to the
RN-AdS black hole, constructing Wheeler DeWitt states in the black hole interior and extending them to the interior beyond the Cauchy horizon and to the exterior. We found that close to the singularity, quantum fluctuations become significant and the state is unable to probe the singularity. Moving to the exterior and considering the AdS/CFT correspondence, we then found that our Wheeler DeWitt states were part of the renormalization group flow of a quantum theory living on the AdS boundary. The partition function of this dual theory was specified by an energy and charge. We then returned to the bulk description to recover this thermodynamic picture there. In particular, by applying the averaging procedure of [25] and making an appropriate Wick rotation, we recovered the grand canonical thermodynamic potential at the black hole horizon.
Having developed this framework, it is natural to consider what other rich dynamics of charged black holes it will allow us to explore. For example, it was shown in [11] that the inner Cauchy horizon of the RN-AdS black hole vanishes upon coupling a scalar field to the bulk theory. Whilst directly coupling a scalar field to our theory has not yet yielded a tractable Hamilton-Jacobi solution, a simpler starting point might be to treat the scalar field as a perturbation of the Hamilton-Jacobi solution as in [43]. Such a treatment would introduce sources in the dual theory living on the AdS boundary and thus affect the renormalization group flow.
Coupling to a scalar field could equally be used to introduce inhomogeneities around our minisuperspace approximation, which would be a first step to going beyond the semiclassical regime [44; 45]. Going beyond minisuperspace may be useful to better understand the result (4.14). That is, it would be interesting to learn whether perturbations around minisuperspace make it possible for a Wheeler DeWitt state to resolve the timelike singularity.
Finally, this work should be extended to answer similar questions in de Sitter space. In [25], the Wheeler DeWitt framework of [24] was applied to recover the dS-Schwarzschild solution and propose a framework for static patch holography. The dS-Schwarzschild solution is another example of a spacetime with two horizons and where \(g_{tt}\) is not monotonic, so cannot be used as a clock to probe the interior. Whether the analogous function to \(g_{xx}\) in [25] (\(R\)) can be used as a clock may be useful for exploring interior singularities in de Sitter, which are expected to be highly inhomogeneous [46]. Outside the black hole interior, charged de Sitter black holes have interesting properties which Wheeler DeWitt solutions could be used to explore semiclassically [47; 48].
## Acknowledgments
We are especially grateful to Sean Hartnoll for extensive feedback on an early draft of this paper. It is also a pleasure to thank Joseph Conlon, Fernando Quevedo, Ronak M Soni, Aron Wall and Zhenbin Yang for useful conversations. S.N. would like to thank DAMTP at the University of Cambridge and the University of Southampton for their hospitality while the majority of this work was being done. Finally, S.N. would like to specially thank Pam Kivelson and Steven Kivelson for their kind suggestions and warm encouragement. M.J.B. was supported by a Gates Cambridge Scholarship (#OPP1144). S.N. was supported by the China Scholarship Council-FaZheng Group at the University of Oxford.
|
2309.03890 | XpookyNet: Advancement in Quantum System Analysis through Convolutional
Neural Networks for Detection of Entanglement | The application of machine learning models in quantum information theory has
surged in recent years, driven by the recognition of entanglement and quantum
states, which are the essence of this field. However, most of these studies
rely on existing prefabricated models, leading to inadequate accuracy. This
work aims to bridge this gap by introducing a custom deep convolutional neural
network (CNN) model explicitly tailored to quantum systems. Our proposed CNN
model, the so-called XpookyNet, effectively overcomes the challenge of handling
complex-number data inherent to quantum systems and achieves an accuracy of
98.5%. Developing this custom model enhances our ability to analyze and
understand quantum states. However, first and foremost, quantum states should
be classified more precisely to examine fully and partially entangled states,
which is one of the cases we are currently studying. As machine learning and
quantum information theory are integrated into quantum systems analysis,
various perspectives and approaches emerge, paving the way for innovative
insights and breakthroughs in this field. | Ali Kookani, Yousef Mafi, Payman Kazemikhah, Hossein Aghababa, Kazim Fouladi, Masoud Barati | 2023-09-07T17:52:43Z | http://arxiv.org/abs/2309.03890v4 | XpookyNet: Advancement in Quantum System Analysis through Convolutional Neural Networks for Detection of Entanglement
###### Abstract
The application of machine learning models in quantum information theory has surged in recent years, driven by the recognition of entanglement and quantum states, which are the essence of this field. However, most of these studies rely on existing prefabricated models, leading to inadequate accuracy. This work aims to bridge this gap by introducing a custom deep convolutional neural network (CNN) model explicitly tailored to quantum systems. Our proposed CNN model, the so-called XpookyNet, effectively overcomes the challenge of handling complex-number data inherent to quantum systems and achieves an accuracy of 98.5%. Developing this custom model enhances our ability to analyze and understand quantum states. However, first and foremost, quantum states should be classified more precisely to examine fully and partially entangled states, which is one of the cases we are currently studying. As machine learning and quantum information theory are integrated into quantum systems analysis, various perspectives and approaches emerge, paving the way for innovative insights and breakthroughs in this field.
* September 2023
## 1 Introduction
In quantum mechanics, an extraordinary phenomenon known as quantum entanglement arises when two or more particles interact so that their quantum states become related [1]. This relation indicates that the particles become correlated and can no longer be described independently [2]. Any change made to one particle will be instantaneously reflected in the others, even if they are far apart [3]. Creating and increasing entanglement between arbitrary qubits plays an influential role in quantum algorithms and quantum information (QI) theory protocols, in which entanglement is a vital resource [4]. For instance, it excludes undesirable energy levels in quantum
annealing [5] and facilitates the exchange of quantum information over long distances [6]. It also provides conditions for transferring classical bits of information with fewer qubits [7].
The first step in creating and increasing entanglement is recognizing its existence and amount. In recent years, various entanglement detection criteria have been proposed [8]. Yet, the positive partial transpose (PPT) criterion determines entanglement only in \(2\otimes 2\) and \(2\otimes 3\) non-mixed bi-party states by indicating that a state is separable if the partial transpose of its density matrix is positive semi-definite [9]. Beyond these cases, some mixed states are entangled but still meet the PPT conditions; these are called bound entangled states, as they cannot be used to create a maximally entangled state through local operations and classical communication (LOCC), even though the reduction criterion has been practical here [10]. Moreover, Werner states are another instance in which PPT is violated [11].
Alternatively, concurrence, negativity, and relative entropy of entanglement (REE) are well-known measures for quantifying entanglement. For a density matrix, concurrence is the maximum of zero and the largest eigenvalue minus the sum of all the other eigenvalues [12]. Negativity is the sum of the negative eigenvalues of the density matrix's partial transpose [13]. REE measures a quantum system's uncertainty compared to the nearest separable state via the von Neumann entropy [14]. Similarly, the Entanglement of Formation (EoF) measures the level of entanglement required to generate a quantum state and represents the minimum average entanglement needed. EoF is computed by tracing out a subsystem and optimizing the entropy over all possible state decompositions [15]. Generally, EoF distinguishes entangled states from separable ones, even for mixed states, taking values between 0 and \(\log_{2}(d)\), where d is the dimension of the subsystem.
From three qubits onwards, deciding entanglement exactly becomes intractable, requiring more than polynomial time. Hence, entanglement witnesses are tools used to detect entanglement in quantum systems. A witness \(W\) is a Hermitian operator with a non-negative expectation value for all separable states, but it can have a negative expectation value for some entangled states [16]. A witness detects entanglement without fully characterizing the system or performing full tomography; however, owing to the exponential growth of the number of variables with the number of qubits, finding a suitable witness requires optimization in high-dimensional spaces. Entanglement witnesses can be optimized using machine learning, since learning models can quickly identify patterns in large datasets, making them well suited to this complex problem [17; 18]. Four illustrations in Fig. 1 show how some of the most commonly used division methods classify separable and entangled states.
Deep learning (DL) models have transformed research fields and impacted our daily lives due to their robustness and versatility. These widely used models can attain accurate results on almost any dataset, regardless of the intended application, as long as the data is encoded in a suitably simple representation. Furthermore, the model must be adapted to the data.
A link between learner models and QI has been extensively studied recently. Quantum neural networks detect entanglement and separability in multipartite quantum states using both discrete and continuous variables. Newly developed realignment criteria and generalized partial transposition criteria have led to the training of a neural network (NN) on bipartite and multipartite quantum systems [21]. The study of bound and noisy tripartite entanglement employs an NN with separable quantum states and a hidden mixing layer that encodes the classical probabilities
of mixed quantum states. This research determines the quantum channel capacity using an NN, witnesses W/GHZ entanglement, and examines entanglement behavior based on environmental properties [22]. Generative models and multilayer perceptrons construct separable states for comparison in bipartite and multipartite systems based on separable approximations of target states and noise thresholds. The algorithm uses an ansatz to find the nearest separable state, then establishes the boundaries of separability for entangled states with bound entanglement [23]. A novel method, combining a pseudo-Siamese network with a generative adversarial net, has been developed to detect entanglement. This technique reframes the problem as an anomaly detection task, achieving over 97.5% accuracy and investigating partial entanglement [24]. NNs also classify quantum states using Bell-type inequality for relative coherence entropy and supervised learning. The NN detects entanglement and predicts its properties. This method can be expanded to multiparty systems using Bell states in noisy channels [25].
All in all, there has not yet been a model with sufficiently rigorous accuracy for two-qubit data, and the design of DL models for QI applications remains relatively unexplored. Additionally, research on generating data beyond two qubits across entanglement categories needs to be more comprehensive, as it is currently limited to Bell-type data. Based on the findings of this study, a highly suitable customized model has been developed to serve as a benchmark. The model fits complex-number data from QI theory into a common framework. Furthermore, we investigate how purity affects the identification of states.
Section 2 discusses density matrices in QI theory, highlighting their application, classification, and detection of entanglement footprints in many-body quantum
Figure 1: State space is divided into two parts based on whether states are entangled or separable. (a) Adjustment of a witness as a linear hyperplane and its optimization failure. (b) Entanglement witness optimization approaches, including linear: from \(W_{1}\) to \(W_{1}^{\prime}\), and nonlinear: from \(W_{2}\) to \(W_{2}^{\prime}\)[19; 20]. (c) The convexity of the target space impedes precision, even when encircling the separable states with several witnesses. (d) Using simple learner models to improve the convex witnesses isolator (which often does not accurately cover the target space).
systems. Section 3 outlines the process of building a deep custom model from scratch and explores advanced techniques throughout its design and learning processes. Additionally, the preprocessing of quantum complex-number data is addressed. Section 4 outlines the methods of generating quantum states in simulation, outside of a laboratory. In Section 5, the model's performance is evaluated and scrutinized based on the data obtained in Section 4.
## 2 Quantum Entanglement Formation
Quantum systems involving more than one qubit, known as multi-party quantum systems, can exhibit quantum entanglement, where each qubit interacts with the others. The density matrix is a Hermitian matrix, which allows us to expand it in terms of its eigenvectors and eigenvalues. For a pure quantum state \(|\psi\rangle\) it is defined as \(\rho=|\psi\rangle\langle\psi|\). It serves as a mathematical representation of a multi-party quantum system, especially when the system is in a mixed state. Since there is no such thing as a perfectly pure state in reality, and because the coherence of the system is reduced by noise and interaction with the environment, the resulting state is a mixed quantum state. Density matrices enable us to calculate properties such as the entanglement and coherence of quantum systems [26]. For mixed states, they facilitate obtaining expectation values and the time evolution of the quantum system [27]. This leads to the transformation of the equation for the expectation value in pure states, initially given as \(\langle\hat{A}\rangle=\langle\psi|\hat{A}|\psi\rangle\), to \(\langle\hat{A}\rangle=\mathrm{Tr}(\hat{\rho}\hat{A})\). Similarly, the Schrodinger equation describing the time evolution of pure states, originally stated as \(\frac{d}{dt}|\psi(t)\rangle=\frac{1}{i\hbar}\hat{H}(t)|\psi(t)\rangle\), is transformed into \(\frac{d}{dt}\hat{\rho}(t)=\frac{1}{i\hbar}[\hat{H}(t),\hat{\rho}]\), where \(\hat{\rho}\) denotes the density matrix corresponding to \(|\psi\rangle\).
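As a small numerical illustration of the last point (a minimal numpy sketch, independent of any particular toolkit), both expectation-value expressions agree on a pure state:

```python
import numpy as np

# Pauli-X observable and the pure state |psi> = (|0> + |1>)/sqrt(2).
A = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())            # density matrix |psi><psi|

exp_pure = psi.conj() @ A @ psi            # <psi|A|psi>
exp_mixed = np.trace(rho @ A)              # Tr(rho A)
print(np.allclose(exp_pure, exp_mixed))    # True (both equal 1 for this state)
```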
Density matrices are viewed as raw but valuable data that contain latent patterns [28, 29, 30]. As these matrices are fed into DL models and passed through their convolutional layers (which apply a set of filters), the models analyze the density matrices thoroughly to decipher hidden patterns within them. The models are trained under supervision, with labels describing the task they need to perform. In this work, the labeling was conducted according to the EoF criterion, for which an accurate implementation was readily available.
A positive value of EoF between two systems indicates entanglement between them. To determine the EoF for bipartite systems, the Schmidt decomposition of the quantum state must be computed. The EoF is then obtained by evaluating the von Neumann entropy of each component and summing over the probabilities of the respective states:
\[EoF=\inf{[\sum_{i}p_{i}S(\rho_{i})]}. \tag{1}\]
Here, the infimum is taken over all possible decompositions of the state into a probabilistic mixture of pure states. Additionally, to calculate the von Neumann entropy of the reduced density matrix for subsystem A or B (obtained by tracing out the other subsystem), the following equation is used:
\[S(\rho_{A})=-\mathrm{Tr}[\rho_{A}\log_{2}{(\rho_{A})}], \tag{2}\]
where \(\rho_{A}\) represents the reduced density matrix of subsystem A.
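For concreteness, the sketch below evaluates Eqs. (1)-(2) for simple two-qubit states. It assumes the `partial_trace`, `entropy`, and `entanglement_of_formation` helpers of Qiskit's `quantum_info` module (the library our labeling in Section 4 relies on); for a pure state, the EoF reduces to the entropy of the reduced density matrix.

```python
import numpy as np
from qiskit.quantum_info import (Statevector, partial_trace, entropy,
                                 entanglement_of_formation)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, so EoF should be 1.
bell = Statevector(np.array([1, 0, 0, 1]) / np.sqrt(2))

# Eq. (2): von Neumann entropy of the reduced state of subsystem A.
rho_a = partial_trace(bell, [1])        # trace out qubit B
print(entropy(rho_a, base=2))           # ~1.0

# Library EoF for the same state, and for a product state |0>|+> (EoF = 0).
print(entanglement_of_formation(bell))                          # ~1.0
print(entanglement_of_formation(Statevector.from_label("0+")))  # ~0.0
```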
## 3 Designing Custom Model
The DL models are graphs of convolutional layers; each consists of several kernels. The presence of multiple kernels in each layer, along with the diffusion of information from one layer to the next, leads to the extraction of patterns from the lowest-level features up to the highest-level features. Through these processes, complicated patterns within a matrix can ultimately be extracted [31].
In DL, the 3D convolution operation is expressed as follows [32]:
\[Y_{i,j,k}=\sigma\left(b+\sum_{p=0}^{P-1}\sum_{q=0}^{Q-1}\sum_{r=0}^{R-1}X_{i+p, j+q,k+r}\times W_{p,q,r}\right), \tag{3}\]
where \(Y_{i,j,k}\) is the output element at position \((i,j,k)\), \(X_{i+p,j+q,k+r}\) is the input element at position \((i+p,j+q,k+r)\), \(W_{p,q,r}\) is the weight element at position \((p,q,r)\), \(b\) is the bias term, and \(\sigma\) is the activation function. In this formula, \(P\), \(Q\), and \(R\) are the dimensions of the convolutional kernel or filter. The sum over \(p\), \(q\), and \(r\) represents the convolution operation, where the kernel is slid over the input tensor and multiplied element-wise with the corresponding elements in the input tensor. The bias term is added to each output value, and the activation function is applied to the result. The activation function introduces non-linearity into the output of the convolutional layer, allowing the network to learn complex patterns between the input and output.
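To make Eq. (3) concrete, the following minimal numpy sketch evaluates the operation directly for a single kernel (valid padding, stride 1), with a ReLU standing in for \(\sigma\); deep-learning libraries implement the same sum far more efficiently.

```python
import numpy as np

def conv3d_single(X, W, b=0.0, act=lambda t: np.maximum(t, 0.0)):
    """Direct evaluation of Eq. (3) for one kernel (valid padding, stride 1)."""
    P, Q, R = W.shape
    I, J, K = (s - t + 1 for s, t in zip(X.shape, W.shape))
    Y = np.empty((I, J, K))
    for i in range(I):
        for j in range(J):
            for k in range(K):
                # sum_{p,q,r} X[i+p, j+q, k+r] * W[p,q,r], plus bias, then sigma
                Y[i, j, k] = act(b + np.sum(X[i:i+P, j:j+Q, k:k+R] * W))
    return Y

X = np.random.rand(4, 4, 2)   # e.g. a two-qubit density matrix split into 2 channels
W = np.random.rand(2, 2, 2)   # a 2x2x2 kernel
print(conv3d_single(X, W).shape)   # (3, 3, 1)
```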
The intricate nature of entanglement detection necessitates a high-capacity model with many layers. The longer the sequence of layers, the greater the chance that the gradient value of the loss function will approach zero [33]. This major problem, called vanishing gradients, is caused by activation functions that map input values to small intervals. In order to train the model effectively, the activation function must separate and focus on important information. Using Leaky ReLU as a solution to vanishing gradients, a slight negative slope is introduced for values below zero, thus allowing for continued learning [34]. In this way, we will be able to resolve the over-fitting problem and the vanishing gradient problem to a large extent.
The Leaky ReLU function is defined as follows:
\[f(x)=\begin{cases}ax&;x<0\\ x&;\text{else}\end{cases}, \tag{4}\]
where \(a\) represents a small positive constant; the function applies a linear transformation with a slope of \(a\) to the input when \(x\) is less than zero.
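A minimal numerical sketch of Eq. (4); the key point is that the gradient below zero is \(a\) rather than exactly zero, which is what keeps backpropagation alive in deep stacks.

```python
import numpy as np

def leaky_relu(x, a=0.01):
    # Eq. (4): slope a for negative inputs, identity otherwise.
    return np.where(x < 0, a * x, x)

print(leaky_relu(np.array([-2.0, -0.5, 0.0, 1.5])))   # [-0.02 -0.005  0.  1.5]
```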
Optimizing weights in the convolution layer kernels leads to improved accuracy for DL models. The back-propagation algorithm is utilized for this optimization process, where the weights are adjusted based on a loss function called categorical cross entropy (CCE). This function is calculated for a given number of classes, denoted as \(C\), using the following equation:
\[L_{CCE}=-\sum_{i=1}^{C}y_{i}\log\left(\hat{y}_{i}\right) \tag{5}\]
In this equation, \(y_{i}\) represents the actual label for class \(i\), and \(\hat{y}_{i}\) denotes the predicted SoftMax probability for class \(i\).
However, when entanglement detection is the sole objective, or when the system consists of only two qubits, equation (5) can be simplified to a binary classification
problem by setting \(C=2\). The modified equation becomes:
\[-\sum_{i=1}^{2}y_{i}\log\left(\hat{y_{i}}\right)=-y_{1}\log\left(\hat{y_{1}} \right)-y_{2}\log\left(\hat{y_{2}}\right) \tag{6}\]
This equation can be further simplified to represent the formula for binary cross entropy (BCE) as follows:
\[L_{BCE}=-y\log\left(\hat{y}\right)-\left(1-y\right)\log\left(1-\hat{y}\right) \tag{7}\]
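The reduction from Eq. (5) to Eq. (7) is easy to verify numerically; the sketch below encodes the two-class ("separable" vs "entangled") case with a one-hot label.

```python
import numpy as np

def cce(y, y_hat):
    # Eq. (5): categorical cross-entropy for a one-hot label y.
    return -np.sum(y * np.log(y_hat))

def bce(y, y_hat):
    # Eq. (7): binary cross-entropy for a scalar label and probability.
    return -y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)

y_onehot = np.array([0.0, 1.0])        # class 1 ("entangled")
p = np.array([0.1, 0.9])               # predicted SoftMax probabilities
print(np.isclose(cce(y_onehot, p), bce(1.0, 0.9)))   # True
```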
The DL model can be designed and developed by incorporating these components and assembling them in the forward flow, backpropagation, and selection processes. The results of this analysis provide valuable insights for further refinement and optimization.
### Deep Convolution Model
DL requires layers to be appropriately arranged and hyper-parameters to be selected wisely, such as the number of kernels and filters in each layer. We arranged different kernel sizes as shown in Fig. 2. Selection and arrangement of kernel sizes are based on the area that a kernel covers within the input tensor, e.g., a \(2\times 2\) kernel identifies the patterns between values in dimensions \(2\times 2\times N\). Typically, a smaller kernel identifies a more detailed pattern, whereas a larger kernel identifies a broader pattern [35].
Thus, the model's base path gradually reduces the size of the tensors, while the parallel paths (the branched part of the model) process the tensor's broader structure. By combining these paths, we can identify both general and specific patterns within the density matrix. The proposed model can determine entanglement, its amount, and which qubits are entangled in systems of three or more qubits. This is achieved by changing the last layer's activation function. As we examine criteria like EoF and PPT, we realize there may be additional connections between density matrix values that are not captured by such analytical criteria.
Thus, to detect _spooky action at a distance_, XpookyNet is designed to extract patterns from density matrix values using dexterous learning techniques. However tempting it may be to simplify the complex-number form of raw density matrices through measurements, we do not alter their original form [36]. Instead, we divide each density matrix into two matrices, one consisting of the real parts and the other of the imaginary parts.
Figure 2: XpookyNet’s overall scheme is as follows. The activation function of its last layer varies depending on whether the model is intended to detect entanglement or to predict the amount of entanglement. It also varies when categorizing the presence of entanglement among qubits.
### Improving Deep Model
We developed our model with more layers, drawing inspiration from well-known convolutional models such as VGGNET, INCEPTION, and ResNet, which have achieved unprecedented accuracy in various databases [37, 38, 39]. This model contains parallel layers with expanded kernel sizes and layers aligned in rows, as shown in Fig. 2. The main artery of the layers, depicted in blue, is in charge of extracting patterns by gradually reducing the dimensions of the data. Since the expansion of two-qubit data results in an input tensor with dimensions of \(4\times 4\times 2\), which is not large, the Max Pooling layer is not used. Parallel arteries, depicted in green, play a significant role in extracting larger patterns. A more efficient set can be produced by replacing these simple convolutional layers with "separable convolutional layers", complemented with Batch Normalization (BN).
_Batch Normalization_ is a technique used in DL to improve the performance, accuracy, and speed of DL models. When BN is applied to DL models, they become more stable. This is because the output of the layers becomes less sensitive to changes in the input [40].
Furthermore, separable convolutions make CNN training easier and less prone to overfitting, reduce computational complexity (allowing faster training and inference), and can improve accuracy by capturing more sophisticated features. Nevertheless, they may not be as effective as standard convolutions when dealing with low-dimensional tensors. Additionally, they may require more layers than traditional CNNs to achieve similar accuracy levels [41].
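To make these ingredients concrete, the sketch below assembles a small branched CNN in Keras for the \(4\times 4\times 2\) input tensor introduced in Section 3.3. The layer counts, filter numbers, and kernel sizes are illustrative assumptions only and do not reproduce the exact XpookyNet topology of Fig. 2.

```python
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(4, 4, 2))

# Base path (blue in Fig. 2): small kernels, shrinking the spatial size without pooling.
x = layers.Conv2D(32, 2, padding="valid")(inp)            # 4x4 -> 3x3
x = layers.BatchNormalization()(x)
x = layers.LeakyReLU(0.01)(x)
x = layers.Conv2D(64, 2, padding="valid")(x)              # 3x3 -> 2x2
x = layers.LeakyReLU(0.01)(x)

# Parallel branch (green in Fig. 2): a larger, separable kernel seeing broader patterns.
b = layers.SeparableConv2D(32, 3, padding="valid")(inp)   # 4x4 -> 2x2
b = layers.BatchNormalization()(b)
b = layers.LeakyReLU(0.01)(b)

merged = layers.Concatenate()([layers.Flatten()(x), layers.Flatten()(b)])
out = layers.Dense(1, activation="sigmoid")(merged)       # entangled vs separable

model = Model(inp, out)
model.summary()
```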
### Constructing Extended-Tensor
QI relies on complex numbers, but standard DL models cannot accommodate them directly. Inspired by models that discover patterns within images defined by three channels (red, green, and blue, known as RGB), our method converts the density matrices into three-dimensional tensors (as indicated in Fig. 3). This lets us process quantum data with typical and advanced models designed for images or similar datasets, instead of limited, eccentric, and intricate custom models. As illustrated in Fig. 3, we divide each density matrix into two matrices: one containing the real parts and the other the imaginary parts. Finally, we stack them to obtain the tensor that forms the input data.
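A minimal sketch of this conversion: the real and imaginary parts of a \(4\times 4\) density matrix are stacked as two channels, in direct analogy with the RGB channels of an image.

```python
import numpy as np

def to_tensor(rho):
    """Stack the real and imaginary parts of a density matrix as two channels."""
    rho = np.asarray(rho, dtype=complex)
    return np.stack([rho.real, rho.imag], axis=-1)

# Example: the Bell-state density matrix (|00> + |11>)(<00| + <11|)/2.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(to_tensor(rho).shape)   # (4, 4, 2): two real-valued channels
```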
### Training Convolution Model
Optimization, a fundamental component of DL algorithms, updates the weights and biases of the model according to the loss function so that the loss reaches its minimum.
In the proposed model, the stochastic gradient descent (SGD) optimizer with a momentum of \(0.9\) is used to explore the loss-function landscape and find the most accurate model with the minimum loss. At the end of each epoch, the current model is compared with the best model so far, and the more accurate one is saved. When the loss function reaches a plateau, the optimizer's learning rate is adjusted so that small hollows in the loss landscape can be detected and the optimization can converge to its minimum. The learning rate is reduced to one-tenth of its value each time training hits a plateau.
In addition, the parallel paths benefit the model by passing the data through larger kernel sizes and preventing vanishing gradients with shortcuts in
backpropagation. Preprocessing the quantum data by balancing and shuffling is a simple but highly effective way to improve results. Entangled states are more likely to be generated at random, and this probability rises further as the number of qubits increases. However, feeding the model all samples of one label followed by all samples of another label causes model bias and reduces generalization.
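The training recipe above translates into a few lines of Keras; the tiny stand-in model, the random data, and hyper-parameters such as the batch size are placeholders so that the sketch runs on its own.

```python
import numpy as np
import tensorflow as tf

# Stand-ins so the snippet is self-contained; replace with XpookyNet and the real dataset.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 2), tf.keras.layers.LeakyReLU(0.01),
    tf.keras.layers.Flatten(), tf.keras.layers.Dense(1, activation="sigmoid"),
])
x_train = np.random.rand(512, 4, 4, 2)
y_train = np.random.randint(0, 2, size=(512,))

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="binary_crossentropy", metrics=["accuracy"])

callbacks = [
    # Keep only the most accurate model seen so far.
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_accuracy",
                                       save_best_only=True),
    # Cut the learning rate to one-tenth when the validation loss plateaus.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1, patience=3),
]

model.fit(x_train, y_train, validation_split=0.1, epochs=5,
          batch_size=64, shuffle=True, callbacks=callbacks)
```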
## 4 Quantum Data Generation
### Two-qubit Entangled State
In order to generate two-qubit state data, we use the QuTiP library [42]; it provides random density matrices with complex-valued elements, ensuring that each matrix is Hermitian, positive semi-definite, and normalized to have unit trace. However, since a randomly generated bipartite state is approximately three times more likely to be entangled than separable, it is crucial that we store an equal number of matrices for each class in our dataset to prevent model bias. As a result of this simple action, the model's generalization improves, and as the generalization improves, so does the accuracy.
We generate one million \(4\times 4\) random density matrices as the dataset. This dataset contains 500,000 entangled and 500,000 separable samples. The matrices are labeled by calculating the EoF of each matrix using the Qiskit library [43]. The amount of entanglement returned by the EoF function is also stored.
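A condensed sketch of this generation-and-labeling loop is shown below; it assumes QuTiP's `rand_dm` for the random density matrices and Qiskit's `entanglement_of_formation` for the labels, and the loop size and the zero-EoF threshold are illustrative.

```python
import numpy as np
import qutip as qt
from qiskit.quantum_info import DensityMatrix, entanglement_of_formation

def balanced_two_qubit_dataset(n_per_class=100):
    data, labels, counts = [], [], {0: 0, 1: 0}
    while min(counts.values()) < n_per_class:
        rho = qt.rand_dm(4).full()                       # random 4x4 density matrix
        eof = entanglement_of_formation(DensityMatrix(rho, dims=(2, 2)))
        label = int(eof > 1e-10)                         # entangled iff EoF > 0
        if counts[label] < n_per_class:                  # keep the classes balanced
            counts[label] += 1
            data.append(np.stack([rho.real, rho.imag], axis=-1))
            labels.append(label)
    return np.array(data), np.array(labels)

x, y = balanced_two_qubit_dataset(50)
print(x.shape, y.mean())   # (100, 4, 4, 2)  0.5
```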
### Three-qubit Entangled States
Fig. 4(a) illustrates how the three-qubit state space is divided into categories; Appendix A describes each category. Besides generating entangled tripartite data, which only leads to the known states of the GHZ state, W state, and Graph state, generating \(B|AC\) partially entangled data is also challenging,
Figure 3: The process of converting complex number data to real number data provides simplicity in typical models.
as detailed in Appendix B. In two-qubit entanglement cases, partial entanglements \(A|BC\) and \(C|AB\) are only provided with the tensor product operator between a single state and a bipartite entangled state. Fig. 4(b) demonstrates the method of physically generating states which exert randomness by single random unitary operations \(U_{rand}^{A,B,C}\) and entanglement by \(U_{Ent.}\) operator. We create 250,000 density matrices for three-qubit states, divided into five balanced categories as a dataset, which takes only 168 seconds to prepare when running on a regular CPU. The states are entirely pure, but if we wish to generate less pure data, we must mix more states, which requires a longer time.
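For the pure tripartite states of Fig. 4(b), a QuTiP sketch could apply independent random single-qubit unitaries to an entangled reference state; here the GHZ state stands in for the output of \(U_{Ent.}\), which is an assumption about that operator rather than its exact definition.

```
import qutip

def random_ghz_like_state():
    """Apply U_rand^{A,B,C} (independent random single-qubit unitaries)
    to a GHZ state, mimicking the circuit of Fig. 4(b)."""
    ghz = qutip.ghz_state(3)                   # plays the role of U_Ent |000>
    u_rand = qutip.tensor(qutip.rand_unitary(2),
                          qutip.rand_unitary(2),
                          qutip.rand_unitary(2))
    psi = u_rand * ghz
    return psi * psi.dag()                     # pure-state density matrix
```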
## 5 Results Evaluation
We summarize the key statistics and trends using a table and a graph. Next, we evaluate the performance of our classification algorithm using a confusion matrix and identify areas where it might need improvement. Finally, we present a plot that visualizes the relationships between different variables in our dataset.
### Two-qubit Entanglement Detection
XpookyNet is tested with various approaches to arrive at the state-of-the-art model presented thus far. Because the design of customized learning models in QI has been sparse, it is imperative to know how effective each approach is in order to gain meaningful insight for practice.
We present in Table 1 the results of our tests, which comprised 10,000 two-qubit states. The table provides an overview of each of the approaches or their combination, its effect on the model's parameters, the required time to complete an epoch of learning, and, above all, the model's accuracy on the test data.
As is evident from comparing the accuracy, one can immediately notice the significant superiority of the convolutional model over the NN. However, achieving near 100% accuracy still requires advanced considerations.
Figure 4: An overview of three qubits: (a) An allegory of how the state space is divided into categories. (b) The quantum circuit used for preparing known tripartite entangled states incorporates the single random unitary operators \(U_{rand}^{A,B,C}\) and the entanglement operation \(U_{Ent.}\).

In comparing the three methods proposed separately to achieve higher accuracy, it is observed that reducing the learning rate on plateaus (ReduceLROnPlateau) is the most effective. The separable convolution and BN methods constantly interact and are more effective than branched models. Nevertheless, when these methods are combined two by two, the branched model along with ReduceLROnPlateau achieves the highest accuracy, even better than the combination of all methods. As mentioned, separable convolutions are not well suited to small tensors; due to the quadrupling of the tensor size for three qubits, using separable convolutions along with the other methods does yield an improvement there.
Aside from these factors, we designed the model to be highly accurate and to generalize well while maintaining a reasonable learning speed, so that it attains its highest accuracy in a relatively short time. XpookyNet achieves an accuracy of 98.53% within only 14 epochs; notably, this result is obtained on test data. Additionally, thanks to its multi-functional design, XpookyNet not only detects entanglement but also determines the degree of entanglement.
Observing and comparing the learning process of each configuration listed in the table, and the degree of improvement in accuracy, provides further insight. Fig. 5 shows the loss-function and accuracy diagrams separately for the simple and combined models. The shortcomings of the NN model compared to the convolutional models, and the leap caused by ReduceLROnPlateau, are evident at a glance in both the loss function and the accuracy. This is depicted in Fig. 5 (c) and (d), where the application of ReduceLROnPlateau at epochs 8 and 12 leads to a substantial reduction in the loss function. The model has an excellent start in the first epoch thanks to the separable convolution and BN layers; reducing the learning rate on plateaus, in turn, leads to better results.
### Three-qubit Entanglement Detection
The XpookyNet model was put to work on three-qubit data, with the only difference being that separable convolution layers were employed along with BN, instead of using convolution layers, as shown in the last case of Table 1.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
 & & & \multicolumn{3}{c}{Model Structure} \\ \cline{4-6}
Models & Max. ACC & Epoch Time & Number of Conv. Layers & Number of FC Layers & Number of Parameters \\ \hline
NN & 0.8325 & 19s44ms & 0 & 3 & 3,233 \\
Simple Conv. & 0.9500 & 51s55ms & 10 & 2 & 1,334,913 \\
Brch. & 0.9624 & 63s22ms & 14 & 2 & 2,015,521 \\
BN. Sep. & 0.9632 & 57s20ms & 10 & 2 & 1,080,263 \\
Plat. & 0.9789 & 48s33ms & 10 & 2 & 1,334,913 \\
**Brch. Plt.** & **0.9852** & **64s58ms** & **14** & **2** & **2,015,521** \\
Plt. BN. Sep. & 0.9749 & 60s47ms & 10 & 2 & 1,080,263 \\
Brch. BN. Sep. & 0.9664 & 75s62ms & 14 & 2 & 1,324,615 \\
Brch. Plt. BN. Sep. & 0.9761 & 75s95ms & 14 & 2 & 1,335,430 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Overview of the methods used in this study. The words Conv., Brch., Sep., and Plat. (Plt.) stand for Convolutional, Branched model, Separable convolution, and ReduceLROnPlateau, respectively.

A comprehensive investigation of partial entanglements and mixed-state data was performed, since these are often ignored: classifying three-qubit data appears highly accurate when the data is limited to a few categories, which renders tripartite entanglement and the finer classes insignificant. XpookyNet classifies all five categories with an accuracy of 99.88%, whereas the state-mixing transformation somewhat reduces its accuracy; the decrease in the states' purity makes them more difficult to analyze. From Fig. 6, it is evident that XpookyNet can quickly classify the state space in its first epoch with a high degree of precision, although its performance diminishes as purity declines. Further, even though XpookyNet initially classified the GHZ, W, and Graph states together as tripartite, as learning progressed it differentiated them and divided them into three distinct categories. Most importantly, the boundary between separable and partially entangled states exhibits the highest amount of confusion.
In the next step, we display in Fig. 7 the two-qubit and three-qubit classification confusion matrices as a performance metric for evaluating XpookyNet. In the two-qubit entanglement case, the error in detecting full entanglement is expectedly higher, whereas with three qubits, full entanglement is detected with fewer errors. As the number of qubits (\(N\)) increases, detecting \(N\)-partite entanglement undoubtedly becomes more challenging. To address this apparent contradiction: all bipartite entanglements, whether two-qubit or partially entangled, are generated randomly with varying purities and are not limited to known types, owing to how they are labeled. The tripartite set, in contrast, is quite limited, as it only includes the GHZ, W, and Graph states, representing a small fraction of the entire set. Additionally, as observed, the most significant prediction error arises when partial entanglements are incorrectly characterized as separable.
Figure 5: Loss function and accuracy graphs for the methods listed in Table 1. (a) Comparison of basic models and models that adopt one of the approaches in terms of accuracy. (b) The accuracy of the combined approaches, zoomed in due to their high accuracy. (c) Loss function graphs of part (a), shown in the same colors. (d) Loss function graphs of part (b), shown in the same colors.

Figure 6: A three-qubit state classification plot visualizing how the purity of the states and the number of epochs elapsed affect the model's ability to classify data.

Figure 7: The confusion matrices for two-qubit and three-qubit classification.

## 6 Conclusion and Discussion

In recent years, quantum information theory has witnessed rapid growth and faced notable challenges. One significant hurdle is applying artificial intelligence (AI) to this field due to the complexity of feeding data with complex numbers into conventional AI models. However, this article presents a groundbreaking solution that addresses this challenge and propels the field forward. The key contribution of this research is the development of an advanced deep Convolutional Neural Network (CNN) model, boasting an impressive accuracy rate of 98.5%. This innovative model successfully overcomes the limitations of handling data with complex numbers, thereby unlocking new possibilities for effectively leveraging advanced machine learning techniques in processing quantum information.
Furthermore, we have investigated the preparation and labeling of three-qubit states, considering both tripartite and partially entangled states. We have explored the impact of purity on the complexity of these states, shedding light on quantum systems' fundamental properties. Understanding quantum states better is of the utmost importance in QI theory, as it forms the foundation for various quantum algorithms and applications.
We can solve previously unsolved problems in QI theory by leveraging advanced machine learning models, such as the deep CNN we have developed. These models offer powerful tools for analyzing and processing quantum data, enabling us to gain deeper insights into quantum phenomena. Moreover, applying these models to analyze mixed states offers new perspectives for studying noise and imperfections in quantum systems. It can alter how we investigate and mitigate noise in quantum systems, enhancing performance and robustness.
Applied QI techniques hold promise for quantum computing and information processing in the future. In addition to providing insights into quantum states, our research represents a tremendous leap forward in developing a high-accuracy deep CNN model. The potential impact of these findings extends beyond theoretical research, contributing to advancing this interdisciplinary field.
## Appendix A Generation of the Two-qubit Entangled State
Two-qubit separable states are generated by:
\[\rho_{sep}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{A}\otimes\rho_{i}^{B}, \tag{1}\]
where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) iterating from 1 to an arbitrary number greater than one, and the density matrices of qubits A and B are \(\rho_{i}^{A}\) and \(\rho_{i}^{B}\), respectively. Entangled states are selected from the system's randomly generated states using the EoF criterion. Therefore, two-qubit entangled states can be considered as the following:
\[\rho_{ent}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{AB}, \tag{2}\]
where the density matrix of the two-qubit entangled state is \(\rho_{i}^{AB}\).
## Appendix B Generation of the Three-qubit Entangled State
In the three-qubit case, two entanglement classes (bipartite and tripartite entangled states) need to be generated.
### Three-qubit Separable State
Separable states are prepared by applying three single-qubit operators \(U_{rand}^{A,B,C}\) to a fixed initial state \(|\psi_{0}\rangle\). Therefore, three-qubit separable states are generated by:
\[\rho_{sep}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{A}\otimes\rho_{i}^{B}\otimes\rho_ {i}^{C}, \tag{10}\]
where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) iterating from 1 to 20.
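As an illustration, eqn. (10) can be implemented directly; in the sketch below the mixing weights \(\lambda_{i}\) are drawn uniformly at random and renormalized, which is one possible choice rather than the paper's exact procedure.

```
import numpy as np
import qutip

def random_separable_three_qubit(m=20):
    """Mix m random product states rho_A (x) rho_B (x) rho_C with weights
    lambda_i >= 0 that sum to one, as in eqn. (10)."""
    lambdas = np.random.rand(m)
    lambdas /= lambdas.sum()
    terms = [lam * qutip.tensor(qutip.rand_dm(2), qutip.rand_dm(2), qutip.rand_dm(2))
             for lam in lambdas]
    rho = terms[0]
    for term in terms[1:]:
        rho = rho + term
    return rho        # an 8x8 three-qubit separable density matrix
```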
### Bipartite entangled state
Bipartite entangled states of a three-qubit system are prepared by generating a random single-qubit state and an entangled two-qubit pair. Entangled states are selected from randomly generated states of the entire system using the EoF criterion.
Case 1: Entangled pairs B and C:
Bipartite entangled states are generated by:
\[\rho_{A|BC}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{A}\otimes\rho_{i}^{BC}, \tag{11}\]
where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) iterating from 1 to an arbitrary number higher than one, and \(\rho_{i}^{BC}\) is the generated entangled states for subsystem BC using the EoF criterion.
Case 2: Entangled pairs A and B:
Bipartite entangled states are generated by:
\[\rho_{C|AB}=\sum_{i=1}^{m}\lambda_{i}\rho_{i}^{AB}\otimes\rho_{i}^{C}, \tag{12}\]
where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) iterating from 1 to 20, and \(\rho_{i}^{AB}\) is the generated entangled states for subsystem AB using the EoF criterion.
Case 3: Entangled pairs A and C:
The state vectors of the entangled state \(|\psi_{AC}\rangle\) and the separated state \(|\psi_{B}\rangle\) are considered as:
\[|\psi_{AC}\rangle=[a_{0}\ \ a_{1}\ \ a_{2}\ \ a_{3}]^{T} \tag{13}\]
and
\[|\psi_{B}\rangle=[b_{0}\ \ b_{1}]^{T}. \tag{14}\]
Therefore, the bipartite entangled states can be considered as:
\[|\psi_{B|AC}\rangle=\frac{1}{N_{B|AC}}[a_{0}b_{0}\ \ a_{1}b_{0}\ \ a_{0}b_{1}\ \ a_{1}b_{1}\ \ a_{2}b_{0}\ \ a_{3}b_{0}\ \ a_{2}b_{1}\ \ a_{3}b_{1}]^{T}, \tag{15}\]
where \(a\) and \(b\) are state vector coefficients that should satisfy normalization condition, \(\sum_{i}|a_{i}|^{2}=1\), and \(\sum_{i}|b_{i}|^{2}=1\). Also, \(N_{B|AC}\) is the normalization coefficient of the three-qubit state vector. Ultimately, the bipartite entangled states are generated by:
\[\rho_{B|AC}=\sum_{i=1}^{m}\lambda_{i}\left(|\psi_{B|AC}\rangle\langle\psi_{B| AC}|\right)_{i}, \tag{16}\]
where \(\sum_{i}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\), with \(m\) iterating from 1 to 20, and \(|\psi_{B|AC}\rangle\) is the generated bipartite entangled states for subsystem AC and separated qubit B.
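A NumPy sketch of eqns. (13)-(16) is given below; note that in the paper the AC amplitudes are kept only when the pair is entangled according to the EoF criterion, a check that is omitted here for brevity.

```
import numpy as np

def psi_b_ac(a, b):
    """Embed the AC amplitudes a (eqn. (13)) and the single-qubit B
    amplitudes b (eqn. (14)) into the ABC ordering of eqn. (15)."""
    a = np.asarray(a) / np.linalg.norm(a)
    b = np.asarray(b) / np.linalg.norm(b)
    psi = np.array([a[0]*b[0], a[1]*b[0], a[0]*b[1], a[1]*b[1],
                    a[2]*b[0], a[3]*b[0], a[2]*b[1], a[3]*b[1]])
    return psi / np.linalg.norm(psi)          # the 1/N_{B|AC} normalization

def rho_b_ac(m=20):
    """Mix m such pure states with random weights lambda_i (eqn. (16))."""
    lambdas = np.random.rand(m)
    lambdas /= lambdas.sum()
    rho = np.zeros((8, 8), dtype=complex)
    for lam in lambdas:
        a = np.random.randn(4) + 1j * np.random.randn(4)   # AC pair amplitudes
        b = np.random.randn(2) + 1j * np.random.randn(2)   # qubit B amplitudes
        psi = psi_b_ac(a, b)
        rho += lam * np.outer(psi, psi.conj())
    return rho
```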
### Tripartite GHZ state
The state vector of the tripartite GHZ state (final three-qubit entangled state \(|\psi_{f}\rangle\)) can be considered as:
\[|\psi_{f}\rangle=|\Psi_{GHZ}\rangle=\frac{1}{\sqrt{N_{GHZ}}}\left(\cos{(\epsilon )}|000\rangle+\sin{(\epsilon)}e^{i\phi}|\varphi_{A}\varphi_{B}\varphi_{C} \rangle\right) \tag{11}\]
with initial states:
\[|\varphi_{A}\rangle=\cos{(\theta_{A})}|0\rangle+e^{i\phi_{A}}\sin{(\theta_{A})}|1\rangle, \tag{12}\]
\[|\varphi_{B}\rangle=\cos{(\theta_{B})}|0\rangle+e^{i\phi_{B}}\sin{(\theta_{B})}|1\rangle, \tag{13}\]
\[|\varphi_{C}\rangle=\cos{(\theta_{C})}|0\rangle+e^{i\phi_{C}}\sin{(\theta_{C})}|1\rangle, \tag{14}\]
where \(N_{GHZ}=1/(1+\cos{(\delta)}\sin{(\delta)}\cos{(\alpha)}\cos{(\beta)}\cos{( \phi)})\). The angles belong to the intervals \(\delta\in(0,~{}\pi/4]\), \((\alpha,~{}\beta,~{}\gamma)\in(0,~{}\pi/2]\), and \(\phi\in[0,~{}2\pi)\).
### Tripartite W-state
Every W-state can be written as:
\[|\psi_{f}\rangle=|\Psi_{W}\rangle=\frac{1}{\sqrt{N_{W}}}\left(a~{}|001\rangle+ b~{}|010\rangle+c~{}|100\rangle+d~{}|\phi\rangle\right), \tag{15}\]
where normalization coefficient is \(N_{W}=1/\sqrt{|a|^{2}+|b|^{2}+|c|^{2}+|d|^{2}}\), and \(|\phi\rangle\) is a superposition of remaining states that superposed with W-state.
### Tripartite Graph state
The Graph state vector can be considered as:
\[|\psi_{f}\rangle=|\Psi_{Graph}\rangle=\frac{1}{\sqrt{N_{Graph}}}( \alpha_{0}~{}|000\rangle+\alpha_{1}~{}|001\rangle+\alpha_{2}~{}|010\rangle\] \[-\alpha_{3}~{}|011\rangle+\alpha_{4}|100\rangle+\alpha_{5}~{}|10 1\rangle-\alpha_{6}~{}|110\rangle+\alpha_{7}~{}|111\rangle), \tag{16}\]
where normalization coefficient is \(N_{Graph}=1/\sqrt{|\alpha_{0}|^{2}+\ldots+|\alpha_{7}|^{2}}\).
|
2310.00222 | Source Inference Attacks: Beyond Membership Inference Attacks in
Federated Learning | Federated learning (FL) is a popular approach to facilitate privacy-aware
machine learning since it allows multiple clients to collaboratively train a
global model without granting others access to their private data. It is,
however, known that FL can be vulnerable to membership inference attacks
(MIAs), where the training records of the global model can be distinguished
from the testing records. Surprisingly, research focusing on the investigation
of the source inference problem appears to be lacking. We also observe that
identifying a training record's source client can result in privacy breaches
extending beyond MIAs. For example, consider an FL application where multiple
hospitals jointly train a COVID-19 diagnosis model, membership inference
attackers can identify the medical records that have been used for training,
and any additional identification of the source hospital can result the patient
from the particular hospital more prone to discrimination. Seeking to
contribute to the literature gap, we take the first step to investigate source
privacy in FL. Specifically, we propose a new inference attack (hereafter
referred to as source inference attack -- SIA), designed to facilitate an
honest-but-curious server to identify the training record's source client. The
proposed SIAs leverage the Bayesian theorem to allow the server to implement
the attack in a non-intrusive manner without deviating from the defined FL
protocol. We then evaluate SIAs in three different FL frameworks to show that
in existing FL frameworks, the clients sharing gradients, model parameters, or
predictions on a public dataset will leak such source information to the
server. We also conduct extensive experiments on various datasets to
investigate the key factors in an SIA. The experimental results validate the
efficacy of the proposed SIAs. | Hongsheng Hu, Xuyun Zhang, Zoran Salcic, Lichao Sun, Kim-Kwang Raymond Choo, Gillian Dobbie | 2023-09-30T01:56:04Z | http://arxiv.org/abs/2310.00222v1 | # Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
###### Abstract
Federated learning (FL) is a popular approach to facilitate privacy-aware machine learning since it allows multiple clients to collaboratively train a global model without granting others access to their private data. It is, however, known that FL can be vulnerable to _membership inference attacks_ (MIAs), where the training records of the global model can be distinguished from the testing records. Surprisingly, research focusing on the investigation of the source inference problem appears to be lacking. We also observe that identifying a training record's source client can result in privacy breaches extending beyond MIAs. For example, consider an FL application where multiple hospitals jointly train a COVID-19 diagnosis model: membership inference attackers can identify the medical records that have been used for training, and any additional identification of the source hospital can render the patient from that particular hospital more prone to discrimination. Seeking to contribute to the literature gap, we take the first step to investigate source privacy in FL. Specifically, we propose a new inference attack (hereafter referred to as _source inference attack_ - SIA), designed to facilitate an honest-but-curious server to identify the training record's source client. The proposed SIAs leverage the Bayesian theorem to allow the server to implement the attack in a non-intrusive manner without deviating from the defined FL protocol. We then evaluate SIAs in three different FL frameworks to show that in existing FL frameworks, the clients sharing gradients, model parameters, or predictions on a public dataset will leak such source information to the server. We also conduct extensive experiments on various datasets to investigate the key factors in an SIA. The experimental results validate the efficacy of the proposed SIAs, e.g., an attack success rate of 67.1% (baseline 10%) can be achieved when the clients share model parameters with the server. Comprehensive ablation studies demonstrate that the success of an SIA is directly related to the overfitting of the local models.
Federated Learning, Membership Inference Attacks, Source Inference Attacks, Privacy Leakage.
## 1 Introduction
Recent deep learning advances have partly contributed to the building of powerful machine learning (ML) models from large datasets. In practice, however, data often resides across different organizational entities (also referred to as data islands). The data records of a single entity, perhaps with the exception of extremely large technology organizations, are generally limited and do not represent the entire data distribution. Thus, stakeholders (e.g., consumers) can generally benefit if different data owners can collaboratively train a joint machine learning model based on the union of different datasets. For example, our society will benefit if different countries and organizations can collaborate to train COVID-19 diagnosis models using the broad range of medical data records held by these different entities. The stringent privacy regulations (e.g., GDPR [1] in the European Union and CCPA [2] in the United States), however, complicate such collaborative efforts.
Federated learning (FL) is one approach that can be utilized to circumvent the limitations due to data islands, by allowing multiple clients coordinated by a central server to train a joint ML model in an iterative manner [3, 4, 5, 6]. In FL, the clients send their model updates to the server but never their raw training dataset, thereby leading to a privacy-aware paradigm for collaborative model training. For the example mentioned above, FL can greatly facilitate the hospitals wishing to train a joint COVID-19 diagnosis model from the distributed data in different hospitals. A real-life case in [7] has shown the successful adoption of FL, where an ML model for COVID-19 diagnosis was trained using the geographically distributed chest computed tomography data collected from different patients at different hospitals.
However, many recent studies [8, 9, 10, 11, 12, 13] have shown that FL does not provide sufficient privacy guarantees, because sensitive information from the training data can still be revealed during the communication process. In FL, the clients transmit the necessary update information, e.g., gradients, to the central server for global model training. Because the updates are derived from the clients' private training data, several recently proposed privacy attacks try to infer private information about the clients from such updates, such as data reconstruction attacks [14], property inference attacks [15], preference profiling attacks [13], and membership inference attacks (MIAs) [16]. Among such attacks, MIAs aim to identify whether or not a data record was in the training dataset of a target model (i.e., a member). While an MIA seems like a simple attack, it can impose
severe privacy risks in many settings where knowing that someone was in a dataset is harmful [17]. For example, by identifying the fact that a clinical record was in the training dataset of a medical model associated with a certain disease, MIAs enable an attacker to know that the owner of the clinical record has a high chance of having the disease.
In FL, the training data of the FL system consist of all the training records in the datasets of the clients because they collaboratively build a global federated model. Thus, existing MIAs in the context of FL [8, 9, 18, 19] are designed to distinguish the training records from the testing records of the global model; they do not identify which client owns a training record, i.e., the source of the training record. However, it is important and practical to explore source privacy in FL beyond membership privacy because the leakage of the source information can further breach privacy. For example, in the previously mentioned FL application where multiple hospitals jointly train a COVID-19 diagnosis model, MIAs can only tell who has had a COVID-19 test, but the further identification of the source hospital of these people could make them more prone to discrimination, especially when the hospital is located in a high-risk region or country [20]. Other types of attacks in FL, such as property inference attacks, also fail to explore the source privacy of clients because they are either designed to infer other types of private information or the inferred information cannot be attributed to a specific client. For example, property inference attacks in FL [8] can infer a certain property of the training data but cannot attribute that property to the client who owns it.
This paper proposes a novel inference attack, named _Source Inference Attack_ (SIA), that determines which client in FL owns a training record. The SIA can be considered a stronger attack built on the foundation of MIAs, i.e., after identifying which data records are training records via MIAs, the attacker further implements SIAs to determine which client a training record comes from. For practical reasons, we consider that the server can be a semi-honest (i.e., honest-but-curious) attacker who passively tries to learn the secret based on the information on the identities of clients and communications between them. Specifically, the semi-honest server tries to infer the private information of the clients without interfering with the federated training, i.e., without deviating from the defined FL protocol, which is the main challenge of SIAs. Note that the attacker can be one of the clients, but we argue that this is impractical for SIAs because a client does not know the identities of the other clients, and it can only access the global model via communications with the server [21].
We leverage the Bayesian theorem to analyze how to effectively implement SIAs in FL. We demonstrate that an honest-but-curious server can estimate the source of a training record in an SIA by leveraging the prediction loss of the local models. More specifically, we theoretically demonstrate that the client with the smallest prediction loss on the training record should have the highest probability of owning it. To demonstrate the feasibility of the proposed SIAs in different FL frameworks, we propose three FL-SIA frameworks that enable the server to conduct the SIAs in three FL frameworks, FedSGD [3], FedAvg [3], and FedMD [22]. The purpose of selecting the three FL frameworks is to show that in existing FL frameworks, the clients sharing gradients, model parameters, or predictions on a public dataset will lead to source information leakage to the server. We conduct extensive experiments on six datasets and different model architectures under various FL settings to evaluate the effectiveness of SIAs. The experimental results validate the efficacy of our proposed SIAs. We conduct a detailed ablation study to investigate how data distributions across the clients and the number of local epochs in FL affect the performance of an SIA. An important finding is that the success of an SIA is directly related to the overfitting of the local models, which is mainly caused by the non-IID data distribution across the clients.
The main contributions of this paper are three-folds:
* We propose a novel inference attack in federated learning (FL), named the source inference attack (SIA), which infers the source client of a training record. Beyond membership inference attacks, SIAs can further breach the privacy of the training records in FL.
* We innovatively adopt the Bayesian theorem to analyze how an honest-but-curious server can implement SIAs in a non-intrusive manner to infer the source of a training record with the highest probability by using the prediction loss of local models.
* We show the feasibility and effectiveness of SIAs in three FL frameworks, including FedSGD, FedAvg, and FedMD. We perform extensive experiments to empirically evaluate SIAs in the three frameworks with various datasets and under different FL settings. The results validate the efficacy of the proposed SIA. Our proposed SIAs shed new light on how FL reveals sensitive information and the need to build more private FL frameworks.
This work extends our earlier conference paper [23]. In Section 2, we introduce additional background material to give the readers a broader understanding of the role and impact of SIAs in FL. In Section 3, we extend the earlier work in the following ways:
* We use a threat model to describe the attack goal, target FL systems, attacker, and attack knowledge.
* We provide a systematic theoretical analysis and propose theorems that show how to leverage the prediction loss of the models for conducting SIAs in FL. We also present detailed proofs for the corresponding theorems.
* We show how to implement SIAs in two other commonly-used FL frameworks, FedSGD and FedMD, while in [23] we only introduced the implementation of SIAs in FedAvg. This extension demonstrates the broader applicability of SIAs in FL.
In Section 4, we describe our comprehensive experimental evaluation, including the addition in Section 4.6 to explain why the proposed SIAs can work in FL. In Section 5, we provide more comprehensive experiments to investigate whether the popular defense method of differential privacy can mitigate SIAs. Moreover, in Section 5, we discuss the limitations and potential research opportunities of our proposed SIAs. In Section 6, we also include additional related
work to give the readers a broader picture of privacy attacks and defenses in FL. Source code for implementing SIAs in FedSGD, FedAvg, and FedMD is also included1, while in [23] we only provide code for SIAs in FedAvg.
Footnote 1: [https://github.com/HongshengHu/SIAs-Beyond_MIAs_in_Federated_Learning](https://github.com/HongshengHu/SIAs-Beyond_MIAs_in_Federated_Learning)
## 2 Preliminaries
This section reviews the background of federated learning and membership inference attacks.
### _Federated Learning_
Because data usually exists in the form of isolated islands and central storage is impractical due to privacy regulations and laws, federated learning (FL) has been proposed to allow multiple clients to collaboratively train a machine learning model in an interactive manner. During the training phase of FL, the clients send necessary information of the updates but never their private datasets to the central server. Because the updates contain less information than the raw training data, FL has obvious privacy advantages compared to data center training [3].
**Horizontal and vertical FL.** Based on the feature space or the sample ID space the local datasets share, FL can be divided into horizontal FL and vertical FL [24, 25]. Horizontal FL, aka. cross-device FL, describes FL scenarios where the local datasets share the same feature space but are different in samples. An example of horizontal FL is multiple regional banks having different users from their respective regions, while the feature spaces of such users are the same because the banks have very similar businesses [26]. Vertical FL, aka. cross-silo FL, describes FL scenarios where the local datasets share the same or similar sample ID space but differ in feature space. An example of vertical FL is two different commercial companies in the same city having the same or very similar customers in the area. However, due to different business modes, the commercial company of the bank has the user's revenue and expenditure transactions, while the commercial company of e-commerce owns the user's browsing and purchasing history. Vertical FL enables the two different companies to jointly build a model for predicting users' living behaviors [24].
**Homogeneous and heterogeneous FL.** Based on architectures, FL frameworks can be divided into two categories, i.e., FL with a homogeneous architecture and FL with a heterogeneous architecture [27]. In FL with a homogeneous architecture, the local models have the same architecture as the global model, and there are two forms of FL [3]: i) FedSGD, in which each of the clients sends gradients calculated on its local data to the server; ii) FedAvg, in which each of the clients sends the calculated local models' parameters to the server. FedSGD has the advantage of convergence guarantees of the global model but requires frequent communication between the clients and the server [28]. FedAvg is more communication efficient but the global model may not perform well when the training data across the clients are highly non-identically distributed [29]. In FL with a heterogeneous architecture, the local models do not have to share the same architecture as the global model, while each client can still benefit during the federated training process. FedMD [22] (Federated Model Distillation) is a novel FL framework with a heterogeneous architecture, which shares the knowledge of each client's local model via their predictions on an unlabeled public dataset instead of the local model's parameters. Compared to FedAvg, FedMD eliminates the risk of white-box inference attacks [8] and has the advantage of reduced communication costs.
Note that there are many other FL frameworks such as FedProx [29], SCAFFOLD [30], FedDF [31], and Cronus [32] that are proposed to solve different challenges in FL [25, 28, 33]. These frameworks can be divided into the categories of homogeneous and heterogeneous FL we introduced above based on their architectures. They differ from the FL frameworks of FedSGD, FedAvg, and FedMD in how the local models or the global model is trained, but the information exchange between the clients and the server is the same as the three FL frameworks, i.e., the clients sharing gradients, model parameters, or predictions on an unlabeled dataset to the server. In this paper, we show the effectiveness of SIAs in the three FL frameworks of FedSGD, FedAvg, and FedMD to demonstrate that an _honest-but-curious_ server in FL can infer source information of the training records, no matter what kind of updates are uploaded by the clients. But it is worth noting that SIAs are also effective in other FL frameworks because their communication exchange between the clients and the server is the same as the FL frameworks we evaluated in this paper.
### _Membership Inference Attacks_
Membership inference attacks (MIAs) aim to identify whether or not a data record was in the training dataset of a target model. Although an MIA seems like a simple attack, it can directly lead to severe privacy breaches of individuals. For instance, identifying that a patient's clinical record was used to train a model associated with a disease can reveal, with a high probability, that the patient has this disease. Although FL has emerged as a popular privacy-aware learning paradigm, recent works [8, 9, 18, 34, 35, 36, 37] have demonstrated the success of MIAs on FL models. For instance, Melis et al. [8] show that a malicious client in FL can infer whether or not a location profile was in the FourSquare dataset that was used to train the global model. In FL, because the training dataset of the FL system consists of all the local training data records, existing research on MIAs is designed to infer whether or not a data record was used to train the global model, but not to identify whether a data record was used to train a particular local model [19].
Currently, there are no attacks in FL that explore which client (i.e., the source) owns the training records identified by MIAs, while it is important and practical to explore the source information of the training records. For instance, in a promising FL application where multiple hospitals jointly train a COVID-19 diagnosis model, an attacker can implement MIAs to infer who has been tested for COVID-19, but further identification of the source hospital that the people are from would make them more prone to discrimination, especially when the hospital is in a high-risk region or country [20]. In this paper, we propose SIAs to show the feasibility of breaching the source privacy of the
training records in FL. Our proposed SIAs shed new light on how FL reveals sensitive information and the necessity to build more private FL frameworks.
## 3 Source Inference Attacks
In this section, we first introduce the threat model. Then, we show how to leverage the Bayesian theorem to analyze how the attacker can perform SIAs based on the prediction loss of the local models.
### _Threat Model_
**Target FL systems.** As this is the first paper to investigate the source privacy of the training records in an FL system, and to avoid ambiguity in the definition of SIAs, we consider SIAs on _horizontal FL_ (see Section 2.1 for a detailed introduction of horizontal FL), where there is only one source client for each data record. In vertical FL, one data record can correspond to multiple source clients, and SIAs under that setting seem even more interesting; we leave the investigation of SIAs in vertical FL for future work.
We consider the server can be a semi-honest attacker who passively tries to learn the secret data from the communication updates uploaded by the local clients. During the attack process, the server follows the training protocol of the FL system but will passively try to learn the secret by mounting the SIAs. The server can acquire the communication updates uploaded by the local clients. Finally, based on the communication updates, the server can acquire the source information. There is a possibility that the attacker can be one of the clients. However, because local clients in FL can only observe the global model parameters while having little information about the identities of other clients [21], it is almost impossible for a local client to achieve the attack goal (detailed in the following paragraph).
During the attack process, the local clients follow the training protocol of the FL system. Specifically, the local clients download the global aggregation results calculated by the server and then perform local model updates. After that, the clients upload the necessary information of updates to the server for aggregation.
**Attack goal.** The attacker of SIAs in FL aims to identify which client owns a training record that participates in the federated training process. The goal of this attack is motivated by FL applications where source information is sensitive. A motivating example is the FL application of multiple hospitals training a COVID-19 diagnosis model (see Section 1 and Section 2.2). Another example is the FL application of multiple users collaboratively training an image classification model. If an attacker can identify which user owns a sensitive image, the attacker can directly compromise the user's privacy based on the sensitive content of that image [8].
**Attack knowledge.** We consider that the attacker, i.e., the central server, is _honest-but-curious_: the central server will not deviate from the defined FL protocol but will attempt to infer the source information from legitimately received information from the local clients. Specifically, the central server implements SIAs to infer the source information of the clients based on the received gradients, model parameters, or predictions on an unlabeled dataset from the local clients. However, the central server will not actively manipulate these updates from the local clients and thus does not affect the utility of the FL model.
Because SIAs are considered further attacks built on the foundation of MIAs, we follow a setting similar to the MIA literature [16, 19, 38]: the attacker is given a data record, which we call _a target record_, that has been identified as a training record by MIAs. Note that we do not focus on how the attacker obtains the training record from the clients but on investigating the potential source-privacy leakage of the training record; one can refer to data reconstruction attacks [10, 14, 28, 39, 40] to see how an attacker in FL can reconstruct the training data. An SIA is said to succeed if the attacker can correctly identify which local client the target record comes from.
### _Source Inference Attack Method_
In this paper, we focus on horizontal FL for classification tasks. Let \(D_{\text{train}}=\{D_{1},\cdots,D_{K}\}\) (assuming there are \(K\) clients) be the training dataset of the FL system, where each \(D_{i}\) corresponds to the local training dataset of client \(i\). We assume there are \(n\) data records \(\mathbf{z}_{1},\cdots,\mathbf{z}_{n}\) in \(D_{\text{train}}\). Each data record is represented as \(\mathbf{z}=(\mathbf{x},y)\), where \(\mathbf{x}\) is the feature and \(y\) is the class label.
**Source status.** In horizontal FL, each target record exists in the local dataset of exactly one client. Thus, we can use a \(K\)-dimensional multinomial vector \(\mathbf{s}\) to represent the source status of each target record. In \(\mathbf{s}\), the \(k\)-th element being equal to 1 indicates that the target record belongs to client \(k\), while all the remaining elements equal \(0\). For example, assume there are six clients in FL and the target record \(\mathbf{z}\) comes from the second client; then the multinomial variable \(\mathbf{s}\) is represented by \(\mathbf{s}=[0,1,0,0,0,0]^{\text{T}}\).
We assume the target record \(\mathbf{z}_{i}\) comes from the client \(k\) with probability \(\lambda\), i.e., the probability that the \(k\)-th element in \(\mathbf{s}_{i}\) equals 1 is \(\lambda\), denoted as \(\mathbb{P}(s_{ik}=1)=\lambda\). Without loss of generality, we take the case of \(\mathbf{z}_{1}\) to study the source inference problem, which is defined as follows.
**Definition 1** (Source Inference).: _Given a local model \(\mathbf{\theta}_{k}\) and a target record \(\mathbf{z}_{1}\), source inference on \(\mathbf{z}_{1}\) aims to infer the posterior probability of \(\mathbf{z}_{1}\) belonging to the client \(k\):_
\[\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1}):=\mathbb{P}(s_{1k}=1|\mathbf{\theta}_{k}, \mathbf{z}_{1}). \tag{1}\]
For source inference by Definition 1, we aim to derive an explicit formula for \(\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})\) from the Bayesian perspective, which can provide insights on how to leverage the prediction loss of the local clients for implementing SIAs. We denote \(\mathbf{\tau}=\{\mathbf{z}_{2},\cdots,\mathbf{z}_{n},\mathbf{s}_{2},\cdots,\mathbf{s}_{n}\}\) as the set which includes the remaining training records and their source status. The explicit formula of \(\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})\) is given by the following theorem.
**Theorem 1**.: _Given a local model \(\mathbf{\theta}_{k}\) and a target record \(\mathbf{z}_{1}\), the source inference is given by:_
\[\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})=\mathbb{E}_{\mathbf{\tau}}\left[\sigma \left(\log(\frac{\mathbb{P}(\mathbf{\theta}_{k}|s_{1k}=1,\mathbf{z}_{1},\mathbf{\tau})}{ \mathbb{P}(\mathbf{\theta}_{k}|s_{1k}=0,\mathbf{z}_{1},\mathbf{\tau})})+\mu_{\lambda} \right)\right], \tag{2}\]
where \(\mu_{\lambda}=\log(\frac{\lambda}{1-\lambda})\), and \(\sigma(\cdot)\) is a sigmoid function defined as \(\sigma(\mathbf{x})=(1+e^{-\mathbf{x}})^{-1}\).
Proof.: Applying the law of total expectation [41], we have:
\[\begin{split}\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})&= \mathbb{P}(s_{1k}=1|\mathbf{\theta}_{k},\mathbf{z}_{1})\\ &=\mathbb{E}_{\mathbf{\tau}}[\mathbb{P}(s_{1k}=1|\mathbf{\theta}_{k},\mathbf{ z}_{1},\mathbf{\tau})].\end{split} \tag{3}\]
Applying the Bayes' formula, we have:
\[\mathbb{P}(s_{1k}=1|\mathbf{\theta}_{k},\mathbf{z}_{1},\mathbf{\tau})=\frac{\mathbb{P}(\bm {\theta}_{k}|s_{1k}=1,\mathbf{z}_{1},\tau)\mathbb{P}(s_{1k}=1|\mathbf{z}_{1},\mathbf{\tau} )}{\mathbb{P}(\mathbf{\theta}_{k}|\mathbf{z}_{1},\mathbf{\tau})}. \tag{4}\]
As the source variables \(\mathbf{s}_{1},\cdots,\mathbf{s}_{n}\) are independent, event \(s_{1k}=1\) is independent from \(\mathbf{z}_{1},\mathbf{\tau}\). Thus, we have:
\[\mathbb{P}(s_{1k}=1|\mathbf{z}_{1},\mathbf{\tau})=\mathbb{P}(s_{1k}=1). \tag{5}\]
Let:
\[\phi :=\mathbb{P}(\mathbf{\theta}_{k}|s_{1k}=1,\mathbf{z}_{1},\mathbf{\tau}) \mathbb{P}(s_{1k}=1). \tag{6}\] \[\omega :=\mathbb{P}(\mathbf{\theta}_{k}|s_{1k}=0,\mathbf{z}_{1},\mathbf{\tau}) \mathbb{P}(s_{1k}=0). \tag{7}\]
Plugging eqn. (5), eqn. (6), and eqn. (7) into eqn. (4), we have:
\[\begin{split}\mathbb{P}(s_{1k}=1|\mathbf{\theta}_{k},\mathbf{z}_{1},\mathbf{ \tau})&=\frac{\mathbb{P}(\mathbf{\theta}_{k}|s_{1k}=1,\mathbf{z}_{1},\mathbf{ \tau})\mathbb{P}(s_{1k}=1)}{\mathbb{P}(\mathbf{\theta}_{k}|\mathbf{z}_{1},\mathbf{\tau})} \\ &=\frac{\phi}{\phi+\omega}\\ &=\sigma\left(\log\left(\frac{\phi}{\omega}\right)\right).\end{split} \tag{8}\]
Given that \(\mathbb{P}(s_{ik}=1)=\lambda\), we have:
\[\begin{split}\log\left(\frac{\phi}{\omega}\right)=\log\left( \frac{\mathbb{P}(\mathbf{\theta}_{k}|s_{1k}=1,\mathbf{z}_{1},\mathbf{\tau})}{\mathbb{P}( \mathbf{\theta}_{k}|s_{1k}=0,\mathbf{z}_{1},\mathbf{\tau})}\right)+\log\left(\frac{\lambda }{1-\lambda}\right).\end{split} \tag{9}\]
Let \(\mu_{\lambda}=\log(\frac{\lambda}{1-\lambda})\), we have:
\[\begin{split}\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})&=\mathbb{E}_{\mathbf{\tau}}[\mathbb{P}(s_{1k}=1|\mathbf{\theta}_{k},\mathbf{z}_{1},\mathbf{\tau})]\\ &=\mathbb{E}_{\mathbf{\tau}}\left[\sigma\left(\log\left(\frac{\phi}{\omega}\right)\right)\right]\\ &=\mathbb{E}_{\mathbf{\tau}}\left[\sigma\left(\log\left(\frac{\mathbb{P}(\mathbf{\theta}_{k}|s_{1k}=1,\mathbf{z}_{1},\mathbf{\tau})}{\mathbb{P}(\mathbf{\theta}_{k}|s_{1k}=0,\mathbf{z}_{1},\mathbf{\tau})}\right)+\mu_{\lambda}\right)\right],\end{split} \tag{10}\]
which concludes the proof.
We observe that _Theorem 1_ does not involve the loss \(\ell(\cdot)\) and only relies on the posterior of the parameters \(\mathbf{\theta}_{k}\) in expectation, given that \(\{\mathbf{z}_{1},\cdots,\mathbf{z}_{n},\mathbf{s}_{1},\cdots,\mathbf{s}_{n}\}\) is a random variable. To make \(\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})\) more explicit with the loss term, we assume that the parameters \(\mathbf{\theta}\) produced by an ML algorithm follow a posterior distribution. Specifically, following the assumption in the previous work [42], we assume the posterior distribution of an ML model \(\mathbf{\theta}\) as follows:
\[p(\mathbf{\theta}|\mathbf{z}_{1},\cdots,\mathbf{z}_{n})\propto e^{-\frac{1}{\gamma}\sum_{i= 1}^{n}\ell(\mathbf{\theta},\mathbf{z}_{i})}, \tag{11}\]
where \(\gamma\) is a temperature parameter which controls the stochasticity of \(\mathbf{\theta}\). Following this assumption, given \(\{\mathbf{z}_{1},\cdots,\mathbf{z}_{n},\mathbf{s}_{1},\cdots,\mathbf{s}_{n}\}\), the posterior distribution of \(\mathbf{\theta}_{k}\) follows:
\[p(\mathbf{\theta}_{k}|\mathbf{z}_{1},\cdots,\mathbf{z}_{n},\mathbf{s}_{1},\cdots,\mathbf{s}_{n}) \propto e^{-\frac{1}{\gamma}\sum_{i=1}^{n}s_{ik}\ell(\mathbf{\theta}_{k},\mathbf{z}_{i })}. \tag{12}\]
We further define the posterior distribution of \(\mathbf{\theta}_{k}\) given training records \(\mathbf{z}_{2},\cdots,\mathbf{z}_{n}\) and their source status \(\mathbf{s}_{2},\cdots,\mathbf{s}_{n}\) :
\[p_{\mathbf{\tau}}(\mathbf{\theta}_{k}):=\frac{e^{-\frac{1}{\gamma}\sum_{i= 2}^{n}s_{ik}\ell(\mathbf{\theta}_{k},\mathbf{z}_{i})}}{\int_{\mathbf{t}}e^{-\frac{1}{ \gamma}\sum_{i=2}^{n}s_{ik}\ell(\mathbf{t},\mathbf{z}_{i})}d\mathbf{t}}, \tag{13}\]
where the denominator is a constant value. The following theorem explicitly demonstrates how to conduct the source inference with the loss term.
**Theorem 2**.: _Given the resulting local model \(\mathbf{\theta}_{k}\) and a target record \(\mathbf{z}_{1}\), the source inference attack is given by:_
\[\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})=\mathbb{E}_{\mathbf{\tau}}\left[\sigma \left(g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})+\mu_{\lambda}\right)\right], \tag{14}\]
_where_
\[\ell_{p_{\mathbf{\tau}}}(\mathbf{z}_{1}):=-\gamma\log\left(\int_{\mathbf{t}}e^{-\frac{1}{\gamma}\ell(\mathbf{t},\mathbf{z}_{1})}p_{\mathbf{\tau}}(\mathbf{t})d\mathbf{t}\right), \tag{15}\]
\[\ell(\mathbf{\theta}_{k},\mathbf{z}_{1}):=-\gamma\log\left(e^{-\frac{1}{\gamma}\ell(\mathbf{\theta}_{k},\mathbf{z}_{1})}\right), \tag{16}\]
\[g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}}):=\frac{1}{\gamma}(\ell_{p_{\mathbf{\tau}}}(\mathbf{z}_{1})-\ell(\mathbf{\theta}_{k},\mathbf{z}_{1})). \tag{17}\]
Proof.: For \(\phi\) in eqn. (6) and \(\omega\) in eqn. (7), we have:
\[\begin{split}\phi&=\lambda\frac{e^{-\frac{1}{\gamma}\ell(\mathbf{\theta}_{k},\mathbf{z}_{1})}e^{-\frac{1}{\gamma}\sum_{i=2}^{n}s_{ik}\ell(\mathbf{\theta}_{k},\mathbf{z}_{i})}}{\int_{\mathbf{t}}e^{-\frac{1}{\gamma}\left(\ell(\mathbf{t},\mathbf{z}_{1})+\sum_{i=2}^{n}s_{ik}\ell(\mathbf{t},\mathbf{z}_{i})\right)}d\mathbf{t}}\\ &=\lambda\frac{e^{-\frac{1}{\gamma}\ell(\mathbf{\theta}_{k},\mathbf{z}_{1})}p_{\mathbf{\tau}}(\mathbf{\theta}_{k})}{\int_{\mathbf{t}}e^{-\frac{1}{\gamma}\ell(\mathbf{t},\mathbf{z}_{1})}p_{\mathbf{\tau}}(\mathbf{t})d\mathbf{t}},\end{split} \tag{18}\]
\[\begin{split}\omega&=(1-\lambda)\frac{e^{-\frac{1}{\gamma}\sum_{i=2}^{n}s_{ik}\ell(\mathbf{\theta}_{k},\mathbf{z}_{i})}}{\int_{\mathbf{t}}e^{-\frac{1}{\gamma}\sum_{i=2}^{n}s_{ik}\ell(\mathbf{t},\mathbf{z}_{i})}d\mathbf{t}}\\ &=(1-\lambda)p_{\mathbf{\tau}}(\mathbf{\theta}_{k}).\end{split} \tag{19}\]
Thus, we have:
\[\begin{split}\log\left(\frac{\phi}{\omega}\right)&=-\log\left(\int_{\mathbf{t}}e^{-\frac{1}{\gamma}\ell(\mathbf{t},\mathbf{z}_{1})}p_{\mathbf{\tau}}(\mathbf{t})d\mathbf{t}\right)+\log\left(e^{-\frac{1}{\gamma}\ell(\mathbf{\theta}_{k},\mathbf{z}_{1})}\right)+\log\left(\frac{\lambda}{1-\lambda}\right)\\ &=\frac{1}{\gamma}(\ell_{p_{\mathbf{\tau}}}(\mathbf{z}_{1})-\ell\left(\mathbf{\theta}_{k},\mathbf{z}_{1}\right))+\mu_{\lambda}=g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})+\mu_{\lambda}.\end{split} \tag{20}\]
Plugging eqn. (20) into eqn. (10) yields
\[\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})=\mathbb{E}_{\mathbf{\tau}}\left[\sigma\left(g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})+\mu_{\lambda}\right)\right],\]
which concludes the proof.
Theorem 2 shows that the source inference is driven by \(g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})\), i.e., by a difference between two terms of the loss on the target record \(\mathbf{z}_{1}\). Comparing eqn. (15) and eqn. (16), we can find that \(\ell_{p_{\mathbf{\tau}}}(\mathbf{z}_{1})\) is the expectation of the loss \(\ell(\cdot,\mathbf{z}_{1})\) over the typical models that have not seen \(\mathbf{z}_{1}\). Thus, we can interpret \(g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})\) as the difference between \(\mathbf{\theta}_{k}\)'s loss on \(\mathbf{z}_{1}\) and other models' (trained without \(\mathbf{z}_{1}\)) average loss on \(\mathbf{z}_{1}\).
In other words, \(g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})\) is the difference between the prediction loss of the local model of client \(k\) and the average prediction loss of the other clients' local models. If \(\ell_{p_{\mathbf{\tau}}}(\mathbf{z}_{1})\approx\ell(\mathbf{\theta}_{k},\mathbf{z}_{1})\), which means the client \(k\) behaves almost the same as other clients on \(\mathbf{z}_{1}\), then \(g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})\approx 0\). Since \(\sigma(\mu_{\lambda})=\lambda\), the posterior probability \(\mathcal{S}(\mathbf{\theta}_{k},\mathbf{z}_{1})\) is equal to \(\lambda\). Thus, we have no source information gain on \(\mathbf{z}_{1}\) beyond the prior knowledge. In FL, the prior knowledge is \(\mathbb{P}(s_{ik}=1)=\lambda=\frac{1}{K}\), and in this case the source inference amounts to a _random guess_. However, if \(\ell_{p_{\mathbf{\tau}}}(\mathbf{z}_{1})>\ell(\mathbf{\theta}_{k},\mathbf{z}_{1})\), that is, the client \(k\) performs better than other clients on \(\mathbf{z}_{1}\), then \(g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})\) becomes positive. When \(g(\mathbf{z}_{1},\mathbf{\theta},p_{\mathbf{\tau}})>0\), \(\mathbb{P}(s_{1k}=1|\mathbf{\theta}_{k},\mathbf{z}_{1})>\lambda\) and thus we gain non-trivial source information on \(\mathbf{z}_{1}\). Moreover, since \(\sigma(\cdot)\) is non-decreasing, a smaller \(\ell(\mathbf{\theta}_{k},\mathbf{z}_{1})\) indicates a higher probability that \(\mathbf{z}_{1}\) belongs to the client \(k\).
**Conclusion from Theorem 2.** We conclude that the smaller the loss of client \(k\)'s local model on a target record \(\mathbf{z}_{1}\), the higher the posterior probability that \(\mathbf{z}_{1}\) belongs to client \(k\). This motivates us to design the source inference attack such that the client whose local model has the smallest loss on a target record is inferred to own this record. Moreover, if a client's local model behaves differently on its local training data than the other clients' models do, our attack will always perform better than random guessing. We give more empirical evidence in Section 4.
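In its simplest form, the attack therefore reduces to an arg-min over the per-client prediction losses on the target record. A PyTorch-style sketch is shown below, where `client_models` stands for whatever per-client models the server can evaluate (actual local models in FedAvg, reconstructed or distilled ones in FedSGD and FedMD), and `x`, `y` are the feature and label tensors of a single target record:

```
import torch
import torch.nn.functional as F

def source_inference(client_models, x, y):
    """Return the index of the client whose model has the smallest
    prediction loss on the target record (x, y)."""
    losses = []
    for model in client_models:
        model.eval()
        with torch.no_grad():
            logits = model(x.unsqueeze(0))                     # batch of one
            losses.append(F.cross_entropy(logits, y.view(1)).item())
    return min(range(len(losses)), key=losses.__getitem__)     # inferred source client
```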
### _Source Inference Attacks in Different FL Frameworks_
In this paper, we investigate SIAs in three FL frameworks under horizontal FL. Specifically, we investigate SIAs in FedSGD [3], FedAvg [3], and FedMD [22] where local clients upload gradients, model parameters, or predictions on an unlabeled dataset to the server. The success of SIAs in the three FL frameworks sheds light on how the communications between clients and the semi-honest central server in existing FL frameworks enable the server to mount SIAs to steal source information. Fig. 1 shows an overview of how the honest-but-curious server implements SIAs in FL.
Intuitively, the server in FedAvg can directly conduct an SIA in each communication round because it receives the parameters of local models from the clients. Thus, the server can directly use the local models to calculate the prediction loss of a target record for implementing source inference. However, in FedSGD and FedMD, the server cannot directly implement SIAs because it cannot directly leverage the updates from the clients to calculate the prediction loss of the local models on the target records. In these two frameworks, we introduce two strategies to enable the server to implement the proposed SIAs.
In FedSGD, each client \(k\) uploads the average gradient \(\mathbf{g}_{k}=\nabla\ell(\mathbf{\theta}_{t-1},D_{k})\) calculated on its local data \(D_{k}\) at the current global model \(\mathbf{\theta}_{t-1}\). Thus, the server can apply the gradient from each client separately to obtain \(\mathbf{\theta}_{t}^{k}\leftarrow\mathbf{\theta}_{t-1}-\eta\mathbf{g}_{k}\), where \(\eta\) is the fixed learning rate of the FL framework. Note that \(\mathbf{\theta}_{t}^{k}\) is essentially the updated local model of client \(k\) in the \(t\)-th communication round. Thus, the server can use \(\mathbf{\theta}_{t}^{k}\) to calculate the prediction loss of each local model in each communication round and conduct SIAs.
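A sketch of this per-client reconstruction on the server side is given below; it assumes each client uploads a gradient keyed by parameter name, which is an implementation convenience rather than a requirement of the protocol:

```
import copy
import torch
import torch.nn.functional as F

def fedsgd_client_losses(global_model, client_grads, x, y, lr):
    """For each client k, form theta_t^k = theta_{t-1} - lr * g_k and
    evaluate its prediction loss on the target record (x, y)."""
    losses = []
    for grads in client_grads:                 # one {name: grad} dict per client
        local = copy.deepcopy(global_model)
        with torch.no_grad():
            for name, param in local.named_parameters():
                param -= lr * grads[name]      # one SGD step with the uploaded gradient
        local.eval()
        with torch.no_grad():
            loss = F.cross_entropy(local(x.unsqueeze(0)), y.view(1)).item()
        losses.append(loss)
    return losses                              # fed to the arg-min rule of Section 3.2
```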
Fig. 1: An overview of SIAs in FL. In each communication round, each client transmits the necessary information of updates to the central server for aggregation. The central server faithfully follows the defined FL protocol while inferring the source clients of the target data records from legitimately received information from the local clients.

In FedMD, the server can leverage knowledge distillation [43, 44] to achieve SIAs. Knowledge distillation aims to transfer the knowledge of larger models to smaller models so that the smaller models are as accurate as the larger ones. The larger models are referred to as teacher models, and the smaller models as student models. During knowledge distillation, the student model is trained to match the logits of the teacher model in order to learn its knowledge, which enables the smaller student model to achieve performance similar to that of its teacher model [45]. In FedMD, the server cannot directly use the updates from the clients to calculate the prediction loss of the target records because the updates are predictions on a public dataset. However, because the clients' predictions on the public dataset represent the knowledge of the local models, the server can leverage knowledge distillation to mount SIAs. Specifically, for each of the local clients, the server considers it as a teacher and leverages its predictions to train a student model that mimics the local model. Because the student models are expected to behave similarly to the local models, the server can use the prediction loss of the student models on the target records as an estimate of the prediction loss of the local models. Thus, based on the estimated prediction loss, the semi-honest server can mount SIAs in FedMD. Note that although the student models mimic the local models, there is a bias between the estimated prediction loss and the actual loss of the local model. However, if the estimated prediction loss of the client owning the target instance is distinguishable from the estimated losses of the other clients, the SIAs can still succeed, as we will show in the experiments.
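A sketch of this distillation step is shown below (one student per client); the student architecture, the optimizer, and the KL-divergence matching loss are assumptions, since the attack only requires that each student mimics the corresponding client's predictions on the public data:

```
import torch
import torch.nn.functional as F

def distill_student(student, public_batches, client_logits, epochs=5, lr=1e-3):
    """Train a student model to match one client's uploaded predictions on
    the public dataset; its loss on a target record then serves as an
    estimate of that client's local-model loss.

    public_batches: list of input batches from the public dataset.
    client_logits:  the client's uploaded logits, aligned batch-by-batch.
    """
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    student.train()
    for _ in range(epochs):
        for x, teacher_logits in zip(public_batches, client_logits):
            optimizer.zero_grad()
            # Match the soft predictions of the (teacher) client model.
            loss = F.kl_div(F.log_softmax(student(x), dim=1),
                            F.softmax(teacher_logits, dim=1),
                            reduction="batchmean")
            loss.backward()
            optimizer.step()
    return student
```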
Based on the analysis above, we propose three FL frameworks, FedSGD-SIA, FedAvg-SIA, and FedMD-SIA, that allow an honest-but-curious server to conduct SIAs without deviating from the normal FedSGD, FedAvg, or FedMD protocols. Algorithm 1, Algorithm 2, and Algorithm 3 describe FedSGD-SIA, FedAvg-SIA, and FedMD-SIA, respectively. Each algorithm consists of two steps, i.e., _Server executes_ and _ClientUpdate_. In each algorithm, we assume there are \(K\) clients and we take a target record \(\mathbf{z}\) as an example to show how to implement SIAs.
```
1: Server executes
2: initialize \(\theta_{0}\)  \(\triangleright\) initialize weights of the global model
3: for each round \(t=1\) to \(T\) do
4:   for each client \(k\) do
5:     \(\mathbf{g}_{k}\leftarrow\) ClientUpdate(\(\theta_{t-1}\))  \(\triangleright\) gradient uploaded by client \(k\) at round \(t\)
6:     \(\theta_{t}^{k}\leftarrow\theta_{t-1}-\eta\mathbf{g}_{k}\)  \(\triangleright\) updated local model of client \(k\)
7:     Compute \(\ell(\theta_{t}^{k},\mathbf{z})\)  \(\triangleright\) calculate the local prediction loss on \(\mathbf{z}\)
8:   end for
9:   \(i\leftarrow\arg\min(\ell(\theta_{t}^{1},\mathbf{z}),\cdots,\ell(\theta_{t}^{K},\mathbf{z}))\)  \(\triangleright\) identify the source client
10:  \(\theta_{t}\leftarrow\theta_{t-1}-\eta\sum_{k}\frac{n^{(k)}}{n}\mathbf{g}_{k}\)  \(\triangleright\) update the global model
11: end for
12: ClientUpdate(\(\theta\))  \(\triangleright\) run on client \(k\)
13:   \(\mathbf{g}\leftarrow\nabla\ell(\theta,D_{k})\)  \(\triangleright\) average gradient on the local data \(D_{k}\)
14:   return \(\mathbf{g}\)  \(\triangleright\) return the gradient to the central server
```
**Algorithm 1** FedSGD-SIA. The \(K\) clients are indexed by \(k\); \(\eta\) represents the learning rate; \(\mathbf{z}\) represents a target record.
```
1: Server executes
2: initialize \(\theta_{0}\)  \(\triangleright\) initialize weights of the global model
3: for each round \(t=1\) to \(T\) do
4:   for each client \(k\) do
5:     \(\theta_{t}^{k}\leftarrow\) ClientUpdate(\(\theta_{t-1}\))  \(\triangleright\) local model weights of client \(k\) at round \(t\)
6:     Compute \(\ell(\theta_{t}^{k},\mathbf{z})\)  \(\triangleright\) calculate the local prediction loss on \(\mathbf{z}\)
7:   end for
8:   \(i\leftarrow\arg\min(\ell(\theta_{t}^{1},\mathbf{z}),\cdots,\ell(\theta_{t}^{K},\mathbf{z}))\)  \(\triangleright\) identify the source client
9:   \(\theta_{t}\leftarrow\sum_{k}\frac{n^{(k)}}{n}\theta_{t}^{k}\)  \(\triangleright\) update the global model
10: end for
11: ClientUpdate(\(\theta\))  \(\triangleright\) run on client \(k\)
12:   \(\mathcal{B}\leftarrow\) (split \(D_{k}\) into multiple batches of size \(B\))
13:   for each local epoch \(i\) from 1 to \(E\) do
14:     for batch \(b\in\mathcal{B}\) do
15:       \(\theta\leftarrow\theta-\eta\nabla\ell(b;\theta)\)  \(\triangleright\) mini-batch gradient descent
16:     end for
17:   end for
18:   return \(\theta\)  \(\triangleright\) return the local model to the central server
```
**Algorithm 2** FedAvg-SIA. The \(K\) clients are indexed by \(k\); \(B\) represents the local mini-batch size; \(E\) represents the number of local epochs; \(\eta\) represents the learning rate; \(\mathbf{z}\) represents a target record.
**Algorithm 3** FedMD-SIA. The \(K\) clients are indexed by \(k\); \(\mathbf{z}\) represents a target record.
In each algorithm, the server calculates the prediction loss of each local model on the target record and identifies the client owning the target instance as the one with the smallest loss.

**FedMD-SIA:** i) **Server executes:** The server faithfully follows the FedMD protocol while implementing SIAs (Lines 12, 13, and 16). In each communication round, the server receives the predictions of the public dataset from each client (Lines 9-11). Then, for each client, the server trains a student model on the public dataset to imitate the local model (Line 12). The server calculates each student model's prediction loss on \(\mathbf{z}\) and obtains the source \(i\) by finding which student model has the smallest prediction loss on \(\mathbf{z}\) (Lines 13 and 16). Last, the server aggregates the predictions from each client for the next round of updating (Line 17). ii) **ClientUpdate:** First, each client trains the local model on the soft-labeled public dataset to approach the consensus on the public dataset (Lines 19-21 for Digest). Then, each client trains the local model on the private dataset (Lines 22-25 for Revisit). Last, each client computes the predictions on the public dataset and sends the predictions to the server (Line 26).
**Complexity analysis of SIAs.** The proposed SIAs leverage the prediction loss of local models to infer the source client of a target record. The computational complexity of SIAs mainly depends on two factors: one is the evaluation of the prediction loss of the local models, and the other is the identification of the client having the smallest prediction loss. In FedAvg-SIA, because the server directly leverages the uploaded local models to evaluate the prediction loss, the computation cost is determined by the model size. The time complexity of SIAs in FedAvg-SIA with respect to the number of clients \(K\) is \(O(K)\). Because the server receives the gradients from each client to update the global model in FedSGD-SIA, the local models can be obtained by the server for SIAs with no extra computation cost. Thus, the computational complexity of FedSGD-SIA is the same as that of FedAvg-SIA. In FedMD-SIA, SIAs are more complicated than in FedSGD-SIA and FedAvg-SIA because the server requires training a student model to mimic the behavior of each local model. The computation cost of training the student models is mainly determined by the student model size and the size of the public dataset. Then, the server can implement SIAs based on the student models, with a time complexity of \(O(K)\) with respect to the number of clients. In our experiments using PyTorch with a single NVIDIA Tesla P40 GPU, the execution time of SIAs in FedSGD-SIA and FedAvg-SIA is within seconds, and in FedMD-SIA it is within one minute.
## 4 Experiments
In this section, we empirically evaluate FedSGD-SIA, FedAvg-SIA, and FedMD-SIA. We first introduce the datasets and model architectures used in the experiments. Then, we demonstrate the effectiveness of SIAs in the three FL frameworks and conduct a detailed ablation study to identify how different factors in FL influence the performance of SIAs. In the end, we discuss why our SIAs work.
### _Datasets and Model Architectures_
**Datasets.** The datasets used in our experiments are reported in Table I. We create an IID _Synthetic_ dataset to allow us to precisely manipulate data heterogeneity. We generate _Synthetic_ as described in previous works [29, 46]. The remaining datasets are widely used for simulating and evaluating the privacy leakage of machine learning models [11, 15, 16, 47]. For MNIST and CIFAR-10, the training and testing splits are already provided upon download. For the remaining datasets, we use the _train_test_split_ function from the _sklearn_ toolkit to randomly select 80% of the samples as the training records (before partitioning client data), and the remaining 20% of the records are used as the testing records. We use these four datasets to evaluate FedSGD-SIA and FedAvg-SIA. Because FedMD-SIA requires a public dataset to share the knowledge of the clients, we evaluate it on paired datasets. Specifically, we select the pairs MNIST/FEMNIST and CIFAR-10/CIFAR-100. For the MNIST/FEMNIST pair, we select MNIST as the public data and a subset of the Federated Extended MNIST (FEMNIST) [5] as the private data. For the CIFAR-10/CIFAR-100 pair, CIFAR-10 is selected as the public dataset, and the private dataset is a subset of CIFAR-100.
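For instance, the 80/20 split described above can be produced as follows; the arrays here are random placeholders standing in for one of the datasets without a predefined split.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder features and labels standing in for a dataset such as Purchase.
X = np.random.rand(1000, 600)
y = np.random.randint(0, 100, size=1000)

# 80% of the records become training data (later partitioned across clients), 20% testing data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```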
**Models.** We use deep neural networks (DNNs) as the global models in FL for the classification tasks. Specifically, we use convolutional neural networks (CNNs) for the image datasets MNIST and CIFAR-10. For the binary datasets, Synthetic and Purchase, we use fully-connected (FC) neural networks. The architectures of the CNN and FC classifiers used in FedSGD-SIA and FedAvg-SIA are described in Table II. Because FedMD-SIA is designed for FL with heterogeneous architectures, we adopt the setting of [22], which proposed the FedMD framework, where clients use different CNN architectures. For a detailed description of the clients' CNN architectures in FedMD-SIA, readers can refer to [22]. The student model used in FedMD-SIA for imitating the local models is listed in Table II. Note that the DNN architectures used in this paper do not necessarily achieve the best performance for the considered datasets in FL, because our goal is not to attack the best DNN architecture. In this paper, we aim to show that FL is vulnerable to SIAs.
### _Evaluation Metrics, Baseline, and Parameter Settings_
**Metric.** We use _attack success rate_ (ASR) to evaluate SIAs, which is the most commonly used metric to evaluate the performance of a given attack approach. ASR [19] is defined as the fraction of the number of attacks that successfully identify the source of the target instances:
\[\text{ASR}=\frac{\#\,\text{successful attacks}}{\#\,\text{all attacks}}.\]
For example, consider there are 100 target instances for identifying their source clients, and SIAs successfully identify 60 instances' source clients. Then, the ASR is calculated as 60%.
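In code, ASR is a simple fraction over the attacked target records; the list names below are illustrative.

```python
def attack_success_rate(predicted_sources, true_sources):
    """Fraction of target records whose source client is identified correctly."""
    correct = sum(p == t for p, t in zip(predicted_sources, true_sources))
    return correct / len(true_sources)

# Example: 60 of 100 target records are attributed to the correct client -> ASR = 0.6
print(attack_success_rate([0] * 60 + [1] * 40, [0] * 100))  # 0.6
```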
**Baseline.** Because we are the first to propose SIAs, there is no existing work to compare against. Thus, to demonstrate the effectiveness of SIAs, we consider a trivial attack, _random guessing_, as the baseline of SIAs. Random guessing selects a client uniformly at random as the source of the target training record. The ASR of random guessing is \(\frac{1}{K}\), where \(K\) is the number of clients.
**Hyper-parameter setting.** We use the SGD optimizer for all the models with a learning rate of \(0.01\). We assume there are \(10\) clients in the FL system. In each trial of the experiment, \(100\) training records from each client are selected as the target records for SIAs. For all the learning tasks, we set the total number of communication rounds to 20, which is enough for the global model to converge. During each communication round, we record the ASR of SIAs. We report the mean ASR over five different random seeds.
### _Factors in Source Inference Attacks_
**Data distribution \(\alpha\).** In FL, the training data across the clients are usually non-IID [48]. This means the local data from one client cannot be considered as samples drawn from the overall data distribution. To simulate heterogeneous client data distributions, we leverage a Dirichlet distribution as in previous works [31, 50, 51, 52, 49] to create disjoint non-IID training data for each client. The degree of non-IID is controlled by the value \(\alpha\) (\(\alpha>0\)) of the Dirichlet distribution. For example, \(\alpha=100\) imitates almost identical local data distributions. With a smaller \(\alpha\), each client is more likely to have training records from only one class. Fig. 2 visualizes how the training records of CIFAR-10 are distributed among 10 clients for different \(\alpha\) values.
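A common way to realize this partition, sketched below, draws per-class client proportions from a Dirichlet distribution and splits each class's sample indices accordingly; the variable names are illustrative, and this is one standard implementation of the scheme rather than the exact script used for our experiments.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=1.0, seed=0):
    """Split sample indices into non-IID client shards using a Dirichlet distribution."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx_c = np.where(labels == c)[0]
        rng.shuffle(idx_c)
        # Proportion of class c given to each client; a smaller alpha yields a more skewed split.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for k, shard in enumerate(np.split(idx_c, cut_points)):
            client_indices[k].extend(shard.tolist())
    return client_indices

# Example: partition 50,000 CIFAR-10-like labels with a moderately non-IID alpha = 1.
labels = np.random.randint(0, 10, size=50_000)
shards = dirichlet_partition(labels, num_clients=10, alpha=1.0)
```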
**Number of local epochs \(E\).** In FedAvg-SIA and FedMD-SIA, each client can update their model for several epochs, and then send the model weights or predictions to the server. Many recent studies [53, 54, 55, 56] have shown that DNN models can easily memorize their training data. Intuitively, if a client trains the local model with more epochs, the local updated model should better remember the information of the local dataset and be more confident to classify its training records. Accordingly, the prediction loss of a target record calculated from the update of the source client will be much smaller than that calculated from the updates of other clients, which will be beneficial for SIAs.
### _Effectiveness of Source Inference Attacks_
To demonstrate the effectiveness of SIAs, we evaluate FedSGD-SIA, FedAvg-SIA, and FedMD-SIA under common settings: we divide the training data among the clients in the three FL frameworks with \(\alpha=1\), resulting in a moderately non-IID distribution. We set \(E=5\) for FedAvg-SIA, and \(E_{1}=1,E_{2}=5\) for FedMD-SIA, which are the common settings for FedAvg [3] and FedMD [22].
Fig. 3 shows the ASR of FedSGD-SIA, FedAvg-SIA, and FedMD-SIA during each communication round. We can observe from Fig. 3 that the ASR on all datasets is larger than the baseline of 10% in each communication round,
demonstrating the effectiveness of SIAs. This indicates that a semi-honest server can steal significant source information about the training data records via our proposed SIAs in any communication round during federated training. We can also see that the ASR of each FL framework differs across datasets. This is because the local models overfit their local training data to different degrees. We will discuss this phenomenon in more detail in subsection 4.6.
**Takeaway 1** Our proposed SIAs are effective on the FedSGD, FedAvg, and FedMD frameworks.
### _Ablation Study_
We conduct a detailed ablation study on FedSGD-SIA, FedAvg-SIA, and FedMD-SIA to learn how the data distribution and the number of local epochs influence the effectiveness of SIAs. We record and report the highest ASR during the federated training process. Table III shows the ASR under different levels of non-IID data distribution and different numbers of local epochs.
**Evaluation of non-IID data distribution.** As we can see in Table III, the ASR of all three FL frameworks on all datasets increases when the degree of non-IID data distribution across clients increases. For example, the ASR of FedSGD-SIA on CIFAR-10 increases from 17.6% to 58.3% when the degree of non-IID increases from \(\alpha=100\) to \(\alpha=0.1\). This is because the more non-IID the local data is, the more different the updated local models will be, which benefits SIAs. For example, for the CIFAR-10 task, a client is highly likely to have training records from only one class (_e.g.,_ deer) when the degree of non-IID data distribution is high. It is expected that such a client's local model will perform well in predicting its own records of deer images but perform badly in predicting the other clients' records, such as dogs and trucks, because the local model has never seen such images during the update process. Thus, the local model will have very small prediction losses on its own records and large losses on the other records. The distinguishable prediction losses across the different local models of the clients enable the server to easily implement SIAs to infer where a training record comes from.
**Evaluation of local epochs.** As shown in most scenarios in Table III, increasing \(E\) from 1 to 10 makes the ASR of FedAvg-SIA and FedMD-SIA increase. For example, the ASR of FedAvg-SIA on CIFAR-10 increases from 16.6% to 51.1% when the number of local epochs increases from 1 to 10 with \(\alpha\) set to 100. This is because the more epochs the clients update for, the more confident the local model becomes in predicting its training data. However, we also observe that there are scenarios, _e.g.,_ FedAvg-SIA on MNIST when \(\alpha=1\) and \(\alpha=0.1\), where increasing \(E\) does not lead to an increase of ASR but to a decrease. We suspect this is because training the local model with more epochs not only makes it more confident in predicting its training records but also helps it to generalize better to other clients' data. In this case, the prediction losses across different local models become less distinguishable, which leads to a decrease in ASR. We will further explain how \(E\) influences ASR from the perspective of overfitting in the following section.

Fig. 3: Attack success rate (ASR) of the three proposed FL frameworks during each communication round. In each plot, the \(x\) axis represents the number of communication rounds, and the \(y\) axis represents ASR. Each line is the mean ASR of 5 runs of experiments with shaded regions corresponding to 95% confidence interval. (a) ASR of FedSGD-SIA. (b) ASR of FedAvg-SIA. (c) ASR of FedMD-SIA. As we can see, the ASR on all datasets is larger than the baseline, i.e., randomly guessing, demonstrating the effectiveness of SIAs.

Fig. 2: Illustration of the number of samples per class allocated to each client at different Dirichlet distribution alpha values, for CIFAR-10 with 10 clients. The x-axis represents the IDs of the clients. The y-axis represents the class labels of CIFAR-10. The dot size reflects the number of samples allocated to the clients. As we can see, the data distribution across the clients becomes more and more non-IID as \(\alpha\) decreases.
**Takeaway 2**
* Higher data heterogeneity among local clients results in more effective SIAs.
* Larger local epochs in clients usually lead to more effective SIAs.
### _Why Source Inference Attacks Work_
Machine learning models, especially DNNs, are usually overparameterized and highly complex. On the one hand, this enables DNNs to learn patterns effectively from the training data; on the other hand, such models can have unnecessarily high capacity to memorize the details of the training data [53, 54, 56], which can lead to the overfitting of ML models. An overfitted ML model cannot generalize well to its test data, i.e., the model performs much better on its training data than on test data. In FL, the local dataset of a client often has a limited number of records and fails to represent the whole data distribution, which exacerbates the overfitting of local models. An overfitted local model of a client is expected to have a much smaller prediction loss on a training record of its own than on a training record of other clients. The distinguishable prediction losses of the local models of the different clients enable our proposed SIAs to work effectively. Moreover, the more overfitted the local models are, the more effective the SIAs will be.
We use generalization error [57] to measure the overfitting level of the local models, which is a widely used metric to quantify overfitting in existing works [19]. The generalization error of an ML model is defined as the absolute difference between the training accuracy and the testing accuracy of the model. Here, we first calculate the generalization error for each of the local models by using its local training dataset and the global testing dataset. Then, we calculate the averaged generalization error of the local models to reflect the overfitting level of the FL system.
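Concretely, the overfitting level of the FL system can be computed as below; the accuracy lists are illustrative placeholders for the per-client measurements.

```python
def overfitting_level(train_accuracies, test_accuracies):
    """Average generalization error |train acc - test acc| over the K local models."""
    gaps = [abs(tr - te) for tr, te in zip(train_accuracies, test_accuracies)]
    return sum(gaps) / len(gaps)

# Example: local training accuracy vs. accuracy on the global test set for three clients.
train_acc = [0.99, 0.97, 0.98]
test_acc = [0.71, 0.69, 0.73]
print(overfitting_level(train_acc, test_acc))  # approximately 0.27
```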
**The impact of overfitting on ASR.** Fig. 4 shows the impact of overfitting on the performance of SIAs. As we can see, for FedSGD-SIA, FedAvg-SIA, and FedMD-SIA, the honest-but-curious server can obtain a higher attack success rate when the FL system is more overfitted to the training dataset. This observation validates our analysis of the relationship between the overfitting level and the performance of SIAs.
**The impact of non-IID distribution on overfitting.** Fig. 5 examines the effect of different levels of non-IID data distribution on the overfitting level of the FL system. As we can see, for all the FL systems, increasing the level of non-IID data across the clients (i.e., decreasing \(\alpha\) from \(100\) to \(1\) and \(0.1\)) inevitably increases the overfitting level. This is because the more non-IID the data is, the less representative the local data will be, which makes the local model less able to generalize well beyond its own training data.
**The impact of local epochs on overfitting.** Fig. 6 shows the impact of the number of local epochs on the overfitting level of FedAvg-SIA and FedMD-SIA. We can see that in FedMD-SIA, increasing the number of local epochs does not increase the overfitting level of the FL system too much, which results in only a slight increase of the attack success rate, as we can observe in Table III. In FedAvg-SIA, we observe that there is a large increase of the overfitting level on CIFAR-10 when we increase \(E\) from \(1\) to \(10\), while there is no such trend for the other datasets. This observation explains why the ASR on CIFAR-10 is more sensitive to the number of local epochs than the other datasets in Table III.
In summary, the overfitting of the local models directly contributes to the success of SIAs, while overfitting is mainly caused by the non-IID data distribution across the clients. In FL, if the local dataset fails to represent the overall data distribution, the local model can easily overfit to the local training dataset, as depicted in Fig. 5. Because an overfitted local model cannot generalize well to data beyond its local training records, it will behave differently on its own training data than on other clients' data, which guarantees the feasibility of SIAs. Many recent works [29, 31, 46, 48, 58] have shown that the non-IID data distribution has brought important challenges to FL such as model convergence guarantees. In this paper, we demonstrate another challenge of non-IID from the perspective of privacy: the leakage of source privacy about the training data.

TABLE III: Understanding the impact of data distribution and local epochs in SIAs. For each value of the parameter, we report the averaged attack success rate (%) over 5 different random seeds with its standard deviation. Because FedSGD-SIA transmits the gradient calculated on the current global model (equivalent to \(E=1\)) and does not involve training local models for several epochs, we leave the \(E=5\) and \(E=10\) columns of the ASR of FedSGD-SIA blank.

| Framework | Dataset | \(\alpha=100\), \(E=1\) | \(\alpha=100\), \(E=5\) | \(\alpha=100\), \(E=10\) | \(\alpha=1\), \(E=1\) | \(\alpha=1\), \(E=5\) | \(\alpha=1\), \(E=10\) | \(\alpha=0.1\), \(E=1\) | \(\alpha=0.1\), \(E=5\) | \(\alpha=0.1\), \(E=10\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FedSGD-SIA | Synthetic | 19.1±0.4 | — | — | 30.9±2.6 | — | — | 55.9±3.2 | — | — |
| FedSGD-SIA | Purchase | 15.7±0.4 | — | — | 30.6±1.0 | — | — | 63.9±1.6 | — | — |
| FedSGD-SIA | MNIST | 12.7±0.3 | — | — | 23.1±0.5 | — | — | 50.2±3.7 | — | — |
| FedSGD-SIA | CIFAR-10 | 17.6±0.3 | — | — | 28.5±0.7 | — | — | 58.3±5.2 | — | — |
| FedAvg-SIA | Synthetic | 19.2±0.5 | 19.7±0.5 | 18.9±0.6 | 28.5±1.4 | 28.1±1.8 | 28.5±1.2 | 53.6±1.3 | 50.8±2.6 | 51.7±3.3 |
| FedAvg-SIA | Purchase | 15.6±0.5 | 21.9±0.3 | 28.2±0.5 | 31.4±0.8 | 32.6±0.7 | 34.8±0.5 | 67.1±0.4 | 64.4±0.8 | 66.2±0.9 |
| FedAvg-SIA | MNIST | 12.1±0.1 | 12.8±0.3 | 13.5±0.3 | 23.7±0.9 | 23.3±0.4 | 22.1±0.7 | 58.4±4.9 | 53.1±1.1 | 23.43±2.6 |
| FedAvg-SIA | CIFAR-10 | 16.6±0.2 | 47.8±0.4 | 51.1±1.1 | 26.3±0.7 | 49.9±0.7 | 55.8±0.7 | 56.8±3.9 | 60.9±3.9 | 62.5±1.9 |
| FedMD-SIA | FEMNIST | 15.1±0.4 | 17.2±0.5 | 17.5±0.6 | 23.2±1.1 | 25.4±1.3 | 24.5±1.1 | 42.5±3.3 | 40.6±1.0 | 46.7±2.1 |
| FedMD-SIA | CIFAR-100 | 18.2±0.2 | 18.8±0.8 | 20.5±0.4 | 23.9±1.1 | 28.1±1.9 | 25.5±0.6 | 40.3±1.7 | 43.5±3.8 | 45.6±3.9 |
There are several promising applications of SIAs. First, as a newly proposed inference attack, SIAs can be leveraged to evaluate the privacy-preserving ability of an FL framework. A strong privacy-aware FL framework should guarantee the source privacy of the clients. Second, because no defense methods have been specifically proposed for mitigating SIAs, they can inspire researchers to propose novel defense methods or design new FL frameworks for protecting clients' source privacy. Last, we have identified that the overfitting of an FL framework is the main success factor for SIAs. Thus, SIAs can be used to help evaluate and understand the overfitting phenomenon of FL frameworks. For example, SIAs can be used to determine for how many local epochs a local model should be updated to avoid overfitting.
Fig. 4: Understanding the impact of overfitting in SIAs. In each plot, the \(x\) axis represents the overfitting level of the FL framework, and the \(y\) axis represents ASR. As we can see, for all the datasets in all the three FL frameworks, the more overfitted the local models are, the higher the ASR will be.
Fig. 5: Understanding the impact of the non-IID data distribution on the overfitting level of the FL framework. In the three FL frameworks, the number of local epochs is set to \(1\). In each plot, the \(x\) axis represents the inverse of \(\alpha\), and the \(y\) axis represents the overfitting level.
Fig. 6: Understanding the impact of the number of local epochs on the overfitting level of the FL framework. In the three FL frameworks, the data distribution value \(\alpha\) is set to 100. In each plot, the \(x\) axis represents the number of local epochs, and the \(y\) axis represents the overfitting level.
**Takeaway 3**
* The overfitting of local models is the main reason why SIAs can succeed.
* Higher data heterogeneity results in a higher overfitting level of local models.
## 5 Discussion and Future Work
### _Defenses against SIAs_
**Differential privacy.** As a probabilistic privacy mechanism, differential privacy (DP) [59] provides a mathematically provable privacy guarantee. Recently, many works [60, 61, 62, 63, 64] suggest DP can be applied to ML models to defend against inference attacks such as membership inference attacks and property inference attacks. When an ML model is trained with differential privacy guarantees, the learned model is expected not to learn or remember any specific data details. By definition, if the local models in FL are differentially private, the success probability of SIAs should be reduced because the communication updates calculated from such models should contain less information about the local training datasets. We discuss and evaluate whether DP can effectively mitigate SIAs.
In the experiments, we set \(\alpha=1\) for all three FL frameworks and set \(E=10\) for FedAvg-SIA and FedMD-SIA. Here, we consider record-level DP implemented with DP-SGD [65], which is the first and the most widely used differentially private training method. We train differentially private local models in each communication round before sending the updates to the central server. In our experiments, we fine-tune the noise of DP to obtain an attack success rate slightly larger than the baseline of random guessing (i.e., 10% because of 10 clients) while reporting the training and testing accuracy of the FL system.
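For reference, the following is a minimal sketch of one record-level DP-SGD step (per-sample gradient clipping followed by Gaussian noise), in the spirit of [65]; this simplified loop is for illustration only, is far less efficient than dedicated DP libraries, and is not our exact implementation.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.01, clip_norm=1.0, noise_multiplier=1.0):
    """One simplified DP-SGD step: clip each record's gradient, then add Gaussian noise."""
    clipped_sum = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach() for p in model.parameters()]
        # Bound each record's influence by clipping its gradient to norm clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        scale = min(1.0, clip_norm / (total_norm + 1e-6))
        for acc, g in zip(clipped_sum, grads):
            acc += g * scale
    # Add calibrated Gaussian noise and apply the averaged noisy gradient.
    with torch.no_grad():
        for p, acc in zip(model.parameters(), clipped_sum):
            noise = torch.normal(0.0, noise_multiplier * clip_norm, size=acc.shape)
            p -= lr * (acc + noise) / len(batch_x)
```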
Table IV compares the FL systems with DP to the FL systems without DP in terms of the model utility (measured by the testing accuracy) and the attack success rate of SIAs. As we can see, differential privacy indeed reduces the ASR of FedSGD-SIA, FedAvg-SIA, and FedMD-SIA on all the datasets. However, there is a significant model utility reduction in the FL systems. In some cases, the global model does not converge, e.g., FedAvg-SIA trained on MNIST and CIFAR-10. We are aware that there exists client-level DP [61, 66] in FL where the local clients' privacy is preserved while the utility of the FL system is maintained. However, client-level DP requires a very large number of clients (e.g., thousands in [66]) to achieve the desired privacy-utility guarantee. Applying them in our experiments is not expected to achieve satisfactory results as there are only ten clients in the FL system.
**Regularization techniques.** Regularization techniques such as L2-regularization and Dropout [67] are leveraged to defend against membership inference attacks on machine learning models [19]. Intuitively, regularization techniques can help local models to reduce their overfitting degrees to the local training datasets. Thus, it is promising that we can leverage regularization techniques to defend against SIAs in FL. However, regularization techniques are not perfect as not all of them can achieve a satisfactory trade-off between privacy and model utility: strong regularization can also significantly reduce model performance [16]. We leave the investigation of finding appropriate regularization techniques as a defense against SIAs for our future work.
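As a simple illustration of this direction, L2 regularization and dropout can be switched on for a local model as follows; the toy architecture is illustrative and is not the one listed in Table II.

```python
import torch
import torch.nn as nn

# Toy fully-connected local model with dropout between layers.
local_model = nn.Sequential(
    nn.Linear(600, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zeroes activations during local training
    nn.Linear(128, 100),
)

# weight_decay adds an L2 penalty on the parameters to each SGD update.
optimizer = torch.optim.SGD(local_model.parameters(), lr=0.01, weight_decay=1e-4)
```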
### _Limitations and Future Work for SIAs_
**Number of clients.** In our experiments, to demonstrate the effectiveness of SIAs, we evaluate the FL systems with ten clients, which is a small number. This FL scenario corresponds to the federated-to-business mode, where a handful of organizations jointly build a useful model, e.g., a small number of banks collaboratively building a fraud detection model [21, 24]. There also exists a federated-to-customer mode where thousands or even millions of clients are involved in FL systems, e.g., thousands of mobile devices jointly training a model for next-word prediction [62]. Intuitively, SIAs are much more challenging under the federated-to-customer mode because finding the source of a target record among thousands of clients is difficult. Under this setting, the performance of our proposed SIAs is expected to drop, while the attacks still perform better than random guessing as long as a local client's model behaves differently on its local training data than the other clients' models do (see discussion in Section 3.3).
TABLE IV: The evaluation of source inference defenses via differential privacy. In FedSGD-SIA and FedAvg-SIA, the training and testing accuracy is the accuracy of the trained global model. In FedMD-SIA, because there is no global model, the training and testing accuracy is the averaged accuracy of the local models. The privacy budget is calculated every communication round.

| Framework | Dataset | Training acc. (%), no DP | Testing acc. (%), no DP | ASR (%), no DP | Training acc. (%), DP | Testing acc. (%), DP | ASR (%), DP | Privacy budget |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FedSGD-SIA | Synthetic | 84.3±0.5 | 83.9±0.4 | 30.9±2.6 | 34.9±1.3 | 34.7±1.1 | 22.5±0.7 | 1.8 |
| FedSGD-SIA | Purchase | 87.8±0.1 | 85.9±0.2 | 30.6±1.0 | 14.6±0.3 | 14.2±0.1 | 23.2±0.6 | 2.1 |
| FedSGD-SIA | MNIST | 99.3±0.1 | 99.1±0.1 | 23.1±0.5 | 35.7±0.7 | 32.1±0.5 | 15.8±1.9 | 1.9 |
| FedSGD-SIA | CIFAR-10 | 67.8±0.1 | 64.5±0.2 | 28.5±0.7 | 13.8±0.5 | 13.7±0.3 | 22.1±1.2 | 1.8 |
| FedAvg-SIA | Synthetic | 94.2±0.4 | 93.8±0.4 | 28.5±1.2 | 51.7±1.5 | 51.2±0.9 | 21.4±1.3 | 1.9 |
| FedAvg-SIA | Purchase | 94.2±0.1 | 89.5±0.1 | 34.8±0.5 | 33.6±1.8 | 32.1±1.3 | 24.8±1.2 | 6.8 |
| FedAvg-SIA | MNIST | 99.7±0.1 | 99.2±0.1 | 22.1±0.7 | 10.9±0.1 | 9.9±0.1 | 11.8±0.5 | 3.2 |
| FedAvg-SIA | CIFAR-10 | 96.3±0.6 | 70.1±0.5 | 55.8±0.7 | 10.1±0.1 | 9.9±0.1 | 19.5±0.6 | 2.8 |
| FedMD-SIA | FEMNIST | 99.7±0.1 | 84.1±1.5 | 24.5±1.1 | 16.7±0.1 | 16.5±0.1 | 16.3±0.3 | 4.8 |
| FedMD-SIA | CIFAR-100 | 99.5±0.1 | 72.1±0.9 | 25.5±0.6 | 16.6±0.1 | 16.2±0.1 | 17.1±0.5 | 9.6 |

**Model size.** As this is the first paper that investigates source privacy leakage in FL, we only evaluate the FL system with relatively small deep learning models, such as CNN models with only two convolutional layers, but have not evaluated large models. This is because the purpose of our experiments is to show that SIAs are effective in the FL frameworks of FedSGD, FedAvg, and FedMD, not to attack the best deep models. However, it would be interesting to investigate SIAs in FL with large models of millions of parameters such as VGG [68] and ResNet [69]. Models with large sizes on one side have strong learning ability while on the other side have the unnecessary capability to memorize their training data [70], which might be beneficial for SIAs. We leave the investigation of how model size influences the performance of SIAs for future work.
**Best inference round.** Our proposed SIAs enable the server to infer the source of a target record in every communication round. As we can see in Fig. 3, the attack success rate of our proposed SIAs varies across communication rounds on all the datasets. This can be because the local models overfit their local training datasets to different degrees in different communication rounds. This suggests two promising directions for future work: i) it would be interesting to find the communication round that achieves the best attack performance; ii) since the server can save the updates of the clients in every communication round, it would be interesting to investigate the possibility of a new attack approach that leverages all the information collected during training to achieve better performance.
## 6 Related work
### _Privacy Attacks in FL_
We summarize the existing privacy attacks and compare them with our proposed source inference attacks in FL in Table V. We describe each of the existing attacks in more detail as follows.
**Data reconstruction attacks.** This type of attack aims to reconstruct an individual client's class-wise training data records that represent a whole class, or instance-wise training data records. Class-wise data reconstruction attacks are usually achieved by GAN techniques [72] that leverage the model updates as discriminators to generate generic representations of class-wise data records [11, 14]. Instance-wise data reconstruction attacks leverage optimization techniques to iteratively optimize a dummy data record with a label so that the gradients on the dummy record are close to the gradients uploaded from a client [10, 73]. Instance-wise data reconstruction attacks were initially limited to the setting where the mini-batch size is one [10, 73]; recent works [40, 74] have relaxed this assumption and demonstrated their effectiveness with large mini-batch sizes.
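For intuition, instance-wise reconstruction of this kind can be sketched as the optimization loop below, which adjusts a dummy record and soft label so that their gradient matches a client's uploaded gradient; this is a schematic rendition with illustrative names, not the exact procedure of [10, 73].

```python
import torch
import torch.nn.functional as F

def reconstruct_record(model, observed_grads, input_shape, num_classes, iterations=30, lr=0.1):
    """Optimize a dummy (x, y) so that its gradient matches the gradient uploaded by a client."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)  # soft label, optimized jointly
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    def closure():
        optimizer.zero_grad()
        pred = model(dummy_x)
        loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between the dummy gradient and the observed client gradient.
        grad_diff = sum(((dg - og) ** 2).sum() for dg, og in zip(dummy_grads, observed_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(iterations):
        optimizer.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```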
**Property inference attacks.** This type of attack aims to infer the properties of clients' training data, e.g., inferring when a particular person first appears in a client's photos or when the client begins to visit a certain type of location [8]. Property inference attacks are usually achieved by training a binary classifier that takes as input the parameters of the model and outputs whether the model's training data has the target property or not [15, 75].
**Feature inference attacks.** This type of attack targets vertical FL (see Section 2.1 for a detailed introduction of vertical FL) and aims to infer the feature values of clients [71]. Feature inference attacks solve a set of mathematical equations for simple models such as logistic regression, because the prediction output and the input containing the target features can be formulated as a system of equations. For complex models such as neural networks, a generative regression network is trained through an optimization process and is leveraged to compute the target features. Recently, the work [76] demonstrated that feature inference attacks can also reconstruct the private input of centrally trained models based on the explanation values of their predictions. The work [76] shows that an adversary having an auxiliary dataset and black-box access to the model can successfully attack popular Machine Learning as a Service platforms such as Google Cloud and IBM aix360.
**Preference profiling attacks.** This type of attack aims to infer the data preference of clients, e.g., in the FL application of recommender systems, a malicious server infers which items a client likes or dislikes [13]. The attack intuition is that the sensitivity of gradients during FL training reflects the sample size of a class: a small gradient change indicates that the sample size of a class is large. The statistical heterogeneity in FL amplifies the gradient sensitivity across classes and thus benefits preference profiling attacks.
**Membership inference attacks.** Membership inference attacks (MIAs), which are the inference attacks most closely related to the proposed SIAs, aim to identify the training data of a model [19]. MIAs are usually achieved by training a binary classifier [16] or by thresholding a metric such as prediction confidence [38] or prediction entropy [77] to decide whether a data record is training data or not. Recent studies [80, 81, 82, 78, 83, 79] have shown that many different types of models, such as semantic segmentation models, text-to-image generation models, graph neural networks, multi-modal models, and recommender systems, are vulnerable to MIAs. Currently, most studies of MIAs [16, 84, 85, 86, 87] are conducted under centralized settings where one dataset containing all training instances is used for training the models.
In the context of FL, MIAs were first investigated in FedSGD, where a malicious client or server can infer whether or not a specific location profile was used for federated training based on the observation of the non-zero gradients of the embedding layer of the global model [8]. Then, MIAs were investigated in FedAvg, where a malicious client can passively infer the membership privacy of the FL system or actively craft her updated model parameters to steal more membership privacy [9]. However, because the purpose of MIAs is to distinguish the training data from the testing data of the FL system, the existing research on MIAs does not explore the source client of the training data. In this paper, we propose SIAs to fill this gap and demonstrate that SIAs are effective in different FL frameworks.
### _Privacy Defenses in FL_
**Cryptography-based defense.** Cryptography-based defense leverages homomorphic encryption (HE) or secure multiparty computation (SMC) techniques to defend against privacy leakage in FL. In HE-based FL systems [88, 89, 90, 91, 92, 93, 94, 95], each local client encrypts their gradients or model parameters using a public key |
2309.09403 | Selecting which Dense Retriever to use for Zero-Shot Search | We propose the new problem of choosing which dense retrieval model to use
when searching on a new collection for which no labels are available, i.e. in a
zero-shot setting. Many dense retrieval models are readily available. Each
model however is characterized by very differing search effectiveness -- not
just on the test portion of the datasets in which the dense representations
have been learned but, importantly, also across different datasets for which
data was not used to learn the dense representations. This is because dense
retrievers typically require training on a large amount of labeled data to
achieve satisfactory search effectiveness in a specific dataset or domain.
Moreover, effectiveness gains obtained by dense retrievers on datasets for
which they are able to observe labels during training, do not necessarily
generalise to datasets that have not been observed during training. This is
however a hard problem: through empirical experimentation we show that methods
inspired by recent work in unsupervised performance evaluation with the
presence of domain shift in the area of computer vision and machine learning
are not effective for choosing highly performing dense retrievers in our setup.
The availability of reliable methods for the selection of dense retrieval
models in zero-shot settings that do not require the collection of labels for
evaluation would allow to streamline the widespread adoption of dense
retrieval. This is therefore an important new problem we believe the
information retrieval community should consider. Implementation of methods,
along with raw result files and analysis scripts are made publicly available at
https://www.github.com/anonymized. | Ekaterina Khramtsova, Shengyao Zhuang, Mahsa Baktashmotlagh, Xi Wang, Guido Zuccon | 2023-09-18T00:01:24Z | http://arxiv.org/abs/2309.09403v1 | # Selecting which Dense Retriever to use for Zero-Shot Search
###### Abstract.
We propose the new problem of choosing which dense retrieval model to use when searching on a new collection for which no labels are available, i.e. in a zero-shot setting. Many dense retrieval models are readily available. Each model however is characterized by very differing search effectiveness - not just on the test portion of the datasets in which the dense representations have been learned but, importantly, also across different datasets for which data was not used to learn the dense representations. This is because dense retrievers typically require training on a large amount of labeled data to achieve satisfactory search effectiveness in a specific dataset or domain. Moreover, effectiveness gains obtained by dense retrievers on datasets for which they are able to observe labels during training, do not necessarily generalise to datasets that have not been observed during training.
This is however a hard problem: through empirical experimentation we show that methods inspired by recent work in unsupervised performance evaluation with the presence of domain shift in the area of computer vision and machine learning are not effective for choosing highly performing dense retrievers in our setup. The availability of reliable methods for the selection of dense retrieval models in zero-shot settings that do not require the collection of labels for evaluation would allow to streamline the widespread adoption of dense retrieval. This is therefore an important new problem we believe the information retrieval community should consider. Implementation of methods, along with raw result files and analysis scripts are made publicly available at [https://www.github.com/~anonymity](https://www.github.com/~anonymity)
Model selection, Dense retrievers, Zero Shot Model Evaluation
Since no single dense retriever performs best across all domains and datasets, we propose to develop methods that search engine practitioners can reliably use to select, from an existing pool of DR checkpoints, the model that performs best on their application's domain or dataset. This line of research is vital for the practical deployment of DRs in real-world scenarios.
In this paper, we introduce, formalize and operationalize the problem of model selection in IR, which involves selecting the best DR model from a pool of trained models, or alternatively ranking the pool based on a criterion that is indicative of their effectiveness on the unlabeled target dataset. We demonstrate the soundness of the problem by illustrating that the effectiveness of a model varies across target datasets due to differences in data quality, the nature of domain shift, and the evaluation metric employed, as each metric prioritizes different aspects of model effectiveness. In other words, a model might perform well on some datasets, while failing on others. We further provide evidence that the performance of a model on the source dataset is not always a reliable indicator of its effectiveness on target datasets, highlighting the necessity for an alternative criterion specifically tailored to each target dataset. An example of such a criterion is the evaluation of the model's level of uncertainty, with the hypothesis that models with lower uncertainty are more likely to produce accurate predictions.
The goal of this paper is thus to provide an extensive overview of the current state-of-the-art methods for model selection, which have been mostly developed in the fields of computer vision and machine learning. We outline their limitations and summarize the main challenges in their adaptation to IR tasks. We further provide an outline of our findings along with associated reflections and challenges, and propose possible future directions for exploration. It is worth noting that while the problem of unsupervised model selection has been explored in the general machine learning field, there has been little to no exploration of this problem in the context of information retrieval.
## 2. Related Work
In this section, we give an overview of current state-of-the-art methods for unsupervised model selection and their relation to IR. In particular, we target the problem of selecting the best dense retriever model for the datasets of the BEIR collection, which contains a wide range of corpora from diverse text retrieval tasks and domains for zero-shot evaluation.
### Unsupervised Model Selection
The problem of unsupervised model selection is actively researched in the context of general deep learning tasks. It involves choosing the best model for an unlabeled target dataset that belongs to a distribution different from that of the original training dataset.
Corneanu et al. (2017) analyze various topological statistics, calculated from the correlation matrix of the network's activations, and show that they are correlated with model generalizability.
You et al. (2017) introduce Deep Embedded Validation (DEV), a method to estimate the target risk by assessing the likelihood of each validation sample belonging to the target domain. The authors show that small DEV values correspond to high model generalizability and can therefore be used to select the best-performing model for the dataset at hand.
The Soft Neighborhood Density (SND) method was proposed by Saito et al. (2017) to evaluate the generalizability of classifier networks by measuring the quality of clustering via the relative distance of neighboring samples. The intuition is that the desired model should encode target samples of the same class into dense neighborhoods. Another method of cluster evaluation, called Topological Uncertainty, was proposed by Lacombe et al. (2017). For each class (or label), the authors create a prototype of the topological activation footprint and evaluate how different the embedding of each target sample is from its closest prototype. Our retrieval task is drastically different from classification: in our case, a well-performing model does not necessarily generate well-separated clusters in its embedding space.
The Neural Persistence measure, proposed by Rieck et al. (2017), compares networks based on the topological features of their last fully connected layers. Despite showing a correlation with model generalizability, a Neural Persistence-based ranking will provide the same result for all datasets, as it is purely weight-based and dataset-independent. Additionally, for a fair comparison, Neural Persistence requires the same layer dimensionality across models, which is not the case in our experimental setup.
### Unsupervised OOD Performance Evaluation
The problem of unsupervised Out-Of-Distribution (OOD) performance evaluation involves estimating the performance of a model in the presence of domain shift between the training and test data. Note that the experimental setup of performance estimation is slightly different from model selection: here, only one model is given, and the task is to predict its performance on several target datasets.
The first group of methods analyzes the quality of model predictions through a range of metrics calculated from the network output. For example, Miller et al. (2017) show a positive correlation between in-domain and out-of-domain performances (Method 1 in our analysis); Hendrycks et al. (2017) use the statistics of softmax outputs to identify misclassifications; Guillory et al. (2018) evaluate the performance of models based on the uncertainty of their predictions, approximated via the difference of confidences between the base dataset predictions and the target dataset predictions. Garg et al. (2017) estimate an average confidence threshold (ATC) from the validation data, below which the prediction of the network is considered to be incorrect.
The second group of methods evaluates the quality of the model embeddings. For example, Deng et al. (2018) show a negative correlation between recognition accuracy and the Frechet distance between the network activations of the source and target datasets, while Jiang et al. (2018) predict the generalization gap using the margin distribution, i.e., the distances of training points to the decision boundary. The margin distribution requires class predictions, whereas the Frechet distance only relies on hidden embeddings, which is why we use the Frechet distance in our analysis (see Method 3).
The last group of methods for performance evaluation focuses on analyzing the behaviour of the network. For example, Bridal et al. (2017) analyze the training dynamics of the model; in particular, the authors measure the generalization of a model via the topological properties of its optimization trajectories. Instead of monitoring the network during training, the authors of (Deng et al., 2017; Wang et al., 2018; Wang et al., 2018) train the same network several times and measure the disagreement between the resulting trained models on the target dataset. Another work by Deng et al. (Deng et al., 2018) shows how the performance on an auxiliary task can be used to estimate the performance on the main task. Finally, Khramtsova et al. (Khramtsova et al., 2019) propose to analyze how the network changes when it is fine-tuned on the target dataset with an unsupervised loss (e.g., entropy minimization). The authors show that the degree of change in the network weights is negatively correlated with the effectiveness of the network on the target dataset.
### Zero-shot Dense Retrieval
Bi-encoder dense retrievers, initialized with pre-trained language models (PLMs) and fine-tuned with supervised data, have shown remarkable effectiveness in in-domain information retrieval tasks (Wang et al., 2018; Wang et al., 2018). However, recent studies on the BEIR benchmark dataset (Wang et al., 2018; Wang et al., 2018) have revealed that DRs suffer from the domain shift problem: the effectiveness of a DR varies depending on the corpus domain, which can be different from the domain of the data it was trained on.
The most straightforward domain adaptation method for PLM-based DRs is to first continue pre-training the PLMs on the target domain corpus before fine-tuning them with labeled data (Deng et al., 2018). However, it is often difficult, and costly, to obtain sufficient in-domain labeled training data for IR tasks. Furthermore, it is often the case that DRs must be deployed in a zero-shot setting on a target corpus, which presents additional challenges for domain adaptation. One simple solution for this issue is to pre-train on the target domain data using unsupervised training tasks, followed by fine-tuning on the source domain data where there is sufficient labeled training data (Wang et al., 2018).
Another line of work aimed at improving the zero-shot ability of dense retrievers (DRs) is through _query generation_. This technique has been widely used for enhancing retrieval effectiveness on in-domain data (Wang et al., 2018), and recent studies have shown its effectiveness in the BEIR zero-shot dataset as well (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). In one approach, a query generator trained on general domain data is employed to synthesize domain-targeted queries for a target corpus, on which a dense retriever is trained from scratch (Wang et al., 2018). This method has also yielded promising results and has been utilized as a post-training method for adapting powerful MS MARCO retrievers to target domains (Wang et al., 2018). More recently, Wang et al. (Wang et al., 2018) proposed GPL, which further improves the DR's domain shift ability by combining a query generator with pseudo labeling from a cross-encoder.
However, our paper's aim is fundamentally different from the works mentioned in this section, where the focus is on enhancing the effectiveness of DRs on domain transfer tasks. Instead, we focus _on selecting the best-performing DR model_ on the target corpus from a pool of existing models.
### Query Performance Prediction
In this paper we are interested in predicting the relative effectiveness of a DR model among a pool of available DR models. This problem shares several common aspects with the well-known information retrieval task of query performance prediction (QPP) (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). The goal of query performance prediction is to determine which queries, among a pool of queries, a target system will perform best/worst on. The task we examine, instead, is to determine which dense retriever model performs best/worst among a pool of dense retrievers. Nevertheless, evaluation practices in QPP can be adapted to our task; for example, the measurement of the Kendall Tau correlation between two query rankings, popular in QPP works, can be adapted to our setting, and we in fact do this in Section 4.3. We further note that other measures developed to evaluate QPPs, e.g., the \(\tau_{AP}\) measure (Wang et al., 2018), can also be adapted to our task, but we leave this to future work.
## 3. Model Selection for Dense Retrievers
In this section we first introduce the formalization of the problem of model selection for dense retrievers. We then discuss the challenges of model selection task, specific to DRs. We finally provide a description of the methods we investigate for model selection and how they are adapted within the context of DRs.
### Problem Formulation
Consider a set of rankers \(\mathcal{R}=\{R_{1},R_{2},\ldots R_{n}\}\) and the source dataset \(\mathcal{S}\), which was used for training the rankers. Each ranker represents a dense retriever (a dual- or bi-encoder model) \(R=\{E_{Q},E_{D}\}\) that separately encodes the query and the document into dense vectors. The relevance score between the query and the document is computed using a similarity function. In this work, we consider dense retrievers that use either cosine similarity or the dot product:
\[dot(q,d)=E_{Q}(q)^{T}E_{D}(d);\quad\text{cos\_sim}(q,d)=\frac{E_{Q}(q)^{T}E_{D}(d)}{||E_{Q}(q)||\,||E_{D}(d)||}\]
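To make the two scoring functions concrete, below is a minimal numpy sketch; the embedding dimensionality and array shapes are illustrative and not taken from any specific DR in this paper.

```python
import numpy as np

def dot_score(q_emb: np.ndarray, d_embs: np.ndarray) -> np.ndarray:
    """Dot-product relevance scores between one query and a batch of documents."""
    return d_embs @ q_emb                      # shape: (num_docs,)

def cos_score(q_emb: np.ndarray, d_embs: np.ndarray) -> np.ndarray:
    """Cosine-similarity relevance scores between one query and a batch of documents."""
    q = q_emb / np.linalg.norm(q_emb)
    d = d_embs / np.linalg.norm(d_embs, axis=1, keepdims=True)
    return d @ q

rng = np.random.default_rng(0)                 # toy 768-dim embeddings
query_emb, doc_embs = rng.normal(size=768), rng.normal(size=(5, 768))
print(dot_score(query_emb, doc_embs), cos_score(query_emb, doc_embs))
```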
Consider further a target dataset \(\mathcal{T}\) for which no labeled data was observed during the training of the rankers in \(\mathcal{R}\). The target dataset \(\mathcal{T}\) consists of a set of queries \(Q_{\mathcal{T}}\) and a set of documents \(D_{\mathcal{T}}\), i.e. \(\mathcal{T}=\{Q_{\mathcal{T}},D_{\mathcal{T}}\}\). In addition, a set of relevance judgments (also called labels) is denoted as \(\mathcal{J}_{\mathcal{Q}_{\mathcal{T}},D_{\mathcal{T}}}\); below we use \(\mathcal{J}\) as a shorthand for \(\mathcal{J}_{\mathcal{Q}_{\mathcal{T}},D_{\mathcal{T}}}\) when this cannot be confused. The elements of this set are pairs that establish a relationship between the elements in \(Q_{\mathcal{T}}\) and those in \(D_{\mathcal{T}}\). Practitioners can construct an ordering of the rankers in \(\mathcal{R}\), or a ranking of models, according to a target evaluation measure \(\mathcal{E}\) by applying each ranker to the dataset and use the relevance judgments to compute the evaluation measure. Then, the rankers would be ordered in decreasing value of \(\mathcal{E}\), forming the true ranking of rankers \(\mathcal{O}(\mathcal{R},\mathcal{T},\mathcal{E},\mathcal{J})\).
Definition 3.1 (Zero-shot Model Selection): The problem of model selection consists of predicting the ranking \(\mathcal{O}(\mathcal{R},\mathcal{T},\mathcal{E},\mathcal{J})\)_without accessing the relevance judgments \(\mathcal{J}_{\mathcal{Q}_{\mathcal{T}},D_{\mathcal{T}}}\)_. This is equivalent to producing a ranking \(\hat{\mathcal{O}}(\mathcal{R},\mathcal{T},\mathcal{E})\) of the rankers in \(\mathcal{R}\) for dataset \(\mathcal{T}\) and evaluation measure \(\mathcal{E}\), such that \(\hat{\mathcal{O}}(\mathcal{R},\mathcal{T},\mathcal{E})\) corresponds to the true ranking \(\mathcal{O}(\mathcal{R},\mathcal{T},\mathcal{E},\mathcal{J})\). Note that \(\hat{\mathcal{O}}(\mathcal{R},\mathcal{T},\mathcal{E})\) does not include the relevance assessments \(\mathcal{J}_{\mathcal{Q}_{\mathcal{T}},D_{\mathcal{T}}}\) as input.
Given the problem of zero-shot model selection, we are interested in devising a method \(\mathcal{M}(\mathcal{R},\mathcal{T},\mathcal{E})\) that produces the ranking \(\hat{\mathcal{O}}(\mathcal{R},\mathcal{T},\mathcal{E})\).
**Definition 3.2** (Methods for Zero-shot Model Selection).: A method \(\mathcal{M}(\mathcal{R},\mathcal{T},\mathcal{E})\) for zero-shot model selection takes as input the set of rankers \(\mathcal{R}\), the target dataset \(\mathcal{T}\) and the evaluation measure \(\mathcal{E}\) and produces the predicted ranking \(\hat{O}(\mathcal{R},\mathcal{T},\mathcal{E})\).
The effectiveness of the model selection method can be measured by its correlation with the ground truth performance across various target datasets. A high correlation, denoted as \(corr(\hat{O}(\mathcal{R},\mathcal{T},\mathcal{E}),\mathcal{O}(\mathcal{R}, \mathcal{T},\mathcal{E},\mathcal{J}))\), indicates that the method can closely approximate the ground truth ranking without relying on target labels \(\mathcal{J}\).
Note that the problem of zero-shot model selection can be relaxed to allow the prediction methods to access a subset of the relevance judgments \(\mathcal{J}_{Q_{\mathcal{T}},D_{\mathcal{T}}}\). For example, this problem could be revisited in the context of _few-shot retrieval_, where the model selection method (and possibly but not necessarily also the rankers in \(\mathcal{R}\)) have access to a subset \(F\subseteq\mathcal{J}_{Q_{\mathcal{T}},D_{\mathcal{T}}}\) of all relevance judgments available for the dataset. We do not consider this setting in this paper.
### Challenges of model selection in IR
Before proceeding with the formalization of the chosen model selection methods, it is important to emphasize the challenges associated with adapting existing approaches to the information retrieval setting. We demonstrate that the specificity of the field prevents a straightforward adaptation of many existing methods. This reinforces our reasoning for choosing the final set of methods for comparison.
1. **Variance in network structure: dense retrievers differ in the types and numbers of network layers.** The machine learning research community lacks a consensus on how a set of models for model selection should be constructed. As a result, researchers employ different techniques to generate a pool of models, such as varying training hyper-parameters (Zhu et al., 2017), taking different subsets of data for training (Zhu et al., 2017), changing initialization strategies (Zhu et al., 2017), or combining these techniques (Zhu et al., 2017). In all of these scenarios, however, the models' structure and depth remain the same, which significantly simplifies the task of model selection. In this paper, we instead focus on a more realistic scenario for model selection in IR. In particular, instead of artificially synthesizing a collection of models and deriving a measure for early stopping (Zhu et al., 2017) or for hyper-parameter search (Zhu et al., 2017), we aim to select the best model for the target dataset among the latest available state-of-the-art dense retrievers. As an example, consider the scenario of a hospital that has a small, highly specialized dataset in hand. With the appearance of a new model that outperforms the other known baselines on the source dataset (e.g., MS MARCO), the question arises: will the new model perform similarly well on the hospital's local unlabeled dataset? In practice, we use models from the zero-shot learning leaderboard of the BEIR benchmark (Zhu et al., 2017), which includes models with different architectures (BERT-based, DistilBERT-based, RoBERTa-based). Therefore, in order to meet the realistic demands for model selection, an essential additional constraint is that the model selection method must be insensitive to the architecture of the models.
2. **Variance in scoring function: depending on the training procedure, document relevance is estimated by different scoring functions, e.g., cosine similarity or dot product.** While classifier models produce a unified output that represents the probability of the input sample belonging to a certain class, dense retriever scores do not represent probabilities. This difference prevents the direct adaptation of several methods (Kang et al., 2016; Zhu et al., 2017; Zhu et al., 2017; Zhu et al., 2017) to dense retrieval model selection. Comparing models directly by their scores is also not possible due to the variation in score range across scoring functions (for instance, cosine similarity with scores ranging from -1 to 1 vs. the unbounded dot product). At the same time, substituting one scoring function for the other is inadvisable, as it changes the resulting ranking. To mitigate this, we propose to use cosine similarity solely for model selection, while preserving the original score function for retrieval (see Method 2 in the following section for more details). The final score-related challenge is that score distributions not only differ among models, but also vary across queries within a single model. This implies that, unlike in ATC (Baktas et al., 2018), there is no unified threshold across queries below which a document is more likely to be irrelevant.
3. **Large number of both network parameters and source dataset samples, which makes re-training impractical.** Several existing approaches require access to the training process, either for generating training trajectories from several consecutive training checkpoints (Baktas et al., 2018), or for evaluating the disagreement in judgments between similar models (Baktas et al., 2018; Zhu et al., 2017; Zhu et al., 2017). However, retraining dense retrievers requires significant computational resources and expertise. In addition, if the code was not made publicly available, it can be difficult to reproduce the exact training conditions, including hyper-parameters and data pre-processing steps. Therefore, in this work, we focus on methods that do not require re-training on the source dataset.
### Method 1: In-Domain Performance
The most naive approach is to rank models by their in-domain performance, i.e., their performance on the source dataset. Let \(\varepsilon_{S}(R)\) be an evaluation measure of a ranker \(R\) on the source dataset \(S\). Then, the In-Domain Performance-based model ranking is defined as follows:
\[Method_{1}=argmax_{R\in\mathcal{R}}(\varepsilon_{S}(R)) \tag{1}\]
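A minimal sketch of this baseline is shown below; the model names and nDCG values are placeholders, not the figures reported in Table 1.

```python
# Rank models by their in-domain (source) effectiveness; the scores are placeholders.
in_domain_ndcg = {"model_a": 0.41, "model_b": 0.45, "model_c": 0.39}

ranking = sorted(in_domain_ndcg, key=in_domain_ndcg.get, reverse=True)
best_model = ranking[0]
print(ranking, best_model)     # ['model_b', 'model_a', 'model_c'] model_b
```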
### Method 2: Query Similarity
The Query Similarity Score evaluates a ranker by the proximity of its source and target query representations. The premise behind this score is that the most generalizable ranker should produce similar embeddings for both source and target queries. Let the target query relevance score \(tqr\) be the cosine similarity between a target query and its closest counterpart among the source queries:

\[tqr(q_{t},Q_{S})=\max_{q_{s}\in Q_{S}}\left(\operatorname{cos\_sim}(q_{t},q_{s})\right)\]
Then the Query Similarity Score of a ranker \(R\) is the average target query relevance score across target queries:
\[Q\_sim(Q_{S},Q_{T})=\frac{\sum_{q_{t}\in Q_{T}}tqr(q_{t},Q_{S})}{|Q_{T}|} \tag{2}\]
Finally, the Query Similarity Score-based model ranker is defined as follows:
\[Method_{2}=argmax_{R\in\mathcal{R}}(Q\_sim_{R}) \tag{3}\]
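A minimal sketch of the Query Similarity Score, assuming the source and target queries have already been encoded by the ranker under evaluation (the random embeddings below merely stand in for those encodings):

```python
import numpy as np

def query_similarity_score(target_q: np.ndarray, source_q: np.ndarray) -> float:
    """Average, over target queries, of the cosine similarity to the closest source query."""
    t = target_q / np.linalg.norm(target_q, axis=1, keepdims=True)
    s = source_q / np.linalg.norm(source_q, axis=1, keepdims=True)
    sims = t @ s.T                             # (n_target, n_source) cosine similarities
    return float(sims.max(axis=1).mean())      # mean of the per-target-query tqr values

rng = np.random.default_rng(0)                 # random vectors standing in for encoded queries
score = query_similarity_score(rng.normal(size=(100, 64)), rng.normal(size=(500, 64)))
# Models are then ordered by decreasing score, as in Eq. (3).
print(score)
```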
### Method 3: Corpus Similarity
Corpus Similarity Score evaluates the similarity of corpus representations. The intuition is that the ranker should generalize well if the source and the target corpora are encoded in a similar way.
Using the per-sample cosine similarity from the previous section proves impractical for corpus comparison due to the high dimensionality of the corpus representations. Instead, we propose to employ the Frechet distance (Peng and Zheng, 2017) between the network activations of the source and target corpora. Deng and Zheng (2018) show that a small Frechet distance between network activations is correlated with high recognition accuracy of convolutional classifiers.
Let \(\mu_{s},\mu_{t},\Sigma_{S},\Sigma_{T}\) be the mean and the covariance matrix of the network representations, produced by the source and the target corpora:
\[\mu_{s}=\frac{\sum_{d_{s}\in D_{S}}E_{D}(d_{s})}{|D_{S}|};\quad\mu_{t}=\frac{\sum_{d_{t}\in D_{T}}E_{D}(d_{t})}{|D_{T}|}\] \[\Sigma_{S}=\frac{\sum_{d_{s}\in D_{S}}[E_{D}(d_{s})-\mu_{s}][E_{D}(d_{s})-\mu_{s}]^{T}}{|D_{S}|-1}\] \[\Sigma_{T}=\frac{\sum_{d_{t}\in D_{T}}[E_{D}(d_{t})-\mu_{t}][E_{D}(d_{t})-\mu_{t}]^{T}}{|D_{T}|-1} \tag{4}\]
Then, Frechet distance between source and target corpus representations is defined as following:
\[FD(D_{S},D_{T})=||\mu_{s}-\mu_{t}||^{2}+Tr\left(\Sigma_{S}+\Sigma_{T}-2\sqrt{\Sigma_{S}\Sigma_{T}}\right) \tag{5}\]
Finally, the Corpus Similarity-based model ranker is defined as follows:
\[Method_{3}=argmin_{R\in\mathcal{R}}(FD_{R}) \tag{6}\]
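A minimal sketch of the Corpus Similarity computation; the embedding dimensionality is kept deliberately small so that the matrix square root stays cheap, and the random inputs stand in for encoded documents:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(src_emb: np.ndarray, tgt_emb: np.ndarray) -> float:
    """Frechet distance between Gaussian fits of source and target document embeddings."""
    mu_s, mu_t = src_emb.mean(axis=0), tgt_emb.mean(axis=0)
    cov_s, cov_t = np.cov(src_emb, rowvar=False), np.cov(tgt_emb, rowvar=False)
    covmean = sqrtm(cov_s @ cov_t)
    if np.iscomplexobj(covmean):               # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_s - mu_t) ** 2) + np.trace(cov_s + cov_t - 2.0 * covmean))

rng = np.random.default_rng(0)                 # random vectors standing in for encoded documents
fd = frechet_distance(rng.normal(size=(1000, 32)), rng.normal(loc=0.5, size=(800, 32)))
# Models are then ordered by increasing distance, as in Eq. (6).
print(fd)
```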
### Method 4: Extracted Document Similarity
A particularity of the IR setup in comparison to other ML fields is the large dataset size: target corpora, and in particular source corpora, often consist of millions of documents. For that reason, comparing full corpus representations might lead to over-generalization and, as a consequence, result in an inaccurate metric. Instead, we propose to adapt Method 3 (Corpus Similarity) to the IR task by comparing only the subsets of documents retrieved by each target query from the source corpus and from the target corpus.
Let \(l_{S}^{q}=[d_{1},d_{2},\ldots,d_{k}],d_{i}\in S\) be a list of \(k\) documents, extracted by a ranker with a query \(q\) from a source dataset \(S\). In addition, let \(l_{T}^{q}=[d_{1}^{\prime},d_{2}^{\prime},\ldots,d_{k}^{\prime}],d_{i}^{\prime}\in T\) be a list of \(k\) documents, extracted with the same query from a target dataset \(T\). Then, for a target query \(q\in Q_{T}\), Frechet distance between extracted subsets of source and target document representations is denoted as \(FD(l_{S}^{q},l_{T}^{q})\).
We average the resulting distances across target queries to obtain an adapted version of Frechet distance:
\[FD\_IR(D_{S},D_{T})=\frac{\sum_{q\in Q^{T}}FD(l_{S}^{q},l_{T}^{q})}{|Q^{T}|} \tag{7}\]
Finally, the Extracted Document Similarity-based model ranker is defined as follows:
\[Method_{4}=argmin_{R\in\mathcal{R}}(FD\_IR) \tag{8}\]
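A minimal sketch of the per-query variant; the Frechet distance helper repeats the one from the previous sketch so the snippet stays self-contained, and all shapes are illustrative:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(a: np.ndarray, b: np.ndarray) -> float:
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    cov_a, cov_b = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    covmean = covmean.real if np.iscomplexobj(covmean) else covmean
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * covmean))

def extracted_doc_similarity(src_topk: list, tgt_topk: list) -> float:
    """Average the per-query Frechet distance between the top-k documents a query
    retrieves from the source corpus and from the target corpus (Eq. 7)."""
    return float(np.mean([frechet_distance(s, t) for s, t in zip(src_topk, tgt_topk)]))

rng = np.random.default_rng(0)                 # 50 queries, top-100 documents each, 16-dim embeddings
src_lists = [rng.normal(size=(100, 16)) for _ in range(50)]
tgt_lists = [rng.normal(loc=0.3, size=(100, 16)) for _ in range(50)]
print(extracted_doc_similarity(src_lists, tgt_lists))   # argmin over models selects the best
```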
### Method 5: Binary Entropy
In the context of classification tasks, entropy is commonly employed to measure the level of uncertainty in a model's predictions. However, in our scenario, it cannot be directly applied due to the possibility of multiple relevant documents per query. To adapt the entropy-based method to our task, we proceed as follows. First, we transform the scores produced by the dense retriever into probabilities indicating the relevance of each document to the query. Next, we compute the binary entropy for each document in the ranking. Finally, we obtain a query-level uncertainty by summing the binary entropies over all the documents in the ranking.
Let \(s_{T}^{q}=[s_{1},s_{2},\ldots,s_{k}]\) be the list of scores obtained by a ranker with the query \(q\) from the target dataset \(T\); the scores can be either dot products or cosine similarities, depending on the model. We first need to normalize the scores. For that, we mine negative samples as follows: \(\hat{s}_{T}^{q}=[\tilde{s}_{1},\tilde{s}_{2},\ldots,\tilde{s}_{100}]\), with \(\hat{s}_{T}^{q}\cap s_{T}^{q}=\emptyset\). Then we approximate the minimum score for the ranking and normalize the scores to obtain probabilities: \(min_{T}^{q}=\min(\hat{s}_{T}^{q})\); \(p(d_{i})=(s_{i}-min_{T}^{q})/(s_{1}-min_{T}^{q})\).
Assume that \(p(d_{i})\) is the probability that the document at rank \(i\) is relevant, while \(p(d_{1},\ldots,d_{k})\) is a probability distribution over the relevance associated with a document list of length \(k\) (i.e., probability-at-rank). Then, the binary entropy is:
\[H(p(d_{i}))=-p(d_{i})\log p(d_{i})-(1-p(d_{i}))\log(1-p(d_{i})) \tag{9}\]
The entropy probability-at-rank distribution for query \(q\) is therefore defined as follows:
\[H^{q}(p(d_{1},\ldots,d_{k}))=\sum_{i=1}^{k}H(p_{i})\] \[H_{R}=\sum_{q\in Q^{T}}H^{q}/|Q^{T}| \tag{10}\]
Finally, the Binary Entropy-based model ranker is defined as follows:
\[Method_{5}=argmin_{R\in\mathcal{R}}(H_{R}) \tag{11}\]
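A minimal sketch of the per-query Binary Entropy computation, assuming the retrieved scores and the mined negative scores are already available; the clipping and the small constant in the normalization are illustrative safeguards, not prescribed by the method description above:

```python
import numpy as np

def binary_entropy_score(scores: np.ndarray, neg_scores: np.ndarray) -> float:
    """Sum of per-document binary entropies for one ranked list.

    `scores` holds the top-k relevance scores in descending order; `neg_scores` holds
    scores of mined negatives, used only to approximate the minimum score."""
    s_min, s_max = neg_scores.min(), scores[0]
    p = np.clip((scores - s_min) / (s_max - s_min + 1e-12), 1e-6, 1 - 1e-6)
    return float(np.sum(-p * np.log(p) - (1 - p) * np.log(1 - p)))

rng = np.random.default_rng(0)                 # one query: 100 retrieved documents, 100 negatives
scores = np.sort(rng.uniform(0.4, 0.9, size=100))[::-1]
negatives = rng.uniform(0.0, 0.3, size=100)
per_query_entropy = binary_entropy_score(scores, negatives)
# Average this value over all target queries and rank models by it (argmin = best).
print(per_query_entropy)
```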
### Method 6: Query Alteration
This method aims to rank models by assessing their ability to handle changes in the queries of the target dataset. The intuition is that if a model is robust enough to the queries and documents in the target domain, then the relevance scores should remain stable even after introducing noise to the queries. To evaluate model robustness, we follow a query perturbation-based approach. Specifically, we issue the original queries to each model on the target domain dataset and record the top k retrieved documents of each query. We then randomly replace some tokens in the original queries with [MASK] tokens, with the proportion of replaced tokens controlled by a hyper-parameter p that ranges from 0 to 1. Subsequently, we
recompute the relevance scores between the altered queries and the originally retrieved documents. By comparing the scores of the original and perturbed queries, we obtain an indication of how well the model can handle variations and noise in query inputs. We quantify the model robustness by calculating the standard deviation of the score changes, with lower standard deviation implying better robustness.
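A rough sketch of the Query Alteration measure; `encode_query` is a placeholder for the model's query encoder (the toy hash-based encoder below is purely illustrative), and the masking probability `p` mirrors the hyper-parameter described above:

```python
import numpy as np

def query_alteration_std(queries, doc_embs, encode_query, p=0.1, seed=0):
    """Std of relevance-score changes after masking a fraction p of query tokens.

    `queries` is a list of token lists, `doc_embs[i]` holds the embeddings of the top-k
    documents originally retrieved for query i, and `encode_query` is the model's query
    encoder (a placeholder here). Lower output means a more robust, preferable model."""
    rng = np.random.default_rng(seed)
    deltas = []
    for tokens, docs in zip(queries, doc_embs):
        original = docs @ encode_query(tokens)
        masked = [t if rng.random() > p else "[MASK]" for t in tokens]
        deltas.append(original - docs @ encode_query(masked))
    return float(np.std(np.concatenate(deltas)))

def toy_encoder(tokens, dim=64):               # stand-in encoder: hashes tokens to a random vector
    rng = np.random.default_rng(abs(hash(" ".join(tokens))) % (2 ** 32))
    return rng.normal(size=dim)

queries = [["zero", "shot", "retrieval"], ["domain", "shift", "in", "search"]]
doc_embs = [np.random.default_rng(i).normal(size=(10, 64)) for i in range(2)]
print(query_alteration_std(queries, doc_embs, toy_encoder, p=0.2))
```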
## 4. Experimental Setup
### Datasets
To evaluate the ability of the predictors to select the best-performing model for a target dataset, we employ the BEIR evaluation benchmark (Krishnan et al., 2017), which has been commonly used to evaluate the generalisation and zero-shot effectiveness of retrieval models (Krishnan et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). BEIR contains a heterogeneous set of 18 datasets drawn from 9 text retrieval tasks and domains. We refer the reader to the original work describing BEIR for an in-depth analysis of each dataset and baseline results (Krishnan et al., 2017). However, as the complete set of BEIR subsets is not publicly available, we follow the standard practice of selecting a representative subset of the available subsets for our experiments. Specifically, we select 10 of the 18 available subsets. The selected subsets are consistent with prior work on the benchmark, allowing for comparability across studies. We highlight that a key finding of that work was that dense retrievers exhibit poor generalization capabilities.
### Dense Retrieval Models
To evaluate our proposed dense retrieval model selection methods, we require a diverse pool of DR models that exhibit varying performance across different BEIR subsets. To this end, we consider DR models that have been submitted to the BEIR leaderboard 1, including Contriever (Krishnan et al., 2017), DistilBERT (TAS-B) (Krishnan et al., 2017), ANCE (Wang et al., 2019), DistilBERT v3 (Wang et al., 2019), DistilBERT (dot) (Wang et al., 2019), MiniLM-L-12 (Wang et al., 2019), MiniLM-L-6 (Wang et al., 2019), and DistilBERT v2 (Wang et al., 2019). Additionally, we include some advanced DR models that are not on the leaderboard, such as CoCondenser (Krishnan et al., 2017) and SimLM (Wang et al., 2019), as well as a DR model that we trained ourselves (BERT-DPR) using the Tevatron DR training toolkit (Krishnan et al., 2017). We obtain all model checkpoints from the Hugging Face model hub, as uploaded by the original authors. Notably, all DR models considered in this study are trained on MS MARCO training data (Wang et al., 2019), and therefore perform zero-shot retrieval on BEIR datasets.
Footnote 1: [https://github.com/beir-cellar/beit/wiki/Leaderboard](https://github.com/beir-cellar/beit/wiki/Leaderboard)
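As an illustration of how such checkpoints can be loaded and queried, a sketch using the sentence-transformers library is shown below; the two Hugging Face identifiers are examples of public MS MARCO retrievers and are not necessarily the exact checkpoints evaluated in this paper:

```python
from sentence_transformers import SentenceTransformer, util

# Example public MS MARCO checkpoints on the Hugging Face hub (identifiers assumed).
models = {
    "tas-b": SentenceTransformer("sentence-transformers/msmarco-distilbert-base-tas-b"),
    "ance": SentenceTransformer("sentence-transformers/msmarco-roberta-base-ance-firstp"),
}

query = "how does domain shift affect dense retrieval?"
docs = ["Dense retrievers often degrade under domain shift.",
        "BEIR is a heterogeneous benchmark for zero-shot retrieval."]

for name, model in models.items():
    q_emb = model.encode(query, convert_to_tensor=True)
    d_emb = model.encode(docs, convert_to_tensor=True)
    print(name, util.dot_score(q_emb, d_emb))
```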
### Evaluation Measures for Dense Retrievers Selection
To evaluate the considered model selection methods, we first need the true model ranking on the target corpus to compare against the rankings generated by the model selection methods. To this end, for each of the considered datasets, we record the effectiveness of each dense retriever according to an evaluation measure \(\epsilon\). Specifically, we consider nDCG@10, which is the official evaluation measure of the BEIR leaderboard. We report this measure for the Dense Retrievers we consider in our main experiments in Table 1.
Having obtained the true nDCG@10 scores on the BEIR datasets, for each dataset we identify the best Dense Retriever model \(M\) according to the evaluation measure \(\epsilon\) among the set \(\mathcal{M}\) of dense retrievers we consider, i.e. \(M=argmax_{\hat{M}\in\mathcal{M}}(\epsilon(\hat{M}))\). For each prediction method \(\theta\) we then identify the model \(\hat{M}_{\theta}\) that has been predicted to be the most effective for that dataset. Then, we measure the loss incurred for the evaluation measure \(\epsilon\) when using \(\hat{M}_{\theta}\), i.e. the model predicted to be the best by the prediction method \(\theta\), in place of \(M\), i.e. the true best model. We define this loss as:
\[\Delta_{\epsilon}=\epsilon(M)-\epsilon(\hat{M}_{\theta}) \tag{12}\]
We also consider the relative loss produced when choosing \(\hat{M}_{\theta}\) in place of \(M\): this measure is interesting in that a search engine practitioner may tolerate selecting a sub-optimal model up to a certain percentage of loss.

\[\%\Delta_{\epsilon}=100*\frac{\epsilon(M)-\epsilon(\hat{M}_{\theta})}{\epsilon(M)} \tag{13}\]
In addition to the above loss-based evaluation method, we also consider the Kendall Tau correlation, which measures the similarity between the rankings generated by the model selection methods and the ranking by true nDCG values. Specifically, the Kendall Tau correlation measures the proportion of pairs of models that are ranked in the same order by both rankings. A perfect correlation between the two rankings results in a Kendall Tau score
\begin{table}
\begin{tabular}{l|c|c|c|c|c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{In-dom.} & \multicolumn{3}{c|}{Zero-shot} & \multicolumn{3}{c}{Zero-Shot Datasets} \\ & Rank & MS MARCO & Rank & Avg. & COVID & NF & NQ & Hotpot & FQA & ArguAna & DBPEDIA & SCIDOCS & SciFact & Quora \\ \hline Contriver & 4 & 0.407 & 1 & **0.496** & 0.596 & **0.329** & 0.498 & **0.638** & **0.329** & **0.446** & **0.413** & **0.165** & **0.677** & **0.865** \\ CoCondenser & 2 & 0.433 & 2 & **0.472** & **0.752** & 0.297 & 0.495 & 0.562 & 0.297 & 0.377 & 0.364 & 0.137 & 0.556 & 0.567 \\ DistilBERT (TAS-B) & 3 & 0.408 & 3 & 0.459 & 0.481 & 0.319 & 0.463 & 0.584 & 0.300 & 0.427 & 0.384 & 0.149 & 0.643 & 0.835 \\ SimLM & 1 & **0.458** & 4 & 0.443 & 0.527 & 0.318 & **0.502** & 0.568 & 0.297 & 0.376 & 0.351 & 0.137 & 0.559 & 0.796 \\ ANCE & 7 & 0.388 & 5 & 0.426 & 0.653 & 0.236 & 0.444 & 0.451 & 0.295 & 0.419 & 0.281 & 0.122 & 0.511 & 0.852 \\ DistilBERT v3 & 5 & 0.389 & 6 & 0.425 & 0.477 & 0.256 & 0.450 & 0.513 & 0.257 & 0.426 & 0.338 & 0.133 & 0.538 & 0.855 \\ DistilBERT (dot) & 6 & 0.389 & 7 & 0.418 & 0.633 & 0.269 & 0.442 & 0.477 & 0.253 & 0.329 & 0.315 & 0.114 & 0.515 & 0.833 \\ MiniLM-L-12 & 8 & 0.385 & 8 & 0.403 & 0.473 & 0.252 & 0.422 & 0.456 & 0.240 & 0.407 & 0.307 & 0.113 & 0.503 & 0.854 \\ BERT-DPR & 10 & 0.364 & 9 & 0.398 & 0.619 & 0.216 & 0.442 & 0.454 & 0.216 & 0.354 & 0.297 & 0.111 & 0.452 & 0.792 \\ MiniLM-L-6 & 9 & 0.379 & 10 & 0.395 & 0.479 & 0.255 & 0.394 & 0.448 & 0.231 & 0.394 & 0.292 & 0.116 & 0.495 & 0.845 \\ DistilBERT v2 & 11 & 0.336 & 11 & 0.364 & 0.242 & 0.258 & 0.362 & 0.427 & 0.212 & 0.429 & 0.288 & 0.133 & 0.495 & 0.815 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Effectiveness of the considered Dense Retrievers on the in-domain dataset (MS MARCO, from which the training data was also drawn) and on the BEIR zero-shot target datasets.
of 1 in case of positive correlation and -1 in case of negative correlation, while a completely random correlation results in a score of 0.
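A minimal sketch of how the three measures can be computed for one dataset, given the true per-model nDCG@10 values and a predicted ranking; all numbers below are placeholders, not values from Tables 1-4:

```python
import numpy as np
from scipy.stats import kendalltau

def evaluate_selection(true_ndcg: dict, predicted_ranking: list):
    """Compare a predicted model ranking against the true per-model nDCG@10 values."""
    true_ranking = sorted(true_ndcg, key=true_ndcg.get, reverse=True)
    best, chosen = true_ranking[0], predicted_ranking[0]
    delta = true_ndcg[best] - true_ndcg[chosen]          # absolute nDCG loss, Eq. (12)
    rel_delta = 100.0 * delta / true_ndcg[best]          # relative loss in %, Eq. (13)
    tau, _ = kendalltau([true_ranking.index(m) for m in true_ndcg],
                        [predicted_ranking.index(m) for m in true_ndcg])
    return delta, rel_delta, tau

true_scores = {"contriever": 0.50, "tas-b": 0.46, "ance": 0.43, "dpr": 0.40}
predicted = ["tas-b", "contriever", "dpr", "ance"]       # hypothetical method output
print(evaluate_selection(true_scores, predicted))
```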
## 5. Results
Table 1 presents the effectiveness in terms of nDCG@10 of the considered DR models on the source MS MARCO dataset and the target collection BEIR, as well as the in-domain and out-of-domain (zero-shot) ground truth model ranking. It is evident that the zero-shot ranking differs from the in-domain ranking, highlighting the necessity for a better model selection criterion. Furthermore, we provide the effectiveness of our proposed model selection methods in terms of Kendall Tau correlation, \(\Delta_{e}\), and \(\%\Delta_{e}\) in Tables 2, 3, and 4, respectively. These tables allow us to compare the effectiveness of the selected models and provide a comprehensive overview of the investigated methods for model selection.
### Desired ranking
To gain an understanding of the desired model ranking, we first evaluate the actual effectiveness of the considered DR models on the BEIR datasets. The results are presented in Table 1. While Contriever may not perform best on the in-domain corpus (i.e. the dataset that is aligned to the training data), it outperforms the other DRs on most of the zero-shot datasets, ranking first in terms of average zero-shot effectiveness. CoCondenser and SimLM, on the other hand, exhibit better in-domain effectiveness than Contriever, but only achieve the best zero-shot effectiveness on COVID and NQ, respectively, ranking second and fourth on average in terms of zero-shot effectiveness. Similarly, the effectiveness of all other models varies across the zero-shot datasets, irrespective of their effectiveness on the in-domain data.
It is important to keep in mind that our objective is to rank the DR models on the target dataset as closely as possible to the true ranking of the models. Therefore, the naive baseline of ranking models solely based on their in-domain effectiveness is evidently inadequate, which is quantified in Tables 2, 3 and 4 (first row of each table).
We next analyze how each of the methods we considered for model selection perform, compared to the naive baseline of ranking models solely based on their in-domain effectiveness.
### Query Similarity
The Query Similarity method identifies ANCE as the top-ranked model across all zero-shot datasets. This may be because most of ANCE's top scores have values close to 1, unlike the other DR models. CoCondenser is always ranked as the second-best model, followed by BERT-DPR in third place. However, it should be noted that predicting BERT-DPR to be ranked high is not desirable, as its actual rank position based on the ground truth is usually near the bottom of the list, regardless of the zero-shot dataset, as shown in Table 1. Overall, Query Similarity displays weak average correlation with the true rankings of DR models (Table 2), and is the worst method in terms of \(\Delta_{e}\) and \(\%\Delta_{e}\).
### Corpus Similarity
According to the Corpus Similarity method, SimLM is consistently ranked as the best model across all datasets, followed by Contriever as the second-best model. The third place varies between ANCE and MiniLM-L-12 depending on the zero-shot dataset. The bottom-ranked models are almost identical across all datasets. Notably, this method provides the best Kendall Tau correlation among all considered unsupervised methods, but is still inferior to the In-Domain Performance baseline. Additionally, unlike most other unsupervised methods, the Frechet distance, which defines the corpus similarity, consistently exhibits a negative correlation with effectiveness across all target datasets. This characteristic makes Corpus Similarity the most reliable measure for unsupervised model selection among the considered methods.
### Extracted Document Similarity
The ranking of the top three DRs according to the Extracted Document Similarity method is consistent across all datasets, with Contriever securing first place, followed by ANCE and MiniLM-L-12. Although this method is slightly less effective than Corpus Similarity in terms of Kendall Tau, it still yields the best \(\Delta_{e}\) and \(\%\Delta_{e}\) values, matching those of the baseline as well as those of some of the other methods.
### Binary Entropy
Binary Entropy is the only method that selects different models across different zero-shot datasets: either DistilBERT (TAS-B) or ANCE. According to Table 3, the cut-off on the number of documents per query does not matter, as it does not affect the ranking of models. Binary Entropy provides the best ranking of DRs for the ArguAna and NF datasets (\(\tau=0.491\)), even outperforming the In-Domain Performance method on these datasets. However, the biggest drawback of Binary Entropy is that it does not have a consistent correlation pattern across datasets: for example, it shows a moderate negative correlation for DBPedia with \(\tau=-0.345\), but a moderate positive correlation for the two datasets on which it performs best, ArguAna and NF. The analysis of the \(\Delta_{e}\) and \(\%\Delta_{e}\) values paints a similar picture. We discuss the origins of this inconsistency in the next section.
### Query Alteration
Across all datasets, Query Alteration indicates SimLM as the best model, followed by DistilBERT. The method displays the lowest average \(\%\Delta_{e}\), on par with the baseline; the individual values of \(\%\Delta_{e}\) across the zero-shot datasets are often the best recorded, except in the case of the TREC-COVID dataset. We also note that the \(\%\Delta_{e}\) of this method matches exactly that of the In-Domain Performance baseline. Note that the proportion \(p\) of replaced tokens affects the effectiveness of the predictions, with the best \(p\) being 0.1.
## 6. Outlook
We have examined the effectiveness of adapting methods for unsupervised performance evaluation in the presence of domain shift, proposed in the areas of computer vision and machine learning, to the problem of choosing which dense retriever model to use when searching a new collection for which no labels are available, i.e. in a zero-shot setting.
Our first finding is that the in-domain effectiveness of the dense retrievers, trained on a large corpus like MS MARCO, represents a strong indicator of model generalizability on the target domains, despite the existence of domain gap. However, we note that the problem of dense retrieval selection, i.e. which dense retriever model to select in a new domain, remains an important and unresolved issue. Specifically, we observe that the straightforward in-domain-based model selection method fails to choose the best model across all the datasets examined. Furthermore, for certain datasets, the ranking of models based on this method is significantly flawed. The methods we have investigated and that we have adapted from computer vision and machine learning can only do as well as this simple baseline (though some do notably worse). These findings highlight the need for further research to improve the effectiveness of model selection techniques to make feasible the practical application of dense retrieval in domain-specific tasks characterized by low labeled data availability.
In this work, we focus on a realistic model selection scenario in information retrieval, which differs from the artificially-created setups investigated in prior work in machine learning (see Challenge 1 in Section 3.2). Due to the challenges in adapting model selection methods to the information retrieval setup, we primarily considered two broad groups of methods: uncertainty-based and activation-based.
**Uncertainty-based methods.** In the field of machine learning, entropy-based methods are often used to measure the level of uncertainty of a model. These methods analyze the score distribution produced by the model and use it to assess the model's level of confidence in its predictions.
In classification tasks, the concept of uncertainty is naturally linked to the score distribution since there is only one correct class per input sample. When a model predicts two classes with similar probabilities, it indicates uncertainty about the sample's classification.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline & TREC-COVID & NFCorpus & NQ & HotpotQA & FiQA & ArguAna & DBPedia & SciDocs & SciFact & Quora & **Avrg** \\ \hline In-Domain Performance (1) & 29.88 & 3.03 & 0.0 & 10.91 & 9.69 & 15.63 & 14.97 & 16.8 & 17.38 & 7.87 & 12.62 \\ Query Similarity (2) & 13.12 & 28.16 & 11.52 & 29.23 & 10.5 & 6.09 & 31.9 & 26.25 & 24.44 & 1.46 & 18.27 \\ Corpus Similarity (3) & 29.88 & 3.03 & 0.0 & 10.91 & 9.69 & 15.63 & 14.97 & 16.8 & 17.38 & 7.87 & 12.62 \\ Extracted Doc similarly @100 (4) & 29.88 & 3.03 & 0.0 & 10.91 & 9.69 & 15.63 & 14.97 & 16.8 & 17.38 & 7.87 & 12.62 \\ \hline Binary entropy @100 (5) & 13.12 & 21.36 & 11.52 & 29.23 & 10.5 & 4.29 & 31.9 & 26.25 & 5.03 & 0.0 & 15.32 \\ Binary entropy @1000 (5) & 13.12 & 21.36 & 11.52 & 29.23 & 10.5 & 4.29 & 31.9 & 26.25 & 5.03 & 0.0 & 15.32 \\ \hline Query Alternation Std p=0.1 (6) & 29.88 & 3.03 & 0.0 & 10.91 & 9.69 & 15.63 & 14.97 & 16.8 & 17.38 & 7.87 & 12.62 \\ Query Alternation Std p=0.2 (6) & 29.88 & 3.03 & 0.0 & 10.91 & 9.69 & 15.63 & 14.97 & 16.8 & 17.38 & 7.87 & 12.62 \\ \hline \end{tabular}
\end{table}
Table 4. %\(\Delta_{e}\), calculated based on nDCG@10 (the lower, the better)
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline & COVID & NF & NQ & HotpotQA & FiQA & ArguAna & DBPedia & SciDocs & SciFact & Quora & **Avrg** \\ \hline In-Domain Performance (1) & 0.273 & 0.455 & 0.782 & 0.709 & 0.745 & 0.018 & 0.6 & 0.636 & 0.745 & 0.273 & 0.524 \\ Query Similarity (2) & 0.673 & 0.127 & 0.345 & 0.091 & 0.2 & -0.164 & 0.055 & 0.091 & 0.018 & -0.091 & 0.135 \\ Corpus Similarity (3) & 0.2 & 0.056 & 0.345 & 0.309 & 0.455 & 0.055 & 0.200 & 0.382 & 0.309 & 0.164 & 0.247 \\ Extracted Doc similarly @100 (4) & 0.273 & -0.018 & 0.491 & 0.200 & 0.345 & -0.018 & 0.236 & 0.236 & 0.200 & 0.127 & 0.207 \\ \hline Binary entropy 10 (5) & 0.418 & 0.491 & -0.127 & -0.055 & 0.055 & 0.491 & -0.345 & 0.127 & 0.309 & 0.236 & 0.16 \\ Binary entropy 1000 (5) & 0.418 & 0.491 & -0.127 & -0.055 & 0.055 & 0.491 & -0.345 & 0.127 & 0.309 & 0.236 & 0.16 \\ \hline Query Alternation Std p=0.1 (6) & -0.127 & 0.091 & 0.345 & 0.309 & 0.273 & 0.127 & 0.164 & 0.309 & 0.2 & 0.2 & 0.189 \\ Query Alternation Std p=0.2 (6) & -0.164 & 0.055 & 0.345 & 0.273 & 0.273 & 0.164 & 0.164 & 0.2 & 0.2 & 0.236 & 0.175 \\ Query Alternation Std p=0.3 (6) & -0.236 & -0.018 & 0.236 & 0.164 & 0.2 & 0.091 & 0.164 & 0.127 & 0.091 & 0.164 & 0.098 \\ \hline \end{tabular}
\end{table}
Table 2. Kendall Tau Correlation value, calculated based on nDCG@10 (the higher, the better)
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline & COVID & NF & NQ & HotpotQA & FiQA & ArguAna & DBPedia & SciDocs & SciFact & Quora & **Avrg** \\ \hline In-Domain Performance (1) & 0.225 & 0.01 & 0.0 & 0.07 & 0.032 & 0.07 & 0.062 & 0.028 & 0.118 & 0.068 & 0.068 \\ Query Similarity (2) & 0.099 & 0.092 & 0.058 & 0.186 & 0.035 & 0.027 & 0.132 & 0.043 & 0.165 & 0.013 & 0.085 \\ Corpus Similarity (3) & 0.225 & 0.01 & 0.0 & 0.07 & 0.032 & 0.07 & 0.062 & 0.028 & 0.118 & 0.068 & 0.068 \\ Extracted Doc similarly @100 (4) & 0.225 & 0.01 & 0.0 & 0.07 & 0.032 & 0.07 & 0.062 & 0.028 & 0.118 & 0.068 & 0.068 \\ \hline Binary entropy @10 (5) & 0.099 & 0.07 & 0.058 & 0.186 & 0.035 & 0.019 & 0.132 & 0.043 & 0.034 & 0.0 & 0.067 \\ Binary entropy @1000 (5) & 0.099 & 0.07 & 0.058 & 0.186 & 0.035 & 0.019 & 0.132 & 0.043 & 0.034 & 0.0 & 0.067 \\ Query Alternation Std p=0.1 (6) & 0.225 & 0.01 & 0.0 & 0.07 & 0.032 & 0.07 & 0.062 & 0.028 & 0.118 & 0.068 & 0.068 \\ Query Alternation Std p=0.2 (6) & 0.225 & 0.01 & 0.0 & 0.07 & 0.032 & 0.07 & 0.062 & 0.028 & 0.118 & 0.068 & 0.068 \\ Query Alternation Std p=0.3 (6) & 0.225 & 0.01 & 0.0 & 0.07 & 0.032 & 0.07 & 0.062 & 0.028 & 0.118 & 0.068 & 0.068 \\ \hline \end{tabular}
\end{table}
Table 3. \(\Delta_{e}\), calculated based on nDCG@10 (the lower, the better)
In information retrieval tasks, the requirement for an ideal ranker is to maximize the binary entropy of relevant document predictions and minimize the binary entropy of irrelevant ones, as defined by Aslam et al. (Aslam et al., 2017). This presents a challenge since the number of relevant documents in an unlabeled collection is unknown. As a result, the connection between score distribution and model uncertainty in ranking tasks is not intuitive. For instance, a retriever that produces a steep score distribution with a top-ranked document having a significantly higher score than other retrieved documents will have a small entropy score. On the other hand, a retriever with a more gradually-declining score distribution will have a larger entropy score. However, having a large entropy of predictions does not necessarily imply that the model is uncertain. It may instead reflect that the model predicts more of the top-k retrieved documents to be equally relevant. In addition, the nature of the dataset itself could highly influence the score distributions. Large datasets with many documents that are relevant to a query will likely result in rankings with low entropy; while small datasets with very different documents and with just a handful of documents that are close to the query will likely result in rankings with a high entropy. In this example, however, the case with higher entropy does not indicate the model is more certain about its predictions.
Another way to define the uncertainty of the model is to evaluate its robustness to query perturbation. The intuition is that if the model is confident in its predictions, it should not be affected by a slight perturbation of the query. Consequently, network robustness can be used as a quality measure of the model, and employed as an unsupervised criterion for model selection. Indeed, according to our results, this approach provides competitive model ranking, with an average Tau Correlation of 0.189. Despite this, the correlation is very weak and the method falls noticeably behind the top-performing baseline, indicating that there is still ample opportunity for improvement, which we recommend the information retrieval community should explore in future research.
The issue of uncertainty estimation in information retrieval is a significant yet largely under-explored problem, as noted in several prior studies (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016). This is particularly true for rankers that rely on pre-trained language models (Krizhevsky et al., 2015). While some efforts have been made to leverage uncertainty in relevance estimation for traditional keyword-based best-match models like BM25 and language models, the current approaches are based on assumptions and heuristics that use similarities or covariance between term occurrences (Krizhevsky et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2016), follow the Dirichlet distribution (Krizhevsky et al., 2016), or calculate score distributions by resampling query terms (Krizhevsky et al., 2016). Recently, researchers have attempted to model uncertainty for neural rankers, such as with the Transformer Pointer Generator Network (T-PGN) model (Krizhevsky et al., 2015). However, these models are not easily adaptable to the dense retrieval ranker architectures that we have examined in this paper. More effective and meaningful estimations of uncertainty might be a promising future direction also in our context.
**Activation-based methods.** Activation-based methods approximate the domain gap between the source and target distributions via the proximity of their hidden representations. The underlying idea is that a model with good generalization capabilities should produce similar encodings for both datasets. The advantage of this approach, when adapted to information retrieval, is that it is not affected by variations in the network architectures or scoring functions used, which enables the comparison of models within the constraints of the information retrieval setup (i.e. across different pre-trained language models and architectures). This makes it easier to identify models that perform well on both source and target datasets, without being biased towards specific network architectures or scoring functions.
In our experiments, we used the Frechet distance between the network activations for the source and target datasets (Method 3, Corpus Similarity). This approach, while highly effective in, for example, computer vision tasks, selects in our experiments the same best model for all datasets, similarly to the in-domain performance-based method, but produces an inferior model ranking (see Table 2). To handle the high dimensionality of the activation spaces, the Frechet-based approximation assumes that they conform to a normal distribution (Deng et al., 2015), which may not hold for NLP-induced hidden representations. We believe that designing NLP-oriented distance approximations between activation spaces is a promising future direction for more accurate unsupervised model selection, especially in the context of selecting dense retrieval models.
**Weight- and Performance-based methods.** The last direction we would like to mention is inspired by the recent work of Khramtsova et al. (Khramtsova et al., 2018) and Deng et al. (Deng et al., 2019), which involves fine-tuning the network on the target dataset with an unsupervised loss (e.g., a masked token prediction task) and evaluating either the degree of change in the network or its performance on the unsupervised task. Despite being computationally expensive, these approaches have been shown to be very effective in general machine learning model selection, and can be further explored in the information retrieval context.
## 7. Conclusion
This paper proposes a novel research direction for zero-shot dense retrieval. While traditional information retrieval research in this area concentrates on developing universal domain-agnostic DR models, our work shifts the focus towards developing a method to rank and select pre-trained state-of-the-art DR models that are best suited for a specific target domain corpus. We acknowledge that the proposed direction does not contradict traditional research on training zero-shot DR models, but rather complements it. As newly developed DR models are likely to have varying effects on different domains, selecting the best model is still beneficial. To explore this research direction, we adapt various methods from computer vision and machine learning, along with some approaches designed for IR. We outline our reasoning and challenges with the investigated approaches and present empirical results on a popular zero-shot benchmark dataset. Our findings shed light on future research avenues within this research direction. We believe that an effective method for selecting a good DR model can provide a principled way for search engine developers to identify the most suitable model for their application, ultimately enhancing user experience.
|
2309.06056 | Convergence to the asymptotic large deviation limit | Large deviation theory offers a powerful and general statistical framework to
study the asymptotic dynamical properties of rare events. The application of
the formalism to concrete experimental situations is, however, often restricted
by finite statistics. Data might not suffice to reach the asymptotic regime or
judge whether large deviation estimators converge at all. We here
experimentally investigate the large deviation properties of the stochastic
work and heat of a levitated nanoparticle subjected to nonequilibrium feedback
control. This setting allows us to determine for each quantity the convergence
domain of the large deviation estimators using a criterion that does not
require the knowledge of the probability distribution. By extracting both the
asymptotic exponential decay and the subexponential prefactors, we demonstrate
that singular prefactors significantly restrict the convergence
characteristics. Our results provide unique insight into the approach to the
asymptotic large deviation limit and underscore the pivotal role of singular
prefactors. | Maxime Debiossac, Nikolai Kiesel, Eric Lutz | 2023-09-12T08:50:38Z | http://arxiv.org/abs/2309.06056v1 | # Convergence to the asymptotic large deviation limit
###### Abstract
Large deviation theory offers a powerful and general statistical framework to study the asymptotic dynamical properties of rare events. The application of the formalism to concrete experimental situations is, however, often restricted by finite statistics. Data might not suffice to reach the asymptotic regime or judge whether large deviation estimators converge at all. We here experimentally investigate the large deviation properties of the stochastic work and heat of a levitated nanoparticle subjected to nonequilibrium feedback control. This setting allows us to determine for each quantity the convergence domain of the large deviation estimators using a criterion that does not require the knowledge of the probability distribution. By extracting both the asymptotic exponential decay and the subexponential prefactors, we demonstrate that singular prefactors significantly restrict the convergence characteristics. Our results provide unique insight into the approach to the asymptotic large deviation limit and underscore the pivotal role of singular prefactors.
Large deviation theory deals with the probabilities of exponentially rare fluctuations in stochastic systems. Such extreme events strongly deviate from typical average values. They hence evade the law of large numbers and the central-limit theorem [1; 2; 3]. As a result, they are in general difficult to investigate both numerically and experimentally [4; 5]. Initiated by Cramer in the 1930s and further developed by Donsker and Varadhan and by Freidlin and Wentzell in the 1970s, the theory of large deviations allows one to estimate the asymptotic distributions of atypical events in the limit of a large scaling parameter. Examples include the distribution of time-averaged observables, such as energy or particle currents flowing through a system, in the limit of long times [1; 2; 3]. Owing to its versatility, the large deviation framework has found widespread applications in the analysis of random processes, from finance [6], statistics [7] and engineering [8], to biology [9] and physics [10]. In this context, large deviation techniques have played an important role in the theoretical investigation of the fluctuating properties of small systems in and out of equilibrium [11; 12]. Experimental studies of large deviation functions in physical systems have been presented in Refs. [13; 14].
Since empirical data are always finite, a crucial issue that restricts the practical applicability of the large deviation method is assessing the convergence towards the asymptotic regime [15; 16; 17; 18; 19]. In particular, a nontrivial task is to determine the convergence region of large deviation estimators from available data for finite samples. Estimators might indeed converge slowly or not converge at all [15; 16; 17; 18; 19]. Typical problems that have to be faced are the artificial linearization of the tails when the statistics is dominated by the largest value in the sample, and the fact that convergence is generally not uniform [15; 16; 17; 18; 19]. Similar issues occur in the study of multifractals [20; 21], glassy phase transitions [22] and free energy estimators [23; 24; 25]. Despite its central importance, the convergence of estimators of large deviation functions has not been investigated experimentally yet.
We here report a systematic experimental study of the convergence properties of the time-averaged heat and work in the context of stochastic thermodynamics [26]. For the implementation, we use an optically levitated nanoparticle driven out of equilibrium with electric feedback control [27; 28; 29; 30; 31; 32]. We analyze both the convergence interval (related to the finite sample size) and the convergence time (related to the finite averaging time). To that end, we use a recently proposed, general convergence criterion, based on the evaluation of the standard error [15], that does not require the knowledge of the probability distribution. Our experimental setup possesses a number of unique features. First, in contrast to usual time series with fixed convergence characteristics, the convergence regions may be controlled using the feedback delay as a parameter. We identify values where only one of the two estimators is expected to converge, and others where both (or none) of them are predicted to be convergent. Second, some large deviation properties, such as the asymptotic scaled cumulant generating functions of work and heat, are analytically known for this system in the limit of high quality factors [35], making a direct assessment of the accuracy of the estimation possible. Third, exploiting the high stability of our device, we measure a sufficiently large data sample to determine not only the asymptotic exponential decay of the distributions, but also the corresponding subexponential prefactors. These prefactors are subdominant in the long-time limit [1; 2; 3]. However, our results reveal that they play a decisive role in the approach to the asymptotic limit. We show that the presence of singularities in the prefactors of the moment generating function, which is the case of the stochastic heat, may dramatically affect the convergence rate and, at the same time, drastically restrict its convergence region.
_Large-deviation estimators._ Let us consider a random variable \(\mathcal{A}\) and its average over a time interval \(\mathcal{T}\), \(a=\int_{0}^{\mathcal{T}}dt\mathcal{A}(t)/\mathcal{T}\). In the following, \(\mathcal{A}\) will be either the stochastic work or the stochastic heat of the levitated nanoparticle. The central quantities of large deviation theory are the probability distribution \(P(a)\), the rate function \(I(a)\), and the scaled cumulant generating function \(\mu_{\mathcal{A}}(\lambda)\)[1, 2, 3]. The large deviation principle states that, for large time \(\mathcal{T}\), the probability \(P(a)\) decays exponentially with rate \(I(a)\), \(P(a)\sim e^{-I(a)\mathcal{T}}\). To evaluate the rate function, it is often convenient to introduce the scaled cumulant generating function, \(\mu_{\mathcal{A}}(\lambda)=\lim_{\mathcal{T}\to\infty}\ln\langle e^{-\lambda \mathcal{A}}\rangle/\mathcal{T}\), where \(\langle.\rangle\) denotes the ensemble average. When \(\mu_{\mathcal{A}}(\lambda)\) is differentiable, the rate function follows as the Legendre transform \(I(a)=-\lambda_{*}a-\mu_{\mathcal{A}}(\lambda_{*})\) with \(\lambda_{*}(a)\) being the root of \(\mu^{\prime}_{\mathcal{A}}(\lambda_{*})=-a\)[1, 2, 3].
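To make the Legendre construction concrete, the following minimal sketch (plain C++, not taken from the original work) evaluates \(I(a)\) numerically from a given \(\mu_{\mathcal{A}}(\lambda)\) by locating the root of \(\mu^{\prime}_{\mathcal{A}}(\lambda_{*})=-a\) on a grid; the toy choice \(\mu(\lambda)=-m\lambda+\lambda^{2}/2\) is an assumption made here only so that the output can be checked against the exact Gaussian result \(I(a)=(a-m)^{2}/2\).

```cpp
// Numerical Legendre transform of a scaled cumulant generating function mu(lambda).
// Toy choice: mu(lambda) = -m*lambda + lambda^2/2 (Gaussian observable with unit variance),
// for which the exact rate function is I(a) = (a - m)^2 / 2, so the output can be checked.
#include <cmath>
#include <cstdio>

double mu(double lam, double m) { return -m * lam + 0.5 * lam * lam; }

// Find lambda* with mu'(lambda*) = -a on a grid, then return I(a) = -lambda* a - mu(lambda*).
double rate_function(double a, double m) {
    const double lam_min = -10.0, lam_max = 10.0, dlam = 1e-4;
    double best_lam = lam_min, best_err = 1e300;
    for (double lam = lam_min; lam <= lam_max; lam += dlam) {
        double dmu = (mu(lam + dlam, m) - mu(lam - dlam, m)) / (2.0 * dlam);
        double err = std::fabs(dmu + a);            // residual of mu'(lambda) = -a
        if (err < best_err) { best_err = err; best_lam = lam; }
    }
    return -best_lam * a - mu(best_lam, m);
}

int main() {
    const double m = 1.0;                           // mean of the time-averaged observable
    for (double a = 0.0; a <= 2.0; a += 0.5)
        std::printf("a = %.2f   I(a) = %.4f   exact = %.4f\n",
                    a, rate_function(a, m), 0.5 * (a - m) * (a - m));
    return 0;
}
```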
Estimating the large deviation properties from data is usually a delicate task due to the finite sample size. We employ a block averaging method that divides the total length, \(\mathcal{T}_{\rm tot}=N\mathcal{T}\), of the time series into \(N\) blocks of duration \(\mathcal{T}\)[33, 34]. Finite data imposes constraints on the number \(N\) of trajectories that can be used for ensemble average calculations and on the length \(\mathcal{T}\) of the trajectories. The statistical estimators of the scaled cumulant generating function and of the rate function are respectively given by \(\mu_{\mathcal{A}}(\lambda,\mathcal{T},N)=(1/\mathcal{T})\ln(\sum_{i=1}^{N}e^{ -\lambda\mathcal{A}_{i}}/N)\) and \(I_{\mathcal{A}}(a,\mathcal{T},N)=-\lambda a_{\mathcal{A}}-\mu_{\mathcal{A}}( \lambda,\mathcal{T},N)\), with the average \(a_{\mathcal{A}}=-(1/\mathcal{T})(\sum_{i=1}^{N}\mathcal{A}_{i}e^{-\lambda \mathcal{A}_{i}}/\sum_{i=1}^{N}e^{-\lambda\mathcal{A}_{i}})\)[15, 16, 17, 18, 19]. These estimators are expected to asymptotically converge to their large deviation limits, \(\mu_{\mathcal{A}}(\lambda)\) and \(I(a)\), for sufficiently large \(N\) and \(\mathcal{T}\)[15, 16, 17, 18, 19].
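As an illustration of how such estimators can be evaluated in practice, the sketch below (illustrative C++ with made-up sample values; signs follow the conventions written above) computes \(\mu_{\mathcal{A}}(\lambda,\mathcal{T},N)\) and \(a_{\mathcal{A}}\) from a set of block values \(\mathcal{A}_{i}\), factoring out the largest exponent before summing so that the exponentials \(e^{-\lambda\mathcal{A}_{i}}\) remain well conditioned.

```cpp
// Finite-data estimators mu_A(lambda, T, N) and a_A from a sample of time-integrated
// block values A_i (illustrative numbers). The largest exponent is factored out
// (log-sum-exp) so that the sums of exp(-lambda*A_i) stay well conditioned.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Estimates { double mu, a; };

Estimates block_estimators(const std::vector<double>& A, double lambda, double T) {
    double emax = -1e300;
    for (double Ai : A) emax = std::max(emax, -lambda * Ai);
    double Z = 0.0, num = 0.0;
    for (double Ai : A) {
        double w = std::exp(-lambda * Ai - emax);   // rescaled weight
        Z   += w;                                   // ~ sum_i exp(-lambda*A_i)
        num += Ai * w;                              // ~ sum_i A_i exp(-lambda*A_i)
    }
    double N = static_cast<double>(A.size());
    Estimates e;
    e.mu = (emax + std::log(Z / N)) / T;            // mu_A(lambda, T, N)
    e.a  = -(num / Z) / T;                          // a_A, sign convention as in the text
    return e;
}

int main() {
    const double T = 5.0, lambda = 0.5;
    std::vector<double> A = {4.1, 5.3, 3.8, 6.0, 4.7, 5.1};     // illustrative A_i values
    Estimates e = block_estimators(A, lambda, T);
    std::printf("mu = %.4f  a = %.4f  I = %.4f\n", e.mu, e.a, -lambda * e.a - e.mu);
    return 0;
}
```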
_Experimental system._ Levitodynamics has played a key role in the investigation of the stochastic thermodynamics of small underdamped systems [27, 28, 29, 30, 31, 32]. In our experiment, we trap a levitated silica nanoparticle (295 nm diameter) at an intensity maximum of a standing wave formed by two counterpropagating laser beams (\(\lambda_{0}=1064\) nm) inside a hollow-core photonic crystal fiber (HCPCF) (Fig. 1a) [36]. The particle is characterized by a quality factor \(Q_{0}=\Omega_{0}/\Gamma_{0}=50.2\), where \(\Omega_{0}/2\pi=297.7\) kHz is the resonance frequency and \(\Gamma_{0}/2\pi=5.93\) kHz is the damping due to the gas inside the HCPCF (temperature \(T=293\) K). We detect the position \(x_{t}\) of the particle along the fiber axis with an interferometric readout of the light scattered by the particle, with a sensitivity of \(2\,\mathrm{pm}/\sqrt{\mathrm{Hz}}\)[36]. A feedback force, \(F_{\rm fb}=-g(m\Gamma_{0}\Omega_{0})x_{t-\tau}\), is applied to the nanoparticle (of mass \(m\)) via radiation pressure exerted by an additional laser beam (green beam in Fig. 1a). We vary the delay \(\tau\) using a field-programmable gate array. The feedback loop has a gain \(g=2.4\) and an internal minimum delay of \(3\)\(\mu\)s. We select 19 different values of the delay, belonging to a stability region of the feedback loop, and record for each a long trajectory of duration \(\mathcal{T}_{\rm tot}=1000\,\)s with a sampling rate of 5 MHz. This corresponds to 5 hours total data acquisition time, during which the relative uncertainty on the damping rate and feedback gain stays smaller than 5%.
The stochastic work along a trajectory of duration \(\mathcal{T}\) is defined as \(\beta\mathcal{W}=(2g/Q_{0}^{2})\int_{0}^{\mathcal{T}}dtx_{t-\tau}\circ v_{t}\), in dimensionless units, with the Stratonovich-type product \(\circ\)[26] and the inverse temperature \(\beta\). According to the first law, the random heat is given by \(\mathcal{Q}=\mathcal{W}-\Delta\mathcal{U}\), where \(\Delta\mathcal{U}=(x_{\mathcal{T}}^{2}-x_{0}^{2}+v_{\mathcal{T}}^{2}-v_{0}^{2}) /Q_{0}\) is the change in internal energy. Work and heat thus only differ through the temporal boundary term \(\Delta\mathcal{U}\), and have equal mean. However, their fluctuation and large deviation features are different. Figure 1b shows examples of measured work (gray) and heat (blue) realizations for a delay \(\tau=7.07\), as well as the corresponding distributions. The tails of the heat are much broader than those of the work because of the additional fluctuations of the boundary term.
_Convergence regions._ The convergence domain of the statistical estimators may be determined by inspecting the linearization phenomenon of the scaled cumulant generating function [15, 16, 17, 18, 19]. This effect, which depends on the values of \(\lambda\) (nonuniform convergence), occurs when the sum \(\sum_{i=1}^{N}e^{-\lambda\mathcal{A}_{i}}\) is dominated by its largest element. This translates into a finite convergence interval \(\Delta\lambda=\lambda_{+}-\lambda_{-}\) delimited by \(\lambda_{-}<0\) and \(\lambda_{+}>0\). The values \(\lambda_{\pm}\) depend in general on \(\mathcal{T}\) and \(N\). In order to account for possible correlations between data points, we group (reshuffled) data into \(N_{b}\) (asymptotically) independent blocks of size \(N/N_{b}\)[33, 34]. We choose a large value of \(N=5\times 10^{6}\) for the number of trajectories and set \(N_{b}=1000\). Following the theoretical suggestion of Ref. [15], we estimate \(\lambda_{\pm}\) by determining the (positive and negative) maxima of the standard error of \(a_{\mathcal{A}}\) in each block, \(\mathrm{err}_{\mathcal{A}}(\lambda,\mathcal{T})=\mathrm{std}(a_{\mathcal{A}})/\sqrt{N_{b}}\) where std is the standard deviation, for large \(\mathcal{T}\), since these maxima are related to the onset of linearization (Supplementary Information). Figure 2a displays \(\mathrm{err}_{\mathcal{A}}(\lambda,\mathcal{T})\) for work and heat, as a function of \(\lambda\), for time \(\mathcal{T}/Q_{0}=5\) and delay \(\tau=7.07\) (other values are presented in the Supplementary Information). We associate \(\lambda_{\pm}\) with the (averaged) plateaus of \(\lambda_{\pm}(\mathcal{T})\) seen in Figs. 2b-c, as a function of \(\mathcal{T}\), for \(\mathcal{T}>8Q_{0}\) (dashed lines). The values of \(\lambda_{\pm}\) do not depend on \(N\), for \(N\) large enough (Supplementary Information). The respective standard errors for the scaled cumulant generating functions \(\mu_{\mathcal{A}}\) for work and heat are analyzed in the Supplementary Information.
Figure 1: Experimental system. a) Schematic of the experimental setup consisting of a nanoparticle trapped in a harmonic potential inside a hollow core photonic crystal fiber (HCPCF). The particle is subjected to a delayed feedback force \(F_{\rm fb}\propto x_{t-\tau}\), with delay \(\tau\) and feedback gain \(g\). b) Example of measured stochastic work (gray) and heat (blue) realizations with their respective probability distributions, for delay \(\tau=7.07\) and duration \(\mathcal{T}=Q_{0}\). The two distributions have the same mean (horizontal lines) but different tails.
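A possible implementation of this standard-error criterion is sketched below (illustrative C++; the stand-in data and the block layout are assumptions). It evaluates \(a_{\mathcal{A}}\) independently in each of the \(N_{b}\) blocks, returns \(\mathrm{err}_{\mathcal{A}}(\lambda,\mathcal{T})=\mathrm{std}(a_{\mathcal{A}})/\sqrt{N_{b}}\), and scans \(\lambda\); the positive and negative maxima of the resulting curve would then be read off as \(\lambda_{\pm}\).

```cpp
// Standard-error criterion: err_A(lambda, T) = std(a_A) / sqrt(N_b), with a_A evaluated
// independently in each of N_b blocks of (reshuffled) data; its positive and negative
// maxima as a function of lambda locate the boundaries lambda_+- of the convergence region.
#include <cmath>
#include <cstdio>
#include <vector>

double block_a(const double* A, int n, double lambda, double T) {
    double Z = 0.0, num = 0.0;
    for (int i = 0; i < n; ++i) {
        double w = std::exp(-lambda * A[i]);
        Z += w; num += A[i] * w;
    }
    return -(num / Z) / T;                          // a_A in this block (text convention)
}

double err_A(const std::vector<double>& A, double lambda, double T, int Nb) {
    int n = static_cast<int>(A.size()) / Nb;        // block size N / N_b
    std::vector<double> a(Nb);
    double mean = 0.0, m2 = 0.0;
    for (int b = 0; b < Nb; ++b) { a[b] = block_a(&A[b * n], n, lambda, T); mean += a[b]; }
    mean /= Nb;
    for (int b = 0; b < Nb; ++b) m2 += (a[b] - mean) * (a[b] - mean);
    return std::sqrt(m2 / (Nb - 1)) / std::sqrt(static_cast<double>(Nb));
}

int main() {
    const double T = 5.0;
    std::vector<double> A(4000);
    for (std::size_t i = 0; i < A.size(); ++i) A[i] = 5.0 + std::sin(0.37 * i);  // stand-in data
    for (double lam = -2.0; lam <= 2.0; lam += 0.5)
        std::printf("lambda = %5.2f   err = %.3e\n", lam, err_A(A, lam, T, 100));
    return 0;
}
```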
The respective convergence regions of the estimators of the scaled cumulant generating function of work and heat are obtained by repeating the same procedure for all delays (gray and blue shaded areas in Fig. 2d). The boundaries \(\Delta\lambda\) are set by the delay. We observe three different regimes depending on the value of \(\lambda\): (i) a regime where the statistical estimators of both work and heat are expected to converge (overlapping gray and blue areas), (ii) a domain where only the estimator of work is expected to converge (non-overlapped gray area) and (iii) a region where none of them is predicted to be convergent (white area). Interestingly, the boundaries \(\lambda_{\pm}\) for the stochastic work coincide precisely with the analytically known asymptotic range of definition of the scaled cumulant generating function, where the latter is real (dashed black lines) [35] (Supplementary Information); the cuts seen at the top and at the bottom are due to finite statistics. A similar effect has been noticed theoretically in another context in Ref. [15]. These findings demonstrate the power of the simple convergence criterion based on the standard error, a scheme that does not require the knowledge of the distribution of the random variable considered. Surprisingly, even though work and heat are equal on average, the predicted convergence region for the stochastic heat is markedly smaller than that for work. It also appears to be systematically more restricted than the analytically known asymptotic range of definition of the scaled cumulant generating function (same dashed black lines as for the work) [35].
_Singular prefactors._ In order to elucidate this difference between work and heat, we examine the subexponential prefactors, \(g_{\mathcal{A}}(\lambda)\), of the corresponding moment generating functions, defined as \(\langle e^{-\lambda\mathcal{A}}\rangle\sim g_{\mathcal{A}}(\lambda)e^{\mu_{ \mathcal{A}}(\lambda)/\mathcal{T}}\), for large \(\mathcal{T}\). Theoretical computations of the prefactors are difficult in general. For the problem at hand, analytical formulas for \(g_{\mathcal{A}}(\lambda)\) are unknown for
Figure 2: Convergence regions. a) Standard error \(\mathrm{err}_{\mathcal{W},\mathcal{Q}}(\lambda,\mathcal{T})\) of work (gray) and heat (blue), for duration \(\mathcal{T}=5Q_{0}\) and delay \(\tau=7.07\). The maxima at \(\lambda_{\pm}(\mathcal{T})\) indicate the onset of the linearization of the tails when the statistics is dominated by the largest element of the sample. b)-c) The boundaries \(\lambda_{\pm}\) of the convergence domains are associated with the (averaged) values of the plateaus of \(\lambda_{\pm}(\mathcal{T})\) for large \(\mathcal{T}\). d) Convergence regions of the scaled cumulant generating functions of work and heat for all values of the delay. The boundaries for work (and partially for heat) correspond to the analytically known asymptotic range of definition (black dashed line). The boundaries for heat are restricted by singularities of the prefactor: good agreement is seen between the measured singularities (blue open circle) and the analytic expressions, \(\lambda_{1}=-1-2g\sin\tau\) and \(\lambda_{2}=1\), derived in the high \(Q_{0}\) approximation (blue dashed line). e) The scaled cumulant generating functions \(\mu_{\mathcal{W},\mathcal{Q}}(\lambda,\mathcal{T})\) of work and heat both converge for \(\lambda=-1\), whereas f) only the one of work converges for \(\lambda=-6\). Statistical error bars are discussed in the Supplementary Information.
generic values of \(\lambda\) and \(\tau\), except in the small-\(\tau\) or high-\(Q_{0}\) (Markovian) limits [35]. Likewise, extracting prefactors from experimental data is usually hard, since they correspond to very small deviations for small trajectory length. In order to do so, we fit the functions \(\mu_{\mathcal{A}}(\lambda,\mathcal{T})\) in the range \(1<\mathcal{T}/Q_{0}<10\) for each \(\lambda\), using the expression \(A+B/\mathcal{T}+C/\mathcal{T}^{2}\) (Supplementary Information). The prefactors are accordingly given by \(g_{\mathcal{A}}(\lambda)=e^{B}\). Figures 3a,d present the experimentally determined prefactors for work and heat as a function of \(\lambda\) for the delay \(\tau=7.07\) (other values are again presented in the Supplementary Information). Whereas the prefactor \(g_{\mathcal{W}}(\lambda)\) for work is finite for all \(\lambda\) (Fig. 3a), the prefactor \(g_{\mathcal{Q}}(\lambda)\) for heat exhibits a singularity around \(\lambda=\lambda_{1}\simeq-4.4\) (Fig. 3d). Such a diverging behavior is due to rare but large fluctuations of the boundary term \(\Delta\mathcal{U}\). In both instances, we have excellent agreement between data and the analytical approximations (gray and blue dashed lines) obtained for high quality factors.
The locations of the experimentally evaluated singularities of the heat prefactor \(g_{\mathcal{Q}}(\lambda)\) for all implemented delays are shown in Fig. 2d (blue open circle) together with the corresponding analytical high-\(Q_{0}\) expressions (blue dashed lines). They perfectly match the predicted convergence region based on the standard-error criterion. We may thus conclude that, although subexponential prefactors of the scaled cumulant generating function become irrelevant in the asymptotic large-deviation limit, their singularities strongly restrict the convergence domain of the corresponding statistical estimator.
_Convergence time._ We next analyze the influence of a singular prefactor on the convergence time of the statistical estimators of the scaled cumulant generating function \(\mu_{\mathcal{A}}(\lambda,\mathcal{T})\) and of the rate function \(I_{\mathcal{A}}(a,\mathcal{T})\). Figures 3b,e exhibit the estimator of the scaled cumulant generating function \(\mu_{\mathcal{W},\mathcal{Q}}(\lambda,\mathcal{T})\) of work and heat as a function of \(\lambda\), for different trajectory lengths. We notice that \(\mu_{\mathcal{W}}(\lambda,\mathcal{T})\) (symbols) quickly converges to the analytically known asymptotic limit (black line) within the convergence region (grey dashed area). The linearization of the tail, and the corresponding deviation from the theoretical asymptotic limit, are clearly visible outside the convergence domain. By contrast, the presence of the singularity at \(\lambda=\lambda_{1}\) significantly slows down the convergence of \(\mu_{\mathcal{Q}}(\lambda,\mathcal{T})\) close to \(\lambda_{1}\) at short times. For longer times, as the asymptotic regime is approached, the
Figure 3: Singular prefactors and large deviation estimators. a) and d) Experimentally determined subexponential prefactors \(g_{\mathcal{W},\mathcal{Q}}(\lambda)\) of work and heat (symbols) for \(\tau=7.07\) and analytical results derived in the high-\(Q_{0}\) approximation (dashed lines). The prefactor of heat exhibits a singularity at \(\lambda=\lambda_{1}\). The edge of the asymptotic theoretical range of definition is denoted by \(\lambda_{\max}\). b) and e) Estimators of the scaled cumulant generating functions \(\mu_{\mathcal{W},\mathcal{Q}}(\lambda,\mathcal{T})\) of work and heat for various durations \(\mathcal{T}\) (symbols) and asymptotic theoretical prediction (black solid lines). The convergence domains are represented by the (gray and blue) areas. The pole of the prefactor restricts the convergence interval and the convergence time of the heat. c) and f) Estimators of the rate functions \(I_{\mathcal{W},\mathcal{Q}}(\mathcal{T})\) of work and heat for various \(\mathcal{T}\) (symbols). Whereas \(I_{\mathcal{W}}(\mathcal{T})\) approaches the asymptotic limit (black solid line) for large \(\mathcal{T}\), \(I_{\mathcal{Q}}(\mathcal{T})\) deviates from it due to the linearization induced by the singularity (dotted line). Statistical error bars are discussed in the Supplementary Information.
effect of the divergence of the prefactor is suppressed, as expected. However, linearization occurs at smaller values of \(\lambda\) compared to the work, as discussed above, even before the asymptotic regime can be reached for all \(\lambda\). Singular prefactors therefore reduce both the convergence interval and the convergence time of the statistical estimator of the scaled cumulant generating function. Figures 3b,e additionally highlight the danger of evaluating statistical estimators without determining the convergence domain: not only can estimators depart from the asymptotic result (as seen on the left-hand side of the figures), they can also be computed from data for parameters where a scaled cumulant generating function does not exist in the asymptotic limit (as seen on the right-hand side).
The statistical estimators of the rate functions \(I_{\mathcal{A}}(a,\mathcal{T})\) for work and heat are shown in Figs. 3c,f. The boundaries \(a_{\pm}\) of the respective shaded areas are defined through \(\mu^{\prime}_{\mathcal{A}}(\lambda_{\pm})=-a_{\pm}\). The estimator \(I_{\mathcal{W}}(w,\mathcal{T})\) of work (symbols) approaches the known analytic asymptotic result \(I(w)\) (black solid line) as the length of the trajectory increases. The situation is again different for heat. The corresponding exact expression for the rate function is not analytically known due to the singularity. The pole of the prefactor indeed modifies the large deviation function \(I(q)\), which is no longer correctly described by the Legendre transform of \(\mu_{\mathcal{Q}}(\lambda)\) (black solid line) close to the singularity \(q_{1}\). The leading contribution to the rate function comes essentially from the pole. This leads to an exponential tail of the distribution, or, equivalently, to a linear large deviation function for \(q>q_{1}\), whose slope is determined by the pole (black dotted line).
_Conclusions._ Large deviation estimators are only useful when they converge to their - usually unknown - asymptotic limit. The convergence region and the convergence rate are, however, equally unknown in general. We have experimentally investigated these important issues using a highly stable levitodynamic system subjected to feedback control. We have implemented, for the first time, a simple and reliable convergence criterion based on the standard error, that does not require the knowledge of the probability distribution. We have demonstrated that this criterion is able to identify the asymptotic range of definition of the scaled cumulant generating function, as well as the presence of singular prefactors, which we could independently detect. Such divergences are known to occur in many linear systems, both for heat [37; 38; 39; 40] and work [42; 43; 44; 45], as well as in nonlinear systems [46; 47; 48]. We have shown that they restrict both the convergence interval and the convergence time. These findings highlight the critical role of singular prefactors in the approach to the asymptotic large deviation limit.
_Acknowledgments._ We acknowledge financial support from the German Science Foundation (DFG) (Project FOR 2724) and the Austrian Science Fund (FWF) (Project Y 952-N36, START). We also thank Martin Luc Rosinberg for providing the analytical results for the large-\(Q\) expressions of the prefactors.
|
2309.09365 | Long Electron Spin Coherence Times of Atomic Hydrogen Trapped in
Silsesquioxane Cages | Encapsulated atomic hydrogen in cube-shaped octa-silsesquioxane (POSS) cages
of the Si$_8$O$_{12}$R$_8$ type (where R is an organic group) is the simplest
alternative stable system to paramagnetic endohedral fullerenes (N@C$_{60}$ or
P@C$_{60}$) that have been regarded as key elements of spin-based quantum
technologies. Apart from common sources of decoherence like nuclear spin and
spectral diffusion, all H@POSS species studied so far suffer from additional
shortening of $T_2$ at low temperatures due to methyl group rotations. Here we
eliminate this factor for the first time by studying the relaxation properties
of the smallest methyl-free derivative of this family with R=H, namely
H@T$_8$H$_8$. We suppress nuclear spin diffusion by applying dynamical
decoupling methods and we measure electron spin coherence times $T_2$ up to 280
$\pm$ 76 $\mu$s at $T=90$ K. We observe a linear dependence of the decoherence
rate $1/T_2$ on trapped hydrogen concentrations ranging between 9$\times
10^{14}$ cm$^{-3}$ and 5$\times 10^{15}$ cm$^{-3}$ which we attribute to the
spin dephasing mechanism of instantaneous diffusion and a nonuniform spatial
distribution of encapsulated H atoms. | George Mitrikas | 2023-09-17T20:14:43Z | http://arxiv.org/abs/2309.09365v1 | # Long Electron Spin Coherence Times of Atomic Hydrogen Trapped in Silsesquioxane Cages
###### Abstract
Encapsulated atomic hydrogen in cube-shaped octa-silsesquioxane (POSS) cages of the Si\({}_{8}\)O\({}_{12}\)R\({}_{8}\) type (where R is an organic group) is the simplest alternative stable system to paramagnetic endohedral fullerenes (N@C\({}_{60}\) or P@C\({}_{60}\)) that have been regarded as key elements of spin-based quantum technologies. Apart from common sources of decoherence like nuclear spin and spectral diffusion, all H@POSS species studied so far suffer from additional shortening of \(T_{2}\) at low temperatures due to methyl group rotations. Here we eliminate this factor for the first time by studying the relaxation properties of the smallest methyl-free derivative of this family with R=H, namely H@T\({}_{8}\)H\({}_{8}\). We suppress nuclear spin diffusion by applying dynamical decoupling methods and we measure electron spin coherence times \(T_{2}\) up to 280 \(\pm\) 76 \(\mu\)s at \(T=90\) K. We observe a linear dependence of the decoherence rate \(1/T_{2}\) on trapped hydrogen concentrations ranging between 9\(\times\)10\({}^{14}\) cm\({}^{-3}\) and 5\(\times\)10\({}^{15}\) cm\({}^{-3}\) which we attribute to the spin dephasing mechanism of instantaneous diffusion and a nonuniform spatial distribution of encapsulated H atoms.
Spin-based quantum computing is an active field of research exploring the use of spin particles as quantum bits (qubits). Electron and nuclear spins are particularly important in this context because they are natural quantum objects with relatively long coherence times that can be controlled using well-known magnetic resonance methods [1; 2]. Achieving long coherence times is a significant challenge in this line of research and different molecular systems are continuously being evaluated as qubit candidates [3; 4]. Paramagnetic endohedral fullerenes (e.g. N@C\({}_{60}\) or P@C\({}_{60}\) with electron spin \(S=3/2\)) have received increased attention mainly because they provide a bottom-up route to large-scale quantum register fabrication and because they possess the longest electron spin coherence times of any molecular spin studied to date [5]. For P@C\({}_{60}\), Naydenov [6] reported a maximum \(T_{2}\) value of 113 \(\mu\)s obtained with a two-pulse echo sequence (that was extended to 417 \(\mu\)s using dynamical decoupling methods) at \(T=\)10 K for a low spin concentration of 6.3\(\times\)10\({}^{13}\) cm\({}^{-3}\), whereas Brown and co-workers [7] measured \(T_{2}=\)190 \(\mu\)s (that could be extrapolated to 300 \(\mu\)s in the limit of infinitely short refocusing pulses) at \(T=\)70 K for N@C\({}_{60}\) with a concentration of 2.5\(\times\)10\({}^{15}\) cm\({}^{-3}\).
In 1994 Matsuda and co-workers [8] discovered that upon \(\gamma-\)irradiation POSS cages can stably trap hydrogen atoms even at room temperature. Atomic hydrogen is the simplest paramagnetic atom with the electron spin \(S=1/2\) coupled to the proton nuclear spin \(I=1/2\) with a large hyperfine coupling constant of 1420.406 MHz. Therefore, the exceptional high stability of atomic hydrogen encapsulated in POSS (H@POSS) triggered several electron paramagnetic resonance (EPR) studies to compare their spin relaxation properties with those of endo-fullerens. Unlike C\({}_{60}\), which is virtually free from nuclear spin noise due to the low natural abundance (1.07 %) of \({}^{13}\)C, Si\({}_{8}\)O\({}_{12}\)R\({}_{8}\) cages constitute concentrated \({}^{1}\)H nuclear spin systems, therefore, the electron spin coherence in H@POSS is dictated, as a rule, by nuclear spin diffusion [9; 10]. Indeed, early pulsed EPR works on alkyl-substituted POSS derivatives [11; 12] revealed a square exponential behavior of the Hahn echo decay at ambient temperature with \(T_{2}\) of the order of 10 \(\mu\)s, in line with the above spin dephasing mechanism.
At temperatures below 200 K all studies published so far reported a shortening of \(T_{2}\) to about 1 \(\mu\)s which was not reversible even at liquid helium temperatures. Using R groups with different rotational degrees of freedom we showed that this peculiarity, initially ascribed to changes in cage symmetry [11], has its origin to the methyl rotation of organic groups [13; 14]. Moreover, our recent study with deuterated methyl groups provided strong evidence that the short \(T_{2}\) values observed at very low temperatures could be assigned to quantum rotational tunneling [15].
Herein we study for the first time the electron spin relaxation properties of H@Si\({}_{8}\)O\({}_{12}\)H\({}_{8}\), also known as H@T\({}_{8}\)H\({}_{8}\), which is the smallest derivative of octa-silsesquioxanes. Interestingly, this species is the less studied among all H@POSS systems, presumably because the proximal proton nuclear spins of R and the larger delocalization of the spin wave function were assumed to contribute much more to decoherence compared to larger species. On the other hand, since H@T\({}_{8}\)H\({}_{8}\) contains no CH\({}_{3}\) units it is an ideal system free from dynamic processes with short correlation times like the rotation of methyl groups. Moreover, nuclear spin diffusion could be efficiently eliminated since deuterium isotopic substitution is straightforward for this system [16; 17].
A practical challenge in studying H@T\({}_{8}\)H\({}_{8}\) is the appearance of strong free radical signals in the \(g\approx 2\) region upon \(\gamma-\)irradiation. Although these signals are spectroscopically well-separated from the EPR signal of atomic hydrogen, the relaxation properties of the latter can be
affected significantly, especially when the concentration of free radicals is quite high. Unlike the majority of studied H@POSS, the free radical signals in H@T\({}_{8}\)H\({}_{8}\) are not affected by the presence of radical scavengers and appear to be quite stable when exposed to air [18]. To minimize the effects of unwanted free radicals we followed a different method for hydrogen encapsulation, namely electric discharge that has been proved to create less than one tenth of the radicals generated by \(\gamma-\)ray irradiation for the same resulting hydrogen encapsulation yield [19] (see SI for details).
Fig. 1 shows the room-temperature EPR signal of H@T\({}_{8}\)H\({}_{8}\) corresponding to atomic hydrogen concentration \(C_{\rm H}=1.9\times 10^{15}\) cm\({}^{-3}\). The obtained parameters \(g=\)2.00290(10), \(A=\)1410.5(2) MHz, and \(\Delta B_{\rm pp}=\)174 \(\mu\)T were determined from numerical simulations using the isotropic spin Hamiltonian \(\hat{\mathcal{H}}=g\beta_{e}B/hS_{z}-g_{n}\beta_{n}B_{0}/hI_{z}+A\hat{\bf S} \cdot\hat{\bf I}\) where \(g\) and \(g_{n}\) are the electron and nuclear \(g\)-factors, \(\beta_{e}\) and \(\beta_{n}\) are the Bohr and nuclear magnetons, \(A\) is the isotropic hyperfine coupling of the encapsulated proton, \(\Delta B_{\rm pp}\) is the linewidth, and \(B_{0}\) is the static magnetic field along \(z\)-axis. These parameters are in good agreement with those obtained in previous studies [12; 18] and verify the larger delocalization of the unpaired electron to the cage atoms for H@T\({}_{8}\)H\({}_{8}\) as compared to all other POSS species [20].
The transverse electron spin relaxation time, \(T_{2}\), can typically be measured with the pulse sequence \(\pi/2-\tau-\pi-\tau-\)echo shown in Fig. 2. Decay traces measured at different temperatures show stretched-exponential behaviour that can be fitted with
\[I(2\tau)=I_{0}\exp\left[-\left(\frac{2\tau}{T_{\rm M}}\right)^{n}\right], \tag{1}\]
where \(\tau\) is the interpulse delay, \(n\) is a parameter determined by the mechanism of phase memory decay and the rate, \(W\), of the dephasing process relative to \(\tau\), and \(T_{\rm M}\) is the so-called phase memory time encompassing \(T_{2}\) and all other processes that cause electron spin dephasing [9]. The experimentally determined range of parameter \(n\), \(1.5\leq n\leq 2.6\), implies a slow dynamic process with \(W\tau\ll 1\). For systems of low paramagnetic concentration and proton-containing ligands like the ones presented here, a very effective dephasing mechanism is the so-called nuclear spin diffusion [21]. According to this, two neighbouring proton nuclear spins can undergo mutual spin flips with typical rates of \(W/2\pi\sim 10\) kHz, which in turn modulate the electron-nuclear dipolar interaction.
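Fits of eq1 are simple to reproduce. One possible route, assuming the amplitude \(I_{0}\) is known, is to linearize the model as \(\ln[-\ln(I/I_{0})]=n\ln(2\tau)-n\ln T_{\rm M}\) and apply ordinary least squares, as in the sketch below (illustrative C++ with synthetic data; this is not necessarily the fitting procedure used for the measured traces).

```cpp
// Extraction of (T_M, n) from echo decays I(2*tau) = I0 * exp[-(2*tau/T_M)^n] via the
// linearization y = ln(-ln(I/I0)) = n*x - n*ln(T_M), x = ln(2*tau), and ordinary least squares.
#include <cmath>
#include <cstdio>
#include <vector>

void fit_stretched_exp(const std::vector<double>& two_tau, const std::vector<double>& I,
                       double I0, double* T_M, double* n) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const int N = static_cast<int>(two_tau.size());
    for (int i = 0; i < N; ++i) {
        double x = std::log(two_tau[i]);
        double y = std::log(-std::log(I[i] / I0));
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double slope = (N * sxy - sx * sy) / (N * sxx - sx * sx);
    double icept = (sy - slope * sx) / N;
    *n   = slope;
    *T_M = std::exp(-icept / slope);                 // intercept = -n * ln(T_M)
}

int main() {
    // Synthetic decay with T_M = 13.4 us and n = 2, sampled at a few values of 2*tau (in us).
    std::vector<double> t = {2, 4, 6, 8, 10, 14, 18}, I;
    for (double ti : t) I.push_back(std::exp(-std::pow(ti / 13.4, 2.0)));
    double T_M = 0.0, n = 0.0;
    fit_stretched_exp(t, I, 1.0, &T_M, &n);
    std::printf("T_M = %.2f us, n = %.2f\n", T_M, n);  // recovers 13.40 and 2.00
    return 0;
}
```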
The temperature dependence of \(T_{\rm M}\) is shown in Fig. 3. Contrary to methyl-containing POSS derivatives (see for instance gray squares depicting previously published data for H@Q\({}_{8}\)M\({}_{8}\)), \(T_{\rm M}\) becomes maximum around 150 K and remains constant in the temperature interval 10-150 K with a mean value of 13.4 \(\mu\)s. At room temperature, the obtained \(T_{\rm M}=\)8.9 \(\mu\)s is larger than the previously reported value of 3.8 \(\mu\)s, [12] but this difference could be ascribed to a possible larger concentration of paramagnetic
Figure 2: Two-pulse electron spin echo decays measured at six different temperatures as a function of \(2\tau\), and the superimposed stretched exponential fits using eq1. All traces were recorded at the observer position \(B_{0}=\)319.1 mT corresponding to the low-field EPR transition for the sample with \(\hat{C}_{\rm H}=1.9\times 10^{15}\) cm\({}^{-3}\).
Figure 1: X-band room-temperature EPR spectrum of T\({}_{8}\)H\({}_{8}\). The two insets show details of the H@T\({}_{8}\)H\({}_{8}\) resonances corresponding to atomic hydrogen concentration of \(C_{\rm H}=1.9\times 10^{15}\) cm\({}^{-3}\) (black traces) along with their best fitted simulations (red traces). For fitting parameters see text.
centers in the previous case. The modest temperature dependence of \(T_{\rm M}\) between 150 K and 293 K is assigned to the short spin-lattice relaxation times \(T_{1}\) that range between 115 and 14 \(\mu\)s, respectively, and determine \(T_{\rm M}\) in this temperature interval (see SI for details).
An important aspect of these results is the absence of dynamic effects associated with methyl rotations that were previously observed in all H@POSS species. This paves the way for conducting experiments at much lower temperatures where \(T_{2}\) is not any more limited by \(T_{1}\) which exceeds 1 ms below 90 K. The suppression of nuclear spin diffusion and the investigation of additional underlying decoherence mechanisms can be best performed with dynamical decoupling methods comprising successive refocusing microwave (mw) pulses which are separated by time delays \(\tau\) that are much shorter than the correlation time \(\tau_{c}\) of the dephasing mechanism [2]. The Carr-Purcell-Meiboom-Gill (CPMG) sequence [22; 23], \((\pi/2)_{x}\{-\tau/2-(\pi)_{y}-\tau/2-{\rm echo}\}^{N}\), is a typical dynamical decoupling method that performs very well in nuclear magnetic resonance (NMR) spectroscopy [24]. However, in EPR spectroscopy the unwanted stimulated echo, which appears as a consequence of partial excitation and non-ideal mw pulses, overlaps with the desired refocused primary echo [25]. Since this stimulated echo decays with \(T_{1}\), the CPMG sequence could erroneously result in longer than real \(T_{2}\) values and, therefore, care has to be exercised when used. To ensure reliable \(T_{2}\) measurements, we have used the more robust XY4 and XY8 pulse sequences shown in Fig. 4(A) that eliminate such unwanted signals (see SI for details) [26; 27].
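For reference, the phase pattern and nominal timing of such an XY8-\(M\) train can be written down in a few lines (an illustrative sketch based on the standard XY8 phase cycle XYXYYXYX; the delay value is only an example and all hardware details are omitted).

```cpp
// Phases and nominal timings of an XY8-M train:
// (pi/2)_x { - tau/2 - (pi)_phi - tau/2 - echo }^N with N = 8*M and phases cycling X Y X Y Y X Y X.
#include <cstdio>

int main() {
    const int M = 3;                         // XY8-3, i.e. N = 24 refocusing pulses
    const double tau = 2.160;                // interpulse delay in microseconds (example value)
    const char* xy8 = "XYXYYXYX";
    for (int k = 0; k < 8 * M; ++k) {
        double t_pulse = (k + 0.5) * tau;    // k-th pi pulse, measured from the pi/2 pulse
        double t_echo  = (k + 1.0) * tau;    // k-th refocused echo
        std::printf("pulse %2d: phase %c at t = %6.2f us, echo at t = %6.2f us\n",
                    k + 1, xy8[k % 8], t_pulse, t_echo);
    }
    return 0;
}
```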
Fig. 4(B) and Fig. 4(C) show the time evolution of electron spin coherence measured with dynamical decoupling sequences that use different number of pulses, \(N\). Starting from the simple two-pulse sequence (\(N=1\)), all traces show stretched exponential decay as well as modulations originating from weak anisotropic hyperfine couplings between the unpaired electron and nearby magnetic nuclei like \({}^{1}\)H and \({}^{29}\)Si, the so-called electron spin echo envelope modulation (ESEEM) effect [28; 29]. As we showed in our previous studies the application of dynamical decoupling methods in such spin systems can greatly enhance decoherence for specific values of \(\tau\)[30] and the degree of this enhancement depends on the strength of hyperfine coupling and \(N\)[31]. In terms of noise spectrum of the system under study, the hyperfine-coupled \({}^{1}\)H and \({}^{29}\)Si nuclear spins can be regarded as a source of high-frequency noise whose effect is enhanced upon application of large number of pulses \(N\). Therefore, although dynamical decoupling suppresses nuclear spin diffusion (low-frequency noise), it may also enhance decoherence if the system bears such a source of high frequency noise.
To reduce the influence of these deep modulations on the determination of coherence times, we consider only the points of maximum echo intensity that define the envelopes of coherence decay curves. Data analysis shows that the maximum \(T_{2}=100\pm 10\)\(\mu\)s is obtained for \(N=24\) in both low- and high-field measurements. The insets of Fig. 4 depict the scaling of \(T_{2}\) with \(N\) which - within experimental error - agrees well with a \(T_{2}\propto N^{2/3}\) behaviour expected for a Lorentzian noise spectrum when \(\tau_{c}\gg T_{2}\)[32]. Under this condition, \(T_{2}\) is expected to increase with increasing \(N\) with an upper limit of \(T_{2}^{max}=2T_{1}\), whereas, for \(\tau_{c}\ll T_{2}\) no improvement of \(T_{2}\) with \(N\) occurs. Our data show a saturation trend of \(T_{2}\) for \(N>8\), however, the reached value of 100 \(\mu\)s is much smaller than \(2T_{1}=2\) ms at this temperature. Interestingly, this value matches the correlation time \(\tau_{c}\sim 100\)\(\mu\)s that corresponds to the proton nuclear spin flip-flop rate, \(W/2\pi\sim 10\) kHz. Consequently, we can assume that for proton nuclear spin diffusion no significant improvement of \(T_{2}\) with \(N>24\) should be expected for the system under study.
Technical limitations of our spectrometer (maximum allowed number of mw pulses \(N=30\) and maximum evolution time of 240 \(\mu\)s) do not allow for using additional mw pulses and test any possible improvement with \(N\). On the other hand, since the correlation time of nuclear spin diffusion is \(\tau_{c}\sim 100\)\(\mu\)s, an effective dynamical decoupling sequence should use interpulse delays \(\tau\ll\tau_{c}\), i.e. shorter than 10 \(\mu\)s in order to suppress this dephasing mechanism. Therefore, instead of using sequences with varying \(\tau\), we can set a constant \(\tau\) value and measure the train of refocusing electron spin echoes occurring at times \(t=\tau,2\tau,...,N\tau\) after the preparation \(\pi/2\)-pulse. With the proper choice of a short enough \(\tau\) value, which at the same time corresponds to \({}^{1}\)H and \({}^{29}\)Si revivals of the spin-echo signals of Fig. 4, this setup ensures simultaneous suppression of both low- and high-frequency noise and allows for probing other dephasing mechanisms. Typical experiments using the XY8-3 sequence (\(N=24\)) with \(\tau=2160\) ns are shown in Fig. 5.
Figure 3: Temperature dependence of phase memory times \(T_{\rm M}\) for H@T\({}_{8}\)H\({}_{8}\) with \(C_{\rm H}=1.9\times 10^{15}\) cm\({}^{-3}\) (R=H, circles) and H@Q\({}_{8}\)M\({}_{8}\) (R=OSi(CH\({}_{3}\))\({}_{3}\), squares, modified with permission from [13] copyright © the Owner Societies 2020). Curves connect data points.
The \(T_{2}\) values obtained with the above dynamical decoupling method can be further analyzed based on the general formula of the phase relaxation rate
\[\frac{1}{T_{2}}=\frac{1}{T_{\rm SD}}+\frac{1}{T_{\rm ID}}. \tag{2}\]
The first term, referred to as spectral diffusion, describes decoherence of the central spin (A spins) due to random fluctuations of dipole fields created by neighbour electron spins (B spins) that are not excited by the mw pulses. These fluctuations can either originate from spin-lattice relaxation of B spins (\(T_{1}\)-spectral diffusion) or mutual spin flips among them (\(T_{2}\)-spectral diffusion). For the first case, the contribution to phase relaxation rate is given by [28]
\[\frac{1}{T_{\rm SD}}=\frac{1}{1.4}\sqrt{2.53\frac{\mu_{0}}{4\pi\hbar}g_{A}g_{B }\mu_{B}{}^{2}\frac{C_{\rm B}}{T_{1}{}^{(\rm B)}}}, \tag{3}\]
where \(T_{1}{}^{(\rm B)}\) is the spin-lattice relaxation of B spins and \(C_{\rm B}\) is their concentration. To inspect if this type of spectral diffusion dominates our results, measurements at lower temperatures were performed. At \(T=20\) K, where \(T_{1}=550\) ms, Eq.3 predicts a 23-fold increase of \(T_{\rm SD}\) compared to \(T=90\) K where \(T_{1}=1\) ms. Our data show no sign of temperature dependence for \(T_{2}\) (see SI for details) and therefore we conclude that \(T_{1}\)-spectral diffusion is not the dominant dephasing mechanism in our case.
The second term of Eq.2 is the so-called instantaneous diffusion describing the static spread of the Larmor spin frequencies among excited dipole-coupled A spins which is imposed by the applied mw pulses. The contribution to phase relaxation rate is given by [28]
\[\frac{1}{T_{\rm ID}}=C_{\rm A}\frac{4\pi^{2}}{9\sqrt{3}}\frac{\mu_{0}}{4\pi\hbar}g_{A}^{2}\mu_{B}{}^{2}\sin^{2}\frac{\theta_{2}}{2}=C_{\rm A}\,k\,\sin^{2}\frac{\theta_{2}}{2}, \tag{4}\]
Figure 4: (A) XY4 and XY8 pulse sequences measuring the intensity of the last refocused echo (marked in black) as a function of the total evolution time \(t\) after the \(\pi/2\) pulse. (B) Electron spin coherence of H@TsHs with \(C_{\rm H}=1.9\times 10^{15}\) cm\({}^{-3}\) measured at \(T=90\) K with DD sequences with different number of pulses at the low-field EPR transition (\(B_{0}=\)319.1 mT). \(N=1\) corresponds to two-pulse echo decay; \(N=2\) uses the sequence \((\pi/2)_{x}-\tau/2-(\pi)_{y}-\tau-(\pi)_{x}-\tau/2-\mathrm{echo}\); \(N=4\) corresponds to XY4; \(N=8,16,24\) correspond to XY8-1, XY8-2, and XY8-3, respectively. Black traces depict best fits with \(I=I_{0}\cdot\mathrm{exp}(-t/T_{2})^{n}\). Inset: Scaling of derived \(T_{2}\) values with the number \(N\) of DD pulses and the fitted curve \(T_{2}\propto N^{x}\); dashed curve connects points as a guide to show the data trend. (C) Same as in (B) measured at the high-field EPR transition (\(B_{0}=\)369.9 mT)
where \(C_{\rm A}\) is the concentration of A spins, \(k=8.2834\times 10^{-13}\) cm\({}^{3}\)s\({}^{-1}\), and \(\theta_{2}\) is the rotation angle of the refocusing pulse in the two-pulse sequence. A standard method to mitigate the effect of instantaneous diffusion is to measure two-pulse echo decays with small rotation angles \(\theta_{2}\) and then extrapolate the data to the limit of infinitely short refocusing pulses in order to estimate the \(T_{2}\) that is free from the instantaneous diffusion effect. Apparently, since our \(T_{\rm M}\) values obtained with the two-pulse sequence are completely masked by nuclear spin diffusion, this methodology cannot be applied here. However, as \(T_{\rm ID}\) depends on \(C_{\rm A}\), one can probe the contribution of instantaneous diffusion by comparing the \(T_{2}\) values obtained with dynamical decoupling experiments on samples with different encapsulated hydrogen concentrations \(C_{\rm H}\).
Fig. 5 shows the data measured with the XY8-3 sequence with \(\tau=2160\) ns for four samples with different \(C_{\rm H}\). Clearly, as the encapsulated hydrogen concentration is reduced, the intensity of the refocused echoes is retained for longer evolution times. For the most diluted sample with \(C_{\rm H}=9\times 10^{14}\) cm\({}^{-3}\) dynamical decoupling experiments obtain \(T_{2}\) = 247 \(\pm\) 52 \(\mu\)s which is the longest electron spin coherence time ever measured for the H@POSS system. Again, the limited number of available mw pulses does not allow for observing the full echo decay and thus determining \(T_{2}\) with higher accuracy. Covering the whole necessary time window with \(N=24\) pulses requires sequences with \(\tau\geq 10\)\(\mu\)s which is, however, of no use because nuclear spin diffusion dominates electron spin dephasing in this case. We anticipate that elimination of nuclear spin diffusion by deuterium isotopic substitution will make such scheme possible in our future studies.
The determined phase relaxation rates \(1/T_{2}\) correlate very well with \(C_{\rm H}\) as can be seen in Fig. 6 where data from measurements with four different \(\tau\) values are collected. The apparent linear dependence suggests either instantaneous or \(T_{2}\)-type spectral diffusion. It should be noted that, although the dynamical decoupling methods used here can efficiently suppress spectral diffusion mechanisms with correlation times \(\tau_{c}\geq 10\)\(\mu\)s, they can not refocus interactions between identical spins, so they are completely ineffective in suppressing instantaneous diffusion. Therefore, we can assume that the \(C_{\rm H}\) dependence of \(1/T_{2}\) is virtually governed by the mechanism of instantaneous diffusion.
To further test this assumption, we model our data with a modified version of eq4
\[\frac{1}{T_{2}}=b_{0}+\alpha_{\rm M}\frac{C_{\rm H}}{2}k, \tag{5}\]
where \(b_{0}\) is a constant and \(C_{\rm A}\) has been replaced by \(C_{\rm H}/2\) because since the two EPR transitions are well separated the measurement on each one of them involves only half
Figure 6: Phase relaxation rates versus encapsulated hydrogen concentration \(C_{\rm H}\) for data acquired with four different \(\tau\). The straight line is the linear fit with eq5 giving \(\alpha_{\rm M}=11.1\pm 0.7\) and \(b_{0}=871\pm 934\) Hz.
Figure 5: Time evolution of the spin magnetization under the application of the XY8-3 sequence with \(\tau=2160\) ns for four H@T\({}_{8}\)H\({}_{8}\) samples with different encapsulated hydrogen concentrations, \(C_{\rm H}=4.9\times 10^{15}\) cm\({}^{-3}\) (A), \(3.4\times 10^{15}\) cm\({}^{-3}\) (B), \(1.9\times 10^{15}\) cm\({}^{-3}\) (C), and \(9\times 10^{14}\) cm\({}^{-3}\) (D). Gray circles mark the refocused echo amplitudes; red traces depict their mono-exponential fits with \(I=I_{0}\cdot\exp(-t/T_{2})\). All measurements were performed at \(T=90\) K at the observer position \(B_{0}=\)369.9 mT corresponding to the high-field EPR transition.
of the encapsulated hydrogen atoms. \(\alpha_{\rm M}\) is a scaling factor accounting for a possible deviation of the local spin concentration \(C_{\rm loc}=\alpha_{\rm M}C_{\rm H}\) from \(C_{\rm H}\), the average spin concentration of the encapsulated hydrogen atoms as determined from continuous wave EPR spectroscopy. The linear fit of data with eq5 gives \(\alpha_{\rm M}=11.1\pm 0.7\), i.e. \(C_{\rm loc}\approx 11~{}C_{\rm H}\), implying a nonuniform spatial distribution of paramagnetic centers (\(C_{\rm loc}\geq C_{\rm H}\)), which is a well-known result of track effects in irradiated solids [33]. We have previously observed similar differences between \(C_{\rm loc}\) and \(C_{\rm av}\) for low-dose \(\gamma-\)irradiated POSS cages [15]. Interestingly, the method of electric discharge used in the present work favors the trapping of H atoms mainly on the surfaces of the molecular crystals [19], a fact that can adequately justify the aforementioned nonuniform spatial distribution of encapsulated H atoms.
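As a quick consistency check (an illustrative estimate, not part of the original analysis), inserting the fitted \(\alpha_{\rm M}=11.1\) and the lowest concentration \(C_{\rm H}=9\times 10^{14}\) cm\({}^{-3}\) into eq5, while neglecting the small offset \(b_{0}\), gives
\[\frac{1}{T_{2}}\approx\alpha_{\rm M}\,\frac{C_{\rm H}}{2}\,k\approx 11.1\times(4.5\times 10^{14}\ {\rm cm^{-3}})\times(8.28\times 10^{-13}\ {\rm cm^{3}\,s^{-1}})\approx 4.1\times 10^{3}\ {\rm s^{-1}},\]
i.e. \(T_{2}\approx 240\)\(\mu\)s, in line with the 247 \(\pm\) 52 \(\mu\)s measured for the most dilute sample.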
In conclusion, we have managed for the first time to measure long electron spin coherence times \(T_{2}\) up to 280 \(\pm\) 76 \(\mu\)s at \(T=90\) K for the smallest H@POSS molecule, namely H@T\({}_{8}\)H\({}_{8}\). The essence of this unprecedented level of improvement lies in the absence of the methyl rotations that, in all previously studied H@POSS species, acted as the dominant dephasing mechanism, especially at low temperatures. Our results showed that instantaneous diffusion is the only limiting decoherence mechanism for H@T\({}_{8}\)H\({}_{8}\), as all other important mechanisms could be suppressed by dynamical decoupling. For real applications it may also be necessary to physically reduce the sources of these mechanisms in order to best exploit the long coherence times of this species and simplify the pulse sequences for building efficient quantum gates. For the case of nuclear spin diffusion this can be easily tackled with deuterium isotopic substitution. Further increase of \(T_{2}\) depends on the ability to achieve a uniform spatial distribution of H atoms and to control their concentration. Although this may require some progress in POSS chemistry to be made, our study showed for the first time the potential of H@T\({}_{8}\)H\({}_{8}\) in spin-based quantum technologies, as it can compete with endohedral fullerenes on an equal footing in terms of coherence times.
|
2302.14720 | GPU Acceleration of Swendson-Wang Dynamics | When simulating a lattice system near its critical temperature, local
algorithms for modeling the system's evolution can introduce very large
autocorrelation times into sampled data. This critical slowing down places
restrictions on the analysis that can be completed in a timely manner of the
behavior of systems around the critical point. Because it is often desirable to
study such systems around this point, a new algorithm must be introduced.
Therefore, we turn to cluster algorithms, such as the Swendsen-Wang algorithm
and the Wolff clustering algorithm. They incorporate global updates which
generate new lattice configurations with little correlation to previous states,
even near the critical point. We look to accelerate the rate at which these
algorithm are capable of running by implementing and benchmarking a parallel
implementation of each algorithm designed to run on GPUs under NVIDIA's CUDA
framework. A 17 and 90 fold increase in the computational rate was respectively
experienced when measured against the equivalent algorithm implemented in
serial code. | Tristan Protzman, Joel Giedt | 2023-02-28T16:31:39Z | http://arxiv.org/abs/2302.14720v1 | # GPU Acceleration of Swendson-Wang Dynamics
###### Abstract
When simulating a lattice system near its critical temperature, local algorithms for modeling the system's evolution can introduce very large autocorrelation times into sampled data. This critical slowing down places restrictions on the analysis that can be completed in a timely manner of the behavior of systems around the critical point. Because it is often desirable to study such systems around this point, a new algorithm must be introduced. Therefore, we turn to cluster algorithms, such as the Swendsen-Wang algorithm and the Wolff clustering algorithm. They incorporate global updates which generate new lattice configurations with little correlation to previous states, even near the critical point. We look to accelerate the rate at which these algorithm are capable of running by implementing and benchmarking a parallel implementation of each algorithm designed to run on GPUs under NVIDIA's CUDA framework. A 17 and 90 fold increase in the computational rate was respectively experienced when measured against the equivalent algorithm implemented in serial code.
## 1 Introduction
Around the critical point(s) of a system modeled on a lattice, many algorithms experience the phenomenon of critical slowing down. This occurs when the critical exponent of the algorithm in use is sufficiently large that data collected around the point of a phase transition has large autocorrelation times. The behavior can be modeled as \(\tau\sim|g-g_{*}|^{-z}\), where \(\tau\) is the autocorrelation time, \(g\) is the coupling or temperature and \(g_{*}\) is its critical value. \(z\) is the critical exponent which controls the slow-down; for a diffusion based algorithm, such as a local Metropolis-Hastings update, one would have \(z=2\). This limits our ability to study systems around the point \(g_{*}\) as it greatly increases the number of iterations needed to collect statistically uncorrelated results. However, there exist clever algorithms which possess a very small critical exponent \(z\approx 0\). One such process is known as Swendsen-Wang dynamics [1].
Swendsen-Wang dynamics is able to offer small autocorrelation times because of how it generates new lattice configurations. Instead of attempting a localized change to the lattice and conditionally accepting it at each iteration as in the Metropolis-Hastings algorithm, clusters of sites (often large) on the lattice are considered and potentially changed as a whole at each iteration. This allows for subsequent configurations to be significantly different because the modifications are non-local.
Crucially, it is not a diffusive process. Therefore, large, collective fluctuations can be simulated efficiently so that the system can be better studied around the critical point.
However, because it is advantageous for scaling studies to simulate as large of a system as possible, it is still important to take advantage of the parallel computing capabilities of modern computers. Note that virtually all performance gains since around 2002 have come from increasing the number of computing cores on processors or the use of accelerators such as graphical processing units (GPUs). So, in order to realize the advances of the last \(\sim\)20 years, it is essential to have a parallel algorithm. The sites of the lattice provide a natural division of work; i.e., a unique thread associated with each site. The limiting factors of such a scheme are memory bandwidth and latency, not computational cost. GPUs have a large memory bandwidth, but high latency on the main memory accesses. Therefore they have both opportunities and challenges; a successful strategy requires that the low latency registers, caches and shared memory be used in a maximal way and that main memory accesses be coordinated to hide latency by having large data streams in flight. This heterogeneity in the memory hierarchy as well as the locality of caches and shared memory to streaming multiprocessors introduces a complexity that the programmer must tackle in order to make efficient use of the device. In this article we illustrate an approach that addresses these issues.
## 2 Swendsen-Wang Algorithm
The Swendsen-Wang algorithm is an iterative process to generate a new configuration of spins consistent with the partition function of the system,
\[Z=\sum_{\{\sigma\}}e^{-H(\{\sigma\})/k_{B}T} \tag{1}\]
There are five steps to the algorithm, of which three require computation. Here each step and its implementation will be described. In what follows, we will be discussing the two-dimensional Ising system on a square lattice. The approach can be generalized to other discrete spin systems on other lattices.
### Bond Formation
The algorithm starts with the formation of bonds between sites with like spins. Starting from an initial configuration (Fig. 1), each site attempts to form a bond with neighboring sites in the North, East, South, and West directions. In addition to the requirement of sites having equal spins, a probability of \(p=1-\exp(-1/k_{B}T)\) governs the formation of bonds. This has the effect of forming large clusters of bonded sites when the temperature is low and smaller clusters for higher temperatures. A depiction of bonds formed is given in Fig. 2.
Achieving this in parallel is trivial. Every site is assigned its own thread which checks the sites to the East and South for a matching spin. Given a match, a random number is generated and used to determine whether or not a bond is formed, with the above-mentioned probability. To increase the speed of checking locations, shared memory is utilized, loading neighboring tiles into it. This is worthwhile for two reasons: the first is that each site is checked twice; therefore we reduce our global memory accesses by a factor of two. Secondly, by putting all the data loads early in the kernel's execution, it allows for the kernel to be completed in a coalesced fashion before any threads diverge.
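A minimal sketch of such a bond-formation kernel is given below (illustrative CUDA C++; the array names, the per-site cuRAND Philox states and the two-bit bond encoding are assumptions of the sketch, and the shared-memory tiling described above is omitted for brevity). Each thread handles one site and attempts bonds to its East and South neighbours with probability \(p=1-\exp(-1/k_{B}T)\) computed on the host.

```cpp
// One thread per lattice site: attempt to bond with the East and South neighbours.
// Bonds are stored per site in two bits (bit 0 = East, bit 1 = South).
#include <curand_kernel.h>

__global__ void form_bonds(const int* spin, unsigned char* bond,
                           curandStatePhilox4_32_10_t* rng, int L, float p) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= L || y >= L) return;
    int i     = y * L + x;
    int east  = y * L + (x + 1) % L;                 // periodic boundaries
    int south = ((y + 1) % L) * L + x;
    curandStatePhilox4_32_10_t local = rng[i];       // assumed initialised with curand_init elsewhere
    unsigned char b = 0;
    if (spin[i] == spin[east]  && curand_uniform(&local) < p) b |= 1u;   // East bond
    if (spin[i] == spin[south] && curand_uniform(&local) < p) b |= 2u;   // South bond
    bond[i] = b;
    rng[i]  = local;                                 // persist the RNG state
}
```

The kernel would be launched over a two-dimensional grid of thread blocks covering the \(L\times L\) lattice.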
### Erasure of Spins
Once bonds have been formed between the appropriate sites, all current spins can be disregarded. Since the new configuration is based on the clusters of sites and not the prior spins, they are irrelevant. While this step is dictated by the algorithm, it requires no attention in code as the newly generated spins will overwrite existing values.
### Cluster Formation
Once bonds exist where appropriate, the lattice must be divided into the clusters of connected sites. These clusters contain all the sites which should be assigned the same spin value in the following step. Since this is a nonlocal search, completing this step in parallel was the most challenging to implement properly and efficiently. The algorithm selected to complete this task is a "label equivalence algorithm" described by Kalentev et. al [2]. Additional descriptions of the algorithm are given in [3; 4]. The algorithm works in a three step iterative process where each pass refines the clustering until it is complete. A depiction of complete clustering is shown in figure 3.
In the initialization of the algorithm, each site is labeled with a unique integer, incrementing from zero. This initial label serves as the basis for clustering, as each site attempts to find the lowest-valued label it is connected to. It does so by comparing its label to those of its neighbors and forming a reference to the lowest-valued neighbor, building up a forest of references. These forests are then collapsed into trees by a function that follows each site's chain of references down to the lowest label, looks up the label the site has been found to be equivalent to, and assigns that label to the site.
This process repeats until every site has been assigned to its appropriate cluster. For lattices of size \(512^{2}\), it generally completes in 5 or 6 parallel iterations.
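A serial sketch of the label-equivalence idea is given below. The array names, the Python loops, and the periodic boundaries are illustrative stand-ins for the per-site GPU threads, but the scanning and path-compression passes mirror the description above.

```python
import numpy as np

def cluster_labels(bond_e, bond_s):
    """Serial sketch of label-equivalence clustering on an L x L lattice.

    bond_e[i, j] / bond_s[i, j] flag a bond to the East / South neighbor (periodic
    boundaries). Each pass does a scanning step (adopt the smallest connected label)
    followed by an analysis step (collapse the reference chains).
    """
    L = bond_e.shape[0]
    label = np.arange(L * L)                 # unique starting label per site

    def linked(i, j):
        out = []
        if bond_e[i, j]:            out.append((i, (j + 1) % L))
        if bond_s[i, j]:            out.append(((i + 1) % L, j))
        if bond_e[i, (j - 1) % L]:  out.append((i, (j - 1) % L))
        if bond_s[(i - 1) % L, j]:  out.append(((i - 1) % L, j))
        return out

    changed = True
    while changed:
        changed = False
        for i in range(L):                   # scanning pass
            for j in range(L):
                s = i * L + j
                best = min([label[s]] + [label[a * L + b] for a, b in linked(i, j)])
                if best < label[s]:
                    label[s] = best
                    changed = True
        for s in range(L * L):               # analysis pass: collapse reference chains
            root = label[s]
            while label[root] != root:
                root = label[root]
            label[s] = root
    return label.reshape(L, L)
```

With the `bond_e` and `bond_s` arrays of the previous sketch, `cluster_labels(bond_e, bond_s)` returns one label per site, and all sites connected by bonds share the same label.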
### Assignment of New Spins
After all clusters are formed, each cluster is assigned a new random spin value, with an equal probability of it being up or down. Every site is then assigned the spin of the cluster it is contained within. This creates a new lattice configuration which can be very different from the prior one while still preserving the expected characteristics of the system. Since these are large moves that affect many spins in the case of large clusters, the algorithm has much better behavior in sampling configuration space than diffusive algorithms such as Metropolis. Figure 4 depicts the new configuration generated by this algorithm.
### Erasure of Bonds
We can now disregard the bonds formed between sites, leaving a new configuration of spins with little correlation to the prior configuration.
For each new configuration, the average magnetization and energy of the lattice are measured and recorded. The average magnetization is given by \(M=\frac{1}{N}\sum_{i=1}^{N}\sigma_{i}\), while the energy is given by the Hamiltonian \(H=-\sum_{\langle i,j\rangle}\sigma_{i}\sigma_{j}\). These sums are efficiently calculated by reduction routines provided by the CUB library [5].
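For illustration, both observables can be written as simple array reductions; the function below is a serial stand-in for the CUB reductions, with periodic boundaries and each nearest-neighbor pair counted once.

```python
import numpy as np

def measure(spins):
    """Per-configuration observables of a 2D Ising lattice (serial stand-in)."""
    M = spins.mean()                                    # average magnetization
    E = -np.sum(spins * (np.roll(spins, -1, axis=0) +
                         np.roll(spins, -1, axis=1)))   # H = -sum_<i,j> s_i s_j
    return M, E
```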
## 3 Wolff Clustering Algorithm
Similarly, clustering algorithms can be applied to continuous spin models in higher dimensions. In particular, the O(4) symmetric \(\phi^{4}\) theory in four dimensions was simulated using the Wolff clustering algorithm [6]. It was shown by Frick et al. [7] that this algorithm exhibits the same desirable properties as the Swendsen-Wang algorithm; thus it is once again desirable to develop a
Figure 1: The algorithm starts with an initial configuration of spins. For the first iteration, this is generated by randomly assigning a spin to each site. For all other iterations, it is the state the system is in at the completion of the prior iteration.
Figure 4: A new spin value is then generated for each cluster, and a thread per site sets each location’s new spin value. This creates a new arrangement of spins which can be very different from the prior configuration while maintaining the expected physical properties.
Figure 3: The formation of clusters is done using a label-equivalence algorithm designed for massively parallel systems. It is able to converge quickly even on large lattices, and assigns each cluster a label such that all connected components share the same label. Here the different colors are used to represent different clusters.
Figure 2: Bonds are formed between neighboring sites if they have the same spin and the temperature allows. This is aided by the GPU’s many threads. Each site is assigned a thread which checks the neighbor below and to the right to determine if a bond should be formed.
GPU-accelerated implementation. Though the ideas are similar, there are a few critical differences which must be addressed.
Most trivially, the implementation must be extended to work in four dimensions. This was achieved by extending the array used to hold site values to a four-dimensional array. When checking nearest neighbors to form bonds, each site now checks the adjacent index along each of the four dimensions. The other significant change comes from the move to a continuous spin model: the values of neighboring sites can no longer be compared in a binary fashion. Instead, bonds between sites are formed with the following probability, where \(\kappa\) is the coupling constant and \(r\) is a random four-component direction normalized such that \(r^{\alpha}r^{\alpha}\equiv\sum_{\alpha}r^{\alpha}r^{\alpha}=1\):
\[p=1-\exp\{\min(0,\ -4\kappa\phi_{x}^{\alpha}r^{\alpha}\phi_{x+\mu}^{\beta}r^{ \beta})\} \tag{2}\]
That is, the lattice field is projected onto the direction of the four-vector \(r^{\alpha}\), and this projection is the basis of the probability. Finally, we must redefine what it means to flip clusters. Once clusters have been formed from these bonds, each one is selected and joined into a multicluster \(C\) with probability \(\frac{1}{2}\). The multicluster is then reflected about the random direction used to form the bonds as follows:
\[(R^{C}\phi)_{x}=\begin{cases}\phi_{x}-2(\phi_{x}^{\alpha}r^{\alpha})r&\text{ if }\phi_{x}\in C\\ \phi_{x}&\text{ if }\phi_{x}\not\in C\end{cases} \tag{3}\]
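A minimal NumPy sketch of eqs. (2) and (3) is given below; it only illustrates the bond probability and the cluster reflection for an assumed O(4) field array, and it omits the cluster search and the probability-1/2 selection into \(C\). The lattice size and coupling are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(1)
L, kappa = 8, 0.30                          # illustrative lattice size and coupling
phi = rng.normal(size=(L, L, L, L, 4))      # O(4) field on a 4D lattice

r = rng.normal(size=4)                      # random reflection direction with r.r = 1
r /= np.linalg.norm(r)
proj = phi @ r                              # phi_x^alpha r^alpha at every site

def bond_prob(mu):
    """Bond probability of eq. (2) between each site and its neighbor in direction mu."""
    return 1.0 - np.exp(np.minimum(0.0, -4.0 * kappa * proj * np.roll(proj, -1, axis=mu)))

def reflect(phi, proj, in_C):
    """Reflection of eq. (3), applied only to sites in the multicluster C."""
    flipped = phi - 2.0 * proj[..., None] * r
    return np.where(in_C[..., None], flipped, phi)
```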
## 4 Results
To verify the correctness of the new algorithm, the magnetization, energy, and specific heat across a range of temperatures were calculated and compared both to the CPU algorithm's results and to previously published results [8; 9]. Figure 6 shows the specific heat for the Ising system as a function of temperature. It exhibits the characteristic peak near the critical temperature where long-range order disappears.
All results generated by the GPU implementation of the algorithm very closely match the sets of data generated by the serial CPU algorithm running an equivalent simulation. Additionally, the behavior and locations of the critical points agree with those reported in the above references. This gives confidence in the implementation's correctness and accuracy.
Figure 5: The energy per site of the Ising lattice as a function of the system temperature. Notice the inflection point around \(k_{B}T=2.25\). This denotes a phase transition, signaling that this is a critical point of the system.
Figure 6: The specific heat per site of the Ising lattice. The specific heat would be expected to drastically increase at the critical point, which we can clearly see happening from the peak in the neighborhood of \(k_{B}T=2.25\).
Figure 7: The magnetization per site of the Ising lattice as a function of temperature. The critical point is once again apparent, above it the magnetization of the system goes to approximately zero.
## 5 Performance
To serve as a benchmark of the GPU-accelerated code's performance, a serial algorithm running on traditional CPUs was developed. Resources at the Pittsburgh Supercomputing Center were utilized to compare performance; the serial code was executed on an Intel Haswell E5-2695 processor while the GPU code was executed on an NVIDIA Tesla P100 card. On the larger of the sampled grid sizes, the parallel algorithm outperformed the serial code by a factor of more than 17.
The performance of this algorithm proved to be bounded by the GPU's memory bandwidth. To mitigate this problem, shared and local memory caches on the device were utilized as much as possible to avoid slow global memory accesses. A commonly used technique in this code is to load a local section of the lattice into shared memory at the start of kernel execution. This allows the required global memory accesses to happen in a coalesced fashion, fully taking advantage of the GPU's warp-based architecture. Some improvements remain to be made in the handling of shared memory caches, particularly in the edge cases where the state from another thread block is required. However, some of these locations, particularly those directly to the left and right of the current thread block, cannot be efficiently moved into shared memory because they are too far apart in memory [10]. Loading a row of the thread block's local lattice currently takes one transfer operation because its width is 32 sites, which is the size of a warp. Loading the left and right neighbors into shared memory at the same time would take an additional memory transaction each. Therefore,
Figure 8: Equivalent simulations were executed on both CPU and GPU code.
it is left to be scheduled by the compiler, with the latency hidden by switching which warp is being executed while the required memory is accessed.
The Wolff cluster algorithm benefits even more than the Swendsen-Wang algorithm from a GPU implementation. Largely, this is because more computation is required for each step, so although the memory bandwidth is still being saturated, it is less of a limiting factor. The additional work of calculating the bond probabilities and the reflection of flipped clusters is well suited to the massive parallelism of a GPU.
Dynamic parallelism was also utilized to minimize memory transfers between device and host memory. It allows the entire clustering operation to be completed from a single kernel launched from the CPU. This maintenance kernel manages the three phases of the clustering process and watches for the conclusion of the algorithm without continually passing a flag variable from the device to the host.
## 6 Conclusions
As mentioned above, the Swendsen-Wang algorithm allows simulations to occur around the critical points of systems without the large autocorrelation times experienced by other algorithms, such as the Metropolis-Hastings algorithm. By accelerating the rate at which such systems can be evolved with efficient GPU code, an even greater number of configurations can be sampled. This combination of a cluster method and GPU acceleration allows for a better understanding of the system's properties near the critical temperature. We have shown that such acceleration is also possible in
Figure 9: \(\phi^{4}\) simulation results. Equivalent simulations were executed on both CPU and GPU code.
the Wolff algorithm, using similar algorithmic techniques. Another statistical model where such methods could be applied is the Potts model. We have found that a speedup of one or two orders of magnitude is possible, in spite of the challenges posed by the significant latency of global memory accesses on a GPU. This is achieved through well-designed algorithms that minimize this potential bottleneck.
## 7 Acknowledgements
We would like to acknowledge the Department of Energy, Office of Science, Office of High Energy Physics for their support through Grant Number DE-SC0013496. Additionally, this work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. The Pittsburgh Supercomputing Center was utilized under the grant "Many-core accelerated lattice field theory -- TG-PHY150039". Additional funding was provided by the Rensselaer Undergraduate Research Program.
|
2309.12023 | Gravitational Wave imprints of the Doublet Left-Right Symmetric Model | We study the gravitational wave (GW) signature in the doublet left-right
symmetric model (DLRSM) resulting from the strong first-order phase transition
(SFOPT) associated with $SU(2)_R\times U(1)_{B-L}$-breaking. For different
values of the symmetry-breaking scale $v_R =20,~30$, and $50$ TeV, we construct
the one-loop finite temperature effective potential to explore the parameter
space for regions showing SFOPT. We identify the region where the associated
stochastic GW background is strong enough to be detected at planned GW
observatories. A strong GW background favors a relatively light neutral CP-even
scalar $H_{3}$, arising from the $SU(2)_R$ doublet. The $SU(2)_L$ subgroup of
DLRSM is broken by three vevs: $\kappa_1,~\kappa_2$, and $v_L$. We observe a
preference for $\mathcal{O}(1)$ values of the ratio $w=v_L/\kappa_1$, but no
clear preference for the ratio $r=\kappa_2/\kappa_1$. A large number of points
with strong GW background can be ruled out from precise measurement of the
trilinear Higgs coupling and searches for $H_3$ at future colliders. | Siddhartha Karmakar, Dhruv Ringe | 2023-09-21T12:41:21Z | http://arxiv.org/abs/2309.12023v2 | # Gravitational Wave imprints of the Doublet Left-Right Symmetric Model
###### Abstract
We study the gravitational wave (GW) signature in the doublet left-right symmetric model (DLRSM) resulting from the strong first-order phase transition (SFOPT) associated with \(SU(2)_{R}\times U(1)_{B-L}\)-breaking. For different values of the symmetry-breaking scale \(v_{R}=20,\ 30,\) and \(50\) TeV, we construct the one-loop finite temperature effective potential to explore the parameter space for regions showing SFOPT. We identify the region where the associated stochastic GW background is strong enough to be detected at planned GW observatories. A strong GW background favors a relatively light CP-even neutral scalar \(H_{3}\), arising from the \(SU(2)_{R}\) doublet. The \(SU(2)_{L}\) subgroup of DLRSM is broken by three _vevs_: \(\kappa_{1},\ \kappa_{2},\) and \(v_{L}\). We also observe a preference for \(\mathcal{O}(1)\) values of the ratio \(w=v_{L}/\kappa_{1}\), but no clear preference for the ratio \(r=\kappa_{2}/\kappa_{1}\). A large number of points with strong GW signal can be ruled out from precise measurement of the trilinear Higgs coupling and searches for \(H_{3}\) at the future colliders.
## I Introduction
The left-right symmetric model (LRSM) [1; 2; 3; 4; 5] is an attractive extension that addresses several limitations of the standard model (SM). In LRSM, the SM gauge group is extended from \(\mathcal{G}_{\rm SM}=SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y}\) to \(\mathcal{G}_{\rm LRSM}=SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\) and the right-chiral fermions transform as doublets under the \(SU(2)_{R}\) subgroup. There are different realizations of LRSM, depending on the scalars involved in the spontaneous breaking of \(\mathcal{G}_{\rm LRSM}\) to \(\mathcal{G}_{\rm SM}\). These also differ in the mechanism for generating fermion masses. The triplet left-right symmetric model (TLRSM) [6; 7; 8], contains a scalar bi-doublet, and two \(SU(2)\) triplets. The charged fermion masses are then generated by the bi-doublet, whereas the neutrino masses are generated by the type-II seesaw mechanism. On the other hand, the scalar sector of a doublet left-right symmetric model (DLRSM) consists of a scalar bi-doublet and a pair of \(SU(2)\) doublets [5; 9]. In DLRSM, neutrino masses can be incorporated by extending the model with an additional charged singlet scalar [10; 11; 12; 13]. Apart from TLRSM and DLRSM, other variations have also been discussed in the literature and have different experimental consequences [14; 15; 16].
The observation of gravitational waves (GW) from a binary black hole merger by the aLIGO collaboration in 2015 [17] was a landmark discovery that marked the beginning of the era of GW astronomy. At present, two facilities of the LIGO collaboration [18] and one facility of the VIRGO collaboration [19] are functional. Several other ground-based and space-based observatories such as LISA [20], DECIGO [21], BBO [22], ET [23], and CE [24] are planned and will be functional in the coming decades. It has long been known that phenomena in the early universe such as inflation, cosmic strings, domain walls, and strong first-order phase transitions (SFOPT) can lead to a stochastic GW background [25; 26; 27; 28]. The upcoming GW observatories will be capable of detecting the GW background from SFOPT up to symmetry-breaking scales as high as \(10^{6}-10^{7}\) GeV [29; 30; 31], making GW astronomy an important tool to test beyond standard model (BSM) theories.
In this paper, we study the GW production associated with SFOPT in DLRSM. Since a right-handed charged current has not been observed at colliders so far, the \(SU(2)_{R}\times U(1)_{B-L}\) subgroup of DLRSM must be broken at a scale, \(v_{R}\), much higher than the electroweak (EW) scale. This puts a lower bound, \(v_{R}\gtrsim\mathcal{O}(10)\,\mathrm{TeV}\), but there is no upper bound as such. GW astronomy presents a novel approach to probe the scale \(v_{R}\) by studying the possibility of an
observable GW background from SFOPT within the LRSM. Different realizations of LRSM have been explored in the literature for GW imprints: through SFOPT in TLRSM [32; 33] and a minimal LR model with seesaw-like fermion mass generation [16], and also from domain walls arising out of the breaking of the discrete parity symmetry [34].
In DLRSM (TLRSM), electroweak symmetry breaking (EWSB) happens via three _vev_s : \(\kappa_{1},\ \kappa_{2}\), coming from the bi-doublet, and \(v_{L}\) coming from the \(SU(2)_{L}\) doublet (triplet). These are constrained by the relation, \(\kappa_{1}^{2}+\kappa_{2}^{2}+v_{L}^{2}=v^{2}\), where \(v=246\,\)GeV. It is useful to define the _vev_ ratios: \(r=\kappa_{2}/\kappa_{1},\ w=v_{L}/\kappa_{1}\). Contrary to TLRSM, there are no sources of custodial symmetry breaking in DLRSM at the tree level. Hence the _vev_s \(\kappa_{1},\ \kappa_{2}\), and \(v_{L}\) can all be sizable, i.e. \(r\) and \(w\) can be \(\mathcal{O}(0.1)\) and \(\mathcal{O}(1)\) respectively. In fact, it was shown in ref. [35], that the EW precision data prefers a large value of \(w\). In ref. [36] it was further shown that the measurement of the Higgs signal strength and meson mixing bounds prefer large values of \(r\) and \(w\). It is therefore interesting to note that, unlike TLRSM, EWSB in DLRSM can be considerably different from that in SM, even though the \(SU(2)_{R}\times U(1)_{B-L}\)-breaking dynamics is decoupled from the EW scale. In this paper, we ask **(i)** whether DLRSM can lead to a detectable GW background in some region of the parameter space, and **(ii)** whether this region of the parameter space prefers a special pattern of EWSB.
In Sec. II, we give a brief review of DLRSM: field content, symmetry breaking, and mass generation in the scalar, fermion, and gauge sectors. In Sec. II.4 and Sec. II.5, we discuss the theoretical bounds and the constraints from the Higgs data respectively. In Sec. III, we construct the one-loop finite temperature effective potential required to study the phase transition (PT) associated with \(SU(2)_{R}\times U(1)_{B-L}\)-breaking. The separation between \(v_{R}\) and the EW scale allows us to write the effective potential only in terms of the field responsible for \(SU(2)_{R}\times U(1)_{B-L}\)-breaking. We then describe our procedure for scanning the parameter space in Sec. IV. To simplify the analysis while extracting crucial information related to SFOPT, we work with a smaller set of parameters, called the _simple basis_, introduced in ref. [36]. In the _simple basis_, we identify the region with SFOPT. Next, in Sec. V, we discuss the GW background obtained for points with SFOPT. In Sec. VI, we compute the signal-to-noise ratio (SNR) for six benchmark points at various planned GW detectors such as FP-DECIGO, BBO, and Ultimate-DECIGO. In Sec. VII, we discuss future collider probes that can complement the GW signal. Finally, in Sec. VIII, we summarize our key findings and present concluding remarks.
## II The model
We follow the notation of refs. [35; 36] for the scalar potential and _vev_ structure of the scalar multiplets. The fermion content of the model has the following charges under the LRSM gauge group, \(\mathcal{G}_{\text{LRSM}}=SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B -L}\),
\[Q_{L} = \begin{pmatrix}u_{L}\\ d_{L}\end{pmatrix}\sim(3,2,1,1/3),\hskip 28.452756ptQ_{R}=\begin{pmatrix}u_{R}\\ d_{R}\end{pmatrix}\sim(3,1,2,1/3),\] \[L_{L} = \begin{pmatrix}\nu_{L}\\ e_{L}\end{pmatrix}\sim(1,2,1,-1),\hskip 28.452756ptL_{R}=\begin{pmatrix}\nu_{R} \\ e_{R}\end{pmatrix}\sim(1,1,2,-1), \tag{1}\]
where the quantum numbers of the multiplets under the sub-groups of \(\mathcal{G}_{\text{LRSM}}\) are indicated in brackets. We have suppressed the family index \(i\in\{1,2,3\}\) for three generations of quarks and three generations of leptons. The right-handed neutrino \(\nu_{R}\) is needed to complete the \(SU(2)_{R}\) lepton doublet. This choice of fermions is required for the cancellation of the \(U(1)_{B-L}\) gauge anomaly and ensures that the model is manifestly symmetric under the transformations: \(Q_{L}\leftrightarrow Q_{R}\), \(L_{L}\leftrightarrow L_{R}\).
### II.1 Scalar sector
The scalar sector of DLRSM includes a complex bi-doublet \(\Phi\) needed to generate charged fermion masses, and two doublets \(\chi_{L}\) and \(\chi_{R}\), which participate in the EW- and LR-symmetry breaking respectively. These scalar multiplets and their charges under \(\mathcal{G}_{\text{LRSM}}\) are:
\[\Phi=\begin{pmatrix}\phi_{1}^{0}&\phi_{2}^{+}\\ \phi_{1}^{-}&\phi_{2}^{0}\end{pmatrix}\sim(1,2,2,0),\ \chi_{L}=\begin{pmatrix}\chi_{L}^{+}\\ \chi_{L}^{0}\end{pmatrix}\sim(1,2,1,1),\text{ and }\chi_{R}=\begin{pmatrix}\chi_{R}^{+}\\ \chi_{R}^{0}\end{pmatrix}\sim(1,1,2,1). \tag{2}\]
We take the potential to be parity-symmetric, i.e. the couplings of 'L' and 'R' fields are
equal. The most general, CP-conserving, renormalizable scalar potential is then given by,
\[V = V_{2}+V_{3}+V_{4},\] \[V_{2} = -\mu_{1}^{2}{\rm Tr}(\Phi^{\dagger}\Phi)-\mu_{2}^{2}\ [{\rm Tr}(\tilde{\Phi}\Phi^{ \dagger})+{\rm Tr}(\tilde{\Phi}^{\dagger}\Phi)]-\mu_{3}^{2}\ [\chi_{L}^{\dagger}\chi_{L}+\chi_{R}^{ \dagger}\chi_{R}],\] \[V_{3} = \mu_{4}\ [\chi_{L}^{\dagger}\Phi\chi_{R}+\chi_{R}^{\dagger}\Phi^{ \dagger}\chi_{L}]+\mu_{5}\ [\chi_{L}^{\dagger}\tilde{\Phi}\chi_{R}+\chi_{R}^{ \dagger}\tilde{\Phi}^{\dagger}\chi_{L}],\] \[V_{4} = \lambda_{1}{\rm Tr}(\Phi^{\dagger}\Phi)^{2}+\lambda_{2}\ [{\rm Tr}( \tilde{\Phi}\Phi^{\dagger})^{2}+{\rm Tr}(\tilde{\Phi}^{\dagger}\Phi)^{2}]+ \lambda_{3}{\rm Tr}(\tilde{\Phi}\Phi^{\dagger})\,{\rm Tr}(\tilde{\Phi}^{ \dagger}\Phi) \tag{3}\] \[+\lambda_{4}{\rm Tr}(\Phi^{\dagger}\Phi)\,[{\rm Tr}(\tilde{\Phi} \Phi^{\dagger})+{\rm Tr}(\tilde{\Phi}^{\dagger}\Phi)]+\rho_{1}\ [(\chi_{L}^{ \dagger}\chi_{L})^{2}+(\chi_{R}^{\dagger}\chi_{R})^{2}]+\rho_{2}\ \chi_{L}^{ \dagger}\chi_{L}\chi_{R}^{\dagger}\chi_{R}\] \[+\alpha_{1}{\rm Tr}(\Phi^{\dagger}\Phi)[\chi_{L}^{\dagger}\chi_{L }+\chi_{R}^{\dagger}\chi_{R}]+\Big{\{}\alpha_{2}\ [\chi_{L}^{\dagger}\chi_{L}{\rm Tr}(\tilde{\Phi}\Phi^{ \dagger})+\chi_{R}^{\dagger}\chi_{R}{\rm Tr}(\tilde{\Phi}^{\dagger}\Phi)]+{ \rm h.c.}\Big{\}}\] \[+\alpha_{3}\ [\chi_{L}^{\dagger}\ \Phi\Phi^{\dagger}\chi_{L}+\chi_{R}^{ \dagger}\Phi^{\dagger}\Phi\chi_{R}]+\alpha_{4}\ [\chi_{L}^{\dagger}\ \tilde{\Phi}\tilde{\Phi}^{ \dagger}\chi_{L}+\chi_{R}^{\dagger}\tilde{\Phi}^{\dagger}\tilde{\Phi}\chi_{R}],\]
with \(\tilde{\Phi}\equiv\sigma_{2}\Phi^{*}\sigma_{2}\). The potential has mass parameters: \(\{\mu_{1,2,3,4,5}\}\), and quartic couplings: \(\{\lambda_{1,2,3,4},\alpha_{1,2,3,4},\rho_{1,2}\}\). We assume all parameters to be real for simplicity.
The neutral scalars can be written in terms of real and imaginary components,
\[\phi_{1}^{0} = \frac{1}{\sqrt{2}}(\phi_{1r}^{0}+i\phi_{1i}^{0}),\ \ \phi_{2}^{0}=\frac{1}{\sqrt{2}}(\phi_{2r}^{0}+i\phi_{2i}^{0}),\] \[\chi_{L}^{0} = \frac{1}{\sqrt{2}}(\chi_{Lr}^{0}+i\chi_{Li}^{0}),\ \ \chi_{R}^{0}=\frac{1}{\sqrt{2}}(\chi_{Rr}^{0}+i\chi_{Ri}^{0}). \tag{4}\]
We assign non-zero _vev_s only to the real components of the neutral scalars and do not consider CP- or charge-breaking minima. The _vev_ structure is denoted by
\[\langle\Phi\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}\kappa_{1}&0\\ 0&\kappa_{2}\end{pmatrix},\ \langle\chi_{L}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}0 \\ v_{L}\end{pmatrix},\ \langle\chi_{R}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}0 \\ v_{R}\end{pmatrix}\,. \tag{5}\]
The pattern of symmetry breaking is as follows:
\[SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\xrightarrow{v_{R}}SU(2)_{L}\times U(1)_{Y}\xrightarrow{\kappa_{1},\kappa_{2},v_{L}}U(1)_{\rm em}.\]
The _vev_\(v_{R}\) of the doublet \(\chi_{R}\) breaks \(SU(2)_{R}\), while the three _vev_s \(\kappa_{1},\ \kappa_{2}\), and \(v_{L}\) trigger EWSB. As mentioned earlier, the EW _vev_s can be conveniently expressed in terms of the _vev_ ratios \(r\) and \(w\) as, \(\kappa_{2}=r\kappa_{1}\) and \(v_{L}=w\kappa_{1}\). Then, \(\kappa_{1}^{2}(1+r^{2}+w^{2})=v^{2}\), i.e., the value of \(\kappa_{1}\) is fixed for a given \(r\) and \(w\). The absence of a right-handed charged current in collider experiments implies a hierarchy of scales \(v_{R}\gg v\).
In terms of the _vev_s \(\kappa_{1},\kappa_{2},v_{L}\), and \(v_{R}\), the minimization conditions are,
\[\frac{\partial V}{\partial\kappa_{1}}=\frac{\partial V}{\partial\kappa_{2}}= \frac{\partial V}{\partial v_{L}}=\frac{\partial V}{\partial v_{R}}=0. \tag{6}\]
Using the minimization conditions, we trade \(\mu_{1}^{2}\), \(\mu_{2}^{2}\), \(\mu_{3}^{2}\), and \(\mu_{5}\) for the _vevs_, \(\mu_{4}\), and quartic couplings (see appendix A for full expressions). Thus the parameters of the DLRSM scalar sector reduce to
\[\{\lambda_{1,2,3,4},\alpha_{1,2,3,4},\rho_{1,2},\mu_{4},r,w,v_{R}\}. \tag{2.7}\]
The CP-even, CP-odd, and charged scalar mass matrices are obtained using
\[m_{ij}^{2}=\left.\frac{\partial^{2}V}{\partial\varphi_{i}\,\partial\varphi_{j} }\right|_{\langle\varphi\rangle}, \tag{2.8}\]
where
\[\varphi\equiv\{\phi_{1r}^{0},\phi_{2r}^{0},\chi_{Lr}^{0},\chi_{Rr}^{0},\phi_{ 1i}^{0},\phi_{2i}^{0},\chi_{Li}^{0},\chi_{Ri}^{0},\phi_{1}^{\pm},\phi_{2}^{\pm },\chi_{L}^{\pm},\chi_{R}^{\pm}\}. \tag{2.9}\]
Physical scalar masses and mixing angles are obtained by diagonalizing these matrices. We denote the physical spectrum of scalars by: CP-even scalars, \(h,\ H_{1},\ H_{2},\ H_{3}\), CP-odd scalars, \(A_{1},\ A_{2}\), and the charged scalars, \(H_{1}^{\pm},\ H_{2}^{\pm}\).
The lightest CP-even scalar, \(h\) has a mass of the order \(v\), and is identified with the SM-like Higgs with mass \(\sim 125\) GeV. Using non-degenerate perturbation theory, \(m_{h}\) is estimated as [35; 36]
\[m_{h,\text{analytic}}^{2} = \frac{\kappa_{1}^{2}}{2(1+r^{2}+w^{2})}\times \tag{2.10}\] \[\left(4\Big{(}\lambda_{1}(r^{2}+1)^{2}+4r(\lambda_{4}(r^{2}+1)+r \lambda_{23})+w^{2}(\alpha_{124}+r^{2}(\alpha_{1}+\alpha_{3})\right.\right.\] \[\left.\left.+\alpha_{2}r\right)+\rho_{1}w^{4}\Big{)}-\frac{1}{ \rho_{1}}(\alpha_{124}+r^{2}(\alpha_{1}+\alpha_{3})+\alpha_{2}r+2\rho_{1}w^{ 2})^{2}\right),\]
where, \(\alpha_{124}\equiv\alpha_{1}+r\alpha_{2}+\alpha_{4}\), and \(\lambda_{23}=2\lambda_{2}+\lambda_{3}\). In the limit \(r,w\to 0\), the above expression simplifies to
\[m_{h,\text{analytic}}^{2}=v^{2}\left(2\lambda_{1}-\frac{(\alpha_{1}+\alpha_{4} )^{2}}{2\rho_{1}}\right). \tag{2.11}\]
However, it was pointed out in ref. [36] that for certain values of the quartic parameters, the analytical estimate for \(m_{h}\) may not suffice.
The other scalars have masses of the order \(v_{R}\). To \(\mathcal{O}\big{(}\kappa_{1}/v_{R}\big{)}\), these masses are related to
each other as
\[m_{H_{1}}^{2}\simeq m_{A_{1}}^{2}\simeq m_{H_{1}^{\pm}}^{2}\approx \tfrac{1}{2}(\alpha_{3}-\alpha_{4})v_{R}^{2}\,,\] \[m_{H_{2}}^{2}\simeq m_{A_{2}}^{2}\simeq m_{H_{2}^{\pm}}^{2} \approx\tfrac{1}{2}(\rho_{2}-2\rho_{1})v_{R}^{2}\,,\] \[m_{H_{3}}^{2}=2\rho_{1}v_{R}^{2}\,,\] \[m_{H_{2}}^{2}>m_{H_{1}}^{2}\,.\]
The first two mass expressions are valid in the limit \(r,w\to 0\). The positive-definiteness of the CP-even mass matrix leads to two approximate criteria: \(\rho_{2}>2\rho_{1}\) and \(\alpha_{3}>\alpha_{4}\). In our analysis, we calculate the scalar masses and mixing numerically. The full analytic expressions at the leading order can be found in the appendix of ref. [36].
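As a quick illustration of these leading-order relations, the heavy-scalar masses can be evaluated for representative coupling values; the numbers below are illustrative inputs, not benchmark points from the scan.

```python
import numpy as np

# Representative (not benchmark) couplings; the first two relations hold to leading
# order and in the limit r, w -> 0, as stated above.
alpha3, alpha4, rho1, rho2 = 0.7, 0.1, 0.02, 1.0
v_R = 30e3   # GeV

m_H1 = np.sqrt(0.5 * (alpha3 - alpha4)) * v_R    # ~ m_A1 ~ m_H1+-
m_H2 = np.sqrt(0.5 * (rho2 - 2.0 * rho1)) * v_R  # ~ m_A2 ~ m_H2+-
m_H3 = np.sqrt(2.0 * rho1) * v_R

print(m_H1/1e3, m_H2/1e3, m_H3/1e3)  # ~16.4, ~20.8, ~6.0 TeV; note m_H1 > 15 TeV
```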
For the CP-even scalars, the mass-squared matrix is diagonalized by the orthogonal matrix \(O\),
\[O^{T}\mathcal{M}_{\rm CPE}^{2}O=\big{(}\mathcal{M}_{\rm CPE}^{ \rm diag}\big{)}^{2},\quad X_{\rm physical}=O^{T}X\,, \tag{2.12}\]
where \(X=(\phi_{1r}^{0},\phi_{2r}^{0},\chi_{Lr}^{0},\chi_{Rr}^{0})^{T}\), \(X_{\rm physical}=(h,H_{1},H_{2},H_{3})^{T}\). The scalars \(H_{1}\) and \(A_{1}\) can contribute to the mixing of the \(K_{0}-\bar{K}_{0}\) system, leading to the constraint \(m_{H_{1},A_{1}}>15\,\text{TeV}\)[37]. The scalar \(H_{3}\) predominantly originates from the doublet \(\chi_{R}\), and its coupling to the SM particles is controlled by the element \(O_{41}\sim v^{2}/v_{R}^{2}\). It can therefore be much lighter than \(H_{1}\).
The triple Higgs (\(h^{3}\)) coupling in DLRSM is given by [36]
\[c_{h^{3}} = \frac{\kappa_{1}}{2}\Big{(}2(\lambda_{1}+r\lambda_{4})O_{11}^{3} +2(r\lambda_{1}+\lambda_{4})O_{21}^{3}+2w\rho_{1}O_{31}^{3}+2(r(\lambda_{1}+4 \lambda_{2}+2\lambda_{3})+3\lambda_{4})O_{11}^{2}O_{21} \tag{2.13}\] \[\qquad+2(\lambda_{1}+4\lambda_{2}+2\lambda_{3}+3\lambda_{4}r)O_{ 11}O_{21}^{2}+w(\alpha_{1}+\alpha_{4})O_{11}^{2}O_{31}+(\alpha_{1}+r\alpha_{2 }+\alpha_{4})O_{11}O_{31}^{2}\] \[\qquad+w(\alpha_{1}+\alpha_{3})O_{21}^{2}O_{31}+(\alpha_{2}+r( \alpha_{1}+\alpha_{3}))O_{21}O_{31}^{2}\Big{)}\ \,,\]
with the corresponding coupling multiplier \(\kappa_{h}=c_{h^{3}}/c_{h^{3}}^{\rm SM}\), where \(c_{h^{3}}^{\rm SM}=m_{h}^{2}/2v\).
### II.2 Fermion sector
The fermion multiplets couple to the bi-doublet \(\Phi\) via Yukawa terms:
\[\mathcal{L}_{\rm Y}\supset-\bar{Q}_{Li}(y_{ij}\Phi+\tilde{y}_{ij }\tilde{\Phi})Q_{Rj}+\text{h.c.}\, \tag{2.14}\]
which leads to the mass matrices for the quarks:
\[M_{U}=\frac{1}{\sqrt{2}}(\kappa_{1}y+\kappa_{2}\tilde{y}),\ \ M_{D}=\frac{1}{\sqrt{2}}(\kappa_{2}y+\kappa_{1}\tilde{y})\,\]
where \(M_{U}\) and \(M_{D}\) stand for up-type and down-type mass matrices in the flavor basis respectively. To obtain the physical basis of fermions, these mass matrices need to be diagonalized through unitary transformations described by the left- and right-handed CKM matrices (\(V_{L,R}^{\rm CKM}\)). Manifest left-right symmetry implies \(V_{R}^{\rm CKM}=V_{L}^{\rm CKM}\). For the calculation of the effective potential in the next section, it is enough to take \(y\approx{\rm diag}(0,0,y_{33})\) and \(\tilde{y}\approx{\rm diag}(0,0,\tilde{y}_{33})\). In the limit \(V_{33}^{\rm CKM}\approx 1\),
\[y_{33} = \frac{\sqrt{2}(1+r^{2}+w^{2})^{1/2}}{v(1-r^{2})}(m_{t}-rm_{b}),\] \[\tilde{y}_{33} = \frac{\sqrt{2}(1+r^{2}+w^{2})^{1/2}}{v(1-r^{2})}(m_{b}-rm_{t}), \tag{2.15}\]
where the top and bottom quark masses are \(m_{t}=173.5\) GeV, and \(m_{b}\approx 5\) GeV. In the limit \(r,w\to 0\), \(y_{33}\) and \(\tilde{y}_{33}\) reduce to the SM Yukawa couplings \(y_{t}\) and \(y_{b}\) respectively. However, we do not make any such assumption and use eq. (2.15), allowing \(r,w\) to be arbitrary.
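For reference, eq. (2.15) is straightforward to evaluate numerically; the short sketch below (with illustrative values of \(r\) and \(w\)) is the kind of evaluation used when checking the perturbativity bound on \(y_{33}\) and \(\tilde{y}_{33}\) discussed later.

```python
import numpy as np

def yukawa33(r, w, m_t=173.5, m_b=5.0, v=246.0):
    """Third-generation Yukawa couplings of eq. (2.15)."""
    norm = np.sqrt(2.0 * (1.0 + r**2 + w**2)) / (v * (1.0 - r**2))
    return norm * (m_t - r * m_b), norm * (m_b - r * m_t)

# Illustrative vev ratios; perturbativity requires both couplings to stay below sqrt(4*pi).
y33, y33_tilde = yukawa33(r=0.1, w=1.0)
```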
The couplings of the SM-like Higgs with the third-generation quarks are given by:
\[c_{htt\,(hbb)}=\frac{\kappa_{1}}{\kappa_{-}^{2}}\,\Big{(}(O_{11}-rO_{21})m_{t (b)}+(O_{21}-rO_{11})(V_{L}^{\rm CKM}\,\hat{M}_{D(U)}\,V_{R}^{\rm CKM\dagger} )_{33}\Big{)}, \tag{2.16}\]
where \(\kappa_{-}^{2}=\kappa_{1}^{2}-\kappa_{2}^{2}=\kappa_{1}^{2}(1-r^{2})\) and \(\hat{M}_{U(D)}\) denotes the diagonal up (down)-type quark mass matrix. Here \(O_{ij}\) are the elements of the orthogonal transformation matrix appearing in eq. (2.12). Then the coupling multipliers, \(\kappa_{b}\) and \(\kappa_{t}\) are: \(\kappa_{f}=c_{hff}/c_{hff}^{\rm SM}\), where \(c_{hff}^{\rm SM}=m_{f}/v\) and \(f=t,b\).
Since \(V_{L,R}^{\rm CKM}\approx{\bf 1}\), eq. (2.16) becomes,
\[c_{htt} \approx \frac{\kappa_{1}}{\kappa_{-}^{2}}\big{(}O_{11}(m_{t}-rm_{b})+O_{ 21}(m_{b}-rm_{t})\big{)},\] \[c_{hbb} \approx \frac{\kappa_{1}}{\kappa_{-}^{2}}\big{(}O_{11}(m_{b}-rm_{t})+O_{ 21}(m_{t}-rm_{b})\big{)}\]
Note that there is a hierarchy, \(O_{21}\ll O_{11}\sim 1\), \(m_{t}\gg m_{b}\), and \(r\ll 1\). The SM couplings are recovered by setting \(O_{11}=1,O_{21}=0,r=0\), in the above expressions. For a large \(\phi_{1r}^{0}-\phi_{2r}^{0}\) mixing, i.e. \(O_{21}\gtrsim{\cal O}(10^{-2})\) or large \(\kappa_{2}\), i.e. \(r\sim{\cal O}(10^{-1})\), the deviation of \(hb\bar{b}\) coupling from the SM value can be quite large due to the multiplicative factors proportional to \(O_{21}m_{t}\), and \(rO_{11}m_{t}\). On the other hand, the deviation of \(ht\bar{t}\) coupling is proportional to \(O_{21}m_{b}\) and \(rO_{21}m_{t}\), and is therefore rather small for the current precision of \(\kappa_{t}\) measurement.
### II.3 Gauge sector
In this paper, we work under the assumption of manifest left-right symmetry of the UV-Lagrangian, i.e., \(g_{R}=g_{L}=g\). Here, \(g_{L(R)}\) are the gauge couplings of \(SU(2)_{L(R)}\), and \(g\) is the \(SU(2)_{L}\) gauge coupling of SM. The mass matrix for charged gauge bosons is
\[\mathcal{L}_{\text{mass}}\supset\frac{g^{2}}{8}\begin{pmatrix}W_{L}^{+}&W_{R}^{ +}\end{pmatrix}\begin{pmatrix}v^{2}&-2\kappa_{1}\kappa_{2}\\ -2\kappa_{1}\kappa_{2}&V^{2}\end{pmatrix}\begin{pmatrix}W_{L}^{-}\\ W_{R}^{-}\end{pmatrix}\, \tag{17}\]
where, \(v^{2}=\kappa_{1}^{2}+\kappa_{2}^{2}+v_{L}^{2}\) and \(V^{2}=\kappa_{1}^{2}+\kappa_{2}^{2}+v_{R}^{2}\). The physical charged gauge bosons have masses,
\[m_{W_{1,2}}^{2}=\frac{g^{2}}{4}\Big{(}v^{2}+V^{2}\mp\sqrt{(v^{2}-V^{2})^{2}+1 6\kappa_{1}^{2}\kappa_{2}^{2}}\Big{)}\ \,, \tag{18}\]
\(W_{1}^{\pm}\) is identified as the SM \(W^{\pm}\) boson and \(W_{2}\) is the new charged gauge boson with mass \(\sim\mathcal{O}(v_{R})\). The mixing matrix is characterized by an orthogonal rotation with angle \(\xi\simeq 2\kappa_{1}\kappa_{2}/v_{R}^{2}\).
Similarly, the neutral gauge boson mass matrix is,
\[\mathcal{L}_{\text{mass}}\supset\frac{1}{8}\begin{pmatrix}W_{L\mu} ^{3}&W_{R\mu}^{3}&B_{\mu}\end{pmatrix}\begin{pmatrix}g^{2}v^{2}&-g^{2}\kappa_ {+}^{2}&-gg_{BL}v_{L}^{2}\\ g^{2}V^{2}&-gg_{BL}v_{R}^{2}\\ g_{BL}^{2}(v_{L}^{2}+v_{R}^{2})\end{pmatrix}\begin{pmatrix}W_{L\mu}^{3}\\ W_{R\mu}^{3}\\ B_{\mu}\end{pmatrix}\, \tag{19}\]
where \(\kappa_{+}^{2}=\kappa_{1}^{2}+\kappa_{2}^{2}\), \(g_{BL}\) is the gauge coupling of \(U(1)_{B-L}\) and here some of the elements have been suppressed since the matrix is symmetric. The lightest eigenstate is massless and identified as the photon, while the other two states have masses
\[m_{Z_{1},Z_{2}}^{2} = \frac{1}{8}\Big{(}g^{2}v^{2}+g^{2}V^{2}+g_{BL}^{2}(v_{L}^{2}+v_{R }^{2})\] \[\mp\sqrt{(g^{2}v^{2}+g^{2}V^{2}+g_{BL}^{2}(v_{L}^{2}+v_{R}^{2}))^ {2}+4(g^{4}+2g^{2}g_{BL}^{2})(\kappa_{+}^{4}-v^{2}V^{2})}\Big{)}\.\]
The lighter mass eigenstate \(Z_{1}\) corresponds to the SM \(Z\) boson, while \(Z_{2}\) has a mass \(\sim\mathcal{O}(v_{R})\).
In the limit \(\kappa_{1},\kappa_{2},v_{L}\ll v_{R}\) the mixing matrix is [38]
\[\begin{pmatrix}A_{\mu}\\ Z_{1\mu}\\ Z_{2\mu}\end{pmatrix}=\begin{pmatrix}s_{W}&c_{W}s_{Y}&c_{W}c_{Y}\\ -c_{W}&s_{W}s_{Y}&s_{W}c_{Y}\\ 0&c_{Y}&s_{Y}\end{pmatrix}\begin{pmatrix}W_{L\mu}^{3}\\ W_{R\mu}^{3}\\ B_{\mu}\end{pmatrix}, \tag{21}\]
where
\[s_{W} \equiv \sin\theta_{W}=\frac{g_{BL}}{\sqrt{g^{2}+2g_{BL}^{2}}}\,\ \ \ c_{W}\equiv\cos\theta_{W}=\sqrt{\frac{g^{2}+g_{BL}^{2}}{g^{2}+2g_{BL}^{2}}}\,\] \[s_{Y} \equiv \sin\theta_{Y}=\frac{g_{BL}}{\sqrt{g^{2}+g_{BL}^{2}}}\,\ \ \ c_{Y}\equiv\cos\theta_{Y}=\frac{g}{\sqrt{g^{2}+g_{BL}^{2}}}. \tag{22}\]
We fix \(g_{BL}=gg^{\prime}/(g^{2}-g^{\prime 2})^{1/2}\), where \(g^{\prime}\) is the gauge coupling for \(U(1)_{Y}\) of SM. Direct searches for spin-1 resonances have put a lower limit on the masses of the new charged and neutral gauge bosons. In DLRSM, the masses of such new gauge bosons are \(m_{W_{2}}\sim gv_{R}/2\) and \(m_{Z_{2}}\sim m_{W_{2}}/\cos\theta_{Y}\). The lower limit on the mass of \(W_{2}\) is \(m_{W_{2}}>6\,\text{TeV}\)[39], which leads to a lower bound on \(v_{R}\), \(v_{R}>2m_{W_{2}}/g=18.5\,\text{TeV}\). The constraint on \(m_{Z_{2}}\) is comparatively weaker. Therefore, the lowest value of \(v_{R}\) we use in our benchmark scenarios is \(v_{R}=20\,\text{TeV}\).
### II.4 Theoretical bounds
We incorporate the following theoretical constraints:
* _Perturbativity:_ The quartic couplings of the scalar potential, \(\{\lambda_{1,2,3,4},\ \alpha_{1,2,3,4},\ \rho_{1,2}\}\), are subject to the upper limit of \(4\pi\) from perturbativity. Moreover, the Yukawa couplings of the DLRSM Lagrangian must satisfy the perturbativity bound \(y_{33},\tilde{y}_{33}<\sqrt{4\pi}\), with \(y_{33},\tilde{y}_{33}\) defined in eq. (2.15). These constrain the values of the _vev_ ratios roughly to \(r<0.8\) and \(w<3.5\)[36].
* _Unitarity:_ The scattering amplitudes of \(2\to 2\) processes involving scalars and gauge bosons must satisfy perturbative unitarity. To \(\mathcal{O}(\kappa_{1}/v_{R})\), these constraints can be expressed in terms of the masses of the new scalars in DLRSM [35], \[0<\rho_{1}<\tfrac{8\pi}{3}\,\ \text{or,}\ \tfrac{m_{H_{3}}^{2}}{v_{R}^{2}}< \tfrac{16\pi}{3}\,\] \[\tfrac{(c_{H_{3}})^{2}}{k^{4}}\ \tfrac{m_{H_{3}}^{2}}{v_{R}^{2}}< \tfrac{16\pi}{3}\,\] \[2\tfrac{w^{2}}{k^{2}}\sum_{i=1,2}F_{i}^{2}\tfrac{m_{H_{i}^{+}}^{ 2}}{v_{R}^{2}}+\tfrac{c_{H_{3}}}{k^{2}}\tfrac{m_{H_{3}}^{2}}{v_{R}^{2}}<16\pi \ \,\] \[2\tfrac{w^{2}}{k^{2}}\sum_{i=1,2}S_{i}^{2}\tfrac{m_{H_{i}^{+}}^{ 2}}{v_{R}^{2}}+\tfrac{c_{H_{3}}}{k^{2}}\tfrac{m_{H_{3}}^{2}}{v_{R}^{2}}<16\pi \ \,\] (23) where \(k^{2}=1+r^{2}+w^{2}\) and \(F_{i},S_{i}\), and \(c_{H_{3}}\) are defined in terms of the parameters of the potential [35].
* _Boundedness from below:_ The scalar potential must be bounded from below (BFB) along all directions in field space. This leads to additional constraints on the quartic couplings of the model. The full set of such constraints was derived in ref. [36], which we have implemented in our numerical analysis.
### II.5 Constraints from \(h(125)\) data
In the following, we qualitatively describe the constraints on DLRSM from Higgs-related measurements at the LHC.
* The key constraint comes from the measurement of the mass of the SM-like Higgs, \(m_{h}=125.38\pm 0.14\,\text{GeV}\)[40]. If the theoretical bounds of perturbativity and boundedness from below are taken into account together with \(m_{h,\text{analytic}}\simeq 125\,\text{GeV}\), one obtains an upper bound on the _vev_ ratio, \(w\lesssim 2.93+4.35r-0.48r^{2}\).
* One of the most stringent constraints on the DLRSM parameter space comes from the measurement of \(hb\bar{b}\) coupling, \(\kappa_{b}=0.98^{+0.14}_{-0.13}\)[41]. If the mixing between \(\phi_{1r}^{0}\) and \(\phi_{2r}^{0}\) takes large values, \(\kappa_{b}\) can deviate from unity, thereby ruling out a large region of parameter space allowed by theoretical bounds and the measurement of \(m_{h}\). However, \(ht\bar{t}\) coupling is not significantly modified and does not result in any new constraints.
* As discussed in Sec. II.3, a large value of \(v_{R}\) ensures that the mixings between the SM-like and heavier gauge bosons are rather small, \(\xi\sim\mathcal{O}(v^{2}/v_{R}^{2})\). Therefore, the \(hW_{1}W_{1}\) and \(hZ_{1}Z_{1}\) couplings are quite close to their SM values and do not lead to any additional constraints on the DLRSM parameter space.
* The trilinear coupling of the SM-like Higgs given in eq. (2.13) does not necessarily align with the SM value. As can be seen from eq. (2.13), some of the terms appearing in the parentheses can be of the same order as \(\lambda_{SM}\), because \(\mathcal{O}(1)\) values of \(w\) are allowed and the quartic couplings can be of \(\mathcal{O}(0.1)\). In our analysis, we impose the ATLAS bound of \(\kappa_{h}\in[-2.3,10.3]\) at 95% CL [42].
## III Effective potential
In this section, we construct the full one-loop finite temperature effective potential [43; 44] required to study the nature of the PT associated with the breaking of \(SU(2)_{R}\times U(1)_{B-L}\). Below we describe the procedure step by step.
The tree-level effective potential is obtained by setting all the fields to their respective background field values in the potential given in eq. (3). The CP-even neutral component of \(\chi_{\rm R}\) is responsible for breaking the \(SU(2)_{\rm R}\times U(1)_{B-L}\) gauge group; we denote its background value by \(R\). Since \(v_{R}\gg v\), all other field values can be set to zero. Hence, in the notation of eq. (2.9), the background fields are
\[\langle\varphi\rangle=\{0,0,0,R,0,0,0,0,0,0,0,0\}.\]
The tree-level effective potential is then given by
\[V_{0}(R)=-\frac{\mu_{3}^{2}}{2}R^{2}+\frac{\rho_{1}}{4}R^{4}. \tag{3.1}\]
At the one-loop level, the zero-temperature correction to the effective potential is given by the Coleman-Weinberg (CW) formula [45]. In the Landau gauge, with \(\overline{\rm MS}\) renormalization scheme, the CW potential is [43]
\[V_{\rm CW}(R)=\frac{1}{64\pi^{2}}\sum_{i}(-1)^{f_{i}}n_{i}m_{i}^{4}(R)\left[\log\left(\frac{m_{i}^{2}(R)}{\mu^{2}}\right)-c_{i}\right], \tag{3.2}\]
where \(i\) runs over all species coupling to the \(SU(2)_{R}\times U(1)_{B-L}\)-breaking field \(\chi_{Rr}^{0}\). The field-dependent mass, \(m_{i}(R)\) is the mass of the species \(i\) in the presence of the background field \(R\). When there is mixing between the different species, the masses are extracted as the eigenvalues of the corresponding mass matrices. The expressions for the field-dependent masses can be found in Appendix B. Since all the SM fields receive mass from _vev_s responsible for EWSB: \(v_{L}\), \(\kappa_{1}\), and \(\kappa_{2}\), they do not contribute to the CW potential here. In Appendix C we take the minimal mechanism of neutrino mass generation of refs. [10; 11] and show that the right-handed neutrino \(\nu_{R}\) and the extra charged scalar do not contribute to the effective potential. Therefore the contributions only come from the CP-even scalars: \(\{\phi_{1r}^{0},\phi_{2r}^{0},\chi_{Lr}^{0},\chi_{Rr}^{0}\}\), CP-odd scalars: \(\{\phi_{1i}^{0},\phi_{2i}^{0},\chi_{Li}^{0},\chi_{Ri}^{0}\}\), charged scalars: \(\{\phi_{1}^{\pm},\phi_{2}^{\pm},\chi_{L}^{\pm},\chi_{R}^{\pm}\}\), and gauge bosons \(W_{R}^{\pm}\) and \(Z_{R}\). The factor \(f_{i}\) is 0 (1) for bosons (fermions), and the number
of degrees of freedom \(n_{i}\) are,
\[n_{\phi_{1r}^{0}}=n_{\phi_{2r}^{0}}=n_{\chi_{Lr}^{0}}=n_{\chi_{Rr}^{0}}=1,\]
\[n_{\phi_{1i}^{0}}=n_{\phi_{2i}^{0}}=n_{\chi_{Li}^{0}}=n_{\chi_{Ri}^{0}}=1,\]
\[n_{\phi_{1}^{+}}=n_{\phi_{2}^{+}}=n_{\chi_{L}^{+}}=n_{\chi_{R}^{+}}=2,\]
\[n_{W_{L}^{\pm}}=n_{W_{R}^{\pm}}=6,\]
\[n_{Z_{L}}=n_{Z_{R}}=3.\]
The constant \(c_{i}=5/6\) for gauge bosons, and \(3/2\) for all other fields. We set the renormalization scale \(\mu=v_{R}\) to ensure the validity of the CW formula by having \(\mathcal{O}(1)\) logs.
We choose finite renormalization conditions such that the one-loop potential does not change the minimum of the effective potential, and the mass of the CP-even scalar \(\chi_{Rr}^{0}\). This is achieved by introducing a counter-term potential
\[V_{\text{c.t.}}(R)=-\frac{\delta\mu_{3}^{2}}{2}R^{2}+\frac{\delta\rho_{1}}{4} R^{4}\,, \tag{3.3}\]
where the unknown coefficients \(\delta\mu_{3}^{2}\) and \(\delta\rho_{1}\) are fixed by demanding
\[\left.\frac{\partial}{\partial R}(V_{\text{CW}}+V_{\text{c.t.}})\right|_{R=v_ {R}}=0\,, \tag{3.4a}\] \[\left.\frac{\partial^{2}}{\partial R^{2}}(V_{\text{CW}}+V_{\text{c.t.}})\right| _{R=v_{R}}=0\,. \tag{3.4b}\]
This leads to
\[\delta\mu_{3}^{2}=\frac{3}{2v_{R}}\left.\frac{\partial V_{\text{CW}}}{ \partial R}\right|_{R=v_{R}}-\frac{1}{2}\left.\frac{\partial^{2}V_{\text{CW}}}{ \partial R^{2}}\right|_{R=v_{R}}, \tag{3.5a}\] \[\delta\rho_{1}=\frac{1}{2v_{R}^{3}}\left.\frac{\partial V_{\text{CW}}}{ \partial R}\right|_{R=v_{R}}-\frac{1}{2v_{R}^{2}}\left.\frac{\partial^{2}V_{ \text{CW}}}{\partial R^{2}}\right|_{R=v_{R}}. \tag{3.5b}\]
Then the one-loop contribution to the effective potential is
\[V_{1}=V_{\text{CW}}+V_{\text{c.t.}}. \tag{3.6}\]
Next, we include the one-loop finite temperature correction [43; 46]
\[V_{1T}(R,T)=\frac{T^{4}}{2\pi^{2}}\sum_{i}(-1)^{f_{i}}n_{i}J_{b/f}\left(\frac {m_{i}^{2}}{T^{2}}\right), \tag{3.7}\]
where the functions \(J_{b/f}\) are given by
\[J_{b/f}(x^{2})=\int_{0}^{\infty}dy\ y^{2}\log[1\mp e^{-\sqrt{y^{2}+x^{2}}}]\,. \tag{3.8}\]
In the high-T approximation, i.e. \(x^{2}\equiv\frac{m_{i}^{2}}{T^{2}}\ll 1\), eq. (3.8) simplifies to [47]
\[J_{f}(x^{2})\approx - \frac{7\pi^{4}}{360}+\frac{\pi^{2}}{24}x^{2}+\mathcal{O}(x^{4})\,,\] \[J_{b}(x^{2})\approx - \frac{\pi^{4}}{45}+\frac{\pi^{2}}{12}x^{2}-\frac{\pi}{6}(x^{2})^{3 /2}+\mathcal{O}(x^{4})\,. \tag{3.9}\]
The non-analytic \((x^{2})^{3/2}\) term present in the bosonic case is mainly responsible for the formation of a barrier between the minima of the effective potential at zero and non-zero field values, leading to a FOPT.
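The thermal functions of eq. (3.8) are typically evaluated numerically; a minimal quadrature sketch, together with the leading bosonic high-T terms of eq. (3.9), is shown below. The integration cutoff is an assumption chosen because the integrand is exponentially suppressed at large \(y\).

```python
import numpy as np
from scipy.integrate import quad

def J_thermal(x_sq, boson=True):
    """Thermal functions of eq. (3.8): J_b (boson=True) and J_f (boson=False)."""
    s = -1.0 if boson else 1.0
    integrand = lambda y: y**2 * np.log(1.0 + s * np.exp(-np.sqrt(y**2 + x_sq)))
    val, _ = quad(integrand, 0.0, 50.0)   # integrand is negligible beyond y ~ 50
    return val

def J_b_highT(x_sq):
    """Leading bosonic terms of the high-T expansion, eq. (3.9)."""
    return -np.pi**4 / 45 + np.pi**2 / 12 * x_sq - np.pi / 6 * x_sq**1.5

# Example: compare the exact integral with the expansion for m^2/T^2 = 0.25
print(J_thermal(0.25, boson=True), J_b_highT(0.25))
```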
In addition to the one-loop terms, multi-loop contributions from daisy diagrams need to be re-summed to cure the infrared divergence arising from the bosonic zero-modes [48]. There are two ways to do this: the Parwani method [49] and the Arnold-Espinosa method [50]. In the Parwani method, the field-dependent mass is replaced with thermally corrected mass, i.e., \(m_{i}^{2}(R)\to m_{i}^{2}(R)+\Pi_{i}(T)\), in the expressions of \(V_{\rm CW}\) and \(V_{1T}\). Here \(\Pi_{i}\) is the thermal mass obtained using the high-T expansion of \(V_{1T}\), as shown in Appendix B. The daisy re-summed effective potential is given by
\[V_{\rm eff}=V_{0}+V_{\rm CW}(m_{i}^{2}(R)+\Pi_{i}(T))+V_{\rm c.t.}+V_{1T}(m_{i }^{2}(R)+\Pi_{i}(T))\,. \tag{3.10}\]
In the Arnold-Espinosa method, no such replacement for field-dependent mass is made, but an extra daisy term is added to the effective potential:
\[V_{\rm D}=-\frac{T}{12\pi}\sum_{i}n_{i}\bigg{(}(m_{i}^{2}(R)+\Pi_{i}(T))^{3/2} -(m_{i}^{2}(R))^{3/2}\bigg{)}\,. \tag{3.11}\]
Thus the effective potential is given by
\[V_{\rm eff}=V_{0}+V_{\rm CW}+V_{\rm c.t.}+V_{\rm 1T}+V_{\rm D}. \tag{3.12}\]
In our analysis, we use the Arnold-Espinosa method, as it takes into account the daisy resummation consistently at the one-loop level, while the Parwani method mixes higher-order loop effects in the one-loop analysis.
## IV Parameter Scan
As discussed earlier, DLRSM has a large number of parameters: ten quartic couplings, along with \(r,w,\mu_{4}\), and \(v_{R}\). This is called the _generic basis_. To reduce the number of
parameters for our analysis, we work in the _simple basis_, introduced in ref. [36]. The condition of boundedness from below, discussed in Sec. II.4, requires that the ratio \(x=\lambda_{2}/\lambda_{4}\) be restricted to the range \(x\in[0.25,0.85]\). Therefore, we keep \(\lambda_{2}\) as a separate parameter, while we equate \(\lambda_{1}=\lambda_{3}=\lambda_{4}\equiv\lambda_{0}\). Similarly, guided by the approximate mass relation, \(m_{H_{1}}^{2}\approx\frac{1}{2}(\alpha_{3}-\alpha_{4})v_{R}^{2}\), we allow for the possibility of having \(\alpha_{3}\neq\alpha_{4}\) by keeping them independent, while setting \(\alpha_{1}=\alpha_{2}=\alpha_{4}\equiv\alpha_{0}\). Thus the _simple basis_ contains six quartic couplings
\[\left\{\lambda_{0},\ \lambda_{2},\ \alpha_{0},\ \alpha_{3},\ \rho_{1},\ \rho_{2} \right\}. \tag{10}\]
Along with these quartic couplings, we scan over the three parameters \(r\), \(w\), and \(v_{R}\) that represent the energy scales of the model. As the mass parameter \(\mu_{4}\) plays an insignificant role in the effective potential, we set \(\mu_{4}=0\) in our analysis. Using the _simple basis_ allows us to capture the key features of GW phenomenology of DLRSM while retaining the interplay of the existing theoretical and collider constraints.
In preliminary scans, we find that promising scenarios of strong first-order phase transition occur for small values of \(\rho_{1}\). For points with relatively large couplings, the daisy potential, \(V_{\rm D}\), given in eq. (3.11), starts dominating over the contribution from the thermal potential, \(V_{1T}\), given in eq. (3.7). When this happens, the symmetry-restoring property of
Figure 1: Points with strong FOPT in the \(\rho_{1}-\rho_{2}\) plane, for \(v_{R}=30\) TeV. The grey points have passed the theoretical and experimental constraints. The left panel shows all points showing FOPT with \(\xi_{c}>0\), whereas in the right panel, the points satisfy the condition \(\xi_{c}>1\).
the finite temperature effective potential is lost and instead, symmetry non-restoration is observed. Then the minimum at the non-zero field value becomes deeper at high temperatures, implying the absence of a phase transition, as discussed in refs. [51; 52; 53]. Based on these observations, we choose the following parameter ranges:
\[\log\alpha_{0}\in[-3,0],\ \log\alpha_{3}\in[-3,0],\ \log\rho_{1}\in[-3.5,-0.5],\ \rho_{2}\in[0,4\pi],\] \[x\in[0.25,0.85],\ \log r\in[-3,0],\ \log w\in[-6,1],\ v_{R}=20,30,50\, \mathrm{TeV}. \tag{4.2}\]
Each parameter is selected randomly from a uniform distribution in the respective range. The parameter \(\lambda_{0}\) is chosen in the following manner (a short sketch of this procedure follows the list):
* To increase the number of points satisfying the bound on SM-like Higgs mass (\(m_{h}\)), we solve the equation, \(m_{h,\mathrm{analytic}}\left(\lambda_{0}=\Lambda_{0}\right)=125.38\,\mathrm{GeV}\), for a fixed set of values \(\{\alpha_{0},\alpha_{3},\rho_{1},\rho_{2},x\}\).
* Using the solution \(\Lambda_{0}\), we choose a random value of \(\lambda_{0}\) as, \(\lambda_{0}=\left(1+y\right)\Lambda_{0}\), with \(y\in[-0.1,0.1]\).
* Finally, each parameter point is defined by the set: \[\left\{\lambda_{0},\ \lambda_{2}=x\lambda_{0},\ \alpha_{0},\ \alpha_{3},\ \rho_{1},\ \rho_{2},\ r,\ w,\ v_{R}\right\}\]
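A minimal sketch of this sampling procedure is given below. The function `m_h_analytic` is a hypothetical stand-in for eq. (2.10) (returning \(m_{h}\) in GeV) and is not reproduced here; the root bracket used for \(\Lambda_{0}\) is an assumption, so this is an illustration of the procedure rather than the exact scan implementation.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2023)

def draw_point(m_h_analytic, v_R=30e3):
    """Draw one simple-basis parameter point following the ranges of eq. (4.2)."""
    alpha0 = 10.0**rng.uniform(-3, 0)
    alpha3 = 10.0**rng.uniform(-3, 0)
    rho1   = 10.0**rng.uniform(-3.5, -0.5)
    rho2   = rng.uniform(0.0, 4.0 * np.pi)
    x      = rng.uniform(0.25, 0.85)
    r      = 10.0**rng.uniform(-3, 0)
    w      = 10.0**rng.uniform(-6, 1)

    # Solve m_h,analytic(Lambda0) = 125.38 GeV, then smear lambda0 by +/-10%.
    f = lambda lam0: m_h_analytic(lam0, alpha0, alpha3, rho1, rho2, x, r, w) - 125.38
    Lambda0 = brentq(f, 1e-4, 4.0 * np.pi)   # assumes a sign change in this bracket
    lam0 = (1.0 + rng.uniform(-0.1, 0.1)) * Lambda0

    return {"lambda0": lam0, "lambda2": x * lam0, "alpha0": alpha0, "alpha3": alpha3,
            "rho1": rho1, "rho2": rho2, "r": r, "w": w, "v_R": v_R}
```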
Given a parameter point, we first check if it satisfies the theoretical constraints: boundedness from below, perturbativity, and unitarity, discussed in Sec. II.4. Next, the Higgs constraints described in Sec. II.5 are checked. Furthermore, the constraint from meson mixing \(m_{H_{1}}>15\,\mathrm{TeV}\) is imposed.
If the parameter point passes all the aforementioned theoretical and experimental constraints, we construct the effective potential using the Arnold-Espinosa method. We satisfy the Linde-Weinberg bound [54; 55] by numerically checking that the minimum of the zero-temperature effective potential at \(R=v_{R}\) is the absolute minimum. We reject the point if symmetry non-restoration persists at high temperatures. Next, we check for a possible first-order phase transition, using the python-based package CosmoTransitions[56]. The strength of FOPT can be quantified by the ratio
\[\xi_{c}=\frac{v_{c}}{T_{c}}, \tag{4.3}\]
where \(T_{c}\) is the critical temperature at which the two minima become degenerate and \(v_{c}\) is the _vev_ at \(T_{c}\). The FOPT is considered to be strong if the following criterion is met [57],
\[\xi_{c}>1. \tag{4.4}\]
In fig. 1, we show the points with FOPT projected onto the \(\rho_{1}-\rho_{2}\) plane for \(v_{R}=30\) TeV, color-coded according to the value of \(\xi_{c}\). The left panel shows all points with \(\xi_{c}>0\), while the right panel only shows points satisfying the SFOPT criterion \(\xi_{c}>1\). The grey dots depict parameter points passing the existing theoretical and experimental bounds. As suggested by the preliminary scans, SFOPT prefers \(\rho_{1}\lesssim\mathcal{O}(0.1)\). Points with \(\rho_{1}\lesssim\mathcal{O}(10^{-2})\) and \(\rho_{2}\gtrsim\mathcal{O}(1)\) violate the Linde-Weinberg bound. Therefore, there are no points showing SFOPT in this region. A large number of points with \(\rho_{2}\gtrsim 6\) also exhibit symmetry non-restoration at high temperatures.
Fig. 2 shows various two-dimensional projections of the DLRSM parameter space for \(v_{R}=30\,\)TeV, depicting points with SFOPT. The parameter \(\alpha_{0}\) is always smaller than 1,
Figure 2: Projections showing the points with SFOPT on different parameter planes, for \(v_{R}=30\) TeV. The grey points show all points passing the theoretical and experimental constraints. The points satisfy the condition \(\xi_{c}>1\).
as indicated by the left panels in the top and the bottom row. We also restrict ourselves to \(\alpha_{3}<1\) to avoid points showing symmetry non-restoration. Along the \(\alpha_{3}\) direction, there is a sharp change in the density of points around \(\alpha_{3}\approx 0.5\), coming from the bound \(m_{H_{1}}>15\,\)TeV. The value of \(\alpha_{3}\) where the density changes is different for \(v_{R}=20\), \(30\), and \(50\) TeV. Since the couplings are small for a large number of parameter points, the approximate relation given in eq. (2.11) tells us that \(\lambda_{1}\) can take values close to \(\lambda_{\rm SM}\approx 0.13\). In the top right and bottom left panels, we indeed observe an over-density of points clustered around \(\lambda_{0}\approx 0.13\). In the \(\rho_{1}-\lambda_{0}\) plane, a majority of points with large \(\xi_{c}\) occur for small \(\rho_{1}\), and large \(\lambda_{0}\). In the \(r-w\) plane, points with large \(\xi_{c}\) occur mostly at higher values of \(w\) (\(\gtrsim\mathcal{O}(0.1)\)) and are less frequent for smaller values of \(w\). So this parameter region can lead to a detectable GW background. There is no preference along the \(r\) direction. The points with large \(\xi_{c}\) also have relatively large values of \(y_{33}\), as indicated by the contours corresponding to \(y_{33}=1,\ 1.5\), and \(\sqrt{4\pi}\).
The strength of FOPT is more rigorously characterized by three parameters, \(\alpha,\ \beta/H_{*}\), and \(T_{n}\), which are required to compute the GW spectrum. These are defined as follows:
* The probability of tunneling from the metastable to the stable minimum is given by [58] \[\Gamma(T)\approx T^{4}\left(\frac{S_{3}}{2\pi T}\right)^{3/2}e^{-\frac{S_{3}}{ T}},\] (4.5) where \(S_{3}\) is the \(O(3)\)-symmetric Euclidean bounce action. This is calculated using the tunneling solution of the equation of motion of the scalar field. We use CosmoTransitions[56] to compute \(S_{3}\). The probability of nucleating a bubble within a Hubble volume increases as the universe cools below \(T_{c}\), and becomes \(\mathcal{O}(1)\) at the nucleation temperature, \(T_{n}\). This happens when \[\Gamma(T_{n})\approx\left(H(T_{n})\right)^{4}.\] (4.6) In the radiation-dominated era, this implies [59] \[\frac{S_{3}(T_{n})}{T_{n}}\simeq-4\ln\left(\frac{T_{n}}{m_{\rm Pl}}\right),\] (4.7) where the Planck mass \(m_{\rm Pl}=1.22\times 10^{19}\) GeV.
* The parameter \(\alpha\) is the vacuum energy released during the transition, \(\rho_{\rm vac}\), normalized
by the radiation density at the time of FOPT [60], \[\alpha\equiv\frac{\rho_{\rm vac}}{\rho_{\rm rad}},\] (4.8) where, \[\rho_{\rm vac} = \left.\left(V_{\rm High}-V_{\rm Low}\right)-\frac{T}{4}\bigg{(}\frac{\partial V_{\rm High}}{\partial T}-\frac{\partial V_{\rm Low}}{\partial T}\bigg{)}\right|_{T=T_{*}}\,,\] (4.9) \[\rho_{\rm rad} = \frac{\pi^{2}}{30}g_{*}T_{*}^{4}\,.\] (4.10) Here \(T_{*}\) is the temperature of the universe at the time when dominant GW production takes place. We take \(T_{*}\simeq T_{n}\) in our calculations. The subscripts 'High' and 'Low' refer to the metastable and stable minima respectively, at the time of tunneling. \(g_{*}\) is the number of relativistic degrees of freedom at \(T=T_{*}\). For DLRSM, \(g_{*}=130\).
* \(\beta\) is related to the rate or inverse duration of the phase transition, defined as [25] \[\beta\equiv-\left.\frac{dS}{dt}\right|_{t=t_{*}}=TH_{*}\left.\frac{dS}{dT}\right|_{T=T_{*}},\] (4.11) where \(S=S_{3}/T\) and \(H_{*}\) is the Hubble rate at \(T=T_{*}\).
For points satisfying \(\xi_{c}>1\), we compute the nucleation temperature \(T_{n}\). We find the solution of eq. (4.7) using the secant method (sketched below), where the tunneling action is calculated by CosmoTransitions. We remove any points with \(T_{n}<0\), as this indicates that the PT is not completed by the present time. Moreover, we set a lower bound of \(T_{n}>500\,\)GeV to
Figure 3: Variation of PT parameters in the \(\rho_{1}-\rho_{2}\) plane. Color code shows the variation of \(\alpha\) (left panel), \(\beta/H_{*}\) (middle panel), and \(T_{n}\) (right panel). Here, \(v_{R}=30\) TeV.
ensure that the PT is completed before the EW epoch. Once \(T_{n}\) is obtained, \(\alpha\) and \(\beta/H_{*}\) can be computed using eqs. (4.8) and (4.11) respectively. Fig. 3 shows the variation of the PT parameters \(\alpha\) (left panel), \(\beta/H_{*}\) (middle panel), and \(T_{n}\) (right panel), in the \(\rho_{1}-\rho_{2}\) plane. The ranges obtained are roughly \(\alpha\in[0,0.8]\), \(\beta/H_{*}\in[10^{2},10^{6}]\), and \(T_{n}\in[2,16]\) TeV. \(T_{n}\) is observed to take smaller values in regions where the strength of SFOPT is high.
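A minimal sketch of the secant-method solution of eq. (4.7) is given below. Here `S3_over_T` is a stand-in for the bounce action \(S_{3}(T)/T\) obtained from CosmoTransitions, and the starting guesses, tolerance, and the \(T_{n}>500\,\)GeV window are illustrative choices, not the exact settings of the scan.

```python
import numpy as np

M_PL = 1.22e19  # Planck mass in GeV

def nucleation_temperature(S3_over_T, T_c, T_lo=500.0, tol=1e-3, max_iter=100):
    """Secant-method solution of eq. (4.7): S3(T)/T = -4 ln(T / m_Pl).

    S3_over_T(T) returns S3(T)/T (e.g. from the CosmoTransitions tunneling solution);
    T_c is the critical temperature bounding the search from above. Returns None when
    no acceptable root is found, in which case the point is discarded.
    """
    g = lambda T: S3_over_T(T) + 4.0 * np.log(T / M_PL)
    T0, T1 = 0.99 * T_c, 0.70 * T_c           # two starting guesses below T_c
    for _ in range(max_iter):
        g0, g1 = g(T0), g(T1)
        if g1 == g0:
            break
        T2 = T1 - g1 * (T1 - T0) / (g1 - g0)  # secant update
        if not (T_lo < T2 < T_c):             # iterate left the physical window
            break
        if abs(T2 - T1) < tol * T1:
            return T2
        T0, T1 = T1, T2
    return None
```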
## V Gravitational wave background
The GW spectrum is defined as [25]
\[\Omega_{\rm GW}(f)\equiv\frac{1}{\rho_{c}}\frac{d\rho_{\rm GW}}{d\ln f}, \tag{5.1}\]
where \(f\) is the frequency, \(\rho_{\rm GW}\) is GW energy density, and \(\rho_{c}\) is the critical energy density of the universe, given by,
\[\rho_{c}=\frac{3H_{0}^{2}}{8\pi G}. \tag{5.2}\]
Here, \(H_{0}=100\,h\) km s\({}^{-1}\)Mpc\({}^{-1}\) is the Hubble constant with the current value of \(h=0.6736\pm 0.0054\)[61] and \(G\) is Newton's gravitational constant.
A strong FOPT proceeds by nucleation of bubbles of the stable phase which expand rapidly in the sea of the metastable phase. GWs are produced when the expanding bubbles collide and coalesce with each other. If sufficient friction exists in the plasma, the bubble walls may reach a terminal velocity \(v_{w}\). We take \(v_{w}=1\) in our analysis. GW production happens via three main processes: bubble wall collisions (\(\Omega_{\rm col}\)), sound waves produced in the thermal plasma (\(\Omega_{\rm sw}\)), and the resulting MHD turbulence (\(\Omega_{\rm turb}\)). For a recent review of the different GW production mechanisms, please refer to [26]. In the non-runaway scenario [25], GW production happens primarily through sound waves and turbulence, i.e.,
\[h^{2}\Omega_{\rm GW}\simeq h^{2}\Omega_{\rm sw}+h^{2}\Omega_{\rm turb}\,, \tag{5.3}\]
where [25; 62],
\[h^{2}\Omega_{\rm sw}(f) = 2.65\times 10^{-6}\left(\frac{100}{g_{*}}\right)^{1/3}\left( \frac{H_{*}}{\beta}\right)^{2}\left(\frac{\kappa_{\rm sw}\alpha}{1+\alpha} \right)^{2}v_{w}\ S_{\rm sw}(f)\ \Upsilon(\tau_{\rm sw}), \tag{5.4}\] \[h^{2}\Omega_{\rm turb}(f) = 3.35\times 10^{-4}\left(\frac{100}{g_{*}}\right)^{1/3}\left( \frac{H_{*}}{\beta}\right)^{2}\left(\frac{\kappa_{\rm turb}\alpha}{1+\alpha} \right)^{3/2}v_{w}\ S_{\rm turb}(f)\,. \tag{5.5}\]
Here, \(\kappa_{\rm sw}\) and \(\kappa_{\rm turb}\) are the efficiency factors for the respective processes. The efficiency factor \(\kappa_{\rm sw}\) is given by
\[\kappa_{\rm sw}=\frac{\alpha}{0.73+0.083\sqrt{\alpha}+\alpha}, \tag{5.6}\]
and \(\kappa_{\rm turb}\) is known to be at most \(5-10\%\) of \(\kappa_{\rm sw}\). Here we take \(\kappa_{\rm turb}=0.05\ \kappa_{\rm sw}\). We have included the suppression factor \(\Upsilon(\tau_{\rm sw})\) that arises due to the finite lifetime \(\tau_{\rm sw}\) of sound waves [62],
\[\Upsilon(\tau_{\rm sw})=1-\frac{1}{1+2\tau_{\rm sw}H_{*}}\,, \tag{5.7}\]
with
\[\tau_{\rm sw}=\frac{R_{*}}{\overline{U}_{f}}\,, \tag{5.8}\]
where the mean bubble separation \(R_{*}\simeq(8\pi)^{1/3}v_{w}/\beta\) and the mean square velocity is
\[\overline{U}_{f}^{2}=\frac{3}{4}\frac{\alpha}{1+\alpha}\kappa_{\rm sw}\,. \tag{5.9}\]
The spectral shape functions, \(S_{\rm sw}\) and \(S_{\rm turb}\) determine the behavior of each contribution at low and high frequencies. These are
\[S_{\rm sw}(f) = \left(\frac{f}{f_{\rm sw}}\right)^{3}\left(\frac{7}{4+3(f/f_{\rm sw })^{2}}\right)^{7/2},\] \[S_{\rm turb}(f) = \left(\frac{f}{f_{\rm turb}}\right)^{3}\frac{1}{[1+(f/f_{\rm turb })]^{11/3}(1+8\pi f/h_{*})}\,. \tag{5.10}\]
Here, \(h_{*}\) is the Hubble rate at \(T=T_{*}\),
\[h_{*}=1.65\times 10^{-7}\ {\rm Hz}\left(\frac{T_{*}}{100\ {\rm GeV}}\right) \left(\frac{g_{*}}{100}\right)^{1/6}. \tag{5.11}\]
The red-shifted peak frequencies, after taking into account the expansion of the universe, are,
\[f_{\rm sw} = 1.9\times 10^{-5}{\rm Hz}\left(\frac{g_{*}}{100}\right)^{1/6}\ \frac{1}{v_{w}}\left(\frac{\beta}{H_{*}}\right)\left(\frac{T_{*}}{100\ {\rm GeV}}\right), \tag{5.12}\] \[f_{\rm turb} = 2.7\times 10^{-5}{\rm Hz}\left(\frac{g_{*}}{100}\right)^{1/6}\ \frac{1}{v_{w}}\left(\frac{\beta}{H_{*}}\right)\left(\frac{T_{*}}{100\ {\rm GeV}}\right). \tag{5.13}\]
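The pieces above can be assembled into a short numerical routine. The sketch below follows eqs. (5.3)-(5.13) as written, with \(T_{*}\) in GeV, frequencies in Hz, and the DLRSM default \(g_{*}=130\); variable names are ours, and the bubble-collision contribution is omitted as in the non-runaway case.

```python
import numpy as np

def gw_spectrum_h2(f, alpha, beta_over_H, Tstar, gstar=130.0, vw=1.0):
    """h^2 Omega_GW(f) from sound waves + turbulence, eqs. (5.3)-(5.13)."""
    kappa_sw = alpha / (0.73 + 0.083 * np.sqrt(alpha) + alpha)        # eq. (5.6)
    kappa_turb = 0.05 * kappa_sw

    # finite sound-wave lifetime suppression, eqs. (5.7)-(5.9)
    Rstar_H = (8.0 * np.pi) ** (1.0 / 3.0) * vw / beta_over_H         # R_* H_*
    Uf = np.sqrt(0.75 * alpha / (1.0 + alpha) * kappa_sw)
    tau_sw_H = Rstar_H / Uf
    upsilon = 1.0 - 1.0 / (1.0 + 2.0 * tau_sw_H)                      # eq. (5.7)

    # redshifted peak frequencies and Hubble rate, eqs. (5.11)-(5.13)
    f_sw = 1.9e-5 * (gstar / 100.0) ** (1 / 6) / vw * beta_over_H * Tstar / 100.0
    f_turb = 2.7e-5 * (gstar / 100.0) ** (1 / 6) / vw * beta_over_H * Tstar / 100.0
    h_star = 1.65e-7 * (Tstar / 100.0) * (gstar / 100.0) ** (1 / 6)

    S_sw = (f / f_sw) ** 3 * (7.0 / (4.0 + 3.0 * (f / f_sw) ** 2)) ** 3.5
    S_turb = (f / f_turb) ** 3 / ((1.0 + f / f_turb) ** (11.0 / 3.0)
                                  * (1.0 + 8.0 * np.pi * f / h_star))

    pref = (100.0 / gstar) ** (1.0 / 3.0) / beta_over_H ** 2 * vw
    omega_sw = 2.65e-6 * pref * (kappa_sw * alpha / (1.0 + alpha)) ** 2 * S_sw * upsilon
    omega_turb = 3.35e-4 * pref * (kappa_turb * alpha / (1.0 + alpha)) ** 1.5 * S_turb
    return omega_sw + omega_turb
```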
From the expressions of \(\Omega_{\rm sw}\) and \(\Omega_{\rm turb}\), it is clear that large \(\alpha\) and small \(\beta/H_{*}\) lead to a strong GW spectrum. The peak frequency is proportional to \(T_{n}\sim v_{R}\) and hence, the peak shifts to the right for larger \(v_{R}\). This is illustrated in fig. 4, where we show scatter plots of the parameter points for which \(\alpha,\ \beta/H_{*}\), and \(T_{n}\) have been computed. Each point represents
the peak value corresponding to the GW spectrum, \(h^{2}\Omega_{\rm GW}\). The left panel shows that these points shift to the right as \(v_{R}\) is progressively increased between \(v_{R}=20,\ 30\) and \(50\) TeV. The strength of the GW signature is not affected by varying \(v_{R}\). The right panel shows the variation of \(\alpha\) for the points corresponding to \(v_{R}=20,\ 30\), and \(50\) TeV combined. There is clearly a positive correlation between large \(\alpha\) and the strength of GW. The solid lines represent the power-law integrated sensitivity curves corresponding to planned detectors such as LISA, FP-DECIGO, BBO, Ultimate-DECIGO, ET, and CE. Points lying above the sensitivity curve of a detector would have strong detection prospects. The DLRSM phase transition has good detection prospects for the detectors FP-DECIGO, BBO, and Ultimate-DECIGO for the chosen set of \(v_{R}\) values. The GW spectrum is too weak to be detected at ET and CE for the chosen range of \(v_{R}\). If the scale \(v_{R}\) is increased by a factor of \(\sim 10-100\), these two detectors may be able to detect them, but we ignore this region as the complementary collider constraints would be too weak.
In fig. 5 we illustrate the distribution of the points with detectable GW signal in the \(r-w\) plane. The grey points pass all the theoretical and experimental constraints. The blue points are only detectable at Ultimate-DECIGO, the green points are detectable by Ultimate-DECIGO as well as BBO, and the red points can be detected at all three detectors.
Figure 4: The peak of the GW spectrum \(\Omega_{\rm GW}\) for points with SFOPT, along with the sensitivity curves of various upcoming GW detectors. Left panel: points corresponding to \(v_{R}=20,\ 30\), and \(50\) TeV are shown. Right panel: Points are color-coded according to the value of \(\alpha\), for \(v_{R}=20,\ 30\), and \(50\) TeV combined.
Interestingly, for \(v_{R}=20\) TeV, the red, green, and blue points are densely clustered around \(w\sim\mathcal{O}(1)\). For most of these points, \(y_{33}\) is also large, \(y_{33}\sim 1.5-\sqrt{4\pi}\). In the middle panel, i.e. \(v_{R}=30\) TeV, the majority of points still prefer \(w\sim\mathcal{O}(1)\), but now there are also points at lower values of \(w\). In the case of \(v_{R}=50\) TeV, we see that the clustering of points around \(\mathcal{O}(1)\) values of \(w\) is even more diffuse. In all three cases, i.e. \(v_{R}=20,~{}30\), and \(50\) TeV, there is no particular preference in the \(r\) direction, as also seen from the SFOPT plots given in fig. 2.
## VI Detection prospects
The prospect of detecting a GW signal in a given GW observatory can be quantified using the signal-to-noise ratio (SNR), defined as [26]
\[\text{SNR}=\sqrt{\tau\int_{f_{\text{min}}}^{f_{\text{max}}}df\left[\frac{\Omega_{\text{GW}}(f)h^{2}}{\Omega_{\text{sens}}(f)h^{2}}\right]^{2}}, \tag{6.1}\]
where \(\tau\) is the time period (in seconds) over which the detector is active and the integration is carried out over the entire frequency range \([f_{\text{min}},f_{\text{max}}]\) of the detector. For calculations, we take \(\tau=2\) years. \(\Omega_{\text{sens}}(f)\) is the noise energy density power spectrum for the chosen detector. A signal is detectable if the observed SNR value exceeds a threshold SNR, denoted as \(\text{SNR}_{\text{thres}}\). We take \(\text{SNR}_{\text{thres}}=10\) for the purpose of discussion.
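A minimal sketch of this integral, assuming the signal and detector noise spectra are tabulated on a common frequency grid, is:

```python
import numpy as np

def snr(f_grid, omega_gw_h2, omega_sens_h2, tau_years=2.0):
    """Signal-to-noise ratio of eq. (6.1) via trapezoidal integration.

    `omega_gw_h2` and `omega_sens_h2` are h^2*Omega_GW and h^2*Omega_sens
    sampled on the common frequency grid `f_grid` (Hz).
    """
    tau_sec = tau_years * 365.25 * 24.0 * 3600.0
    integrand = (omega_gw_h2 / omega_sens_h2) ** 2
    return np.sqrt(tau_sec * np.trapz(integrand, f_grid))

# a parameter point is considered detectable when snr(...) exceeds SNR_thres = 10
```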
Figure 5: Points with detectable GW signature at upcoming observatories: Ultimate-DECIGO, BBO, and FP-DECIGO. The scale is chosen to be, \(v_{R}=20\) TeV (left panel), \(v_{R}=30\) TeV (middle panel), and \(v_{R}=50\) TeV (right panel). The purple, blue, and yellow contours represent the upper limits on \(y_{33}=1,1.5,\sqrt{4\pi}\) respectively based on eq. (15).
Table 1 presents six benchmark points (BP) with high SNR values for FP-DECIGO, BBO, and Ultimate-DECIGO, obtained using eq. (6.1). BP1, BP2, BP3, and BP4 have been chosen at the \(SU(2)_{R}\) breaking scale \(v_{R}=30\) TeV, while for BP5 and BP6 the chosen scales are \(v_{R}=20\) TeV and \(50\) TeV respectively. The top segment of the table shows the values of the quartic couplings, while the middle segment gives the mass spectrum corresponding to each BP. The bottom segment gives the values of PT parameters \(\alpha\), \(\beta/H_{*}\), \(T_{c}\) and \(T_{n}\). Barring BP1, all other BPs have \(w\sim\mathcal{O}(1)\). All BPs have \(\rho_{1}\lesssim\mathcal{O}(10^{-1})\) and hence smaller values of \(m_{H_{3}}\) are preferred.
The full GW spectra for the BPs are shown in fig. 6. The peak of the spectrum corresponds
\begin{table}
\begin{tabular}{c|c c c c c c} \hline & BP1 & BP2 & BP3 & BP4 & BP5 & BP6 \\ \hline \(v_{R}\) (TeV) & 30 & 30 & 30 & 30 & 20 & 50 \\ \(\lambda_{0}\) & 0.126796 & 0.466090 & 0.308396 & 0.324564 & 1.982649 & 0.799371 \\ \(\lambda_{2}\) & 0.097015 & 0.253725 & 0.141320 & 0.267655 & 1.670007 & 0.413236 \\ \(\alpha_{0}\) & 0.004789 & 0.003504 & 0.007640 & 0.012450 & 0.012042 & 0.021020 \\ \(\alpha_{3}\) & 0.957421 & 0.005786 & 0.006466 & 0.004839 & 0.001015 & 0.003094 \\ \(\rho_{1}\) & 0.019071 & 0.001274 & 0.001929 & 0.005930 & 0.009976 & 0.003445 \\ \(\rho_{2}\) & 2.003479 & 0.627225 & 1.166146 & 1.674371 & 5.574184 & 2.275937 \\ \(r\) & 0.008261 & 0.008136 & 0.418869 & 0.020970 & 0.390416 & 0.424048 \\ \(w\) & \(4\times 10^{-6}\) & 0.950364 & 1.439902 & 0.766492 & 2.702912 & 2.018973 \\ \hline \(m_{W^{\pm}_{R}}\) (TeV) & 9.81 & 9.81 & 9.81 & 9.81 & 6.54 & 16.35 \\ \(m_{Z_{R}}\) (TeV) & 11.58 & 11.58 & 11.58 & 11.58 & 7.72 & 19.30 \\ \(m_{H_{1}}\) (TeV) & 20.72 & 15.97 & 32.79 & 20.89 & 81.06 & 99.90 \\ \(m_{H_{2}}\) (TeV) & 29.74 & 23.13 & 45.58 & 34.46 & 116.99 & 144.97 \\ \(m_{H_{3}}\) (TeV) & 5.86 & 1.51 & 1.86 & 3.27 & 2.82 & 4.15 \\ \hline \(\alpha\) & 0.280 & 0.274 & 0.243 & 0.122 & 0.428 & 0.273 \\ \(\beta/H_{*}\) & 422 & 1050 & 2648 & 8267 & 975 & 3204 \\ \(T_{c}\) (TeV) & 5.78 & 3.26 & 3.46 & 4.83 & 2.82 & 5.87 \\ \(T_{n}\) (TeV) & 3.08 & 1.68 & 1.86 & 2.91 & 1.37 & 3.26 \\ \hline \end{tabular}
\end{table}
Table 1: Benchmark points for DLRSM in the simple basis.
to the frequency \(f_{sw}\) defined in eq. (5.12) since \(\Omega_{\rm sw}\) gives the dominant contribution. The peak of BP4 lies only above Ultimate-DECIGO and below BBO and FP-DECIGO, while all other BPs have GW peaks above the sensitivity curves of Ultimate-DECIGO, BBO, and FP-DECIGO. The low- and high-frequency tails are dominated by the power law behavior of \(\Omega_{\rm turb}\).
The SNRs of the BPs are listed in table 2. As noted in the previous section, the BPs generally yield high SNR values (\(\sim 10^{5}\) or above) for FP-DECIGO, BBO, and Ultimate-DECIGO. The SNR values for BP1, BP2, BP3, BP5 and BP6 are higher than \(10^{5}\) for FP-DECIGO, BBO, and Ultimate-DECIGO, and hence these points have very good detection prospects. Ultimate-DECIGO, being the most sensitive, can detect all the BPs listed in table 2. The point BP4 is not detectable at FP-DECIGO and BBO, but can be detected by Ultimate-DECIGO.
Figure 6: GW spectra for the benchmark points listed in table 1.
## VII Complementary Collider Probes
Now we describe the collider probes that complement the GW signatures discussed in the previous sections. We discuss two important collider implications, namely the precision of \(\kappa_{h}\) and detection of \(H_{3}\).
* As argued in Sec. II.5, in DLRSM the trilinear Higgs coupling can deviate significantly from its SM value. In Table 3, we present the percentage of points leading to a detectable GW signal at Ultimate-DECIGO which also show a deviation of \(\kappa_{h}\) larger than \(5\%,~{}10\%,~{}20\%,\) and \(50\%\). The current ATLAS measurement allows for a rather large range of \(\kappa_{h}\in[-2.3,10.3]\). However, future colliders will significantly tighten the bound. Here we quote the projected sensitivities of \(\kappa_{h}\) from ref. [63]. HL-LHC will achieve a sensitivity of \(50\%\) from the di-Higgs production channel. The proposed colliders, such as HE-LHC, CLIC\({}_{3000}\), and FCC-hh are expected to improve the sensitivity of \(\kappa_{h}\) to \(\sim 20\%,10\%,\) and \(5\%\) respectively. These colliders will therefore rule out a considerable number of points showing a strong GW signal.
\begin{table}
\begin{tabular}{|c|c c c c c c|} \hline SNR & BP1 & BP2 & BP3 & BP4 & BP5 & BP6 \\ \hline FP-DECIGO & \(1.4\times 10^{8}\) & \(1.8\times 10^{7}\) & \(5.5\times 10^{5}\) & \(-\) & \(6.8\times 10^{7}\) & \(1.3\times 10^{5}\) \\ BBO & \(4.0\times 10^{8}\) & \(6.3\times 10^{7}\) & \(2.8\times 10^{6}\) & \(-\) & \(1.9\times 10^{8}\) & \(5.9\times 10^{5}\) \\ Ultimate-DECIGO & \(1.7\times 10^{11}\) & \(4.4\times 10^{10}\) & \(1.3\times 10^{10}\) & \(3.6\times 10^{7}\) & \(8.7\times 10^{10}\) & \(7.7\times 10^{9}\) \\ \hline \end{tabular}
\end{table}
Table 2: SNR values corresponding to different detectors for the benchmark points.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\delta\kappa_{h}\) & 20 TeV & 30 TeV & 50 TeV & Combined \\ \hline \(>5\%\) & 52\% & 58\% & 50\% & 54\% \\ \(>10\%\) & 21\% & 34\% & 33\% & 30\% \\ \(>20\%\) & 8\% & 20\% & 25\% & 17\% \\ \(>50\%\) & 1.3\% & 12\% & 15\% & 9\% \\ \hline \end{tabular}
\end{table}
Table 3: Percentage of points detectable at Ultimate-DECIGO to be ruled out when the sensitivity of \(\kappa_{h}\) reaches \(5\%,10\%,20\%,\) and \(50\%\), for \(v_{R}=20,30,\) and \(50\,\)TeV.
* The scalar \(H_{3}\) can be produced at \(pp\) colliders through several channels, for example [38], (i) \(H_{1}\)-decay, \(pp\to H_{1}\to hH_{3}\), (ii) decay of boosted \(h\), \(pp\to h^{*}\to hH_{3},H_{3}H_{3}\), (iii) Higgsstrahlung, \(pp\to V_{R}^{*}\to V_{R}H_{3}\), (iv) \(V_{R}V_{R}\) fusion, \(pp\to H_{3}jj\). The relative strength of these processes depends on the mass spectrum of DLRSM. In fig. 7, we show the distribution of SFOPT points in the \(m_{H_{1}}-m_{H_{3}}\) plane for \(v_{R}=20\,\)TeV, overlaid with points which are detectable at Ultimate-DECIGO, BBO, and
Figure 7: The mass spectrum of DLRSM for \(v_{R}=20\,\)TeV depicting points with \(\xi_{c}>1\). The cyan and orange points lead to a GW signal detectable at Ultimate-DECIGO and Ultimate-DECIGO+BBO+FP-DECIGO respectively.
FP-DECIGO. The detectable points mostly occur for small \(m_{H_{3}}\), with the minimum value of \(m_{H_{3}}=741\,\)GeV. For the range \(m_{H_{3}}=740\,\)GeV \(-\) 1.2 TeV, the production cross-section of \(H_{3}\) at FCC-hh with \(\sqrt{s}=100\,\)TeV can be \(\sim\mathcal{O}(\text{fb})\)[38]. For \(v_{R}=20\,\)TeV, \(m_{H_{3}}\lesssim 500\,\)GeV can be ruled out from observations of channel (iv) at FCC-hh with a luminosity of \(30\,\text{ab}^{-1}\). For large values of the quartic couplings, the decay widths \(h^{*}\to hH_{3}\) and \(h^{*}\to H_{3}H_{3}\) can be large, and subsequently channel (ii) can rule out \(m_{H_{3}}\lesssim 700\,\)GeV. For channel (i), \(H_{1}\) with mass \(15\,\)TeV can be produced with a cross-section \(\sim 0.5\,\)fb and has sizable branching ratios of \(H_{1}\to hH_{3}\), \(H_{3}H_{3}\). As a result, channel (i) can rule out masses up to \(m_{H_{3}}\sim 2\,\)TeV. Thus, these searches are capable of ruling out a large number of points with low \(m_{H_{3}}\), and hence low \(\rho_{1}\), providing complementarity to the GW probe of DLRSM.
## VIII Summary and Conclusions
In this paper, we studied the possibility of an observable stochastic GW background resulting from SFOPT associated with the spontaneous breaking of \(SU(2)_{R}\times U(1)_{B-L}\) in DLRSM. The gauge symmetry of DLRSM breaks in the following pattern:
\[SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\xrightarrow{v_{R}}SU(2)_{L}\times U(1)_{Y}\xrightarrow{\kappa_{1},\kappa_{2},v_{L}}U(1)_{\rm em}.\]
The non-observation of a right-handed current at colliders puts a lower bound on the scale \(v_{R}\) of around 20 TeV. Due to the hierarchy \(v_{R}\gg v\), the \(SU(2)_{R}\times U(1)_{B-L}\)-breaking dynamics is decoupled from the EWPT. We chose the scales \(v_{R}=20,\ 30\), and 50 TeV to study the possible detection of the GW background at the planned observatories. For these values of \(v_{R}\), complementary searches for the new scalars of DLRSM are feasible at future colliders.
We imposed a discrete left-right symmetry on the model. Our analysis was carried out using the _simple basis_ defined in ref. [36], to reduce the number of independent parameters. It should be noted that analysis with the full set of parameters also gives similar patterns of SFOPT in the \(\rho_{1}-\rho_{2}\) and \(r-w\) planes. The parameters in the _simple basis_ include the quartic couplings: \(\lambda_{0},\lambda_{2},\alpha_{0},\alpha_{3},\rho_{1},\rho_{2}\). In addition, we defined EW _vev_s through the ratios \(r\) and \(w\). Most studies on LRSM take the simplified limit \(r,w\to 0\). However, it was pointed out in refs. [35; 36] that the DLRSM phenomenology allows for significant deviation from this limit. Therefore, we also scanned over \(r\) and \(w\).
We constructed the one-loop finite temperature effective potential for each parameter point and analyzed the nature of PT using the package CosmoTransitions. Due to the large separation between \(v_{R}\) and the EW scale, the effective potential depends solely on the background field value of the neutral CP-even scalar, \(\chi_{Rr}^{0}\). The condition for SFOPT, \(\xi_{c}>1\) was used to identify viable regions of the parameter space. SFOPT favors small values of the quartic coupling \(\rho_{1}\lesssim\mathcal{O}(10^{-1})\), which leads to \(m_{H_{3}}\ll v_{R}\). This feature has also been observed in other variants of LRSM, discussed in refs. [16; 32; 33].
For very small values, \(\rho_{1}\lesssim 10^{-3}\), however, the zero temperature minimum of the one-loop effective potential at \(R=v_{R}\) becomes metastable, violating the Linde-Weinberg bound. Hence there is a lower bound on \(\rho_{1}\) below which FOPT is not observed. Most points with SFOPT also feature \(w\sim\mathcal{O}(1)\), while for smaller values of \(w\), very few points show SFOPT. Out of the chosen set of parameters, the SFOPT region is most sensitive to \(\rho_{1}\) and \(w\), and to some extent \(\lambda_{0}\). However, we see no particular preference for the _vev_ ratio \(r\) and the quartic couplings relating the bidoublet and the doublet fields, i.e., \(\alpha_{0}\) and \(\alpha_{3}\), as illustrated by the projections given in fig. 2.
For parameter points showing SFOPT, we computed the PT parameters, \(\alpha\), \(\beta/H_{*}\), and \(T_{n}\), needed for the calculation of the GW spectrum. In the non-runaway scenario, the stochastic GW background resulting from SFOPT comes primarily from sound waves and turbulence, while the contribution from bubble wall collisions remains sub-dominant. Fig. 4 shows the position of the peak of the GW spectrum for points satisfying the SFOPT criterion. While for a large number of points, the GW spectrum is too weak to be detected, there is a significant number of points lying above the sensitivity curves for Ultimate-DECIGO, BBO, and FP-DECIGO. Such points will be accessible to these detectors in the coming years. The detectable points also prefer \(w\sim\mathcal{O}(1)\), which in turn, correspond to a large value of \(y_{33}\) as seen in fig. 5.
The strength of the GW spectrum does not depend on the scale \(v_{R}\). On the other hand, since the peak frequency is proportional to \(T_{n}\sim v_{R}\), the points shift to the right as \(v_{R}\) changes from 20 to 30 to 50 TeV. To quantify the detection prospects, we computed the signal-to-noise ratio at these detectors for the detectable points. Six benchmark points are given in table 1, with the corresponding SNR values listed in table 2; all of them except BP4 feature SNR values higher than \(10^{5}\) at the three detectors. We see that for all the BPs, \(m_{H_{3}}\lesssim 6\) TeV.
There are primarily two complementary collider probes for the points with detectable
GW signals. It was found that a significant fraction of points leads to \(50\%,~{}20\%,~{}10\%,\) and \(5\%\) deviation of \(\kappa_{h}\) from unity, which can be ruled out at HL-LHC, HE-LHC, CLIC\({}_{3000}\), and FCC-hh respectively. Due to the relatively low mass of \(H_{3}\), it can be produced at future colliders through various channels. In particular, FCC-hh can rule out masses up to \(m_{H_{3}}\sim 2\,\mathrm{TeV}\).
Although DLRSM does not account for neutrino masses, it is interesting to ask if incorporating them by adding extra fields to the model could modify the strength of FOPT. In Appendix C, we show that it is possible to include neutrino masses without impacting the results of our analysis.
We also note that the discrete LR symmetry imposed on the model breaks spontaneously during the \(SU(2)_{R}\times U(1)_{B-L}\to U(1)_{Y}\) PT, leading to the formation of domain walls. The domain wall problem can be avoided by introducing explicit LR-breaking terms in the potential. These domain walls can also produce a stochastic GW background peaking at very low frequencies, possibly detectable by pulsar timing arrays.
###### Acknowledgements.
DR is thankful to Subhendu Rakshit for his useful suggestions. SK acknowledges discussions with S. Uma Sankar during an earlier collaboration. This research work uses the computing facilities under DST-FIST scheme (Grant No. SR/FST/PSI-225/2016) of the Department of Science and Technology (DST), Government of India. DR is thankful for the support from DST, via SERB Grants no. MTR/2019/000997 and no. CRG/2019/002354. DR is supported by the Government of India UGC-SRF fellowship. SK thanks DTP, TIFR Mumbai for funding the Visiting Fellow position, where part of the work was completed.
## Appendix A Minimization at the EW vacua
The minimization conditions are given by:
\[\mu_{1}^{2} = \frac{1}{2(r^{2}-1)}\Bigg{(}\kappa_{1}^{2}\Big{(}w^{2}((r^{2}-1) \alpha_{1}+r^{2}\alpha_{3}-\alpha_{4})+2(r^{2}-1)((r^{2}+1)\lambda_{1}+2r\lambda _{4})\Big{)}\] \[+2\sqrt{2}rv_{R}w\mu_{4}+v_{R}^{2}\Big{(}(r^{2}-1)\alpha_{1}+r^{2 }\alpha_{3}-\alpha_{4}+2w^{2}\rho_{12}\Big{)}\Bigg{)}\,\] \[\mu_{2}^{2} = \frac{1}{4(r^{2}-1)}\Bigg{(}\kappa_{1}^{2}\Big{(}w^{2}(r^{2}-1) \alpha_{2}-w^{2}r\alpha_{34}+2(r^{2}-1)(2r\lambda_{23}+(r^{2}+1)\lambda_{4}) \Big{)}\] \[-\sqrt{2}(r^{2}+1)v_{R}w\mu_{4}+v_{R}^{2}\Big{(}(r^{2}-1)\alpha_{ 2}-r\alpha_{34}-2w^{2}\rho_{12}\Big{)}\Bigg{)}\,\] \[\mu_{3}^{2} = \frac{1}{2}\kappa_{1}^{2}((r^{2}+1)\alpha_{1}+2r\alpha_{2}+r^{2} \alpha_{3}+\alpha_{4}+2w^{2}\rho_{1})+v_{R}^{2}\rho_{1}\,\] \[\mu_{5} = -r\mu_{4}-\sqrt{2}v_{R}w\rho_{12}, \tag{10}\]
where, \(\rho_{12}=\rho_{2}/2-\rho_{1}\), \(\alpha_{34}=\alpha_{3}-\alpha_{4}\), and \(\lambda_{23}=2\lambda_{2}+\lambda_{3}\).
## Appendix B Field-dependent masses
The field-dependent mass matrices are obtained from the tree-level effective potential:
\[m_{ij}^{2}(R)=\left.\frac{\partial^{2}}{\partial\varphi_{i}\partial\varphi_{j}}V_{0}\right|_{\langle\cdots\rangle} \tag{11}\]
where \(\langle\cdots\rangle\) denotes the background field value.
For the CP-even sector, we obtain,
\[\mathcal{M}_{\rm CPE}^{2}=\begin{pmatrix}-\mu_{1}^{2}+\frac{1}{2}(\alpha_{1} +\alpha_{4})R^{2}&-2\mu_{2}^{2}+\frac{1}{2}\alpha_{2}R^{2}&\frac{1}{\sqrt{2}} \mu_{5}R&0\\ -2\mu_{2}^{2}+\frac{1}{2}\alpha_{2}R^{2}&-\mu_{1}^{2}+\frac{1}{2}(\alpha_{1}+ \alpha_{3})R^{2}&\frac{1}{\sqrt{2}}\mu_{4}R&0\\ \frac{1}{\sqrt{2}}\mu_{5}R&\frac{1}{\sqrt{2}}\mu_{4}R&-\mu_{3}^{2}+\frac{1}{2 }\rho_{2}R^{2}&0\\ 0&0&0&-\mu_{3}^{2}+3\rho_{1}R^{2}\end{pmatrix}. \tag{12}\]
For the CP-odd scalars,
\[\mathcal{M}_{\rm CP0}^{2}=\begin{pmatrix}-\mu_{1}^{2}+\frac{1}{2}(\alpha_{1} +\alpha_{4})R^{2}&2\mu_{2}^{2}-\frac{1}{2}\alpha_{2}R^{2}&-\frac{1}{\sqrt{2}} \mu_{5}R&0\\ 2\mu_{2}^{2}-\frac{1}{2}\alpha_{2}R^{2}&-\mu_{1}^{2}+\frac{1}{2}(\alpha_{1}+ \alpha_{3})R^{2}&\frac{1}{\sqrt{2}}\mu_{4}R&0\\ -\frac{1}{\sqrt{2}}\mu_{5}R&\frac{1}{\sqrt{2}}\mu_{4}R&-\mu_{3}^{2}+\frac{1}{2 }\rho_{2}R^{2}&0\\ 0&0&0&-\mu_{3}^{2}+\rho_{1}R^{2}\end{pmatrix}, \tag{13}\]
and for the charged scalars, we get,
\[\mathcal{M}^{2}_{\rm charged}=\begin{pmatrix}-\mu_{1}^{2}+\frac{1}{2}(\alpha_{1}+ \alpha_{4})R^{2}&2\mu_{2}^{2}-\frac{1}{2}\alpha_{2}R^{2}&-\frac{1}{\sqrt{2}} \mu_{5}R&0\\ 2\mu_{2}^{2}-\frac{1}{2}\alpha_{2}R^{2}&-\mu_{1}^{2}+\frac{1}{2}(\alpha_{1}+ \alpha_{3})R^{2}&\frac{1}{\sqrt{2}}\mu_{4}R&0\\ -\frac{1}{\sqrt{2}}\mu_{5}R&\frac{1}{\sqrt{2}}\mu_{4}R&-\mu_{3}^{2}+\frac{1}{2 }\rho_{2}R^{2}&0\\ 0&0&0&-\mu_{3}^{2}+\rho_{1}R^{2}\end{pmatrix}. \tag{104}\]
For the neutral gauge bosons, we have the mass matrix,
\[\mathcal{M}^{2}_{Z}=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&\frac{1}{4}g_{R}^{2}R^{2}&-\frac{1}{4}g_{BL}g_{R}R^{2}\\ 0&-\frac{1}{4}g_{BL}g_{R}R^{2}&\frac{1}{4}g_{BL}^{2}R^{2}\end{pmatrix}. \tag{105}\]
For the charged bosons,
\[\mathcal{M}^{2}_{W}=\begin{pmatrix}0&0\\ 0&\frac{1}{4}g_{R}^{2}R^{2}\end{pmatrix}. \tag{106}\]
In addition to the field-dependent masses, we also need thermal self-energies of the fields for daisy resummation. These are obtained from the high-T expansion of the one-loop thermal potential. Substituting eq. (3.9) in eq. (3.7) gives, to leading order,
\[V^{\rm high}_{1T}=\frac{T^{2}}{24}\left(\sum_{b}n_{b}m_{b}^{2}+\frac{1}{2}\sum_ {f}n_{f}m_{f}^{2}\right). \tag{107}\]
Here, index \(b\) runs over bosons, while index \(f\) runs over fermions. Each sum can be expressed as the trace of the respective matrix. Thermal mass matrices are then expressed as, \(\Pi_{ij}=c_{ij}T^{2}\), where \(c_{ij}\) are,
\[c_{ij}=\frac{1}{T^{2}}\left.\frac{\partial^{2}}{\partial\varphi_{i}\partial\varphi_{j}}V^{\rm high}_{1T}\right|_{\langle\cdots\rangle}. \tag{108}\]
We define,
\[d_{1} = \frac{1}{48}(9g_{L}^{2}+9g_{R}^{2}+8(2\alpha_{1}+\alpha_{3}+ \alpha_{4}+5\lambda_{1}+2\lambda_{3})), \tag{109}\] \[d_{1}^{\prime} = d_{1}+\frac{y_{33}^{2}}{4}+\frac{\tilde{y}_{33}^{2}}{4},\] (110) \[d_{2} = \frac{1}{3}(2\alpha_{2}+3\lambda_{4}),\] (111) \[d_{2}^{\prime} = d_{2}+\frac{y_{33}\tilde{y}_{33}}{4},\] (112) \[d_{L} = \frac{1}{48}(3g_{BL}^{2}+9g_{L}^{2}+8(2\alpha_{1}+\alpha_{3}+ \alpha_{4}+3\rho_{1}+\rho_{2})),\] (113) \[d_{R} = \frac{1}{48}(3g_{BL}^{2}+9g_{R}^{2}+8(2\alpha_{1}+\alpha_{3}+ \alpha_{4}+3\rho_{1}+\rho_{2}))\,. \tag{114}\]
We obtain the following thermal mass matrices:
\[\Pi_{\rm CPE}=T^{2}\begin{pmatrix}d_{1}^{\prime}&d_{2}^{\prime}&0&0\\ d_{2}^{\prime}&d_{1}^{\prime}&0&0\\ 0&0&d_{L}&0\\ 0&0&0&d_{R}\end{pmatrix}, \tag{108}\]
\[\Pi_{\rm CP0}=T^{2}\begin{pmatrix}d_{1}^{\prime}&-d_{2}^{\prime}&0&0\\ -d_{2}^{\prime}&d_{1}^{\prime}&0&0\\ 0&0&d_{L}&0\\ 0&0&0&d_{R}\end{pmatrix}, \tag{109}\]
\[\Pi_{\rm charged}=T^{2}\begin{pmatrix}d_{1}&-d_{2}&0&0\\ -d_{2}&d_{1}&0&0\\ 0&0&d_{L}&0\\ 0&0&0&d_{R}\end{pmatrix}. \tag{110}\]
The gauge boson thermal mass matrices are,
\[\Pi_{Z}=\frac{T^{2}}{6}\begin{pmatrix}13g_{L}^{2}&0&0\\ 0&13g_{R}^{2}&0\\ 0&0&6g_{BL}^{2}\end{pmatrix}, \tag{111}\]
\[\Pi_{W}=\frac{13}{6}T^{2}\begin{pmatrix}g_{L}^{2}&0\\ 0&g_{R}^{2}\end{pmatrix}. \tag{112}\]
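As an illustration, the daisy-resummed CP-even masses entering the effective potential are obtained by diagonalizing \(\mathcal{M}^{2}_{\rm CPE}(R)+\Pi_{\rm CPE}(T)\). The sketch below simply transcribes the two matrices given above; the argument names are ours.

```python
import numpy as np

def cp_even_masses(R, T, mu1sq, mu2sq, mu3sq, mu4, mu5,
                   a1, a2, a3, a4, rho1, rho2, d1p, d2p, dL, dR):
    """Eigenvalues of M^2_CPE(R) + Pi_CPE(T) used in the daisy resummation."""
    s2 = 1.0 / np.sqrt(2.0)
    M2 = np.array([
        [-mu1sq + 0.5 * (a1 + a4) * R**2, -2 * mu2sq + 0.5 * a2 * R**2, s2 * mu5 * R, 0.0],
        [-2 * mu2sq + 0.5 * a2 * R**2, -mu1sq + 0.5 * (a1 + a3) * R**2, s2 * mu4 * R, 0.0],
        [s2 * mu5 * R, s2 * mu4 * R, -mu3sq + 0.5 * rho2 * R**2, 0.0],
        [0.0, 0.0, 0.0, -mu3sq + 3.0 * rho1 * R**2],
    ])
    Pi = T**2 * np.array([
        [d1p, d2p, 0.0, 0.0],
        [d2p, d1p, 0.0, 0.0],
        [0.0, 0.0, dL, 0.0],
        [0.0, 0.0, 0.0, dR],
    ])
    return np.linalg.eigvalsh(M2 + Pi)
```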
## Appendix C Neutrino masses in DLRSM
We have not taken into account a mechanism for generating neutrino mass in our version of DLRSM. In this section, we argue that the minimal way of incorporating neutrino mass in this model does not give any additional contribution to the GW phenomenology of the model.
To demonstrate our point, we consider the model discussed in refs. [11; 12]. Small neutrino masses are generated radiatively by the Zee mechanism, by adding a charged singlet scalar \(\delta^{+}\sim(1,1,1,2)\) to DLRSM. In our notation, the Majorana Lagrangian is,
\[-\mathcal{L}_{LR}^{M}=\gamma_{L}L_{L}L_{L}\delta^{+}+\gamma_{R}L_{R}L_{R}\delta^{+}+\gamma_{1}\chi_{L}^{T}i\sigma_{2}\Phi\chi_{R}\delta^{-}+\gamma_{2}\chi_{L}^{T}i\sigma_{2}\tilde{\Phi}\chi_{R}\delta^{-}+\text{h.c.}\,, \tag{113}\]
where, \(\gamma_{L,R},~{}\gamma_{1},~{}\gamma_{2}\) are the new Yukawa couplings. As there is no tree-level right-handed neutrino mass, the contribution of the RH neutrinos to the effective potential is zero. However, the quartic terms involving \(\delta^{+}\) modify the mixing between the charged scalars. In the basis of \(\{\phi_{1}^{\pm},\phi_{2}^{\pm},\chi_{L}^{\pm},\chi_{R}^{\pm},\delta^{\pm}\}\), the additional contribution to the charged mass matrix, \(\mathcal{M}_{\text{charged}}^{2}\) is,
\[\delta\mathcal{M}_{\text{charged}}^{2}=v_{R}^{2}\left(\begin{array}{cccccc}0& 0&0&0&\frac{\gamma_{2}}{2}\frac{v_{L}}{v_{R}}\\ 0&0&0&0&-\frac{\gamma_{1}}{2}\frac{v_{L}}{v_{R}}\\ 0&0&0&0&\frac{(\gamma_{1}\kappa_{2}+\gamma_{2}\kappa_{1})}{2v_{R}}\\ 0&0&0&0&-\frac{v_{L}(\gamma_{1}\kappa_{1}+\gamma_{2}\kappa_{2})}{2v_{R}^{2}}\\ \frac{\gamma_{2}}{2}\frac{v_{L}}{v_{R}}&-\frac{\gamma_{1}}{2}\frac{v_{L}}{v_ {R}}&\frac{(\gamma_{1}\kappa_{2}+\gamma_{2}\kappa_{1})}{2v_{R}}&-\frac{v_{L}( \gamma_{1}\kappa_{1}+\gamma_{2}\kappa_{2})}{2v_{R}^{2}}&0\end{array}\right). \tag{10}\]
Each of the non-zero entries is suppressed by a factor \(v_{L}/v_{R}\) or \(\kappa_{1,2}/v_{R}\) compared to \(v_{R}^{2}\). Therefore the mixing of the charged scalars of DLRSM with \(\delta^{+}\) is negligible, while their mixing among themselves remains unchanged. In the field-dependent mass matrix, we put \(v_{L}\to 0,~{}\kappa_{1,2}\to 0\), and \(v_{R}\to R\), by which the additional mixing matrix, \(\delta\mathcal{M}_{\text{charged}}^{2}(R)\), vanishes entirely. Hence the presence of \(\delta^{+}\) does not alter the field-dependent mass matrices and therefore does not contribute to the effective potential.
|
2309.04641 | Exploring Domain-Specific Enhancements for a Neural Foley Synthesizer | Foley sound synthesis refers to the creation of authentic, diegetic sound
effects for media, such as film or radio. In this study, we construct a neural
Foley synthesizer capable of generating mono-audio clips across seven
predefined categories. Our approach introduces multiple enhancements to
existing models in the text-to-audio domain, with the goal of enriching the
diversity and acoustic characteristics of the generated foleys. Notably, we
utilize a pre-trained encoder that retains acoustical and musical attributes in
intermediate embeddings, implement class-conditioning to enhance
differentiability among foley classes in their intermediate representations,
and devise an innovative transformer-based architecture for optimizing
self-attention computations on very large inputs without compromising valuable
information. Subsequent to implementation, we present intermediate outcomes
that surpass the baseline, discuss practical challenges encountered in
achieving optimal results, and outline potential pathways for further research. | Ashwin Pillay, Sage Betko, Ari Liloia, Hao Chen, Ankit Shah | 2023-09-08T23:43:57Z | http://arxiv.org/abs/2309.04641v1 | # Exploring Domain-Specific Enhancements for a Neural Foley Synthesizer
###### Abstract
Foley sound synthesis refers to the creation of authentic, diegetic sound effects for media, such as film or radio. In this study, we construct a neural Foley synthesizer capable of generating mono-audio clips across seven predefined categories. Our approach introduces multiple enhancements to existing models in the text-to-audio domain, with the goal of enriching the diversity and acoustic characteristics of the generated foley. Notably, we utilize a pre-trained encoder that retains acoustical and musical attributes in intermediate embeddings, implement class-conditioning to enhance differentiability among foley classes in their intermediate representations, and devise an innovative transformer-based architecture for optimizing self-attention computations on very large inputs without compromising valuable information. Subsequent to implementation, we present intermediate outcomes that surpass the baseline, discuss practical challenges encountered in achieving optimal results, and outline potential pathways for further research. Note: This system was submitted to Task 7 of the DCASE 2023 challenge, and the relevant codebase can be accessed at: [https://github.com/ankitshah009/foley-sound-synthesis_DCASE_2023](https://github.com/ankitshah009/foley-sound-synthesis_DCASE_2023).
Ashwin Pillay\({}^{1*}\), Sage Betko\({}^{1*}\), Ari Liloia\({}^{1}\), Hao Chen \({}^{1}\), Ankit Shah\({}^{1*}\)\({}^{1}\)Carnegie Mellon University, Pittsburgh, PA
{apillay, sbetko, alliola, haoc3, apsl}@andrew.cmu.edu
## 1 Introduction
Foley sound refers to diegetic, non-musical sound effects that convey the sounds produced by events depicted in a piece of media, such as radio or film. The process of creating complex sound environments from scratch is time-consuming and expensive; a method for convincingly synthesizing sounds could improve the content creation workflow. It could also be used to synthesize and augment other datasets. In this project, we create a machine learning model that generates original audio clips belonging to one of seven foley sound categories, namely _DogBark, Footstep, GunShot, Keyboard, MovingMotorVehicle, Rain, and Sneeze/Cough_[1]. Evaluating present results, our system has exceeded the performance of the DCASE baseline model in all seven categories, as measured via Frechet Audio Distance (FAD).
## 2 Literature Review
Previous work by Ghose & Provost [2], _AutoFoley_, describes an ensemble of a CNN + Fast-Slow LSTM model and a CNN + Temporal Relation Network (TRN) to generate foley for the provided silent video input. The model is trained on a novel dataset to generate several classes of foley. This is done by predicting a sound class matrix and combining each component with the average spectrogram of the corresponding foley class to generate a final audio output for the given video frame.
Additionally, foley synthesis is a subset of the text-to-audio (TTA) generation problem that has received considerable deep-learning research attention in recent times. Kreuk et al. [3] developed _Audiogen_, a TTA generator using a combination of an autoregressive audio encoder-decoder and a transformer-decoder language model that outperforms prior work in this field by Yang et al. [4]. Audiogen is trained end-to-end on a combination of input audio and a corresponding textual description. Internally, the audio and text are encoded into compressed representations to improve the speed and generalization of the model. While Audiogen can generate audio for text prompts it was not explicitly trained on, the resulting output may not follow the temporal token ordering of the input prompts.
Recently, _AudioLDM_ developed by Liu et al. [5] achieved improved results over Audiogen in terms of both subjective metrics and objective metrics such as Frechet Audio Distance (FAD). AudioLDM is a Latent Diffusion Model (LDM) based TTA generator that uses contrastive language-audio pretraining (CLAP) models [6] to represent audio-text cross-modalities and a Variational Autoencoder (VAE) + HiFi-GAN [7] combination to synthesize audio from its latent space representation. Using CLAP enables the model to be trained on embeddings directly derived from audio, bypassing the intrinsic inefficiencies and human-induced inconsistencies of textual audio description. During inference, the text prompt provided is converted into its audio embedding by CLAP, and is subsequently converted into a latent audio representation by the LDM. While AudioLDM is a good reference for our research, the model must be significantly adapted and optimized for fixed-class foley generation so that its output is closer to the respective ground truth on subjective and objective evaluation metrics.
It is also appropriate to note the success of the three-stage DTFR model [8], which is the baseline model for the DCASE challenge and is explored in detail later in this report.
## 3 Model Description
We began our work by reproducing the baseline provided by the DCASE2023 Task 7 organizers to recreate the stated results and identify strategies to improve upon them. Our work was originally concerned with making optimizations to the given baseline that enabled us to regenerate the provided results on a single GPU. Subsequently, we made enhancements to several components of the baseline with the goal of improving the quality and variety of the generated foleys.
### Optimizations to the baseline
Upon experimenting with the baseline model, we observed that the learning rate for the VQ-VAE model was too large to yield any meaningful result, so we added a cyclic learning rate scheduler. Considering our time and compute constraints, we developed an optimized training scheme that could give us acceptable results within a day's worth of training on a single, consumer-grade GPU. We accomplished this by implementing mixed precision training. We also reduced our batch sizes to 16 for VQ-VAE and 8 for PixelSNAIL training. Lastly, we also implemented a system that employs the trained model in inference mode to return
the FAD scores for 32 randomly-generated foleys of each of the aforementioned classes for subsequent evaluation.
### Using CEmbed: An enhanced audio representation
The baseline model is trained on, and generates, melspectrogram representations of foley audio. Specifically, it uses 80 mel filter banks, an FFT size of 1024 and a hop size of 256 to obtain the melspectrogram. This converts 4 s of foley sampled at 22050 Hz into 80x344 vectors, i.e., a \(\approx 3.2\)x compression of the data. We speculate that this compression discards significant acoustical and spectral information that could otherwise have improved the quality and accuracy of the underlying statistical distributions to which our downstream model approximates each foley class.
As an alternative, we propose enhancing the melspectrogram input with higher-level audio features corresponding to factors like its key and acoustics. We believe such representations aid the model to utilize more domain-specific information while learning intra-class and inter-class qualities of the foleys. To this end, we integrated a pretrained encoder, MERT-v1-330M, as a preprocessor to our system.
MERT [9]1 is a large-scale model trained on music audio for general music understanding. It has an architecture similar to HuBERT[10], a model for self-supervised speech representation learning that has been proven to capture higher-level acoustical features than melspectrograms. While HuBERT is trained on 16 kHz speech data, MERT has been specifically trained using a Masked Language Model (MLM) paradigm on 24 kHz music / audio data. The audio-specificity of MERT embeddings and its higher sampling rate result in a more granular and meaningful representation of foley features than HuBERT embeddings. Moreover, MERT has been validated against a variety of music information retrieval (MIR) tasks like genre classification and key detection. The developers of MERT state that across the zeroth dimension of its embeddings, there is a gradual increase in the level of features: the first few dimensions represent lower-level features like time-frequency variations, while the last few represent higher-level features like the key to which the piece of input audio belongs. While features like the key are more relevant to music than foleys, we believe the model could utilize this information to identify differences between foleys of the same class, e.g., differences between the bark of a young Chihuahua and that of an adult Bulldog.
Footnote 1: MERT-v1-330M Huggingface: [https://huggingface.co/m-a-p/MERT-v1-330M](https://huggingface.co/m-a-p/MERT-v1-330M)
To aid concatenation of the melspectrograms with MERT embeddings, we modified how the former was obtained. This was done by increasing the mel frequency bands to 129 and increasing the hop size to 320 samples. We hypothesize that the increased features provided by MERT will compensate for the increase in melspectrogram hop size. Finally, we combined the two embeddings to form Combined Embeddings (**"CEmbed"**), as shown in Fig 1.
The use of CEmbed over plain melspectrograms required retraining all the downstream models in our system, along with significant changes to their architectures as described in the following subsections. For a brief comparison of the changes made to the input embedding of the baseline and the final model, refer Table 1.
### Vq-Vae
A latent variable model works under the assumption that given a vector of latent variables \(z\) and a dataset with data points \(x\), the model can closely approximate \(x\) using different values of \(z\). Formally, we wish to optimize some vector \(\theta\) in some space defined by the dimensions of \(z\) and \(x\) such that the probability of generating each \(x\) in the dataset is maximized, according to
\[p(x)=\int p(x|z;\theta)p(z)dz \tag{1}\]
where \(p(x|z;\theta)\) is a Gaussian distribution, such that optimization techniques can be used to increase \(p(x)\). A variational autoencoder (VAE) attempts to calculate \(p(x)\) only based on the values of \(z\) which are most likely to have produced \(x\). We define a posterior categorical distribution \(q(z|x)\) that gives the distribution over the values of \(z\) likely to produce \(x\). [11] These functions make up a VAE, which consists of an encoder network that parameterizes \(q(z|x)\), a prior distribution \(p(z)\), and a decoder with a distribution \(p(x|z)\) over input data. In the VQ-VAE, the encoder output \(z_{e}(x)\) is quantized to its nearest vector \(e_{k}\) in a learned codebook:
\[z_{q}(x)=e_{k} \tag{2}\] \[k=\arg\min_{j}||z_{e}(x)-e_{j}||_{2} \tag{3}\]
This estimator is used to calculate the reconstruction loss, \(\log p(x|z_{q}(x))\). The gradient for this function can be approximated by the gradients from the decoder input; however, making this approximation effectively bypasses the embeddings during backpropagation, so a different method is necessary to learn the codebook. [12] For this task, the Vector Quantization (VQ) algorithm is used to form a quantized approximation to a distribution of input data vectors using a finite number of codebook vectors, and the Euclidean distance between them is used to adjust the latter toward the former. This results in a VQ loss term, \(||\text{sg}[z_{e}(x)]-e||_{2}^{2}\), where sg denotes the stop-gradient operator, which detaches its argument from the computational graph. The volume of the embedding space is not constrained, so it is necessary to add a commitment loss term that encourages the encoder output to stay close to its chosen embedding rather than growing arbitrarily. This term is \(\beta||z_{e}(x)-\text{sg}[e]||_{2}^{2}\), where \(\beta\) is a tunable hyperparameter. The full loss function for the VQ-VAE
\begin{table}
\begin{tabular}{l|c|c} Variable & Baseline Shape & Final Shape \\ \hline Audio Input & (22050 x 4, 1) & (24000 x 4, 1) \\ Melspectrogram & (80, 344) & (129, 300) \\ MERT Encodings & - & (1023, 300) \\ Input to VQVAE & (80, 344) & (1152, 300) \\ VQ-VAE Latent & (20, 80) & (288, 75) \\ \end{tabular}
\end{table}
Table 1: Differences in sizes between analogous variables used in the baseline and final models. The melspectrogram sizes are (frequency band step, time step).
is then
\[L=\log p(x|z_{q}(x))+||\text{sg}[z_{e}(x)]-e||_{2}^{2}+\beta||z_{e}(x)-\text{sg}[e]||_{2}^{2} \tag{4}\]
[12] Within the baseline implementation provided by the DCASE organizers, a VQ-VAE model is used to learn a discrete time-frequency representation of the sounds in the training dataset.
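A minimal PyTorch sketch of the quantization step and the two codebook loss terms of eq. (4) (the reconstruction term is produced by the decoder as usual) is:

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e: torch.Tensor, codebook: torch.Tensor, beta: float = 0.25):
    """Nearest-codeword lookup with the straight-through gradient estimator.

    z_e:      encoder outputs, shape (N, D)
    codebook: embedding table e, shape (K, D)
    """
    dists = torch.cdist(z_e, codebook)                    # (N, K)
    k = dists.argmin(dim=1)                               # eq. (3)
    z_q = codebook[k]                                     # eq. (2)

    vq_loss = F.mse_loss(z_q, z_e.detach())               # ||sg[z_e(x)] - e||^2
    commit_loss = beta * F.mse_loss(z_e, z_q.detach())    # beta * ||z_e(x) - sg[e]||^2

    # straight-through: pass decoder gradients directly to the encoder
    z_q = z_e + (z_q - z_e).detach()
    return z_q, vq_loss + commit_loss
```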
### Enhancements to VQ-VAE: MVQVAE
To ensure that the latent representations of foleys generated from the incoming CEmbeds use as much of the useful information as efficiently as possible, we proposed several changes to the baseline VQVAE architecture. The resulting model is termed MERT-VQVAE (**MVQVAE**), with its main enhancements described as follows:
#### 3.4.1 Foley Conditioning
The baseline VQ-VAE model learns an unconditional representation of sound, without any additional information about the category of sound during optimization or inference. Hence the responsibility of conditional sound generation lies solely with PixelSNAIL, which is tasked with learning to sample from the generalized codewords that make up the VQ-VAE's codebook, in order to assemble sequences based on the unique distribution of each sound category. However, the baseline VQ-VAE tends to produce codewords with similar conditional distributions across foley categories, which can make it difficult for PixelSNAIL to learn category-specific distributions. We hypothesize that this difficulty arises because the similar distributions cause PixelSNAIL to confuse categories, resulting in poor generation quality. To address this, we introduce a single linear layer that receives the average pre-quantization channel values of the latent representation and predicts the foley category to which the input belongs. The cross-entropy loss between the predicted and the actual foley category is added to the total loss, scaled by a factor of \(1\times 10^{-2}\).
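A minimal sketch of this conditioning head is shown below; the pooling over the non-channel axes and the module interface are our assumptions, while the single linear layer and the \(1\times 10^{-2}\) weight follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FoleyConditioner(nn.Module):
    """Linear classification head on the averaged pre-quantization latent."""

    def __init__(self, latent_channels: int, num_classes: int = 7, weight: float = 1e-2):
        super().__init__()
        self.head = nn.Linear(latent_channels, num_classes)
        self.weight = weight

    def forward(self, z_e: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # z_e: (batch, channels, freq, time); average everything except channels
        pooled = z_e.mean(dim=(2, 3))
        logits = self.head(pooled)
        # scaled cross-entropy, to be added to the VQ-VAE total loss
        return self.weight * F.cross_entropy(logits, labels)
```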
#### 3.4.2 CEmbed-specific model expansions
Since the CEmbeddings in our new model are \(\approx 14\) times larger than the melspectrograms, the baseline VQVAE cannot operate on them as is. Thus, one key enhancement in MVQVAE is increasing the size of the dictionary maintaining the codebook vectors that can represent a single encoder output from 512 to 1024. Additionally, we added a parallel ResNet block in the encoder and decoder to increase their capacity to grasp the increased information provided by the CEmbeds. Further, we included asynchronous time- and frequency-masking data augmentations in the training paradigm to prevent the model from over-associating redundant relationships that may exist between the melspectrogram and the MERT embedding of a given CEmbed. Fig 2 demonstrates this training paradigm.
### PixelSNAIL
As mentioned in the previous section, generative models estimate \(p(x)\), the probability of observing some data point \(x\). Autoregressive models factor the joint distribution as a product of conditionals over each feature.
\[p(x)=p(x_{1},\dots x_{n})=\prod_{i=1}^{n}p(x_{i}|x_{1},\dots x_{i-1}) \tag{5}\]
Autoregressive models implemented using traditional RNNs generally under-perform, possibly due to the temporally linear dependency of the information kept within hidden states from one time step to the next. Other architectures that allow a model to easily refer to earlier parts of an input sequence are causal convolutions, which allow high-bandwidth access over a finite context size, and self-attention models, which convert an input sequence into a set of key-value pairs, allowing access to an infinitely large context with low-bandwidth access. The SNAIL method combines the two approaches by using the convolutions to aggregate information over which to build context and perform an attentive lookup. [13] PixelSNAIL applies this strategy to autoregressive models. PixelSNAIL is composed of residual blocks that carry out causal convolutions and attention blocks that produce keys and values from input data. [14] Within the baseline model provided by the DCASE organizers, PixelSNAIL is trained to learn the joint distribution of the discrete time-frequency representation (DTFR) conditional on the class label in order to autoregressively generate DTFR components.
#### 3.5.1 Optimizing PixelSNAIL for CEmbed: Zen Mode
When applied to CEmbeddings, the baseline version of PixelSNAIL suffers from impractically large matrix multiplications. The scaled dot-product attention used in PixelSNAIL has an \(O((TF)^{2})\) memory requirement, where \(T\) and \(F\) are the time and feature dimensions of the quantized encodings from MVQVAE. This quadratic scaling makes self-attention impractical for longer sequence lengths, especially with the increased feature dimensionality introduced by MERT. Our group proposed an approach called **Zen Mode** to balance PixelSNAIL's efficiency with the preservation of the CEmbeddings' additional dimensionality.
Zen mode reduces the computational complexity of the self-attention mechanism in PixelSNAIL by incorporating trainable strided causal convolutional layers over the key and query vectors and transposed causal convolutions over the attention output. The convolutional layers downsample the input to the attention
Figure 2: From top to bottom: the actual CEmbedding, an example of an augmented training input to the model, and the reconstructed output of MVQVAE
block, representing higher-level, coarser information from the embeddings and decreasing computational complexity. Our model applies a downsampling factor of 4, reducing the cost of computing the self-attention matrix by a factor of 16. Meanwhile, the actual CEmbed data is used without any downsampling. This allows us to model longer sequences while not sacrificing useful CEmbedding feature data in PixelSNAIL's decoder hidden states.
In the context of autoregressive models like PixelSNAIL, maintaining causality is essential. Standard transposed convolutions do not inherently possess causal properties. To address this, we introduce a novel technique called _causal transposed convolution_. Causal transposed convolutions combine the upsampling capability of transposed convolutions with the causality property required for autoregressive modeling. This ensures that the generated output maintains causality, preserving the autoregressive nature of PixelSNAIL.
To the best of our knowledge, the use of zen mode and causal transposed convolutions has not yet been proposed in the machine learning literature, making this a unique contribution of this work. With these enhancements, we term the new model **Zen PixelSNAIL**.
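One plausible realization of the two building blocks is sketched below: a strided causal convolution for downsampling the keys and queries, and a transposed convolution whose surplus right-hand samples are trimmed so that each output step depends only on past inputs. Kernel sizes, and whether the original implementation trims in exactly this way, are assumptions on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Strided 1-D convolution that only sees past (left) context."""
    def __init__(self, c_in, c_out, kernel_size, stride=1):
        super().__init__()
        self.left_pad = kernel_size - 1
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, stride=stride)

    def forward(self, x):                           # x: (B, C, T)
        return self.conv(F.pad(x, (self.left_pad, 0)))

class CausalConvTranspose1d(nn.Module):
    """Transposed convolution whose output at step t uses only inputs at or before t/stride."""
    def __init__(self, c_in, c_out, kernel_size, stride):
        super().__init__()
        assert kernel_size >= stride
        self.stride = stride
        self.deconv = nn.ConvTranspose1d(c_in, c_out, kernel_size, stride=stride)

    def forward(self, x):                            # x: (B, C, T)
        y = self.deconv(x)                           # length (T - 1) * stride + kernel
        return y[..., : x.shape[-1] * self.stride]   # right-trim preserves causality
```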
### Modifications to HiFi-GAN: MHifiGAN
The pre-trained HiFi-GAN provided by the challenge organizers expects VQVAE-decoded melspectrograms to generate audio at 22050 Hz. Since MVQVAE returns decoded CEmbeds, we propose MERT HiFi-GAN (**MHiFiGAN**), a model trained from scratch to vocode CEmbeds to audio at 24000 Hz. In contrast to HiFi-GAN, which upsampled by a factor of 256, MHiFiGAN upsamples incoming CEmbeds, which have a feature rate of 75 Hz, by a factor of 320. This also accounts for errors in rounding the duration of the foley sounds to 4 seconds, an area in which the previous model was prone to error.
To make MHiFiGAN robust against imperfections in the MVQVAE-decoded CEmbeds, we modified the training paradigm of MHiFiGAN such that it is trained on time- and frequency-masked CEmbeds.
## 4 Evaluation Metrics
Our model will be evaluated on the quality of its output, which will be evaluated quantitatively via Frechet Audio Distance (FAD) and qualitatively via a subjective test.
FAD is an adaptation of the Frechet Inception Distance (FID) from the visual to the audio domain. Embedding statistics are extracted from a full evaluation set and a training set using VGGish, a pre-trained audio classification model. Multivariate Gaussians \(\mathcal{N}_{e}(\mu_{e},\Sigma_{e})\) and \(\mathcal{N}_{b}(\mu_{b},\Sigma_{b})\) are then computed on the evaluation and training sets. The Frechet distance between two Gaussians,
\[\mathbf{F}(\mathcal{N}_{b},\mathcal{N}_{e})=||\mu_{b}-\mu_{e}||^{2}+tr(\Sigma_ {b}+\Sigma_{e}-2\sqrt{\Sigma_{b}\Sigma_{e}}) \tag{6}\]
is called the FAD score [15]. FAD does not require a piece of reference audio to evaluate input, making it well-suited to evaluate this problem, as there is no specific ground truth for a clip falling into one of the categories.
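A minimal sketch of the FAD computation of eq. (6), assuming the VGGish embeddings of the two sets are already available as row-wise matrices, is:

```python
import numpy as np
from scipy import linalg

def frechet_audio_distance(emb_background: np.ndarray, emb_eval: np.ndarray) -> float:
    """FAD between two embedding sets (rows = clips), following eq. (6)."""
    mu_b, mu_e = emb_background.mean(axis=0), emb_eval.mean(axis=0)
    sigma_b = np.cov(emb_background, rowvar=False)
    sigma_e = np.cov(emb_eval, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma_b @ sigma_e, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real          # discard numerical imaginary residue
    diff = mu_b - mu_e
    return float(diff @ diff + np.trace(sigma_b + sigma_e - 2.0 * covmean))
```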
The subjective test will be carried out by the challenge organizers and members of other submission teams. Evaluators will judge the similarity between audio clips generated using the baseline model, audio clips generated using submitted models, and non-synthesized audio clips. Both fidelity and the degree to which the generated sound suits a category will be considered [1].
## 5 Development Set
The development dataset provided by the DCASE organizers consists of 4,850 mono 16-bit 22,050 Hz sound clips from the Urban-Sound8K, FSD50K, and BBC Sound Effects datasets. Each sound clip is exactly four seconds long and belongs to one of seven categories: DogBark, Footstep, GunShot, Keyboard, MovingMotorVehicle, Rain, and Sneeze/Cough. Per the challenge regulations, additional samples from these datasets are not permitted for training the foley synthesis system.
## 6 Preliminary Results
The DCASE development dataset was split into a train and validation set for model evaluation. The train set consisted of 4360 samples and the validation set contained 245 samples. The validation set was constructed with a stratified random sample where 35 samples were randomly selected from each category, and the remaining samples were assigned for training.
### Baseline Model
We have successfully implemented and trained the baseline solution described in [8], surpassing the results of the challenge organizers in all seven foley sound categories. The baseline model's FAD scores evaluated on the DCASE development dataset are provided in Table 3.
All code is available on our project GitHub repository, as noted in the abstract. This includes our implementation of the baseline
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**ID** & **Category** & **Number of Files** \\ \hline
0 & DogBark & 617 \\ \hline
1 & Footstep & 703 \\ \hline
2 & GunShot & 777 \\ \hline
3 & Keyboard & 800 \\ \hline
4 & MovingMotorVehicle & 581 \\ \hline
5 & Rain & 741 \\ \hline
6 & Sneeze/Cough & 631 \\ \hline \end{tabular}
\end{table}
Table 2: The number of foleys belonging to each category in the development set and their class ID.
Figure 3: Frequency against time melspectrogram output of HiFi-GAN during training, at epochs 1, 94, and 188 (top to bottom) - over multiple epochs, the melspectrograms become more refined
model and the FAD computation. Our training runs for VQ-VAE and PixelSNAIL are openly available to view on Weights & Biases.2 We would also like to present a few example sounds generated by our current model in each category.3
Footnote 2: [https://wandb.ai/audio-idl/Foley-sound-synthesis_DCASE_2023-baseline_dcase2023_task7_baseline](https://wandb.ai/audio-idl/Foley-sound-synthesis_DCASE_2023-baseline_dcase2023_task7_baseline)
Footnote 3: [https://drive.google.com/drive/folders/10LdqxEeVerVNEqcAb3wWjjppxn1m27vd](https://drive.google.com/drive/folders/10LdqxEeVerVNEqcAb3wWjjppxn1m27vd)
Following the training procedure by [8], we trained the VQ-VAE for 800 epochs with a learning rate of \(3\times 10^{-3}\), although we reduced the batch size to 16 from 64 in order to fit within a single GPU. Notably, however, we have exceeded the baseline performance with only 265 training epochs of PixelSNAIL, whereas [8] train for 1500 epochs. We attribute this improvement in efficiency primarily to our reduction in batch size from 32 to 8 and our addition of a cyclic learning rate scheduler with a reduced initial learning rate of \(1\times 10^{-5}\). Our use of PyTorch's automatic mixed-precision (AMP) training enabled us to complete the training of both the baseline VQ-VAE and PixelSNAIL models in under 24 hours on a single NVIDIA RTX A4000 with 16GB of VRAM.
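For reference, the training loop pattern we rely on combines AMP with a cyclic learning-rate schedule; the sketch below illustrates it with placeholder names (`model`, `train_loader`, `loss_fn`) and an assumed upper learning rate and cycle length, not our exact hyperparameters.

```python
import torch

opt = torch.optim.Adam(model.parameters(), lr=1e-5)
sched = torch.optim.lr_scheduler.CyclicLR(
    opt, base_lr=1e-5, max_lr=1e-4,          # max_lr / step size are illustrative
    step_size_up=2000, cycle_momentum=False)
scaler = torch.cuda.amp.GradScaler()

for x, y in train_loader:
    opt.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():          # mixed-precision forward pass
        loss = loss_fn(model(x.cuda()), y.cuda())
    scaler.scale(loss).backward()            # scaled backward avoids fp16 underflow
    scaler.step(opt)
    scaler.update()
    sched.step()
```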
### Conditioned VQ-VAE and MVQVAE
Table 4 presents the results for the baseline and conditioned VQ-VAE models trained on melspectrograms. Table 5 shows the same but for the MVQVAE.
Most notably, we see an extremely significant reduction in latent loss, which measures the difference between the pre- and post-quantization encodings. Since the encoder output is mapped once to the codewords to obtain training data for PixelSNAIL, and then again to decode PixelSNAIL generation output during synthesis, it is critical to obtain a low latent loss. This measures the degree of misalignment between the codebook and the encoder output, and hence the level of noise introduced by mapping between the encoding and the latent codes.[8]
We hypothesize that the addition of class-conditioning described in section 3.4.1 while training the VQ-VAE/MVQVAE helps to better structure the latent space, as it allows the model to separate features unique to each sound category. This separation enables the codebook to hold more meaningful codewords that cater to individual sound categories, ultimately leading to a more effective use of the codebook's capacity.
### MHiFiGAN
Since the baseline HiFi-GAN model was pretrained and provided to us, we are unable to report its metrics to compare it with the results of MHiFiGAN. However, through playback of the generated audio, we can validate that the model improves the quality of CEmbed-to-audio conversion over several epochs. Table 6 summarizes the validation and training metrics obtained for MHiFiGAN after training it for 180 epochs.
## 7 Obstacles to Final Results
In our research, we propose a solution that consists of a cascade of three large models. Due to upstream modifications made to accommodate more detailed input representations, we had to enhance and train these models ourselves. One of the main challenges we faced during the training process was the requirement of having a fully trained MVQVAE to extract codes for Zen PixelSNAIL training. Despite implementing Zen mode optimizations, the increased size of Zen PixelSNAIL introduced numerous engineering challenges that impeded training.
We experimented with a few different MVQVAE configurations. The first of these, which we called MVQVAEv1, contained 512 codewords - the same number as the baseline melspectrogram-based VQ-VAE. To train Zen PixelSNAIL on MVQVAEv1 codes and include CEmbeddings within our fixed compute budget of 16GB VRAM, we decreased its parameter count by reducing the number of channels from 256 to 128 and the number of residual blocks from 4 to 3. However, after several days, the model reached a saturation point at 50% accuracy and could not learn further, necessitating training from scratch on a larger model.
Concurrently, we discovered that the 512-codeword limitation
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Train} \\ \hline Model & MSE & Cross-Entropy & Latent Diff \\ \hline Conditioned & **0.357** & 0.0859 & **0.0179** \\ Unconditioned & 0.4084 & – & 0.2973 \\ \hline \multicolumn{4}{|c|}{Validation} \\ \hline Model & MSE & Cross-Entropy & Latent Diff \\ \hline Conditioned & **0.2636** & 0.02145 & **0.0208** \\ Unconditioned & 0.3196 & – & 0.3669 \\ \hline \end{tabular}
\end{table}
Table 5: Loss terms in the conditioned and unconditioned MVQ-VAE.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**ID** & **Category** & **FAD (DCASE)** & **FAD (Ours)** \\ \hline
0 & DogBark & 13.411 & **8.958** \\ \hline
1 & Footstep & 8.109 & **4.189** \\ \hline
2 & GunShot & 7.951 & **6.765** \\ \hline
3 & Keyboard & 5.230 & **3.086** \\ \hline
4 & MovingMotorVehicle & 16.108 & **11.319** \\ \hline
5 & Rain & 13.337 & **9.321** \\ \hline
6 & Sneeze/Cough & 3.770 & **2.675** \\ \hline \end{tabular}
\end{table}
Table 3: FAD scores on DCASE development set (lower is better).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Train} \\ \hline Model & MSE & Cross-Entropy & Latent Diff \\ \hline Conditioned & **0.357** & 0.0859 & **0.0179** \\ Unconditioned & 0.4084 & – & 0.2973 \\ \hline \multicolumn{4}{|c|}{Validation} \\ \hline Model & MSE & Cross-Entropy & Latent Diff \\ \hline Conditioned & **0.2636** & 0.02145 & **0.0208** \\ Unconditioned & 0.3196 & – & 0.3669 \\ \hline \end{tabular}
\end{table}
Table 6: Training & Validation Metrics for MHiFi-GAN.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Train} \\ \hline Model & MSE & Cross-Entropy & Latent Diff \\ \hline Conditioned & **0.357** & 0.0859 & **0.0179** \\ Unconditioned & 0.4084 & – & 0.2973 \\ \hline \multicolumn{4}{|c|}{Validation} \\ \hline Model & MSE & Cross-Entropy & Latent Diff \\ \hline Conditioned & **0.2636** & 0.02145 & **0.0208** \\ Unconditioned & 0.3196 & – & 0.3669 \\ \hline \end{tabular}
\end{table}
Table 4: Loss terms in the baseline (unconditioned) and conditioned Melspectrogram-based VQ-VAE.
of MVQVAEv1 hindered its ability to reconstruct CEmbeddings. Consequently, we trained a second model, MVQVAEv2, with 1024 codewords, which resulted in better reconstruction MSE and significantly improved qualitative reconstruction during listening tests on the HiFi-GAN waveform output. Subsequently, we restored the channel count and the number of residual blocks in Zen PixelSNAIL to their original values and trained on the larger MVQVAEv2 codes.
Our final configuration, which is currently being trained on four NVIDIA A40 (48GB) GPUs, faced a multitude of engineering challenges as we attempted to scale up. The larger model was particularly susceptible to exploding gradients, which corrupted the optimizer state. Due to Zen PixelSNAIL's four serial decoder blocks, a significant accumulation of error occurred when applied to the larger CEmbeddings. To stabilize training, we implemented gradient clipping and experimented with different values of the maximum gradient norm. The training is currently ongoing, and we hope to achieve further advancements with this configuration.
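A compact sketch of this stabilization step is given below; the `max_norm` value of 1.0 is an illustrative placeholder, and the interaction with AMP (unscaling before clipping) follows standard PyTorch practice.

```python
import torch

def clipped_amp_step(loss, model, optimizer, scaler, max_norm=1.0):
    """One optimizer step with AMP-aware global-norm gradient clipping."""
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)                # bring gradients back to fp32 scale
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    scaler.step(optimizer)                    # step is skipped automatically on inf/nan grads
    scaler.update()
    optimizer.zero_grad(set_to_none=True)
```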
Once Zen PixelSNAIL is sufficiently trained, we expect the overall system to be able to generate the specified number of foleys of each class, with the fidelity and variety of each foley being considerably better than the baseline. We intend to validate this using the evaluation strategies described in Section 4.
## 8 Conclusion
In our work, we aim to develop a neural sound synthesis engine capable of generating foleys belonging to predefined classes. Our goal is for the generated sounds to exhibit higher quality (comparable to human-generated foleys in a studio) and increased variety compared to general-purpose text-to-audio models and existing baselines. To achieve this, we create embeddings that represent both the lower-level time-frequency variances and the higher-level acoustical and musical features of the foleys. We then enhance our models to utilize this information for the intrinsic development of more detailed and distinguishable statistical distributions of each foley class.
Regarding model improvements, we introduced potentially innovative techniques such as class conditioning to increase the inter-class distance between foleys, Zen Mode to streamline attention-context computations without sacrificing input quality, and Causal Transpose CNNs to support dilation in auto-regressive prediction problems.
### Future Work
1. **Reducing the dimensions of MVQVAE latent encodings**: To lighten Zen PixelSNAIL training, a simple strategy would be to modify the MVQVAE ResNets to output encodings of lower dimensionality at a lower vector rate. Another modification would be to add a CNN layer to compress MERT embeddings effectively. However, a major challenge in this case would be to identify the right tradeoff between the granularity of information and the computational load.
2. **Identifying alternatives to Zen PixelSNAIL**: It would be logical to identify complete alternatives to autoregressive approaches like PixelSNAIL. Diffusion models used in works like AudioLDM would be a popular option to consider in this case.
3. **Identifying alternatives to MVQVAE**: Alternatives like improved VQ-Diffusion models [16] may eliminate the unidirectional bias and accumulation of errors of the autoregressive approach, thus avoiding the quadratic attention cost of PixelSNAIL.
|
2309.13329 | Unveiling Ethereum's Hidden Centralization Incentives: Does Connectivity
Impact Performance? | Modern public blockchains like Ethereum rely on p2p networks to run
distributed and censorship-resistant applications. With its wide adoption, it
operates as a highly critical public ledger. On its transition to become more
scalable and sustainable, shifting to PoS without sacrificing the security and
resilience of PoW, Ethereum offers a range of consensus clients to participate
in the network. In this paper, we present a methodology to measure the
performance of the consensus clients based on the latency to receive messages
from the p2p network. The paper includes a study that identifies the incentives
and limitations that the network experiences, presenting insights about the
latency impact derived from running the software in different locations. | Mikel Cortes-Goicoechea, Tarun Mohandas-Daryanani, Jose Luis Munoz-Tapia, Leonardo Bautista-Gomez | 2023-09-23T10:20:04Z | http://arxiv.org/abs/2309.13329v1 | # Unveiling Ethereum's Hidden Centralization Incentives: Does Connectivity Impact Performance?
###### Abstract
Modern public blockchains like Ethereum rely on p2p networks to run distributed and censorship-resistant applications. With its wide adoption, it operates as a highly critical public ledger. On its transition to become more scalable and sustainable, shifting to PoS without sacrificing the security and resilience of PoW, Ethereum offers a range of consensus clients to participate in the network. In this paper, we present a methodology to measure the performance of the consensus clients based on the latency to receive messages from the p2p network. The paper includes a study that identifies the incentives and limitations that the network experiences, presenting insights about the latency impact derived from running the software in different locations.
Ethereum, Ethereum2, Ethereum Consensus Layer, Ethereum Rewards, The Merge
## I Introduction
Ethereum [1] has been an important achievement on the road to ubiquitous blockchain technology. It has shown remarkable adaptability over time, leading the technical research vanguard of the blockchain industry after being the first decentralized platform that offered a general-purpose virtual machine capable of processing the so-called _smart contracts_[2]. Over the last five years, Ethereum has been transitioning from an energy-hungry Proof of Work (PoW) [3] to a more efficient and scalable Proof of Stake (PoS) [4] protocol. A transition that relies on GasperFFG [5] and RANDAO [6] to replace the consensus and randomness provided by PoW.
Since _the merge_[7], running a validator in Ethereum's ecosystem requires two codependent pieces of software: an execution layer (EL) client and a consensus layer (CL) client or beacon node. The EL client is responsible for receiving and validating transactions or smart contracts [2] in the execution layer, tracking the interaction between users and the Ethereum Virtual Machine (EVM) and rewarding the proposer validator with the referenced tips of each transaction. On the other hand, the CL client is in charge of operating, validating, and recording the interaction between validators to find consensus over the chain's state. Ethereum validators run on top of these nodes, which can interact with the rest of the beacon chain network, earning rewards based on the quality of their contribution towards consensus.
The successful transition from PoW to PoS of Ethereum [8] implies a radical change in the consensus mechanism. Validators must actively participate in the consensus to keep finalizing previous epochs, and they are assigned periodic duties they must accomplish over their lifetime [9]. However, to perceive the maximum remuneration that PoS can grant to honest participating validators, the quality of their involvement has a significant weight on their reward. Each validator has the following duties to fulfill: i) attest on a slot every epoch (defined randomly on the state transition between epochs), ii) sign sync-committee duties if the validator belongs to a sync-committee, and iii) generate and propose blocks when they are chosen to do so. In particular, validators are now in charge of proposing blocks whenever they are selected as block proposers.
All these duties and their implication in the consensus are defined in the Ethereum specification [10]. However, although there is one single specification, Ethereum relies on a wide variety of implementations to strengthen the resilience of the protocol. Five main clients are consolidated to participate in the network: Lighthouse, Lodestar, Nimbus, Prysm, and Teku. Each uses a different programming language to implement the networking and consensus spec. Even though this multi-implementation approach significantly impacts feature development time (extra complexity to keep the implementations inter-operable between clients), it makes the protocol fault-tolerant, ensuring that, with proper client diversity in the network, a bug in a single client won't break the addition of new blocks to the chain. However, although all the implementations are spec-compliant, the wide variety of conditions in the protocol makes some algorithms more optimal than others under specific circumstances, e.g., certain network instabilities or the latency derived from the geo-location of the node. Of course, having a faster or wiser algorithm might be mirrored in higher rewards. Thus, in this paper, we analyze the performance differences between geographical regions from a CL client perspective, with the final intention of spotting any relevant performance difference that could compromise the client diversity in the network.
From the premise that a better accomplishment of duties generally means more reward for validators, this paper analyzes the direct implications of networking conditions on the stability and performance of the five main CL clients. We present a study that measures the duty completion of multiple clients across multiple locations in live networks, which, to the best of our knowledge, has not been done before. The contribution of this work is to identify any performance gaps
between different geographical locations that could compromise the network's decentralization while proving that client diversity in the network is mature enough to be achieved in production without significant sacrifices. We demonstrate that all locations perform similarly under standard networking and hardware conditions, achieving an average extracted reward of \(80.18\)%. However, when we deploy Ethereum clients in virtual machines or less globally connected regions, the reward can drop to \(74.34\)%.
The paper is organized as follows: Section II introduces the state of the art, going through previous work on the topic; Section III introduces the methodology and tools used to perform the study; Section IV presents the results obtained by our study; Section V discusses the insights presented in the paper; and Section VI summarizes the highlights of the study.
## II Related Work
Distributed ledgers are an interesting phenomenon in the internet space. In such a critical environment where users trust the network to track their economic balances, participants must agree on a set of rules to reach a consensus over the interaction with the ledger for personal and shared interests.
By the nature of distributed networks, peers can join, leave, and disconnect from the network as they please [11][12], e.g., users turning off their nodes to upgrade their software. However, this churn tends to decrease in proof-of-stake systems [13]. Validators have incentives to participate actively in consensus, and they can also be penalized if they do not [9]. This generally makes the networks of PoS blockchains more stable. Nonetheless, they are still Byzantine fault-tolerant systems [14], able to overcome, to some degree, a sudden decrease in honest participants while ensuring a unanimous consensus over the state of the chain. Beyond a fault-tolerant consensus mechanism, Ethereum adds a second layer of resilience by having multiple client implementations to participate in the network. Each one is written in a different language and by a different team, targeting various end-users. Thus, they are optimized for different situations.
Ethereum aims to be light and portable, substantially reducing the hardware requirements that the prior PoW meant. It has been proved [15] that the new PoS version of Ethereum can be successfully run on medium-to-low-end hardware, with some clients needing less than 4GB of RAM and two cores to participate in the CL network successfully. However, no prior work analyzes the impact on performance across such a wide variety of end-users. The question of whether resource-efficient clients, which allow solo stakers to validate from home, can perform as well as less hardware-restricted ones is still unanswered.
Previous studies like [16] have demonstrated that the node's location directly affects the networking performance and, inevitably, the performance of PoW-based application nodes. Message distribution is essential in PoS systems such as Ethereum. Although each slot gives a time window of 12 seconds to propose a block, commit the attestations, and aggregate them, receiving the block half a second sooner or later can directly impact the attestation of validators, as their attestation could shift from "the block is valid" to "there wasn't a block at all". This means nodes in regions far from the core of the network could be disadvantaged. Placing a client in a more poorly connected region can incentivize other validators to keep concentrating in the same geographical locations. If a latency increase can put at risk the performance of a validator (and the economic stimulus attached to it), the short window of action that the protocol suggests would incentivize the centralization of clients in well-connected regions, with the counterpart that it increases their exposure to censorship by local authorities.
This paper will present the Ethereum CL performance comparison results based on network latency in different locations. In the study, we empirically analyze the stability of the clients under the real network behaviors of Ethereum's mainnet and the Goerli testnet. We analyze the performance of the different implementations by comparing the quality of the accomplished validator duties and the scores of the blocks generated by each implementation. Furthermore, we reproduced the experiments in different geographical locations to explore the impact of the message propagation latencies on the ability to perform consensus duties.
## III Methodology
As Ethereum keeps all the interaction with the chain available on the public blockchain, debugging the performance of validators in different locations remains, in most cases, accessible. However, processing and indexing the necessary information to determine the performance of a client or a validator in a human-readable way implies, on some occasions, reconstructing the chain's past status, which is not easy to reproduce. In this section of the paper, we introduce the basic fundamentals of Ethereum needed to understand the methodology we used to measure the performance of Ethereum nodes, i.e., slot time utilization, block generation, or attestation flags. Furthermore, we also introduce the supporting software we built and used to generate the data we discuss in Section IV.
### _Scoring Ethereum CL's duties_
Shifting Ethereum's consensus mechanism to PoS unarguably increased the complexity of the protocol. Since the merge, only active validators in the beacon chain can participate in the consensus, having to do so at least once every epoch. Validators' block proposals, block attestations, and sync committee votes are the duties that ensure that the blockchain keeps adding blocks under consensus. Thus, the quality of these duties determines how well each validator contributed to the consensus, which ultimately defines the reward they get.
The consensus layer of Ethereum is organized in epochs (see Figure 1 for reference). Each epoch contains \(32\) time windows of \(12\) seconds called _slots_ where a single validator elected from the RANDAO Reveal [6] algorithm has the chance to aggregate a new block to the beacon chain. Since the rest of
the existing active validators must reach a consensus over each proposed block, splitting the epoch into \(32\) slots helps reduce the computational load of processing the duties of 750,000+ (at the time of writing this paper) active validators. Thus, the whole list of validators is divided into the \(32\) slots and then into a maximum of \(64\) committees. This way, each added block serves as the main unit of time where new historical data is added to the beacon chain.
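For reference, the slot/epoch arithmetic used throughout this paper reduces to a few constants from the consensus specification; the sketch below assumes mainnet's genesis timestamp (testnets such as Goerli use a different value).

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32
GENESIS_TIME = 1606824023  # beacon-chain mainnet genesis (Unix seconds); Goerli differs

def slot_at(unix_time: int) -> int:
    """Slot that contains the given wall-clock time."""
    return (unix_time - GENESIS_TIME) // SECONDS_PER_SLOT

def epoch_of(slot: int) -> int:
    return slot // SLOTS_PER_EPOCH

def slot_start(slot: int) -> int:
    """Unix timestamp of second 0 of the given slot."""
    return GENESIS_TIME + slot * SECONDS_PER_SLOT
```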
#### Iii-A1 Attestations
Attestations, or votes, are the statements each validator must make to help finalize1 beacon epochs. The resulting votes of each committee are aggregated before being added to the following proposed blocks. This considerably reduces the block size while keeping track of the duties, and saves time for future block proposers, as they only have to listen to the latest aggregations of each slot. Within each validator's participation, the following three main flags determine the "quality" of the attestation:
Footnote 1: Finalization is used to express when a block has been validated by more than 66% of the network and for over two entire epochs. It represents the moment when the data inside the blocks of that epoch is no longer mutable.
* Source: hash of the justified checkpoint2 at the moment the attested block was proposed. Footnote 2: Checkpoints in the CL represent the Beacon State root of the epoch’s first slot, including the result of the state transition from the previous epoch.
* Target: hash of the first block at the epoch.
* Beacon block root: hash of the attested block.
Each attestation has up to \(32\) slots to be included in a block, leading to a second parameter that affects the "quality" of the attestation: the inclusion delay. The inclusion delay refers to the number of slots it took for an attestation to get included in a block after the attested one. This means that the optimal performance for a validator is to produce a vote with the three flags correct and have it included in the next block, meaning an inclusion delay of 1 slot.
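The weighting of these flags can be sketched with the Altair participation-flag constants from the consensus specification; the snippet below ignores base-reward scaling and penalties and only returns the relative share of the reward weight that a given combination of timely flags earns.

```python
TIMELY_SOURCE_WEIGHT, TIMELY_TARGET_WEIGHT, TIMELY_HEAD_WEIGHT = 14, 26, 14
WEIGHT_DENOMINATOR = 64
SLOTS_PER_EPOCH = 32

def timely_flags(source_ok: bool, target_ok: bool, head_ok: bool,
                 inclusion_delay: int):
    """Which flags count as timely, given vote correctness and the inclusion delay."""
    return (source_ok and inclusion_delay <= 5,               # integer sqrt of SLOTS_PER_EPOCH
            target_ok and inclusion_delay <= SLOTS_PER_EPOCH,
            head_ok and inclusion_delay == 1)                  # head must land in the next block

def attestation_weight(source: bool, target: bool, head: bool) -> float:
    """Share of the total reward weight (out of 64) earned by the timely flags."""
    total = (TIMELY_SOURCE_WEIGHT * source + TIMELY_TARGET_WEIGHT * target
             + TIMELY_HEAD_WEIGHT * head)
    return total / WEIGHT_DENOMINATOR
```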
#### Iii-A2 Sync committees
Since the _Altair Hard Fork_[17], sync committees were added to help light clients validate blocks without fully downloading and processing the beacon chain. Each sync committee comprises 512 randomly selected validators who sign new block headers every slot and rotate every 256 epochs (8192 slots).
#### Iii-A3 Block proposals
In every slot, a single active validator has the chance to generate and propose a beacon block. When that moment arrives, the validator adds the needed metadata of the block along with as many aggregated attestations as possible. With an upper limit of 128 aggregations that can fit into a single beacon block, the CL reward that the proposer gets directly depends on the quantity and quality of the included attestations. A separate percentage of the reward generated by each not-previously-included attestation flag is set aside for the proposer of the block that includes it. Thus, the more new attestations we add to a block, the greater the reward it generates. The same happens with the sync committee rewards; the block proposer gets a percentage of the total reward that the included sync committee duties generate. For this reason, the block proposer is incentivized to include as many sync committee duties as possible.
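The proposer's cut can be illustrated with the Altair weight constants: the proposer receives PROPOSER_WEIGHT / (WEIGHT_DENOMINATOR − PROPOSER_WEIGHT), i.e. roughly 1/7, of the participation reward that the newly included flags generate for the attesters. The sketch below omits the spec's integer arithmetic and participation scaling.

```python
PROPOSER_WEIGHT = 8
WEIGHT_DENOMINATOR = 64

def proposer_share(attester_reward: float) -> float:
    """Reward set aside for the proposer per unit of newly included attestation reward."""
    return attester_reward * PROPOSER_WEIGHT / (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT)  # = 1/7
```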
### _Slot time ranges_
We have already introduced the time division of Ethereum CL's blockchain. However, as Figure 1 shows, there is still a smaller subdivision inside each slot. Although these numbers are just guidelines, following them is crucial to avoid generating confusion in the network. To achieve the best performance in the network, the following tasks need to be performed in order inside the slot:
1. Block proposers are expected to create and broadcast a new block at the beginning of the slot (second 0 of the slot). This gives \(4\) entire seconds for the message to reach the rest of the participants in the network. To do so, they have a time window of 4 seconds prior to the start of the slot to receive and group aggregated attestations from the previous \(32\) slots.
2. After the first \(4\) seconds of the slots, validators assigned to attest to it are expected to generate and broadcast their votes with their perception of the chain (attesting to the _source_, _target_, and _head_ they see). They share this vote with the corresponding beacon committee aggregators, and the spec assigns the same time range of \(4\) seconds to broadcast the message.
3. Finally, the committee aggregators must collect votes between seconds \(4\) and \(8\), producing the aggregated attestations at the \(8\)th second of the slot. In each committee, 16 validators are randomly selected to aggregate and broadcast the attestations. After that \(8\)th second, the network has 4 extra seconds so that the next block proposer has enough time to receive all the aggregations.
Keeping the correct timing between these tasks is crucial to avoid confusion in the network. For example, if a block proposer extends the creation of its block for \(10\) seconds, the block could be received later than 12 seconds since the start of the slot, risking being voted as a missed block. If a validator waits too long to generate and send the attestation, the aggregators might not include that vote in the same slot, increasing the inclusion delay and reducing the final reward.
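The intra-slot schedule above can be summarized by a simple mapping from the offset within the 12-second slot to the expected duty; this is only a didactic restatement of the spec's guideline timings.

```python
def duty_window(seconds_into_slot: float) -> str:
    """Which duty the 12-second slot schedule expects at a given offset."""
    if seconds_into_slot < 4:
        return "block proposal and propagation"
    if seconds_into_slot < 8:
        return "attestation production and propagation"
    if seconds_into_slot < 12:
        return "aggregation and propagation to the next proposer"
    return "past the slot boundary (block risks being treated as missed)"
```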
### _Support software_
Although blockchains keep most of the interactions and balances publicly available on-chain, in some occasions, that information (i.e., validator duties) has to be reconstructed from
Fig. 1: Slot time division between duties.
the locally stored beacon states in the clients. As this information is essential to quantify and qualify the performance of a validator and client, we have relied on a set of tools that helped us gather and index all the necessary information.
#### Iii-C1 Consensus rewards
To compute the rewards obtained by a validator relative to the Maximum Extractable Reward (MER) in an epoch, we measured each validator's attestation and sync committee rewards on each epoch. We relied on the attestation and sync committee reward models proposed by [9], using the same software, _GotEth_[18], an open-source tool that indexes the following items from a trusted beacon node:
* Validator individual duties.
* The quality of these duties (i.e., if validators missed a block proposal, the number of flags successfully voted).
* The max attestation and sync committee (if the validator during that epoch was inside a sync committee) reward that each validator could have achieved.
#### Iii-C2 Consensus block scorer
To measure the capabilities of the different clients to generate beacon blocks, we created a custom open-sourced tool based on the beacon node multiplexer _Vouch_[19]. This custom tool can be connected to as many beacon nodes as we want, indexing some metrics from the live network into a PostgreSQL database. The tool can communicate with these provided beacon nodes, requesting them to generate a block at the beginning of every slot. With the final intention of analyzing the content of each proposed block, the tool aggregates the number of the included new votes, sync aggregates, attester slashing, and proposer slashing, generating a synthetic scoring system that later on will be used to compare them. The score is derived directly from the beacon chain rewards formulas, removing the actual _Base Reward_ from the equation to make the score calculation faster. Furthermore, the tool can stream and record some events from each beacon node's API. For example, the tool tracks and timestamps every time it gets notified when a new block message is received. This allows us to compare the arrival time of messages such as new blocks.
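As an illustration of the kind of synthetic score the tool computes, the snippet below combines the counts it indexes per candidate block; the weights shown here are placeholders, not the exact coefficients derived from the beacon-chain reward formulas.

```python
def synthetic_block_score(new_votes: int, sync_bits: int,
                          attester_slashings: int, proposer_slashings: int,
                          w_vote: float = 1.0, w_sync: float = 0.5,
                          w_slashing: float = 100.0) -> float:
    """Illustrative score for a candidate block from the quantities indexed by the tool."""
    return (w_vote * new_votes
            + w_sync * sync_bits
            + w_slashing * (attester_slashings + proposer_slashings))
```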
## IV Evaluation
To measure the performance of each Ethereum node, we deployed a set of experiments that would allow us to compare the results fairly. To keep the experiments away from simulations, we relied on Ethereum's live networks to perform these experiments. We chose Ethereum's mainnet as a mature, stable, and reliable network. However, since activating a validator in mainnet requires a deposit of \(32\) ETH, we relied on Ethereum's Goerli testnet to activate \(3000\) validators with the help of the EF [20].
The correctness of the attestation flags significantly impacts validators' rewards. To investigate why a single location would achieve fewer rewards than the rest, Figure 2 shows the ratio of missed flags aggregated by location. We performed two main experiments, described in Sections IV-A and IV-B. The first evaluates the accomplishment of the validators' duties, and the second introduces the differences when composing block proposals. Overall, for both studies, the control clients were grouped in sets of five. Each of the main available clients was paired with an EL client, which became mandatory after the merge. Thus, we spawned five pairs of CL clients + Nethermind [21] in four to six locations. Each of the following sections further introduces its configuration details.
### _Validator performance_
The current PoS consensus mechanism drastically changed the reward system in Ethereum. The protocol prioritizes rewarding validators' stability and continuous duty compliance. The reward retrieved through attestations represents the \(61\)% of the gross reward a single validator can achieve. Thus, this first experiment compares the stability of performing attestations. To replicate the study and measure the impact of running clients in what we consider regions with more significant latency, Table I includes the configuration of nodes we designed to perform the study.
Most cloud service providers cannot offer the exact same hardware resources in all the regions that we wanted to test. Therefore, we used multiple cloud providers with different capabilities. To ensure that all clients had roughly the same hardware resources, we broke the set of clients into two different machines in locations such as Sydney, Singapore, and Toronto. The hardware limitations outside the EU and the US were noticeable.
#### Iv-A1 Reward comparison based on Goerli validators
The first part of the study compares the aggregated validator rewards per location between epochs \(157835\) and \(158835\), or in a human-readable format, between dates February 23rd 2023, and February 27th 2023. Intending to discover any hints of a possible underperformance of any specific region, Figure 3 shows the achieved reward by the aggregated validators per location out of their respective MER.
Considering the aggregation of the validators per location as our validator control pools, we observe that most nodes
achieved a similar reward when comparing it with their MER. It is expected to see drops in the achieved rewards when we compare validators in testnets with validators in mainnet. In this case, the Goerli testnet is publicly open for participants to collaborate without needing economically valuable collateral to ensure their participation. Thus, there is a higher ratio of participating nodes that are not properly maintained, wrongly configured, or directly disconnected. This ultimately impacts the stability of the network, which experiences more missed blocks, more reorganizations, and more missed flags, and thus bigger inclusion delays, lowering the MER compared with validators in mainnet.
With all this said, in Figure 3, we still find a slight variation around an average reward of \(80.2\)% for most locations. The most significant exceptions are recorded in Sydney and Warsaw, which fall to \(74.3\)% and \(76.8\)%, respectively. As explained before, the instance deployment differs in some locations. Nodes located in Frankfurt, London, and Warsaw share a similar infrastructure setup, in which the achieved reward is similar, varying between \(76\)% and \(82\)%. Nodes located in Singapore, Sydney, and Toronto also share a similar infrastructure setup. However, Sydney achieved \(10\)% less MER than the other two locations (\(84\)%). Since Sydney is the most remote location of the chosen setup, as most nodes concentrate in Europe and North America [13], a message is expected to have a slightly higher latency to reach nodes in such remote locations. Thus, this could show that network latency significantly impacts performance. It is remarkable that validators hosted in nodes with apparently "worse" connections, i.e., nodes in Singapore, extract more rewards than "better" connected ones, such as nodes in Frankfurt. This indicates that hardware also plays a vital role in achieving a more significant share of the achievable rewards.
Missed heads. The head flag inside the attestation points to the head slot of the canonical chain. To send a correct head attestation, the validator must point to the head root and provide this attestation with an inclusion delay of 1 block. Otherwise, the head attestation is counted as wrong. This explains why it is the most commonly failed flag among validators in the network. Failing the head attestation flag could mean that the node falls into reorgs more often or that the attestation is not sent in time and, thus, not included in the next block (inclusion delay of 1). Figure 2 shows the average flag failure per location, where we can see that the average head attestation flag failure rate is \(27.4\)%. The figure shows that nodes in London, Sydney, and Warsaw exceeded that average, with \(28.7\)%, \(29.9\)%, and \(30.8\)% of head flags missed, respectively.
Missed targets. Conversely, the target attestation flag is the least failed flag and the one that brings the most rewards out of all three flags. For this reason, we take a failed target flag to indicate that the node is most likely out of sync and has been for a while. Figure 2 shows a similar pattern for the target flags, with nodes in Frankfurt, Toronto, and Singapore missing them between \(5\)% and \(6.3\)% of the time. On the other hand, London, Warsaw, and Sydney nodes stay above that average, reaching missed ratios of \(8\)%, \(8.9\)%, and \(10.2\)%, respectively. It is clear, then, by combining the numbers of missed head and target flags, that despite Sydney sharing the same hardware with Toronto and Singapore, it falls out of sync almost twice as often.
#### Iv-A2 Proposer duties
Validators also earn rewards from proposing blocks when they are randomly chosen. Block rewards are sporadic but very high, so it is frustrating for validator owners to miss the chance of gaining such a high reward with a straightforward duty. When proposing a new block, the node must be fully synced with the network and follow the chain head without delays. Not doing so could cause the validator to miss the block proposal (or do it very late), missing out on the substantial block reward it generates.
As block proposers are randomly chosen at every epoch, Figure 4 shows the aggregated ratio of missed proposer duties between locations. The figure shows that pool nodes in each location failed an average of \(1.66\)% of block proposal duties, with Sydney nodes failing up to \(4\)% of them, while Frankfurt, Toronto, and Singapore didn't miss any proposals. Once again, it is most likely that Sydney nodes tend to fall into an out-of-sync state more often and, therefore, cannot perform their duties in time.
Chain reorganizations on clients. Nodes behave differently depending on the hardware they run on and the location where they are placed. However, validators' achieved rewards or the ratio of missed duties are not the only ways to measure the performance of a node. A high ratio of missed target flags when attesting is the first indicator of a stability problem on a particular node, or in this case, in a location. It is hard for a validator to perform its duties correctly when the underlying node is not fully
synced with the chain. Thus, we can interpret that nodes in Sydney, as a representation of less well-connected nodes, tend to lose synchronization more often. We have chosen reorgs (chain reorganizations) as a way of measuring the stability of a node. A chain reorg represents having to drop a number of blocks (with their states) and sync the canonical version of them because the node was on a non-valid variation of the chain. As the late arrival of messages can generate those minor forks, Figure 5 shows the aggregation of reorgs registered in each location. In the graph, we can observe that Sydney has the biggest reorg average of the six locations, with eight more registered events than the all-location average of \(102\). While this is not an extremely large difference, the impact of a reorg also depends on the number of blocks that had to be dropped and resynced, which may partly explain the differences in duty accomplishment. Unfortunately, the node does not expose this information to us. We will discuss this in detail in the following section.
### _Block scores_
Given the major differences we spotted between the arrival times of the different instances deployed in _mainnet_ (the most stable network in the Ethereum ecosystem), we wanted to study the impact of this arrival latency on the capabilities of the clients to compose blocks. Ordering the reward sources of a validator in the Consensus Layer by magnitude, block-proposal rewards come second after attestations, accounting for \(7.6\)% of the total reward. Thus, we decided to benchmark, with a synthetic block score (III-C2), the differences that each client and each location produce. We experienced similar problems to those of the validator rewards study; fitting five CL clients with their respective five Nethermind clients on the same or similar machines was unmanageable. Thus, the clients and the machines were organized and deployed as Table III summarizes. The data displayed and analyzed in the following sections belongs to the slot range \(5760722\) to \(5888722\), corresponding to the period from February 9th, 2023, to February 27th, 2023.
#### Iv-B1 Beacon block generation differences
Figure 6 shows the aggregated score of each block across the clients of each location, where we observe only small differences. Once again, nodes located in Helsinki clearly dominate, standing out over nodes in Sydney, London, and Warsaw, which achieved \(1.37\)%, \(4.29\)%, and \(5.16\)% lower scores, respectively. Although each of the locations should have enough time to get the same attestation aggregations in the course of the last \(4\) seconds of a slot, different factors could produce this difference in the average score of the blocks:
Message propagation latency. Higher latencies when receiving messages clearly limit the number of aggregations that can be included in a block. Figure 7 displays the Cumulative Distribution Function (CDF) of the message arrival time in each location, where the Y axis represents the normalized percentiles between \(0\) and \(1\), and the X axis the arrival time in seconds. We can read the figure as follows: the \(50\)th percentile of blocks (\(0.5\) on the Y axis) in Helsinki arrived in \(1.44\) seconds or less. We can see that, despite London having the second-best median arrival time, \(2.18\) seconds, it has one of the worst \(90\)th percentiles, at \(4.15\) seconds. The large tail of block arrivals beyond \(4\) seconds represents \(10\)% of the total tracked messages and partially explains the block score differences between locations.
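The percentile read-outs quoted from the CDF can be reproduced from the indexed arrival times (seconds after the start of the slot) with a one-liner; the example values below are hypothetical.

```python
import numpy as np

def arrival_percentiles(arrival_seconds, percentiles=(50, 90)):
    """Percentiles of the block arrival-time distribution, in seconds after slot start."""
    return {p: float(np.percentile(arrival_seconds, p)) for p in percentiles}

# Hypothetical example: arrival_percentiles([1.2, 1.5, 2.3, 4.4]) -> {50: 1.9, 90: 3.77}
```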
The aggregation of more new votes in a block. Receiving messages later means including fewer new votes in a block, which are the ones producing the rewards for the block proposer. Figure 8 displays the average number of new votes included at each location, together with the average correctness of those votes. The figure clearly shows the dominance of nodes in Helsinki, which not only aggregate more new votes but also have, on average, more correct attestation flags.
Desynchronization of beacon nodes. Desynchronization is, despite our best efforts to avoid it, one
of the major factors we found to explain the difference in average score between locations. Of course, a beacon node cannot process new messages and generate new blocks if it falls out of sync with the head of the chain. Several events could cause this, such as big local reorgs caused by higher latencies or by hardware limitations (a slow disk for pruning states or insufficient CPU to validate messages on time). Figure 9 displays the percentage of slots each node was down out of the number of slots we measured. In the picture, we can see that Helsinki was barely ever unsynchronized. We can also see how Lodestar in London affected the previous averages, spending \(37.74\)% of the measured slots out of sync. The results are more puzzling when we compare London with Warsaw: despite having similar hardware, Warsaw was stable and in harmony with the rest of the clients at around \(5.25\)% downtime, and despite this, it performs worse than London in both correct flags and block score.
#### Iv-B2 Latencies on block arrival
We have already introduced the importance of message broadcasting latencies within the slot time range. Receiving messages such as blocks or aggregations earlier might be critical to ensuring that validator duties are correctly achieved. As one would imagine, different locations with different connectivity to the rest of the network can produce different perspectives of the network. Thus, if the defined \(4\) seconds to distribute a message weren't enough, we would expect regions with higher latencies to perform worse than better-connected ones. To corroborate our hypothesis, we tracked the arrival of block messages to each client in the four different mainnet instances. The tool first subscribes to the beacon node's API to stream the arrival of new blocks, indexing each event with the notification timestamp. From the difference between the local timestamp and the time the slot started (second "0" of the slot, when block proposers should publish the block), we aggregated the arrival times, whose distribution is shown in the following figure. Figure 10 shows how clients in Helsinki received the block messages significantly sooner. We would expect from previous experience that the European area has better connectivity in general terms, and this graph confirms it: on average, nodes in Helsinki received blocks \(1.06\) seconds sooner than nodes in Sydney. However, even though we make no distinction across clients (all clients were aggregated) to have a fair comparison between the locations, the differences between London, Sydney, and Warsaw are not as remarkable as the arrival times in Helsinki. With average block arrival times of \(2.60\), \(2.68\), and \(2.53\) seconds, respectively, the figure suggests that the major differences between the instances could originate from the distinct machines chosen per location.
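Concretely, the per-block latency we plot is the difference between the notification timestamp and second 0 of the corresponding slot; a minimal version of that computation (assuming mainnet's genesis time) is shown below.

```python
SECONDS_PER_SLOT = 12
GENESIS_TIME = 1606824023  # beacon-chain mainnet genesis (Unix seconds)

def block_arrival_delay(slot: int, notified_at_unix: float) -> float:
    """Seconds between the start of `slot` (when the proposer should publish the block)
    and the moment our tool was notified of its arrival."""
    return notified_at_unix - (GENESIS_TIME + slot * SECONDS_PER_SLOT)
```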
Although we tried to have the most similar machines in each location, cloud service providers couldn't offer the same hardware tier across the four locations. The machine in Helsinki had a clear dominance of resources: even though it had to share the disk among ten different clients (five CL + five EL), it was a powerful bare-metal machine with \(32\) CPU cores and \(128\)GB of memory. Limited CPU resources could become a bottleneck when a client receives and validates a block message, as message arrivals generate computational spikes every \(4\) seconds. Thus, to some degree, a CPU bottleneck can increase the measured latency of processing blocks, including the time at which our tool got notified. Of course, different clients mean different implementations. The heat map in Figure 10 breaks down the block arrival times per client and location, showing some differences across the clients. Without being highly dispersed among the clients, it is clear that some clients received and processed blocks faster than others during our study. Despite showing the same location-wise distribution as the previous figure, Lodestar is on the order of \(300\)ms slower in receiving and processing new blocks.
There are many reasons that could cause a later message arrival. On the one hand, we have the number of connected peers: Lodestar and Prysm stay at \(50\) to \(55\) simultaneously connected peers, Lighthouse and Teku average \(80\) to \(100\) connections, and Nimbus exceeds the rest with \(160\) direct peers. On the other hand, clients must allocate more computational power upon the arrival of messages to process and validate them and to update the local beacon state. For this reason, the more simultaneous connections a node has, the more messages it needs to process. This normally generates CPU usage peaks every \(4\) seconds, which, like higher latencies, has a direct impact on the performance of the client.
Fig. 8: Number of new votes and their correct flags on each location. Fig. 9: Percentage of slots each beacon node was down. Fig. 10: Average block arrival time per client and location.
## V Discussion
This paper presents a distinct methodology that can be used to analyze the impact of latency on the performance of Ethereum CL nodes located in different geographical locations. In the presented results, we have identified that Ethereum CL's default \(4\) seconds for broadcasting gives enough margin to propagate and receive all the necessary messages (i.e., sync committee attestations and block proposals) in most geographical locations, at least if the clients are running on instances that meet minimum hardware specifications.
We have empirically demonstrated that some regions, such as Oceania and Southeast Asia, have higher latency distributions when receiving block messages for the first time, with some locations receiving 10% of the messages beyond the \(4\)-second mark. This makes nodes more likely to lose the correct head of the chain, leading them to reorg their local chain more often and ultimately to fail duties or perform them more poorly because the node is not fully operational. This gets mirrored when comparing the rewards achieved between the available locations, where Sydney nodes earned \(10\)% fewer rewards out of the MER than the rest. We have demonstrated that even though the hardware requirements to participate in post-merge Ethereum are substantially smaller than under its predecessor PoW consensus, there are some minimum requirements to run both EL and CL clients without a hardware bottleneck.
Furthermore, the paper compares the networking performance of the different available Ethereum CL clients. We have demonstrated that under optimal networking and hardware conditions (i.e., the instance in Helsinki), there are barely any differences between clients, i.e., a similar downtime or out-of-sync time. We have identified that hardware limitations directly increase the latency (mostly from message validation). Thus, nodes face more downtime as they might get out of sync, or could potentially add fewer new votes to their blocks.
## VI Conclusions and Future work
This paper presents a new methodology to quantify, qualify, and compare the performance of Ethereum beacon nodes. We have demonstrated that despite all the clients performing similarly under optimal hardware and networking conditions, variations or limitations in those conditions can severely impact the stability of the beacon nodes and, thus, the performance of the hosted validators. We can conclude the study by stating that there is indeed a performance impact related to the connectivity of a node, but it is only significant when the hardware is not properly dimensioned. Given the clear difference in block arrival latencies across locations, nodes further away from the core of the network can see their stability and performance reduced, even reaching critical stability problems if the hardware is not slightly over-dimensioned. In future work, we aim to explore a real-time model able to aggregate and compare all the presented parameters to monitor and alert on the performance of a client with or without a validator.
## VII Acknowledgements
This work has been supported by the Lido Ecosystem Grant Organization (LEGO), the Ethereum Foundation under the Research Grant FY21-0356, and Protocol Labs under its Ph.D. Fellowship Program FY22-P2P. We want to thank the researchers from Attestant.io for helping with the necessary infrastructure and discussions. Also, to Paristosh from the Ethereum Foundation, for his implication and help in activating the Goerli validators. In particular, Izzy and Alvaro Revauleta for their constructive feedback on this study.
|
2301.00211 | Asymptotically autonomous robustness in Probability of non-autonomous
random attractors for stochastic convective Brinkman-Forchheimer equations on
$\mathbb{R}^3$ | This article is concerned with the \emph{asymptotically autonomous
robustness} (almost surely and in probability) of non-autonomous random
attractors for two stochastic versions of 3D convective Brinkman-Forchheimer
(CBF) equations defined on the whole space $\mathbb{R}^3$:
$$\frac{\partial\boldsymbol{v}}{\partial t}-\mu
\Delta\boldsymbol{v}+(\boldsymbol{v}\cdot\nabla)\boldsymbol{v}
+\alpha\boldsymbol{v}+ \beta|\boldsymbol{v}|^{r-1}\boldsymbol{v}+\nabla
p=\boldsymbol{f}(t)+``\mbox{stochastic terms}",\quad
\nabla\cdot\boldsymbol{v}=0,$$
with initial and boundary vanishing conditions, where $\mu,\alpha,\beta >0$,
$r\geq1$ and $\boldsymbol{f}(\cdot)$ is a given time-dependent external force
field. By the asymptotically autonomous robustness of a non-autonomous random
attractor $ \mathscr{A}=\{ \mathscr{A}(\tau,\omega): \tau\in\mathbb{R},
\omega\in\Omega\}$ we mean its time-section $\mathscr{A}(\tau,\omega)$ is
robust to a time-independent random set as time $\tau$ tends to negative
infinity according to the Hausdorff semi-distance of the underlying space. Our
goal is to study this topic, almost surely and in probability, for the
non-autonomous 3D CBF equations when the stochastic term is a linear
multiplicative or additive noise, and the time-dependent forcing converges
towards a time-independent function. Our main results contain two cases: i)
$r\in(3,\infty)$ with any $\beta,\mu>0$; ii) $r=3$ with $2\beta\mu\geq1$. The
main procedure to achieve our goal is how to justify that the usual pullback
asymptotic compactness of the solution operators is uniform on some
\emph{uniformly} tempered universes over an \emph{infinite} time-interval
$(-\infty,\tau]$. This can be done by a method based on Kuratowski's measure of
noncompactness. | Kush Kinra, Manil T. Mohan, Renhai Wang | 2022-12-31T14:57:09Z | http://arxiv.org/abs/2301.00211v2 | Asymptotically autonomous robustness in probability of non-autonomous random attractors for stochastic convective Brinkman-Forchheimer equations on \(\mathbb{R}^{d}\)
###### Abstract.
This article is concerned with the _asymptotically autonomous robustness_ (almost surely and in probability) of non-autonomous random attractors for two stochastic versions of convective Brinkman-Forchheimer (CBF) equations defined on the whole space \(\mathbb{R}^{d}\):
\[\frac{\partial\mathbf{v}}{\partial t}-\mu\Delta\mathbf{v}+(\mathbf{v}\cdot\nabla)\mathbf{v}+ \alpha\mathbf{v}+\beta|\mathbf{v}|^{r-1}\mathbf{v}+\nabla p=\mathbf{f}(t)+\text{``stochastic terms''},\quad\nabla\cdot\mathbf{v}=0,\]
with initial and boundary vanishing conditions, where \(d=2,3\), \(\mu,\alpha,\beta>0\), \(r\geq 1\) and \(\mathbf{f}(t)\) is a given time-dependent external force field. By the asymptotically autonomous robustness of a non-autonomous random attractor \(\mathscr{A}=\{\mathscr{A}(\tau,\omega):\tau\in\mathbb{R},\omega\in\Omega\}\) we mean that its time-section \(\mathscr{A}(\tau,\omega)\) is robust to a time-independent random set as time \(\tau\) tends to negative infinity according to the Hausdorff semi-distance of the underlying space. Our goal is to study this topic, almost surely and in probability, for the non-autonomous CBF equations when the stochastic term is a linear multiplicative or additive noise, and the time-dependent forcing converges towards a time-independent function. Our main results contain three cases: i) \(d=2\) and \(r\in\{1\}\cup[2,\infty)\); ii) \(d=3\) and \(r\in(3,\infty)\); iii) \(d=3\), \(r=3\) and \(2\beta\mu\geq 1\). The main step towards our goal is to justify that the usual pullback asymptotic compactness of the solution operators is uniform on some _uniformly_ tempered universes over an _infinite_ time-interval \((-\infty,\tau]\). This can be done by a method based on Kuratowski's measure of noncompactness, by showing the backward uniform "tail-smallness" and "flattening-property" of the solutions over \((-\infty,\tau]\) in order to overcome the lack of compact Sobolev embeddings on unbounded domains. Several rigorous calculations dealing with the pressure term \(p\) and the fast-growing term \(\beta|\mathbf{v}|^{r-1}\mathbf{v}\) play a key role in the whole analysis. When \(\alpha=\beta=0\), the present result can be viewed as a generalization of the authors' recent work [73] for the standard Navier-Stokes equations on unbounded Poincare domains.
Key words and phrases:Asymptotically autonomous robustness, pullback random attractor, stochastic convective Brinkman-Forchheimer equations, backward uniform-tail estimate, backward flattening-property 2020 Mathematics Subject Classification: Primary 37L55; Secondary 37B55, 35B41, 35B40
## 1. Introduction
### The model
In this article, we consider a stochastic fluid dynamic model, the convective Brinkman-Forchheimer (CBF) equations, driven simultaneously by stochastic and non-autonomous forcing and defined on the whole space \(\mathbb{R}^{d}\) (\(d=2,3\)):
(1.1) \[\left\{\begin{aligned} \frac{\partial\mathbf{v}}{\partial t}-\mu\Delta\mathbf{v}+(\mathbf{v}\cdot\nabla)\mathbf{v}+\alpha\mathbf{v}+\beta|\mathbf{v}|^{r-1}\mathbf{v}+\nabla p&=\mathbf{f}+S(\mathbf{v})\circ\frac{\mathrm{d}\mathrm{W}(t)}{\mathrm{d}t},\ \text{in}\ \ \mathbb{R}^{d}\times(\tau,\infty),\\ \nabla\cdot\mathbf{v}&=0,\ \text{in}\ \ \mathbb{R}^{d}\times(\tau,\infty),\end{aligned}\right.\]
### Background
For results on the existence and robustness of random attractors for the stochastic Navier-Stokes and CBF
equations, see [6, 14, 15, 29, 37, 38, 39, 64, 73, 74, 75]. As per the existing literature, the existence of random attractors for stochastic systems is based on a transformation which converts the stochastic system into a pathwise deterministic system. This transformation is available in the literature only when the noise is either linear multiplicative or additive, see [6, 27, 37, 43, 64]. In order to deal with a nonlinear diffusion term in the noise, the concept of mean random attractors was introduced in [66] and applied to stochastic Navier-Stokes and CBF equations (in the Ito sense) with Lipschitz nonlinear diffusion term in [67] and [36], respectively; see [68, 70] for other physically relevant stochastic models. Another approach in the direction of random attractors, when the diffusion term is nonlinear, is the Wong-Zakai approximation of pathwise random attractors, see [29, 30, 38, 75] and the references therein.
### Motivation, assumptions and main results
In general, a non-autonomous random attractor carries the form \(\mathscr{A}_{\varsigma}=\{\mathscr{A}_{\varsigma}(\tau,\omega):\tau\in\mathbb{ R},\ \omega\in\Omega\}\), where \(\varsigma\) stands for some external perturbation parameter. In the literature, the robustness of pullback random attractors of stochastic CBF equations has been established in [37, 38] with respect to the external parameter \(\varsigma\). For the robustness with respect to external/internal parameters of pullback random attractors of 2D stochastic Navier-Stokes equations, we refer interested readers to the works [22, 29, 30, 39, 73] and the references therein. Currently, however, the question of robustness of pullback random attractors of stochastic CBF equations defined on _unbounded_ domains with respect to the _internal_ parameter \(\tau\) is still unsolved.
As one would expect, if the time-dependent forcing term \(\mathbf{f}(x,t)\) converges to some time-independent forcing term \(\mathbf{f}_{\infty}(x)\) in some sense, the non-autonomous random dynamics of the system (1.1) becomes more and more autonomous. Our main motivation is to examine the asymptotically autonomous robustness of pullback random attractors of (1.1) for the following three cases of the parameters in (1.1).
**Assumption 1.1**.: \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\) _converges to \(\mathbf{f}_{\infty}\in\mathbb{L}^{2}(\mathbb{R}^{d}):\)_
\[\lim_{\tau\to-\infty}\int_{-\infty}^{\tau}\|\mathbf{f}(t)-\mathbf{f}_{\infty}\|^{2}_{ \mathrm{L}^{2}(\mathbb{R}^{d})}\mathrm{d}t=0. \tag{1.2}\]
_Moreover, \(\mathbf{f}(\cdot,\cdot)\) satisfies_
\[\sup_{s\leq\tau}\int_{-\infty}^{s}e^{\kappa(t-s)}\|\mathbf{f}(t)\|^{2}_{\mathrm{L} ^{1}(\mathbb{R}^{d})}\mathrm{d}t<+\infty,\ \ \ \ \ \forall\ \ \kappa>0\text{, }\tau\in\mathbb{R}. \tag{1.3}\]
**Theorem 1.2** (Multiplicative noise case).: _Let Assumption 1.1 be satisfied. Then, for all the cases given in Table 1, the non-autonomous RDS \(\Phi\) generated by (1.1) with \(S(\mathbf{v})=\mathbf{v}\) has a unique pullback
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Cases** & \(d\) & \(r\) & conditions on \(\mu\) \& \(\beta\) \\ \hline
**I** & \(d=2\) & \(r\in\{1\}\cup[2,\infty)\) & for any \(\mu>0\) and \(\beta>0\) \\ \hline
**II** & \(d=3\) & \(r\in(3,\infty)\) & for any \(\mu>0\) and \(\beta>0\) \\ \hline
**III** & \(d=3\) & \(r=3\) & for \(\mu>0\) and \(\beta>0\) with \(2\beta\mu\geq 1\) \\ \hline \end{tabular}
\end{table}
Table 1. Values of \(\mu,\beta\) and \(r\) for \(d=2,3\).
random attractor \(\mathscr{A}=\{\mathscr{A}(\tau,\omega):\tau\in\mathbb{R},\omega\in\Omega\}\) such that \(\bigcup\limits_{s\in(-\infty,\tau]}\mathscr{A}(s,\omega)\) is precompact in \(\mathbb{H}\) (the definition of \(\mathbb{H}\) is given below, see Section 2.1) and \(\lim\limits_{t\to+\infty}e^{-\gamma t}\sup\limits_{s\in(-\infty,\tau]}\| \mathscr{A}(s-t,\vartheta_{-t}\omega)\|_{\mathbb{H}}=0,\) for any \(\gamma>0\), \(\tau\in\mathbb{R}\) and \(\omega\in\Omega\). In addition, the time-section \(\mathscr{A}(\tau,\omega)\) is asymptotically autonomous robust in \(\mathbb{H}\), and the limiting set of \(\mathscr{A}(\tau,\omega)\) as \(\tau\to-\infty\) is just determined by the random attractor \(\mathscr{A}_{\infty}=\{\mathscr{A}(\omega):\omega\in\Omega\}\) of stochastic CBF equations (1.1) with the autonomous forcing \(\boldsymbol{f}_{\infty}\), that is,_
\[\lim\limits_{\tau\to-\infty}\mathrm{dist}_{\mathbb{H}}(\mathscr{A}(\tau,\omega ),\mathscr{A}_{\infty}(\omega))=0,\ \mathbb{P}\text{-a.s.}\ \omega\in\Omega. \tag{1.4}\]
_Moreover, the asymptotically autonomous robustness in probability is also justified:_
\[\lim\limits_{\tau\to-\infty}\mathbb{P}\Big{(}\omega\in\Omega:\mathrm{dist}_{ \mathbb{H}}(\mathscr{A}(\tau,\omega),\mathscr{A}_{\infty}(\omega))\geq\delta \Big{)}\!=0,\quad\forall\ \delta>0. \tag{1.5}\]
_In addition, for any \(\varepsilon>0\) and sequence \(\tau_{n}\to-\infty\), there exists \(\Omega_{\varepsilon}\in\mathscr{F}\) with \(\mathbb{P}(\Omega_{\varepsilon})>1-\varepsilon\) such that_
\[\lim\limits_{n\to\infty}\sup\limits_{\omega\in\Omega_{\varepsilon}}\mathrm{ dist}_{\mathbb{H}}(\mathscr{A}(\tau_{n},\omega),\mathscr{A}_{\infty}(\omega))=0. \tag{1.6}\]
**Theorem 1.3** (Additive noise case).: _Under the Assumption 1.1 and for all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)), all results in Theorem 1.2 hold for the non-autonomous RDS generated by (1.1) with \(S(\boldsymbol{v})=\boldsymbol{g}\) with \(\boldsymbol{g}\in\mathrm{D}(\mathrm{A})\), where \(\mathrm{D}(\mathrm{A})\) is the domain of the Stokes operator \(\mathrm{A}\) defined in (2.1)._
**Remark 1.4**.: _(i) An example satisfying Assumption 1.1 is \(\boldsymbol{f}(x,t)=\boldsymbol{f}_{\infty}(x)e^{t}+\boldsymbol{f}_{\infty}(x)\) with \(\boldsymbol{f}_{\infty}\in\mathbb{L}^{2}(\mathbb{R}^{d})\cap\mathbb{L}^{1}(\mathbb{R}^{d})\); a short verification is given after this remark._
_(ii) Assumption 1.1 implies the following conditions (cf. [9]):_
\[\text{Uniform integrability:}\quad\sup\limits_{s\leq\tau}\int_{-\infty}^{s}e^{\kappa(\xi-s)}\|\boldsymbol{f}(\xi)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}\mathrm{d}\xi<+\infty,\ \forall\ \kappa>0\text{, }\tau\in\mathbb{R}, \tag{1.7}\] \[\text{Uniform tails-smallness:}\quad\lim\limits_{k\to\infty}\sup\limits_{s\leq\tau}\int_{-\infty}^{s}e^{\kappa(\xi-s)}\int_{|x|\geq k}|\boldsymbol{f}(x,\xi)|^{2}\mathrm{d}x\mathrm{d}\xi=0,\ \forall\ \kappa>0\text{, }\tau\in\mathbb{R}. \tag{1.8}\]
_(iii) Assumption 1.1 is the only hypothesis imposed on \(\boldsymbol{f}\) throughout the paper._
_(iv) In Poincare domains (bounded or unbounded), the condition (1.3) can be relaxed (see [73])._
_(v) Due to technical difficulties, we are not able to establish the present results for \(d=2\) and \(r\in(1,2)\)._
_(vi) In the additive noise case, we do not need to assume, as in [73, Hypothesis 1.3], that there exists a constant \(\aleph>0\) such that \(\boldsymbol{g}\in\mathrm{D}(\mathrm{A})\) satisfies_
\[\bigg{|}\sum\limits_{i,j=1}^{d}\int_{\mathbb{R}^{d}}v_{i}(x)\frac{\partial g_{ j}(x)}{\partial x_{i}}v_{j}(x)\mathrm{d}x\bigg{|}\leq\aleph\|\boldsymbol{v}\|_{ \mathbb{L}^{2}(\mathbb{R}^{d})}^{2},\ \ \forall\ \boldsymbol{v}\in\mathbb{L}^{2}(\mathbb{R}^{d}). \tag{1.9}\]
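For the reader's convenience, we record a short verification (a routine computation, included only as an illustration) that the example in Remark 1.4 (i) satisfies Assumption 1.1. Since \(\boldsymbol{f}(t)-\boldsymbol{f}_{\infty}=\boldsymbol{f}_{\infty}e^{t}\), we have
\[\int_{-\infty}^{\tau}\|\boldsymbol{f}(t)-\boldsymbol{f}_{\infty}\|^{2}_{\mathrm{L}^{2}(\mathbb{R}^{d})}\mathrm{d}t=\|\boldsymbol{f}_{\infty}\|^{2}_{\mathbb{L}^{2}(\mathbb{R}^{d})}\int_{-\infty}^{\tau}e^{2t}\mathrm{d}t=\frac{e^{2\tau}}{2}\|\boldsymbol{f}_{\infty}\|^{2}_{\mathbb{L}^{2}(\mathbb{R}^{d})}\to 0\ \text{ as }\ \tau\to-\infty,\]
so that (1.2) holds. Moreover, since \(\|\boldsymbol{f}(t)\|_{\mathrm{L}^{1}(\mathbb{R}^{d})}\leq(e^{t}+1)\|\boldsymbol{f}_{\infty}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}\) and \((e^{t}+1)^{2}\leq 2(e^{2t}+1)\),
\[\sup_{s\leq\tau}\int_{-\infty}^{s}e^{\kappa(t-s)}\|\boldsymbol{f}(t)\|^{2}_{\mathrm{L}^{1}(\mathbb{R}^{d})}\mathrm{d}t\leq 2\|\boldsymbol{f}_{\infty}\|^{2}_{\mathbb{L}^{1}(\mathbb{R}^{d})}\sup_{s\leq\tau}\int_{-\infty}^{s}e^{\kappa(t-s)}\big(e^{2t}+1\big)\mathrm{d}t\leq 2\|\boldsymbol{f}_{\infty}\|^{2}_{\mathbb{L}^{1}(\mathbb{R}^{d})}\bigg(\frac{e^{2\tau}}{\kappa+2}+\frac{1}{\kappa}\bigg)<+\infty,\]
which verifies (1.3) for every \(\kappa>0\) and \(\tau\in\mathbb{R}\).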
### Novelties, difficulties and approaches
In order to prove Theorems 1.2 and 1.3, the uniform precompactness of \(\bigcup\limits_{s\in(-\infty,\tau]}\mathscr{A}(s,\omega)\) in \(\mathbb{H}\) is a pivotal point. The well-known abstract theory of pullback random attractors from [65] tells us that the pullback asymptotic compactness of \(\Phi\) gives the compactness of \(\mathscr{A}(\tau,\omega)\) for each \(\tau\in\mathbb{R}\), but it cannot provide the precompactness of \(\bigcup\limits_{s\in(-\infty,\tau]}\mathscr{A}(s,\omega)\) in \(\mathbb{H}\), since \((-\infty,\tau]\) is an _infinite_ interval. However, motivated by the ideas of [65], this can be done if one is able to show that the
usual pullback asymptotic compactness of \(\Phi\) is uniform with respect to a uniformly tempered universe (see (2.13)) over \((-\infty,\tau]\).
Note that in the bounded domain case, one can obtain the _uniform_ pullback asymptotic compactness of \(\Phi\) over \((-\infty,\tau]\) via a compact uniform pullback absorbing set by using compact Sobolev embeddings (see [39, Theorem 3.10]). The same idea has been used for several stochastic models on bounded domains, such as Navier-Stokes, \(g\)-Navier-Stokes, magneto-hydrodynamics and Brinkman-Forchheimer equations, see [39, 44, 72, 76]. Due to the lack of compact Sobolev embeddings on unbounded domains, as considered in the present work, demonstrating such _backward uniform_ pullback asymptotic compactness is harder than in the bounded domain case. We mention that the criterion based on _Kuratowski's measure of noncompactness_ ([41, 55]) is useful to resolve the difficulty created by the noncompactness of Sobolev embeddings on unbounded domains (cf. [42, Lemma 2.7]). In order to apply this criterion, we use the idea of _uniform tail-estimates_ introduced by Wang [62] and the _flattening-property_ introduced by Ma et al. [46] (deterministic case) and Kloeden and Langa [33] (random case). Using a cut-off technique, we show that the solutions of (1.1) are sufficiently small in \(\mathbb{L}^{2}(\mathcal{O}_{k}^{c})\) uniformly over \((-\infty,\tau]\), when \(k\) is large enough, where \(\mathcal{O}_{k}=\{x\in\mathbb{R}^{d}:|x|\leq k\}\) and \(\mathcal{O}_{k}^{c}=\mathbb{R}^{d}\setminus\mathcal{O}_{k}\); that is, we obtain backward uniform tail-estimates for the solutions. Furthermore, using the same cut-off function, we also establish the backward flattening-property of the solutions.
Note that the parabolic and hyperbolic stochastic models considered in the works [12, 13, 9, 19, 42, 59, 62, 69], etc., do not contain the pressure term \(p\). But some physically relevant models, such as the Navier-Stokes equations (cf. [73]), the Brinkman-Forchheimer equations (cf. [76]) and many others, do contain the pressure term \(p\). While proving the backward uniform tail-estimates as well as the backward flattening-property of the solutions, we take the inner product with a suitable cut-off of the solution, and the pressure term \(p\) can then no longer be eliminated by means of the divergence free condition (or incompressibility condition) of the solutions of (1.1). However, by taking the divergence in (1.1) formally and using the divergence free condition, we end up with the rigorous expression of the pressure term
\[p=(-\Delta)^{-1}\bigg{[}\sum_{i,j=1}^{d}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}(v_{i}v_{j})+\beta\nabla\cdot\{|\boldsymbol{v}|^{r-1}\boldsymbol{v}\}-\nabla\cdot\boldsymbol{f}\bigg{]}, \tag{1.10}\]
in the weak sense, which is the most difficult term to handle in an appropriate way. It is then possible to obtain the backward uniform tail-estimates as well as the backward flattening-property with the help of the Gagliardo-Nirenberg inequality ([54, Theorem 1]), which is used very carefully in each case, together with the Holder, interpolation and Young inequalities, to derive the appropriate estimates (cf. Lemmas 3.8-3.9 and 4.7-4.8).
It is worth mentioning here that we are able to prove the backward uniform tail-estimate as well as the backward flattening-property for \(d=2\) with \(r\in\{1\}\cup[2,\infty)\) and for \(d=3\) with \(r\in[3,\infty)\). However, establishing the backward uniform tail-estimate and the backward flattening-property for \(d=2\) with \(1<r<2\) on the whole space remains unresolved (that is, the difficulty in estimating the pressure term (1.10) on the whole space is not resolved for \(d=2\) with \(1<r<2\)). Note that one can estimate the pressure term (1.10) for \(d=2\) with \(1<r<2\) on unbounded Poincare domains \(\mathcal{O}\) by using **elliptic regularity** (cf. [35, Lemma 6.5]). As a result of these backward uniform tail-estimates and the backward flattening-property of the solutions to (1.1), the backward uniform pullback asymptotic compactness of \(\Phi\) in \(\mathbb{H}\) follows. The widespread idea of energy equations introduced in [3] can be used to overcome the noncompactness of Sobolev embeddings on unbounded domains, see the works [6, 7, 29, 38, 63, 64, 71], etc., and many others. We remark that we are currently unable to use the idea of energy equations to prove the backward uniform pullback asymptotic compactness of \(\Phi\) in \(\mathbb{H}\), since \((-\infty,\tau]\) is an infinite time-interval.
Since we have to consider the uniformly tempered universe to prove the backward uniform pullback asymptotic compactness of \(\Phi\), we shall establish the measurability of the uniformly compact attractor. This is not straightforward compared with the usual case, since the radius of the uniform pullback absorbing set is taken as a supremum over the uncountable set \((-\infty,\tau]\) (see Proposition 3.7). In order to overcome this difficulty, we first observe that the measurability of the usual random attractor is known in the literature, see for example [6, 7, 29, 65], etc., and then prove that the uniformly compact attractor coincides with the usual random attractor. This idea has been successfully used by the authors in [9, 72, 73], etc., for different stochastic models.
### Advantages of the damping term
CBF equations are also known as damped Navier-Stokes equations (cf. [32]). The damping arises from the resistance to the motion of the flow or from friction effects. Due to the presence of the damping term \(\alpha\boldsymbol{v}+\beta|\boldsymbol{v}|^{r-1}\boldsymbol{v}\), we are able to establish better results than those available for the Navier-Stokes equations. The existence of global as well as random attractors for the Navier-Stokes equations on the whole space or on general unbounded domains is an interesting and challenging open problem. In the literature, for the Navier-Stokes equations, these types of results are available on unbounded Poincare domains only (cf. [39, 73]). For 2D Navier-Stokes equations forced by a linear multiplicative noise, we refer to [40]. For the stochastic CBF equations (1.1), we consider the whole space, where the linear damping term \(\alpha\boldsymbol{v}\) plays a crucial role in establishing the required results.
### Outline of the article
In the next section, we provide the necessary function spaces and abstract formulation of (1.1), and discuss the Ornstein-Uhlenbeck process with its properties. In Section 3, we prove Theorem 1.2 for the system (1.1) driven by multiplicative noise. In the final section, we prove Theorem 1.3 for the problem (1.1) driven by additive noise.
## 2. Mathematical formulation
We start this section with some necessary function spaces whose elements satisfy the divergence free conditions, that is, \(\nabla\cdot\boldsymbol{v}=0.\) Next, in order to obtain the abstract formulation of the system (1.1), we define linear, bilinear and nonlinear operators along with their properties. Finally, we discuss the Ornstein-Uhlenbeck process with some of its properties and the backward tempered random sets.
### Function spaces and operators
Let \(\mathrm{C}^{\infty}_{0}(\mathbb{R}^{d};\mathbb{R}^{d})\) denote the space of all \(\mathbb{R}^{d}\)-valued, infinitely differentiable functions with compact support in \(\mathbb{R}^{d}\). Let \(\mathbb{L}^{s}(\mathbb{R}^{d}):=\mathrm{L}^{s}(\mathbb{R}^{d};\mathbb{R}^{d})\) and \(\mathbb{H}^{k}(\mathbb{R}^{d}):=\mathrm{H}^{k}(\mathbb{R}^{d};\mathbb{R}^{d})\) for
\(s\in[2,\infty)\) and \(k\in\mathbb{N}\). Define the spaces
\[\mathbb{H} :=\overline{\{\mathbf{v}\in\mathrm{C}_{0}^{\infty}(\mathbb{R}^{d}; \mathbb{R}^{d}):\nabla\cdot\mathbf{v}=0\}}^{\mathbb{L}^{2}(\mathbb{R}^{d})},\] \[\mathbb{V} :=\overline{\{\mathbf{v}\in\mathrm{C}_{0}^{\infty}(\mathbb{R}^{d}; \mathbb{R}^{d}):\nabla\cdot\mathbf{v}=0\}}^{\mathbb{H}^{1}(\mathbb{R}^{d})},\] \[\widetilde{\mathbb{L}}^{p} :=\overline{\{\mathbf{v}\in\mathrm{C}_{0}^{\infty}(\mathbb{R}^{d}; \mathbb{R}^{d}):\nabla\cdot\mathbf{v}=0\}}^{\mathbb{L}^{p}(\mathbb{R}^{d})},\ \ p>2.\]
The spaces \(\mathbb{H}\), \(\mathbb{V}\) and \(\widetilde{\mathbb{L}}^{p}\) are endowed with the norms
\[\|\mathbf{v}\|_{\mathbb{H}}^{2}:=\int_{\mathbb{R}^{d}}|\mathbf{v}(x)|^{2}\mathrm{d}x, \ \|\mathbf{v}\|_{\mathbb{V}}^{2}=\int_{\mathbb{R}^{d}}|\mathbf{v}(x)|^{2}\mathrm{d}x+ \int_{\mathbb{R}^{d}}|\nabla\mathbf{v}(x)|^{2}\mathrm{d}x\text{ and }\|\mathbf{v}\|_{\widetilde{\mathbb{L}}^{p}}^{p}:=\int_{ \mathbb{R}^{d}}|\mathbf{v}(x)|^{p}\mathrm{d}x,\]
for \(p\in(2,\infty)\), respectively. The inner product in the Hilbert space \(\mathbb{H}\) is represented by \((\cdot,\cdot)\). The duality pairing between the spaces \(\mathbb{V}\) and \(\mathbb{V}^{\prime}\), and \(\widetilde{\mathbb{L}}^{p}\) and its dual \(\widetilde{\mathbb{L}}^{\frac{p}{p-1}}\) is denoted by \(\langle\cdot,\cdot\rangle.\) Also, the space \(\mathbb{H}\) can be identified with its own dual \(\mathbb{H}^{\prime}\). We endow the space \(\mathbb{V}\cap\widetilde{\mathbb{L}}^{p}\) with the norm \(\|\mathbf{v}\|_{\mathbb{V}}+\|\mathbf{v}\|_{\widetilde{\mathbb{L}}^{p}}\), for \(\mathbf{v}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{p}\) and its dual \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{p^{\prime}}\) with the norm (cf. [24, Subsection 2.1])
\[\inf\Bigl{\{}\|\mathbf{u}_{1}\|_{\mathbb{V}^{\prime}}+\|\mathbf{u}_{2}\|_{\widetilde{ \mathbb{L}}^{p^{\prime}}}:\mathbf{u}=\mathbf{u}_{1}+\mathbf{u}_{2},\ \mathbf{u}_{1}\in\mathbb{V}^{\prime},\ \mathbf{u}_{2}\in \widetilde{\mathbb{L}}^{p^{\prime}}\Bigr{\}}.\]
#### 2.1.1. Linear operator
Let \(\mathscr{P}:\mathbb{L}^{2}(\mathbb{R}^{d})\to\mathbb{H}\) be the Helmholtz-Hodge (or Leray) projection. Note that the projection operator \(\mathscr{P}\) can be expressed in terms of the Riesz transform (cf. [53]). We define the Stokes operator
\[\mathrm{A}\mathbf{v}:=-\mathscr{P}\Delta\mathbf{v},\ \mathbf{v}\in\mathrm{D}(\mathrm{A}):= \mathbb{V}\cap\mathbb{H}^{2}(\mathbb{R}^{d}). \tag{2.1}\]
Moreover, \(\mathscr{P}\) and \(\Delta\) commute on \(\mathbb{R}^{d}\), that is, \(\mathscr{P}\Delta=\Delta\mathscr{P}\).
#### 2.1.2. Bilinear operator
Let us define the _trilinear form_\(b(\cdot,\cdot,\cdot):\mathbb{V}\times\mathbb{V}\times\mathbb{V}\to\mathbb{R}\) by
\[b(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3})=\int_{\mathbb{R}^{d}}(\mathbf{v}_{1}(x)\cdot \nabla)\mathbf{v}_{2}(x)\cdot\mathbf{v}_{3}(x)\mathrm{d}x=\sum_{i,j=1}^{d}\int_{ \mathbb{R}^{d}}v_{1,i}(x)\frac{\partial v_{2,j}(x)}{\partial x_{i}}v_{3,j}(x) \mathrm{d}x.\]
If \(\mathbf{v}_{1},\mathbf{v}_{2}\) are such that the linear map \(b(\mathbf{v}_{1},\mathbf{v}_{2},\cdot)\) is continuous on \(\mathbb{V}\), the corresponding element of \(\mathbb{V}^{\prime}\) is denoted by \(\mathrm{B}(\mathbf{v}_{1},\mathbf{v}_{2})\). We also denote \(\mathrm{B}(\mathbf{v})=\mathrm{B}(\mathbf{v},\mathbf{v})=\mathscr{P}[(\mathbf{v}\cdot\nabla)\bm {v}]\). An integration by parts yields
\[\begin{cases}b(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{2})=0,\ \ \text{for all}\ \ \mathbf{v}_{1},\mathbf{v}_{2}\in\mathbb{V},\\ b(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3})=-b(\mathbf{v}_{1},\mathbf{v}_{3},\mathbf{v}_{2}),\ \ \text{for all}\ \ \mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3}\in\mathbb{V}.\end{cases} \tag{2.2}\]
**Remark 2.1** ([61, Chapter 2, Section 2.3]).: _For all \(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3}\in\mathbb{V}\),_
\[|b(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3})|\leq C\times\begin{cases}\|\mathbf{v}_{1}\|_{ \mathbb{H}}^{1/2}\|\nabla\mathbf{v}_{1}\|_{\mathbb{H}}^{1/2}\|\nabla\mathbf{v}_{2}\|_{ \mathbb{H}}\|\mathbf{v}_{3}\|_{\mathbb{H}}^{1/2}\|\nabla\mathbf{v}_{3}\|_{\mathbb{H}}^{1 /2},&\text{ for }d=2,\\ \|\mathbf{v}_{1}\|_{\mathbb{H}}^{1/4}\|\nabla\mathbf{v}_{1}\|_{\mathbb{H}}^{3/4}\| \nabla\mathbf{v}_{2}\|_{\mathbb{H}}\|\mathbf{v}_{3}\|_{\mathbb{H}}^{1/4}\|\nabla\mathbf{v}_{ 3}\|_{\mathbb{H}}^{3/4},&\text{ for }d=3.\end{cases} \tag{2.3}\]
**Remark 2.2**.: _Note that \(\langle\mathrm{B}(\mathbf{v}_{1},\mathbf{v}_{1}-\mathbf{v}_{2}),\mathbf{v}_{1}-\mathbf{v}_{2}\rangle=0\) (for all \(\mathbf{v}_{1},\mathbf{v}_{2}\in\mathbb{V}\)) gives us_
\[\langle\mathrm{B}(\mathbf{v}_{1})-\mathrm{B}(\mathbf{v}_{2}),\mathbf{v}_{1}-\mathbf{v}_{2} \rangle=\langle\mathrm{B}(\mathbf{v}_{1}-\mathbf{v}_{2},\mathbf{v}_{2}),\mathbf{v}_{1}-\mathbf{v}_ {2}\rangle=-\langle\mathrm{B}(\mathbf{v}_{1}-\mathbf{v}_{2},\mathbf{v}_{1}-\mathbf{v}_{2}),\bm {v}_{2}\rangle. \tag{2.4}\]
#### 2.1.3. Nonlinear operator
Let us consider the nonlinear operator \(\mathcal{C}(\boldsymbol{v}):=\mathscr{P}(|\boldsymbol{v}|^{r-1}\boldsymbol{v})\), for \(\boldsymbol{v}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}\). The map \(\mathcal{C}(\cdot):\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}\to\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\) is well-defined and satisfies \(\langle\mathcal{C}(\boldsymbol{v}),\boldsymbol{v}\rangle=\|\boldsymbol{v}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\).
**Remark 2.3**.: _For any \(\boldsymbol{v}_{1},\boldsymbol{v}_{2}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{ r+1}\), we have (cf. [49, Subsection 2.4])_
\[\langle\mathcal{C}(\boldsymbol{v}_{1})-\mathcal{C}(\boldsymbol{v}_{2}), \boldsymbol{v}_{1}-\boldsymbol{v}_{2}\rangle\geq\frac{1}{2}\||\boldsymbol{v} _{1}|^{\frac{r-1}{2}}(\boldsymbol{v}_{1}-\boldsymbol{v}_{2})\|_{\mathbb{H}}^{ 2}+\frac{1}{2}\||\boldsymbol{v}_{2}|^{\frac{r-1}{2}}(\boldsymbol{v}_{1}- \boldsymbol{v}_{2})\|_{\mathbb{H}}^{2}\geq 0,\ \text{ for all }\ r\geq 1. \tag{2.5}\]
### Abstract formulation and Ornstein-Uhlenbeck process
Applying the projection \(\mathscr{P}\) to the SCBF equations (1.1), we obtain the following abstract formulation in terms of the linear, bilinear and nonlinear operators defined above:
\[\left\{\begin{aligned} \frac{\mathrm{d}\boldsymbol{v}}{ \mathrm{d}t}+\mu\mathrm{A}\boldsymbol{v}+\mathrm{B}(\boldsymbol{v})+\alpha \boldsymbol{v}+\beta\mathcal{C}(\boldsymbol{v})&=\mathscr{P} \boldsymbol{f}+S(\boldsymbol{v})\circ\frac{\mathrm{d}\mathrm{W}}{\mathrm{d}t}, & t>\tau,\\ \boldsymbol{v}(x)|_{t=\tau}&=\boldsymbol{v}_{\tau}(x),& x\in\mathbb{R}^{d},\end{aligned}\right. \tag{2.6}\]
where \(S(\boldsymbol{v})=\boldsymbol{v}\) (multiplicative noise) or \(S(\boldsymbol{v})\) is independent of \(\boldsymbol{v}\) (additive noise). Here, the symbol \(\circ\) represents that the stochastic integral is understood in the sense of Stratonovich and \(\mathrm{W}(t,\omega)\) is the standard scalar Wiener process on the probability space \((\Omega,\mathscr{F},\mathbb{P})\), where \(\Omega=\{\omega\in C(\mathbb{R};\mathbb{R}):\omega(0)=0\}\), endowed with the compact-open topology given by the metric
\[d_{\Omega}(\omega,\omega^{\prime}):=\sum_{m=1}^{\infty}\frac{1}{2^{m}}\frac{ \|\omega-\omega^{\prime}\|_{m}}{1+\|\omega-\omega^{\prime}\|_{m}},\text{ where }\|\omega-\omega^{\prime}\|_{m}:=\sup_{-m\leq t\leq m}|\omega(t)-\omega^{\prime}(t)|,\]
\(\mathscr{F}\) is the Borel sigma-algebra induced by the compact-open topology of \((\Omega,d_{\Omega})\) and \(\mathbb{P}\) is the two-sided Wiener measure on \((\Omega,\mathscr{F})\). From [28], it is clear that the measure \(\mathbb{P}\) is ergodic and invariant under the translation-operator group \(\{\vartheta_{t}\}_{t\in\mathbb{R}}\) on \(\Omega\) defined by
\[\vartheta_{t}\omega(\cdot)=\omega(\cdot+t)-\omega(t),\ \text{ for all }\ t\in\mathbb{R},\ \omega\in\Omega.\]
The operator \(\vartheta(\cdot)\) is known as _Wiener shift operator_.
#### 2.2.1. Ornstein-Uhlenbeck process
Consider for some \(\sigma>0\)
\[y(\vartheta_{t}\omega)=\int_{-\infty}^{t}e^{-\sigma(t-\xi)}\mathrm{d}\mathrm{W }(\xi),\ \ \omega\in\Omega, \tag{2.7}\]
which is the stationary solution of the one-dimensional Ornstein-Uhlenbeck equation
\[\mathrm{d}y(\vartheta_{t}\omega)+\sigma y(\vartheta_{t}\omega)\mathrm{d}t= \mathrm{d}\mathrm{W}(t). \tag{2.8}\]
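For the reader's convenience, a formal verification of this fact (differentiating (2.7) with respect to \(t\) and treating the stochastic integral formally) reads
\[\mathrm{d}y(\vartheta_{t}\omega)=\mathrm{d}\bigg(\int_{-\infty}^{t}e^{-\sigma(t-\xi)}\mathrm{d}\mathrm{W}(\xi)\bigg)=-\sigma\bigg(\int_{-\infty}^{t}e^{-\sigma(t-\xi)}\mathrm{d}\mathrm{W}(\xi)\bigg)\mathrm{d}t+\mathrm{d}\mathrm{W}(t)=-\sigma y(\vartheta_{t}\omega)\mathrm{d}t+\mathrm{d}\mathrm{W}(t),\]
which is precisely (2.8).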
It is known from [25] that there exists a \(\vartheta\)-invariant subset \(\widetilde{\Omega}\subset\Omega\) of full measure such that \(y(\vartheta_{t}\omega)\) is continuous in \(t\) for every \(\omega\in\widetilde{\Omega}\), and
\[\lim_{t\to\pm\infty}\frac{|y(\vartheta_{t}\omega)|}{|t|}=\lim_{t\to\pm\infty} \frac{1}{t}\int_{0}^{t}y(\vartheta_{\xi}\omega)\mathrm{d}\xi=\lim_{t\to\infty}e ^{-\delta t}|y(\vartheta_{-t}\omega)|=0, \tag{2.9}\]
for all \(\delta>0\). In the remainder of this work, we do not distinguish between \(\widetilde{\Omega}\) and \(\Omega\).
Since \(\omega(\cdot)\) has sub-exponential growth (cf. [8, Lemma 11]), \(\Omega\) can be written as \(\Omega=\bigcup\limits_{N\in\mathbb{N}}\Omega_{N}\), where
\[\Omega_{N}:=\{\omega\in\Omega:|\omega(t)|\leq Ne^{|t|},\text{ for all }t\in\mathbb{R}\}, \text{ for all }\ N\in\mathbb{N}.\]
Moreover, for each \(N\in\mathbb{N}\), \((\Omega_{N},d_{\Omega_{N}})\) is a Polish space (cf. [8, Lemma 17]).
**Lemma 2.4**.: _For each \(N\in\mathbb{N}\), suppose \(\omega_{k},\omega_{0}\in\Omega_{N}\) are such that \(d_{\Omega}(\omega_{k},\omega_{0})\to 0\) as \(k\to\infty\). Then, for each \(\tau\in\mathbb{R}\), \(T\in\mathbb{R}^{+}\) and \(a\in\mathbb{R}\),_
\[\sup_{t\in[\tau,\tau+T]}\biggl{[}|y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})|+|e^{ay(\vartheta_{t}\omega_{k})}-e^{ay(\vartheta_{t}\omega_{0})}|\biggr{]}\to 0\ \ \text{as}\ \ k\to\infty, \tag{2.10}\] \[\sup_{k\in\mathbb{N}}\sup_{t\in[\tau,\tau+T]}|y(\vartheta_{t}\omega_{k})|\leq C(\tau,T,\omega_{0}). \tag{2.11}\]
Proof.: See the proofs of [23, Corollary 22] and [45, Lemma 2.5].
#### 2.2.2. Backward-uniformly tempered random set
A bi-parametric set \(\mathcal{D}=\{\mathcal{D}(\tau,\omega)\}\) in a Banach space \(\mathbb{X}\) is said to be _backward-uniformly tempered_ if
\[\lim_{t\to+\infty}e^{-ct}\sup_{s\leq\tau}\|\mathcal{D}(s-t,\vartheta_{-t} \omega)\|_{\mathbb{X}}^{2}=0\ \ \forall\ \ (\tau,\omega,c)\in\mathbb{R}\times\Omega\times\mathbb{R}^{+},\ \ \ \ \text{ where }\ \|\mathcal{D}\|_{\mathbb{X}}=\sup_{\mathbf{x}\in\mathcal{D}}\|\mathbf{x}\|_{ \mathbb{X}}. \tag{2.12}\]
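As an illustration (this example is not used in the sequel), the bi-parametric family \(\mathcal{D}(\tau,\omega)=\{\boldsymbol{x}\in\mathbb{X}:\|\boldsymbol{x}\|_{\mathbb{X}}\leq 1+|y(\omega)|\}\), with \(y\) as in (2.7), is backward-uniformly tempered: indeed, for every \(c>0\),
\[e^{-ct}\sup_{s\leq\tau}\|\mathcal{D}(s-t,\vartheta_{-t}\omega)\|_{\mathbb{X}}^{2}=e^{-ct}\big(1+|y(\vartheta_{-t}\omega)|\big)^{2}\to 0\ \text{ as }\ t\to+\infty,\]
by the sublinear growth of \(y(\vartheta_{-t}\omega)\) in (2.9).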
#### 2.2.3. Class of random sets
* Let \(\mathfrak{D}\) be the collection of subsets of \(\mathbb{H}\) defined as: (2.13) \[\mathfrak{D}=\biggl{\{}\mathcal{D}=\{\mathcal{D}(\tau,\omega):(\tau,\omega) \in\mathbb{R}\times\Omega\}:\lim_{t\to+\infty}e^{-ct}\sup_{s\leq\tau}\| \mathcal{D}(s-t,\vartheta_{-t}\omega)\|_{\mathbb{H}}^{2}=0,\ \forall\ c>0\biggr{\}}.\]
* Let \(\mathfrak{B}\) be the collection of subsets of \(\mathbb{H}\) defined as: \[\mathfrak{B}=\biggl{\{}\mathcal{B}=\{\mathcal{B}(\tau,\omega):(\tau,\omega) \in\mathbb{R}\times\Omega\}:\lim_{t\to+\infty}e^{-ct}\|\mathcal{B}(\tau-t, \vartheta_{-t}\omega)\|_{\mathbb{H}}^{2}=0,\ \forall\ c>0\biggr{\}}.\]
* Let \(\mathfrak{D}_{\infty}\) be the collection of subsets of \(\mathbb{H}\) defined as: \[\mathfrak{D}_{\infty}=\biggl{\{}\widehat{\mathcal{D}}=\{\widehat{\mathcal{D}} (\omega):\omega\in\Omega\}:\lim_{t\to+\infty}e^{-ct}\|\widehat{\mathcal{D}}( \vartheta_{-t}\omega)\|_{\mathbb{H}}^{2}=0,\ \forall\ c>0\biggr{\}}.\]
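We note in passing (an elementary observation recorded only for the reader's convenience) that \(\mathfrak{D}\subseteq\mathfrak{B}\): for every \(\mathcal{D}\in\mathfrak{D}\),
\[e^{-ct}\|\mathcal{D}(\tau-t,\vartheta_{-t}\omega)\|_{\mathbb{H}}^{2}\leq e^{-ct}\sup_{s\leq\tau}\|\mathcal{D}(s-t,\vartheta_{-t}\omega)\|_{\mathbb{H}}^{2}\to 0\ \text{ as }\ t\to+\infty,\ \text{ for all }\ c>0.\]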
## 3. 2D and 3D SCBF equations: Multiplicative noise
In this section, we consider 2D and 3D SCBF equations driven by a linear multiplicative white noise, that is, \(S(\mathbf{v})=\mathbf{v}\) and establish the asymptotic autonomy of pullback random attractors. Let us define
\[\mathbf{u}(t,\tau,\omega,\mathbf{u}_{\tau}):=e^{-y(\vartheta_{t}\omega)}\mathbf{v}(t,\tau,\omega,\mathbf{v}_{\tau})\ \ \text{with}\ \ \mathbf{u}_{\tau}=e^{-y(\vartheta_{\tau}\omega)}\mathbf{v}_{\tau},\]
where \(y\) satisfies (2.8) and \(\mathbf{v}(\cdot):=\mathbf{v}(\cdot,\tau,\omega,\mathbf{v}_{\tau})\) is the solution of (1.1) with \(S(\mathbf{v})=\mathbf{v}\). Then \(\mathbf{u}(\cdot):=\mathbf{u}(\cdot,\tau,\omega,\mathbf{u}_{\tau})\) satisfies:
\[\left\{\begin{aligned} \frac{\mathrm{d}\mathbf{u}(t)}{\mathrm{d}t}-\mu\Delta\mathbf{u}(t)&+e^{y(\vartheta_{t}\omega)}(\mathbf{u}(t)\cdot\nabla)\mathbf{u}(t)+\alpha\mathbf{u}(t)+\beta e^{(r-1)y(\vartheta_{t}\omega)}|\mathbf{u}(t)|^{r-1}\mathbf{u}(t)\\ &=-e^{-y(\vartheta_{t}\omega)}\nabla p(t)+\mathbf{f}(t)e^{-y(\vartheta_{t}\omega)}+\sigma y(\vartheta_{t}\omega)\mathbf{u}(t),&\text{in}\ \ \mathbb{R}^{d}\times(\tau,\infty),\\ \nabla\cdot\mathbf{u}&=0,&\text{in}\ \ \mathbb{R}^{d}\times(\tau,\infty),\\ \mathbf{u}(x)|_{t=\tau}&=\mathbf{u}_{0}(x)=e^{-y(\vartheta_{\tau}\omega)}\mathbf{v}_{0}(x),&x\in\mathbb{R}^{d}\ \text{ and }\ \tau\in\mathbb{R},\\ \mathbf{u}(x,t)&\to 0&\text{as}\ \ |x|\to\infty,\end{aligned}\right. \tag{3.1}\]
as well as (projected form)
\[\left\{\begin{aligned} \frac{\mathrm{d}\mathbf{u}(t)}{\mathrm{d}t}+\mu \mathrm{A}\mathbf{u}(t)&+e^{y(\vartheta_{t}\omega)}\mathrm{B}\big{(} \mathbf{u}(t)\big{)}+\alpha\mathbf{u}(t)+\beta e^{(r-1)y(\vartheta_{t}\omega)}\mathcal{ C}\big{(}\mathbf{u}(t)\big{)}\\ &=e^{-y(\vartheta_{t}\omega)}\mathscr{P}\mathbf{f}(t)+\sigma y( \vartheta_{t}\omega)\mathbf{u}(t),\quad t>\tau,\ \tau\in\mathbb{R},\\ \mathbf{u}(x)|_{t=\tau}&=\mathbf{u}_{0}(x)=e^{-y(\vartheta_ {\tau}\omega)}\mathbf{v}_{0}(x),\qquad\qquad x\in\mathbb{R}^{d},\end{aligned}\right. \tag{3.2}\]
in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\), where \(r\geq 1\). Due to some technical difficulties, we restrict ourselves to the cases given in Table 1 (see Lemmas 3.8 and 3.9).
### Non-autonomous random dynamical system (NRDS)
Lusin continuity helps us to define the NRDS. The following lemma (energy inequality) will be frequently used.
**Lemma 3.1**.: _For all the cases given in Table 1, assume that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\). Then, the solution of (3.2) satisfies the following energy inequality:_
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{u}\|_{\mathbb{H}}^{2}+\Big{(}\alpha-2 \sigma y(\vartheta_{t}\omega)+\frac{\alpha}{2}\Big{)}\|\mathbf{u}\|_{\mathbb{H}}^{ 2}+2\mu\|\nabla\mathbf{u}\|_{\mathbb{H}}^{2}+2\beta e^{(r-1)y(\vartheta_{t}\omega) }\|\mathbf{u}\|_{\mathbb{L}^{r+1}}^{r+1}\leq\frac{2e^{2|y(\vartheta_{t}\omega)|}}{ \alpha}\|\mathbf{f}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}. \tag{3.3}\]
Proof.: From the first equation of the system (3.2), using (2.2) together with the Cauchy-Schwarz and Young inequalities, one can obtain (3.3) immediately.
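For completeness, we record the short computation behind (3.3): using \(\langle\mathrm{B}(\mathbf{u}),\mathbf{u}\rangle=0\) (see (2.2)), \(\langle\mathcal{C}(\mathbf{u}),\mathbf{u}\rangle=\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\) and \(\langle\mathrm{A}\mathbf{u},\mathbf{u}\rangle=\|\nabla\mathbf{u}\|_{\mathbb{H}}^{2}\), we find
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{u}\|_{\mathbb{H}}^{2}+\mu\|\nabla\mathbf{u}\|_{\mathbb{H}}^{2}+\big(\alpha-\sigma y(\vartheta_{t}\omega)\big)\|\mathbf{u}\|_{\mathbb{H}}^{2}+\beta e^{(r-1)y(\vartheta_{t}\omega)}\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}=e^{-y(\vartheta_{t}\omega)}(\mathbf{f},\mathbf{u})\leq\frac{\alpha}{4}\|\mathbf{u}\|_{\mathbb{H}}^{2}+\frac{e^{2|y(\vartheta_{t}\omega)|}}{\alpha}\|\mathbf{f}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2},\]
and multiplying by \(2\) yields (3.3).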
**Lemma 3.2**.: _For all the cases given in Table 1, let \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\). For each \((\tau,\omega,\mathbf{u}_{\tau})\in\mathbb{R}\times\Omega\times\mathbb{H}\), the system (3.2) has a unique weak solution \(\mathbf{u}(\cdot,\tau,\omega,\mathbf{u}_{\tau})\in\mathrm{C}([\tau,+\infty);\mathbb{H} )\cap\mathrm{L}^{2}_{\mathrm{loc}}(\tau,+\infty;\mathbb{V})\cap\mathrm{L}^{r+1 }_{\mathrm{loc}}(\tau,+\infty;\widetilde{\mathbb{L}}^{r+1})\) such that \(\mathbf{u}\) is continuous with respect to the initial data._
Proof.: One can prove the existence and uniqueness of solutions by a standard Faedo-Galerkin approximation method, cf. [31, 34, 47], etc. For continuity with respect to initial data \(\mathbf{u}_{\tau}\), see [38, Lemma 3.5].
The next result shows the Lusin continuity in the sample points of the solution mapping of the system (3.2).
**Proposition 3.3**.: _For all the cases given in Table 1, suppose that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\). For each \(N\in\mathbb{N}\), the mapping \(\omega\mapsto\mathbf{u}(t,\tau,\omega,\mathbf{u}_{\tau})\) (solution of (3.2)) is continuous from \((\Omega_{N},d_{\Omega_{N}})\) to \(\mathbb{H}\), uniformly in \(t\in[\tau,\tau+T]\) with \(T>0\)._
Proof.: Assume that \(\omega_{k},\omega_{0}\in\Omega_{N},\ N\in\mathbb{N}\) such that \(d_{\Omega_{N}}(\omega_{k},\omega_{0})\to 0\) as \(k\to\infty\). Let \(\mathscr{U}^{k}(\cdot):=\mathbf{u}^{k}(\cdot)-\mathbf{u}^{0}(\cdot)\), where \(\mathbf{u}^{k}(\cdot):=\mathbf{u}(\cdot,\tau,\omega_{k},\mathbf{u}_{\tau})\) and \(\mathbf{u}^{0}:=\mathbf{u}(\cdot,\tau,\omega_{0},\mathbf{u}_{\tau})\). Then, \(\mathscr{U}^{k}(\cdot)\) satisfies:
\[\frac{\mathrm{d}\mathscr{U}^{k}}{\mathrm{d}t} =-\mu\mathrm{A}\mathscr{U}^{k}-(\alpha-\sigma y(\vartheta_{t} \omega_{k}))\mathscr{U}^{k}-e^{y(\vartheta_{t}\omega_{k})}\Big{[}\mathrm{B} \big{(}\mathbf{u}^{k}\big{)}-\mathrm{B}\big{(}\mathbf{u}^{0}\big{)}\Big{]}-\Big{[}e^{ y(\vartheta_{t}\omega_{k})}-e^{y(\vartheta_{t}\omega_{0})}\Big{]} \mathrm{B}\big{(}\mathbf{u}^{0}\big{)}\] \[\quad-\beta e^{(r-1)y(\vartheta_{t}\omega_{k})}\Big{[}\mathcal{C} \big{(}\mathbf{u}^{k}\big{)}-\mathcal{C}\big{(}\mathbf{u}^{0}\big{)}\Big{]}-\beta \Big{[}e^{(r-1)y(\vartheta_{t}\omega_{k})}-e^{(r-1)y(\vartheta_{t}\omega_{0})} \Big{]}\mathcal{C}\big{(}\mathbf{u}^{0}\big{)}\] \[\quad+\mathbf{f}\Big{[}e^{-y(\vartheta_{t}\omega_{k})}-e^{-y( \vartheta_{t}\omega_{0})}\Big{]}+\sigma[y(\vartheta_{t}\omega_{k})-y(\vartheta_{ t}\omega_{0})]\mathbf{u}^{0}, \tag{3.4}\]
in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\). Taking the inner product with \(\mathscr{U}^{k}(\cdot)\) in (3.4), and using (2.2) and (2.4), we obtain
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\mathscr{U}^{k}\|_{\mathbb{H}}^{2}=- \mu\|\nabla\mathscr{U}^{k}\|_{\mathbb{H}}^{2}-(\alpha-\sigma y(\vartheta_{t} \omega_{k}))\|\mathscr{U}^{k}\|_{\mathbb{H}}^{2}-\beta e^{(r-1)y(\vartheta_{t} \omega_{k})}\Big{\langle}\mathcal{C}\big{(}\mathbf{u}^{k}\big{)}-\mathcal{C}\big{(} \mathbf{u}^{0}\big{)},\mathbf{u}^{k}-\mathbf{u}^{0}\Big{\rangle}\]
\[+e^{y(\vartheta_{t}\omega_{k})}b(\mathscr{U}^{k},\mathscr{U}^{k}, \boldsymbol{u}^{0})+\Big{[}e^{y(\vartheta_{t}\omega_{k})}-e^{y(\vartheta_{t} \omega_{0})}\Big{]}b(\boldsymbol{u}^{0},\mathscr{U}^{k},\boldsymbol{u}^{0})\] \[-\beta\Big{[}e^{(r-1)y(\vartheta_{t}\omega_{k})}-e^{(r-1)y( \vartheta_{t}\omega_{0})}\Big{]}\Big{\langle}\mathcal{C}\big{(}\boldsymbol{u }^{0}\big{)},\mathscr{U}^{k}\Big{\rangle}+\Big{[}e^{-y(\vartheta_{t}\omega_{k} )}-e^{-y(\vartheta_{t}\omega_{0})}\Big{]}(\boldsymbol{f},\mathscr{U}^{k}) \tag{3.5}\] \[+\sigma[y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})]( \boldsymbol{u}^{0},\mathscr{U}^{k}).\]
We know by (2.5) that
\[-\Big{\langle}\mathcal{C}\big{(}\boldsymbol{u}^{k}\big{)}-\mathcal{C}\big{(} \boldsymbol{u}^{0}\big{)},\boldsymbol{u}^{k}-\boldsymbol{u}^{0}\Big{\rangle} \leq-\frac{1}{2}\||\boldsymbol{u}^{k}|^{\frac{r-1}{2}}(\boldsymbol{u}^{k}- \boldsymbol{u}^{0})\|_{\mathbb{H}}^{2}-\frac{1}{2}\||\boldsymbol{u}^{0}|^{ \frac{r-1}{2}}(\boldsymbol{u}^{k}-\boldsymbol{u}^{0})\|_{\mathbb{H}}^{2}. \tag{3.6}\]
Using Holder's and Young's inequalities, we obtain
\[\Big{|}\Big{[}e^{-y(\vartheta_{t}\omega_{k})}-e^{-y(\vartheta_{t}\omega_{0})}\Big{]}(\boldsymbol{f},\mathscr{U}^{k})\Big{|} \leq C\Big{|}e^{-y(\vartheta_{t}\omega_{k})}-e^{-y(\vartheta_{t}\omega_{0})}\Big{|}^{2}\|\boldsymbol{f}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}+\frac{\alpha}{4}\|\mathscr{U}^{k}\|_{\mathbb{H}}^{2}, \tag{3.7}\] \[\Big{|}\sigma[y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})](\boldsymbol{u}^{0},\mathscr{U}^{k})\Big{|} \leq C|y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})|^{2}\|\boldsymbol{u}^{0}\|_{\mathbb{H}}^{2}+\frac{\alpha}{4}\|\mathscr{U}^{k}\|_{\mathbb{H}}^{2}, \tag{3.8}\] \[\Big{|}\Big{[}e^{(r-1)y(\vartheta_{t}\omega_{k})}-e^{(r-1)y(\vartheta_{t}\omega_{0})}\Big{]}\Big{\langle}\mathcal{C}\big{(}\boldsymbol{u}^{0}\big{)},\mathscr{U}^{k}\Big{\rangle}\Big{|} \leq C\Big{|}e^{(r-1)y(\vartheta_{t}\omega_{k})}-e^{(r-1)y(\vartheta_{t}\omega_{0})}\Big{|}\Big{[}\|\boldsymbol{u}^{0}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\|\boldsymbol{u}^{k}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\Big{]}. \tag{3.9}\]
Next, we estimate the remaining terms of (3.5) separately.
**Case I: \(d=2\)**_and \(r\geq 1\)._ Applying (2.2), (2.3), Holder's and Young's inequalities, we estimate
\[\Big{|}e^{y(\vartheta_{t}\omega_{k})}b(\mathscr{U}^{k},\mathscr{U }^{k},\boldsymbol{u}^{0})\Big{|} \leq Ce^{y(\vartheta_{t}\omega_{k})}\|\mathscr{U}^{k}\|_{\mathbb{H}} \|\nabla\mathscr{U}^{k}\|_{\mathbb{H}}\|\nabla\boldsymbol{u}^{0}\|_{\mathbb{H}} \tag{3.10}\] \[\leq Ce^{2y(\vartheta_{t}\omega_{k})}\|\nabla\boldsymbol{u}^{0} \|_{\mathbb{H}}^{2}\|\mathscr{U}^{k}\|_{\mathbb{H}}^{2}+\frac{\mu}{4}\|\nabla \mathscr{U}^{k}\|_{\mathbb{H}}^{2},\]
and
\[\Big{|}\Big{[}e^{y(\vartheta_{t}\omega_{k})}-e^{y(\vartheta_{t}\omega_{0})} \Big{]}b(\boldsymbol{u}^{0},\mathscr{U}^{k},\boldsymbol{u}^{0})\Big{|} \leq C\Big{|}e^{y(\vartheta_{t}\omega_{k})}-e^{y(\vartheta_{t}\omega_{0})} \Big{|}^{2}\|\boldsymbol{u}^{0}\|_{\mathbb{H}}^{2}\|\nabla\boldsymbol{u}^{0}\|_ {\mathbb{H}}^{2}+\frac{\mu}{4}\|\nabla\mathscr{U}^{k}\|_{\mathbb{H}}^{2}. \tag{3.11}\]
**Case II: \(d=3\)**_and \(r>3\)._ Using Holder's and Young's inequalities, we infer
\[\Big{|}e^{y(\vartheta_{t}\omega_{k})}b(\mathscr{U}^{k},\mathscr{U }^{k},\boldsymbol{u}^{0})\Big{|}\leq\frac{\mu}{4}\|\nabla\mathscr{U}^{k}\|_{ \mathbb{H}}^{2}+\frac{\beta}{4}e^{(r-1)y(\vartheta_{t}\omega_{k})}\|| \mathscr{U}^{k}||\boldsymbol{u}^{0}|^{\frac{r-1}{2}}\|_{\mathbb{H}}^{2}+C\| \mathscr{U}^{k}\|_{\mathbb{H}}^{2}, \tag{3.12}\]
and
\[\Big{|}\Big{[}e^{y(\vartheta_{t}\omega_{k})}-e^{y(\vartheta_{t}\omega_{0})}\Big{]}b(\boldsymbol{u}^{0},\mathscr{U}^{k},\boldsymbol{u}^{0})\Big{|} \tag{3.13}\] \[\leq\Big{|}1-e^{y(\vartheta_{t}\omega_{0})-y(\vartheta_{t}\omega_{k})}\Big{|}e^{y(\vartheta_{t}\omega_{k})}\|\nabla\boldsymbol{u}^{0}\|_{\mathbb{H}}\||\mathscr{U}^{k}||\boldsymbol{u}^{0}|\|_{\mathbb{H}}\] \[\leq\frac{\beta}{4}e^{(r-1)y(\vartheta_{t}\omega_{k})}\||\mathscr{U}^{k}||\boldsymbol{u}^{0}|^{\frac{r-1}{2}}\|_{\mathbb{H}}^{2}+C\Big{|}1-e^{y(\vartheta_{t}\omega_{0})-y(\vartheta_{t}\omega_{k})}\Big{|}^{2}\|\nabla\boldsymbol{u}^{0}\|_{\mathbb{H}}^{2}+C\|\mathscr{U}^{k}\|_{\mathbb{H}}^{2}.\]
**Case III: \(d=r=3\)**_ with \(2\beta\mu\geq 1\)._ Applying (2.2), Holder's and Young's inequalities, we obtain
\[\Big{|}e^{y(\vartheta_{t}\omega_{k})}b(\mathscr{U}^{k},\mathscr{U}^{k}, \boldsymbol{u}^{0})\Big{|}=\Big{|}e^{y(\vartheta_{t}\omega_{k})}b(\mathscr{U}^{k}, \mathscr{U}^{k},\boldsymbol{u}^{k})\Big{|}\leq\frac{1}{2\beta}\|\nabla \mathscr{U}^{k}\|_{\mathbb{H}}^{2}+\frac{\beta}{2}e^{2y(\vartheta_{t}\omega_{k})}\|| \mathscr{U}^{k}||\boldsymbol{u}^{k}|\|_{\mathbb{H}}^{2}, \tag{3.14}\]
and
\[\Big{|}\Big{[}e^{y(\vartheta_{t}\omega_{k})}-e^{y(\vartheta_{t}\omega_{0})}\Big{]}b(\boldsymbol{u}^{0},\mathscr{U}^{k},\boldsymbol{u}^{0})\Big{|}\leq C\Big{|}1-e^{y(\vartheta_{t}\omega_{0})-y(\vartheta_{t}\omega_{k})}\Big{|}^{2}\|\nabla\boldsymbol{u}^{0}\|_{\mathbb{H}}^{2}+\frac{\beta}{2}e^{2y(\vartheta_{t}\omega_{k})}\||\mathscr{U}^{k}||\boldsymbol{u}^{0}|\|_{\mathbb{H}}^{2}. \tag{3.15}\]
Combining (3.5)-(3.15), we arrive at
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\mathscr{U}^{k}(t)\|_{\mathbb{H}}^{2}\leq P(t)\| \mathscr{U}^{k}(t)\|_{\mathbb{H}}^{2}+Q(t),\text{ for a.e. }t\in[\tau,\tau+T]\text{ with }T>0, \tag{3.16}\]
where
\[P =\begin{cases}y(\vartheta_{t}\omega_{k})+Ce^{2y(\vartheta_{t}\omega_{k})}\|\nabla\mathbf{u}^{0}\|_{\mathbb{H}}^{2},&\text{ for }d=2\text{ with }r\geq 1,\\ y(\vartheta_{t}\omega_{k})+C,&\text{ for }d=3\text{ with }r>3,\\ y(\vartheta_{t}\omega_{k}),&\text{ for }d=r=3\text{ with }2\beta\mu\geq 1,\end{cases}\] \[Q =C\Big{|}e^{-y(\vartheta_{t}\omega_{k})}-e^{-y(\vartheta_{t}\omega_{0})}\Big{|}^{2}\|\mathbf{f}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}+C|y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})|^{2}\|\mathbf{u}^{0}\|_{\mathbb{H}}^{2}+C\Big{|}e^{(r-1)y(\vartheta_{t}\omega_{k})}-e^{(r-1)y(\vartheta_{t}\omega_{0})}\Big{|}\] \[\quad\times\Big{[}\|\mathbf{u}^{0}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\|\mathbf{u}^{k}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\Big{]}+C\times\begin{cases}\big{|}e^{y(\vartheta_{t}\omega_{k})}-e^{y(\vartheta_{t}\omega_{0})}\big{|}^{2}\|\nabla\mathbf{u}^{0}\|_{\mathbb{H}}^{2},&\text{ for }d=2\text{ with }r\geq 1,\\ \big{|}1-e^{y(\vartheta_{t}\omega_{0})-y(\vartheta_{t}\omega_{k})}\big{|}^{2}\|\nabla\mathbf{u}^{0}\|_{\mathbb{H}}^{2},&\text{ for }d=3\text{ with }r>3,\\ \big{|}1-e^{y(\vartheta_{t}\omega_{0})-y(\vartheta_{t}\omega_{k})}\big{|}^{2}\|\nabla\mathbf{u}^{0}\|_{\mathbb{H}}^{2},&\text{ for }d=r=3\text{ with }2\beta\mu\geq 1.\end{cases}\]
From (3.3), we deduce
\[\int_{\tau}^{\tau+T}2\beta e^{(r-1)y(\vartheta_{t}\omega_{k})}\|\mathbf{u}^{k}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\mathrm{d}t \leq\|\mathbf{u}_{\tau}\|_{\mathbb{H}}^{2}+\frac{2}{\alpha}\int_{\tau}^{\tau+T}e^{-2y(\vartheta_{t}\omega_{k})}\|\mathbf{f}(t)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}\mathrm{d}t\] \[\leq\|\mathbf{u}_{\tau}\|_{\mathbb{H}}^{2}+\frac{2}{\alpha}\sup_{t\in[\tau,\tau+T]}\Big{[}e^{-2y(\vartheta_{t}\omega_{k})}\Big{]}\int_{\tau}^{\tau+T}\|\mathbf{f}(t)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}\mathrm{d}t,\]
which gives
\[\sup_{k\in\mathbb{N}}\int_{\tau}^{\tau+T}e^{(r-1)y(\vartheta_{t}\omega_{k})}\|\mathbf{u}^{k}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\mathrm{d}t\leq C(\tau,T,\omega_{0},\mathbf{u}_{\tau},\mathbf{f}), \tag{3.17}\]
where we have used (2.11) and the fact \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\). Using (2.11) and \(\mathbf{u}^{0}\in\mathrm{L}^{2}_{\mathrm{loc}}(\tau,+\infty;\mathbb{V})\), we deduce
\[\int_{\tau}^{\tau+T}P(t)\mathrm{d}t\leq C(\tau,T,\omega_{0}). \tag{3.18}\]
Now, from (3.17), \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\), \(\mathbf{u}^{0}\in\mathrm{C}([\tau,+\infty);\mathbb{H})\cap\mathrm{L}^{2}_{\mathrm{loc}}(\tau,+\infty;\mathbb{V})\cap\mathrm{L}^{r+1}_{\mathrm{loc}}(\tau,+\infty;\widetilde{\mathbb{L}}^{r+1})\) and Lemma 2.4, we conclude
\[\lim_{k\to+\infty}\int_{\tau}^{\tau+T}Q(t)\mathrm{d}t=0. \tag{3.19}\]
Making use of the Gronwall inequality in (3.16), we get
\[\|\mathscr{U}^{k}(t)\|_{\mathbb{H}}^{2}\leq e^{\int_{\tau}^{\tau+T}P(t) \mathrm{d}t}\bigg{[}\int_{\tau}^{\tau+T}Q(t)\mathrm{d}t\bigg{]},\quad\text{for all }t\in[\tau,\tau+T]. \tag{3.20}\]
In view of (3.18)-(3.20), we complete the proof.
Lemma 3.2 ensures that we can define a mapping \(\Phi:\mathbb{R}^{+}\times\mathbb{R}\times\Omega\times\mathbb{H}\to\mathbb{H}\) by
\[\Phi(t,\tau,\omega,\mathbf{v}_{\tau}):=\mathbf{v}(t+\tau,\tau,\vartheta_{-\tau}\omega, \mathbf{v}_{\tau})=e^{y(\vartheta_{t}\omega)}\mathbf{u}(t+\tau,\tau,\vartheta_{-\tau} \omega,\mathbf{u}_{\tau}). \tag{3.21}\]
The Lusin continuity in Proposition 3.3 provides the \(\mathscr{F}\)-measurability of \(\Phi\). Consequently, in view of Lemma 3.2 and Proposition 3.3, we have the following result for NRDS.
**Proposition 3.4**.: _The mapping \(\Phi\) defined by (3.21) is an NRDS on \(\mathbb{H}\), that is, \(\Phi\) has the following properties:_
1. \(\Phi\) _is_ \((\mathscr{B}(\mathbb{R}^{+})\times\mathscr{B}(\mathbb{R})\times\mathscr{F} \times\mathscr{B}(\mathbb{H});\mathscr{B}(\mathbb{H}))\)_-measurable,_
2. \(\Phi\) _satisfies the cocycle property:_ \(\Phi(0,\tau,\omega,\cdot)=\mathrm{I}\)_, and_ \[\Phi(t+s,\tau,\omega,\mathbf{v}_{\tau})=\Phi(t,\tau+s,\vartheta_{s}\omega,\Phi(s, \tau,\omega,\mathbf{v}_{\tau})),\quad t,s\geq 0.\]
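For the reader's convenience, we sketch why the cocycle property holds (the argument is standard and relies only on the uniqueness of solutions, see Lemma 3.2): by uniqueness,
\[\mathbf{v}(t+s+\tau,\tau,\vartheta_{-\tau}\omega,\mathbf{v}_{\tau})=\mathbf{v}\big(t+s+\tau,s+\tau,\vartheta_{-\tau}\omega,\mathbf{v}(s+\tau,\tau,\vartheta_{-\tau}\omega,\mathbf{v}_{\tau})\big),\]
and, since \(\vartheta_{-(\tau+s)}(\vartheta_{s}\omega)=\vartheta_{-\tau}\omega\), the right-hand side equals \(\Phi(t,\tau+s,\vartheta_{s}\omega,\Phi(s,\tau,\omega,\mathbf{v}_{\tau}))\), while the left-hand side is \(\Phi(t+s,\tau,\omega,\mathbf{v}_{\tau})\).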
### Backward convergence of NRDS
Consider the autonomous SCBF equations driven by linear multiplicative white noise:
\[\left\{\frac{\mathrm{d}\widetilde{\mathbf{v}}(t)}{\mathrm{d}t}+\mu \mathrm{A}\widetilde{\mathbf{v}}(t)+\mathrm{B}(\widetilde{\mathbf{v}}(t))+\alpha \widetilde{\mathbf{v}}(t)+\beta\mathcal{C}(\widetilde{\mathbf{v}}(t)) =\mathscr{P}\mathbf{f}_{\infty}+\widetilde{\mathbf{v}}(t)\circ\frac{ \mathrm{d}\mathrm{W}(t)}{\mathrm{d}t}, \ t>0,\] \[\widetilde{\mathbf{v}}(x,0) =\widetilde{\mathbf{v}}_{0}(x), x\in\mathbb{R}^{d}. \tag{3.22}\]
Let \(\widetilde{\mathbf{u}}(t,\omega)=e^{-y(\vartheta_{t}\omega)}\widetilde{\mathbf{v}}(t,\omega)\). Then, \(\widetilde{\mathbf{u}}(\cdot)\) satisfies
\[\left\{\begin{aligned} \frac{\mathrm{d}\widetilde{\mathbf{u}}(t)}{ \mathrm{d}t}+\mu\mathrm{A}\widetilde{\mathbf{u}}(t)&+e^{y(\vartheta_ {t}\omega)}\mathrm{B}\big{(}\widetilde{\mathbf{u}}(t)\big{)}+\alpha\widetilde{\mathbf{ u}}(t)+\beta e^{(r-1)y(\vartheta_{t}\omega)}\mathcal{C}\big{(}\widetilde{\mathbf{u}}(t) \big{)}\\ &=\mathscr{P}\mathbf{f}_{\infty}e^{-y(\vartheta_{t}\omega)}+\sigma y (\vartheta_{t}\omega)\widetilde{\mathbf{u}}(t),\quad t>0,\\ \widetilde{\mathbf{u}}(x,0)&=\widetilde{\mathbf{u}}_{0}(x)=e ^{-y(\omega)}\widetilde{\mathbf{v}}_{0}(x), x\in\mathbb{R}^{d},\end{aligned}\right. \tag{3.23}\]
in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\).
**Proposition 3.5**.: _For all the cases given in Table 1, suppose that Assumption 1.1 is satisfied. Then, \(\lim_{\tau\to-\infty}\|\mathbf{u}_{\tau}-\widetilde{\mathbf{u}}_{0}\|_{\mathbb{H}}=0\) implies that the solution \(\mathbf{u}\) of the system (3.2) backward converges to the solution \(\widetilde{\mathbf{u}}\) of the system (3.23), that is,_
\[\lim_{\tau\to-\infty}\|\mathbf{u}(T+\tau,\tau,\vartheta_{-\tau}\omega,\mathbf{u}_{\tau})-\widetilde{\mathbf{u}}(T,\omega,\widetilde{\mathbf{u}}_{0})\|_{\mathbb{H}}=0,\quad\text{for all }\ T>0\text{ and }\omega\in\Omega. \tag{3.24}\]
Proof.: Let \(\mathscr{U}^{\tau}(\cdot):=\mathbf{u}(\cdot+\tau,\tau,\vartheta_{-\tau}\omega,\mathbf{ u}_{\tau})-\widetilde{\mathbf{u}}(\cdot,\omega,\widetilde{\mathbf{u}}_{0})\). From (3.2) and (3.23), we obtain
\[\begin{split}\frac{\mathrm{d}\mathscr{U}^{\tau}}{\mathrm{d}t}& =-\mu\mathrm{A}\mathscr{U}^{\tau}-\alpha\mathscr{U}^{\tau}-e^{y( \vartheta_{t}\omega)}\big{[}\mathrm{B}\big{(}\mathbf{u}\big{)}-\mathrm{B}\big{(} \widetilde{\mathbf{u}}\big{)}\big{]}-\beta e^{(r-1)y(\vartheta_{t}\omega)}\big{[} \mathcal{C}\big{(}\mathbf{u}\big{)}-\mathcal{C}\big{(}\widetilde{\mathbf{u}}\big{)} \big{]}\\ &\quad+e^{-y(\vartheta_{t}\omega)}[\mathscr{P}\mathbf{f}(t+\tau)- \mathscr{P}\mathbf{f}_{\infty}]+\sigma y(\vartheta_{t}\omega)\mathscr{U}^{\tau}, \end{split} \tag{3.25}\]
in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\). Taking the inner product with \(\mathscr{U}^{\tau}(\cdot)\) in (3.25), and using (2.2) and (2.4), we get
\[\begin{split}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\mathscr{ U}^{\tau}\|_{\mathbb{H}}^{2}&=-\mu\|\nabla\mathscr{U}^{\tau}\|_{ \mathbb{H}}^{2}-(\alpha-\sigma y(\vartheta_{t}\omega))\|\mathscr{U}^{\tau}\|_ {\mathbb{H}}^{2}-\beta e^{(r-1)y(\vartheta_{t}\omega)}\big{\langle}\mathcal{ C}\big{(}\mathbf{u}\big{)}-\mathcal{C}\big{(}\widetilde{\mathbf{u}}\big{)},\mathbf{u}- \widetilde{\mathbf{u}}\big{\rangle}\\ &\quad+e^{y(\vartheta_{t}\omega)}b(\mathscr{U}^{\tau},\mathscr{U }^{\tau},\widetilde{\mathbf{u}})+e^{-y(\vartheta_{t}\omega)}(\mathbf{f}(t+\tau)-\mathbf{f }_{\infty},\mathscr{U}^{\tau}).\end{split} \tag{3.26}\]
From (2.5), one can write
\[-\big{\langle}\mathcal{C}\big{(}\mathbf{u}\big{)}-\mathcal{C}\big{(}\widetilde{\mathbf{u }}\big{)},\mathbf{u}-\widetilde{\mathbf{u}}\big{\rangle}\leq-\frac{1}{2}\||\mathbf{u}|^{ \frac{r-1}{2}}(\mathbf{u}-\widetilde{\mathbf{u}})\|_{\mathbb{H}}^{2}-\frac{1}{2}\|| \widetilde{\mathbf{u}}|^{\frac{r-1}{2}}(\mathbf{u}-\widetilde{\mathbf{u}})\|_{\mathbb{H}}^ {2}. \tag{3.27}\]
Applying Holder's and Young's inequalities, we infer
\[\Big{|}e^{-y(\vartheta_{t}\omega)}(\mathbf{f}(t+\tau)-\mathbf{f}_{\infty},\mathscr{U} ^{\tau})\Big{|}\leq\|\mathbf{f}(t+\tau)-\mathbf{f}_{\infty}\|_{\mathbb{L}^{2}(\mathbb{ R}^{d})}^{2}+Ce^{-2y(\vartheta_{t}\omega)}\|\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}, \tag{3.28}\]
and
\[\begin{split}& e^{y(\vartheta_{t}\omega)}b(\mathscr{U}^{\tau},\mathscr{U}^{\tau},\widetilde{\mathbf{u}})\\ &\leq\begin{cases}Ce^{2y(\vartheta_{t}\omega)}\|\nabla\widetilde{\mathbf{u}}\|_{\mathbb{H}}^{2}\|\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}+\frac{\mu}{2}\|\nabla\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2},&\text{for }d=2\text{ and }r\geq 1,\\ \frac{\mu}{2}\|\nabla\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}+\frac{\beta}{2}e^{(r-1)y(\vartheta_{t}\omega)}\||\mathscr{U}^{\tau}||\widetilde{\mathbf{u}}|^{\frac{r-1}{2}}\|_{\mathbb{H}}^{2}+C\|\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2},&\text{for }d=3\text{ and }r>3,\\ \frac{1}{2\beta}\|\nabla\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}+\frac{\beta}{2}e^{2y(\vartheta_{t}\omega)}\||\mathscr{U}^{\tau}||\widetilde{\mathbf{u}}|\|_{\mathbb{H}}^{2},&\text{for }d=r=3\text{ and }2\beta\mu\geq 1.\end{cases}\end{split} \tag{3.29}\]
Combining (3.26)-(3.29), we arrive at
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\mathscr{U}^{\tau}(t)\|_{\mathbb{H}}^{2}\leq S(t )\|\mathscr{U}^{\tau}(t)\|_{\mathbb{H}}^{2}+\|\mathbf{f}(t+\tau)-\mathbf{f}_{\infty}\|_ {\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}, \tag{3.30}\]
where
\[S(t)=C\times\begin{cases}e^{2y(\vartheta_{t}\omega)}\|\nabla\widetilde{\mathbf{u}} (t)\|_{\mathbb{H}}^{2}+e^{-2y(\vartheta_{t}\omega)}+|y(\vartheta_{t}\omega)|,& \text{ for }d=2\text{ and }r\geq 1,\\ e^{-2y(\vartheta_{t}\omega)}+|y(\vartheta_{t}\omega)|+1,&\text{ for }d=3\text{ and }r>3,\\ e^{-2y(\vartheta_{t}\omega)}+|y(\vartheta_{t}\omega)|,&\text{ for }d=r=3\text{ and }2\beta\mu\geq 1,\end{cases}\]
for a.e. \(t\in[0,T]\). Making use of Gronwall's inequality in (3.30) over \((0,T)\), we obtain
\[\|\mathscr{U}^{\tau}(T)\|_{\mathbb{H}}^{2}\leq\bigg{[}\|\mathscr{U}^{\tau}(0 )\|_{\mathbb{H}}^{2}+\int_{0}^{T}\|\mathbf{f}(t+\tau)-\mathbf{f}_{\infty}\|_{\mathbb{ L}^{2}(\mathbb{R}^{d})}^{2}\mathrm{d}t\bigg{]}e^{\int_{0}^{T}S(t)\mathrm{d}t}.\]
Since \(y\) is continuous and \(\widetilde{\mathbf{u}}\in\mathrm{L}^{2}(0,T;\mathbb{V})\), the integral \(\int_{0}^{T}S(t)\mathrm{d}t\) is bounded. From Assumption 1.1 (particularly, (1.2)), we deduce
\[\int_{0}^{T}\|\mathbf{f}(t+\tau)-\mathbf{f}_{\infty}\|_{\mathbb{L}^{2}(\mathbb{R}^{d}) }^{2}\mathrm{d}t\leq\int_{-\infty}^{\tau+T}\|\mathbf{f}(t)-\mathbf{f}_{\infty}\|_{ \mathbb{L}^{2}(\mathbb{R}^{d})}^{2}\mathrm{d}t\to 0\ \text{ as }\ \tau\to-\infty. \tag{3.31}\]
Using the fact that \(\int_{0}^{T}S(t)\mathrm{d}t\) is bounded, (3.31) and \(\lim\limits_{\tau\to-\infty}\|\mathscr{U}^{\tau}(0)\|_{\mathbb{H}}^{2}=0\), we conclude the proof.
### Increasing random absorbing sets
In this subsection, we prove the existence of a pullback \(\mathfrak{D}\)-random absorbing set for the system (2.6) with \(S(\mathbf{v})=\mathbf{v}\).
**Lemma 3.6**.: _For all the cases given in Table 1, suppose that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d }))\). Then, for each \((\tau,\omega,D)\in\mathbb{R}\times\Omega\times\mathfrak{D},\) there exists a time \(\mathcal{T}:=\mathcal{T}(\tau,\omega,D)>0\) such that_
\[\sup_{s\leq\tau}\sup_{t\geq\mathcal{T}}\sup_{\mathbf{u}_{0}\in D(s-t, \vartheta_{-t}\omega)}\bigg{[}\|\mathbf{u}(s,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0} )\|_{\mathbb{H}}^{2}\] \[+\frac{\alpha}{2}\int_{s-t}^{s}e^{\alpha(\zeta-s)-2\sigma\int_{s }^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\mathbf{u}(\zeta,s-t, \vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+2\mu\int_{s-t}^{s}e^{\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y( \vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\nabla\mathbf{u}(\zeta,s-t,\vartheta_{-s }\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+2\beta\int_{s-t}^{s}e^{(r-1)y(\vartheta_{\zeta-s}\omega)+\alpha (\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d}\eta}\| \mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{L}^{r+1}}^{r+1} \mathrm{d}\zeta\bigg{]} \tag{3.32}\] \[\leq\frac{4}{\alpha}\sup_{s\leq\tau}\int_{-\infty}^{0}e^{\alpha \zeta+2|y(\vartheta_{\zeta}\omega)|+2\sigma\int_{\zeta}^{0}y(\vartheta_{\eta} \omega)\mathrm{d}\eta}\|\mathbf{f}(\zeta+s)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2} \mathrm{d}\zeta=:\frac{4}{\alpha}\sup_{s\leq\tau}K(s,\omega).\]
_Furthermore, for \(2<k_{1}<\infty\) and \(k_{2}>0\), there exists a time \(\mathcal{T}^{*}:=\mathcal{T}^{*}(\tau,\omega,D,k_{1})>0\) such that_
\[\sup_{s\leq\tau}\sup_{t\geq\mathcal{T}^{*}}\sup_{\mathbf{u}_{0}\in D(s-t,\vartheta_{-t}\omega)}\int_{s-t}^{s}e^{k_{2}|y(\vartheta_{\zeta-s}\omega)|+\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{k_{1}}\mathrm{d}\zeta\] \[\leq C\int_{-\infty}^{0}e^{k_{2}\left|y(\vartheta_{\zeta}\omega)\right|+\frac{\alpha}{k_{1}}\zeta-(k_{1}-2)\sigma\int_{\zeta}^{0}y(\vartheta_{\eta}\omega)\mathrm{d}\eta}\mathrm{d}\zeta \tag{3.33}\] \[\quad\times\bigg{[}\int_{-\infty}^{0}e^{\frac{2(k_{1}-1)\alpha}{k_{1}^{2}}\zeta_{1}+2|y(\vartheta_{\zeta_{1}}\omega)|+2\sigma\int_{\zeta_{1}}^{0}y(\vartheta_{\eta}\omega)\mathrm{d}\eta}\|\mathbf{f}(\zeta_{1}+s)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}\mathrm{d}\zeta_{1}\bigg{]}^{\frac{k_{1}}{2}}.\]
Proof.: Let us write the energy inequality (3.3) for \(\mathbf{u}(\zeta)=\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\), that is,
\[\frac{\mathrm{d}}{\mathrm{d}\zeta}\|\mathbf{u}(\zeta)\|_{\mathbb{H}}^{2 }+(\alpha-2\sigma y(\vartheta_{\zeta-s}\omega))\|\mathbf{u}(\zeta)\|_{\mathbb{H}}^ {2}+\frac{\alpha}{2}\|\mathbf{u}(\zeta)\|_{\mathbb{H}}^{2}+2\mu\|\nabla\mathbf{u}(\zeta )\|_{\mathbb{H}}^{2}+2\beta e^{(r-1)y(\vartheta_{\zeta-s}\omega)}\|\mathbf{u}( \zeta)\|_{\mathbb{L}^{r+1}}^{r+1} \tag{3.34}\] \[\leq\frac{2e^{2|y(\vartheta_{\zeta-s}\omega)|}}{\alpha}\|\mathbf{f}( \zeta)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}.\]
In view of the variation of constants formula with respect to \(\zeta\in(s-t,\xi)\), we get
\[\|\mathbf{u}(\xi,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^ {2}+\frac{\alpha}{2}\int_{s-t}^{\xi}e^{\alpha(\zeta-\xi)-2\sigma\int_{\xi}^{ \zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\mathbf{u}(\zeta,s-t,\vartheta_ {-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+2\mu\int_{s-t}^{\xi}e^{\alpha(\zeta-\xi)-2\sigma\int_{\xi}^{ \zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\nabla\mathbf{u}(\zeta,s-t, \vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+2\beta\int_{s-t}^{\xi}e^{(r-1)y(\vartheta_{\zeta-s}\omega)+ \alpha(\zeta-\xi)-2\sigma\int_{\xi}^{\zeta}y(\vartheta_{\eta-s}\omega) \mathrm{d}\eta}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{ L}^{r+1}}^{r+1}\mathrm{d}\zeta \tag{3.35}\] \[\leq e^{-\alpha(\xi-s+t)+2\sigma\int_{-t}^{\xi-s}y(\vartheta_{\eta }\omega)\mathrm{d}\eta}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2}+\frac{2}{\alpha}\int_{ -t}^{\xi-s}e^{\alpha(\zeta+s-\xi)+2|y(\vartheta_{\zeta}\omega)|+2\sigma\int_{ \zeta}^{\xi-s}y(\vartheta_{\eta}\omega)\mathrm{d}\eta}\|\mathbf{f}(\zeta+s)\|_{ \mathbb{L}^{2}(\mathbb{R}^{d})}^{2}\mathrm{d}\zeta.\]
Putting \(\xi=s\) in (3.35), we find
\[\|\mathbf{u}(s,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{ 2}+\frac{\alpha}{2}\int_{s-t}^{s}e^{\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y( \vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s} \omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+2\mu\int_{s-t}^{s}e^{\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y( \vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\nabla\mathbf{u}(\zeta,s-t,\vartheta_{-s }\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+2\beta\int_{s-t}^{s}e^{(r-1)y(\vartheta_{\zeta-s}\omega)+ \alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d} \eta}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{L}^{r+1}}^ {r+1}\mathrm{d}\zeta \tag{3.36}\] \[\leq e^{-\alpha t+2\sigma\int_{-t}^{0}y(\vartheta_{\eta}\omega) \mathrm{d}\eta}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2}+\frac{2}{\alpha}\int_{-\infty} ^{0}e^{\alpha\zeta+2|y(\vartheta_{\zeta}\omega)|+2\sigma\int_{\zeta}^{0}y( \vartheta_{\eta}\omega)\mathrm{d}\eta}\|\mathbf{f}(\zeta+s)\|_{\mathbb{L}^{2}( \mathbb{R}^{d})}^{2}\mathrm{d}\zeta,\]
for all \(s\leq\tau\). Since \(\mathbf{u}_{0}\in D(s-t,\vartheta_{-t}\omega)\) and \(D\) is backward tempered, it follows from (2.9) and the definition of backward temperedness (2.12) that there exists a time \(\mathcal{T}=\mathcal{T}(\tau,\omega,D)\) such that for all \(t\geq\mathcal{T}\),
\[e^{-\alpha t+2\sigma\int_{-t}^{0}y(\vartheta_{\eta}\omega)\mathrm{ d}\eta}\sup_{s\leq\tau}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2} \tag{3.37}\] \[\leq e^{-\frac{\alpha}{3}t}\sup_{s\leq\tau}\|D(s-t,\vartheta_{-t }\omega)\|_{\mathbb{H}}^{2}\leq\frac{2}{\alpha}\int_{-\infty}^{0}e^{\alpha\zeta +2|y(\vartheta_{\zeta}\omega)|+2\sigma\int_{\zeta}^{0}y(\vartheta_{\eta}\omega) \mathrm{d}\eta}\|\mathbf{f}(\zeta+s)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2} \mathrm{d}\zeta.\]
Hence, by using (3.37) and taking the supremum over \(s\in(-\infty,\tau]\) in (3.36), we arrive at (3.32). Now, using (3.35), we estimate, for \(2<k_{1}<\infty\) and \(k_{2}>0\),
\[\int_{s-t}^{s}e^{k_{2}|y(\vartheta_{\zeta-s}\omega)|+\alpha(\zeta-s )-2\sigma\int_{s}^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\mathbf{u}( \zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{k_{1}}\mathrm{d}\zeta\] \[\leq C\int_{s-t}^{s}e^{k_{2}|y(\vartheta_{\zeta-s}\omega)|+\alpha( \zeta-s)+2\sigma\int_{\zeta-s}^{0}y(\vartheta_{\eta}\omega)\mathrm{d}\eta}\bigg{[} e^{-\frac{k_{1}}{2}\alpha(\zeta-s+t)+k_{1}\sigma\int_{-t}^{\zeta-s}y(\vartheta_{\eta} \omega)\mathrm{d}\eta}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{k_{1}}\] \[\quad+\bigg{(}\int_{-t}^{\zeta-s}e^{\alpha(\zeta_{1}+s-\zeta)+2|y( \vartheta_{\zeta_{1}}\omega)|+2\sigma\int_{\zeta_{1}}^{\zeta-s}y(\vartheta_{\eta} \omega)\mathrm{d}\eta}\|\mathbf{f}(\zeta_{1}+s)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2} \mathrm{d}\zeta_{1}\bigg{)}^{\frac{k_{1}}{2}}\bigg{]}\mathrm{d}\zeta\]
\[\leq C\int_{-\infty}^{0}e^{k_{2}\left|y(\vartheta_{\zeta}\omega) \right|+\frac{\alpha}{k_{1}}\zeta-(k_{1}-2)\sigma\int_{\zeta}^{0}y(\vartheta_{ \eta}\omega)\mathrm{d}\eta}\mathrm{d}\zeta\times\bigg{[}e^{-\frac{(k_{1}-1) \alpha}{k_{1}}t+k_{1}\sigma\int_{-t}^{0}y(\vartheta_{\eta}\omega)\mathrm{d} \eta}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{k_{1}} \tag{3.38}\] \[\quad+\bigg{(}\int_{-\infty}^{0}e^{\frac{2(k_{1}-1)\alpha}{k_{1}^{ 2}}\zeta_{1}+2|y(\vartheta_{\zeta_{1}}\omega)|+2\sigma\int_{\zeta_{1}}^{0}y( \vartheta_{\eta}\omega)\mathrm{d}\eta}\|\mathbf{f}(\zeta_{1}+s)\|_{\mathbb{L}^{2} (\mathbb{R}^{d})}^{2}\mathrm{d}\zeta_{1}\bigg{)}^{\frac{k_{1}}{2}}\bigg{]}.\]
Hence, using (2.9) and the backward-uniform temperedness property (2.12) of \(\mathbf{u}_{0}\) (see (3.37)), we obtain (3.33), as required.
**Proposition 3.7**.: _For all the cases given in Table 1, suppose that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d }))\) and Assumption 1.1 is satisfied. For \(K(\tau,\omega)\) same as in (3.32), we have_
(i) _There is an increasing pullback \(\mathfrak{D}\)-random absorbing set \(\mathscr{K}\) given by_
\[\mathscr{K}(\tau,\omega):=\left\{\mathbf{v}\in\mathbb{H}:\|\mathbf{v}\|_{\mathbb{H}}^{ 2}\leq\frac{4e^{y(\omega)}}{\alpha}\sup_{s\leq\tau}K(s,\omega)\right\},\ \text{ for all }\ \tau\in\mathbb{R}\text{ and }\omega\in\Omega. \tag{3.39}\]
_Moreover, \(\mathscr{K}\) is backward-uniformly tempered with arbitrary rate, that is, \(\mathscr{K}\in\mathfrak{D}\)._
(ii) _There is a \(\mathfrak{B}\)-pullback **random** absorbing set \(\widetilde{\mathscr{K}}\) given by_
\[\widetilde{\mathscr{K}}(\tau,\omega):=\left\{\mathbf{v}\in\mathbb{H}:\|\mathbf{v}\|_{ \mathbb{H}}^{2}\leq\frac{4e^{y(\omega)}}{\alpha}K(\tau,\omega)\right\}\in \mathfrak{B},\ \text{ for all }\ \tau\in\mathbb{R}\text{ and }\omega\in\Omega. \tag{3.40}\]
Proof.: See the proof of [73, Proposition 4.6].
### Backward uniform tail-estimates and backward flattening-property
In this subsection, we show that the solution of the system (3.1) satisfies the _backward uniform tail-estimates_ and _backward flattening-property_ for \(d=2\) with \(r\in\{1\}\cup[2,\infty)\), \(d=3\) with \(r\in(3,\infty)\) and \(d=r=3\) with \(2\beta\mu\geq 1\). These estimates help us to obtain the backward uniform pullback \(\mathfrak{D}\)-asymptotic compactness of \(\Phi\). We use a cut-off function technique to obtain _backward uniform tail-estimates_ and _backward flattening-property_. The following lemma provides the backward uniform tail-estimates for the solutions of the system (3.1).
**Lemma 3.8**.: _For all the cases given in Table 1, suppose that Assumption 1.1 holds. Then, for any \((\tau,\omega,D)\in\mathbb{R}\times\Omega\times\mathfrak{D},\) the solution of (3.1) satisfies_
\[\lim_{k,t\to+\infty}\sup_{s\leq\tau}\sup_{\mathbf{u}_{0}\in D(s-t,\vartheta_{-t}\omega)}\|\mathbf{u}(s,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{L}^{2}(\mathbb{O}_{k}^{c})}^{2}=0, \tag{3.41}\]
_where \(\mathbb{O}_{k}=\{x\in\mathbb{R}^{d}:|x|\leq k\},\,k\in\mathbb{N}.\)_
Proof.: Let \(\mathsf{\rho}\) be a smooth function such that \(0\leq\mathsf{\rho}(\xi)\leq 1,\) for \(\xi\in\mathbb{R}^{+}\) and
\[\mathsf{\rho}(\xi)=\begin{cases}0,\text{ for }0\leq\xi\leq 1,\\ 1,\text{ for }\xi\geq 2.\end{cases}\]
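One admissible choice (given here only as an illustration; any \(\mathsf{\rho}\) with the stated properties works) is
\[\mathsf{\rho}(\xi)=\frac{\psi(\xi-1)}{\psi(\xi-1)+\psi(2-\xi)},\qquad\psi(s)=\begin{cases}e^{-1/s},&s>0,\\ 0,&s\leq 0,\end{cases}\]
which is smooth on \(\mathbb{R}^{+}\), vanishes on \([0,1]\) and equals \(1\) on \([2,\infty)\).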
Then, there exists a positive constant \(C\) such that \(|\mathsf{\rho}^{\prime}(\xi)|\leq C\) for all \(\xi\in\mathbb{R}^{+}\). Taking the divergence of the first equation in (3.1), we obtain formally, in the weak sense,
\[-e^{-y(\vartheta_{t}\omega)}\Delta p=e^{y(\vartheta_{t}\omega)}\nabla\cdot \big{[}\big{(}\mathbf{u}\cdot\nabla\big{)}\mathbf{u}\big{]}+\beta e^{(r-1)y( \vartheta_{t}\omega)}\nabla\cdot\big{[}|\mathbf{u}|^{r-1}\mathbf{u}\big{]}-e^{-y( \vartheta_{t}\omega)}\nabla\cdot\mathbf{f}\]
\[=e^{y(\vartheta_{t}\omega)}\nabla\cdot\big{[}\nabla\cdot\big{(} \boldsymbol{u}\otimes\boldsymbol{u}\big{)}\big{]}+\beta e^{(r-1)y(\vartheta_{t} \omega)}\nabla\cdot\big{[}|\boldsymbol{u}|^{r-1}\boldsymbol{u}\big{]}-e^{-y( \vartheta_{t}\omega)}\nabla\cdot\boldsymbol{f}\] \[=e^{y(\vartheta_{t}\omega)}\sum_{i,j=1}^{d}\frac{\partial^{2}}{ \partial x_{i}\partial x_{j}}\big{(}u_{i}u_{j}\big{)}+\beta e^{(r-1)y(\vartheta _{t}\omega)}\nabla\cdot\big{[}|\boldsymbol{u}|^{r-1}\boldsymbol{u}\big{]}-e^{-y (\vartheta_{t}\omega)}\nabla\cdot\boldsymbol{f},\]
which implies
\[p=(-\Delta)^{-1}\Bigg{[}e^{2y(\vartheta_{t}\omega)}\sum_{i,j=1}^{d}\frac{ \partial^{2}}{\partial x_{i}\partial x_{j}}\big{(}u_{i}u_{j}\big{)}+\beta e^{ ry(\vartheta_{t}\omega)}\nabla\cdot\big{[}|\boldsymbol{u}|^{r-1}\boldsymbol{u} \big{]}-\nabla\cdot\boldsymbol{f}\Bigg{]}, \tag{3.42}\]
in the weak sense. Taking the inner product of the first equation of (3.1) with \(\mathsf{\rho}\Big{(}\frac{|x|^{2}}{k^{2}}\Big{)}\boldsymbol{u}\), we have
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{d}}\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\boldsymbol{u}|^{2}\mathrm{d}x =\mu\int_{\mathbb{R}^{d}}(\Delta\boldsymbol{u})\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{u}\mathrm{d}x-\alpha\int_{\mathbb{R}^{d}}\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\boldsymbol{u}|^{2}\mathrm{d}x-e^{y(\vartheta_{t}\omega)}b\bigg{(}\boldsymbol{u},\boldsymbol{u},\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{u}\bigg{)} \tag{3.43}\] \[\quad-\beta e^{(r-1)y(\vartheta_{t}\omega)}\int_{\mathbb{R}^{d}}|\boldsymbol{u}|^{r+1}\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\mathrm{d}x-e^{-y(\vartheta_{t}\omega)}\int_{\mathbb{R}^{d}}(\nabla p)\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{u}\mathrm{d}x\] \[\quad+e^{-y(\vartheta_{t}\omega)}\int_{\mathbb{R}^{d}}\boldsymbol{f}\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{u}\mathrm{d}x+\sigma y(\vartheta_{t}\omega)\int_{\mathbb{R}^{d}}\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\boldsymbol{u}|^{2}\mathrm{d}x.\]
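Before estimating the individual terms, we record, for the reader's convenience, the standard Calderón-Zygmund bound that underlies the pressure estimates based on (3.42) (this is the Fourier-multiplier fact behind the phrase "Fourier transformation" used below): writing \((-\Delta)^{-1}\partial_{x_{i}}\partial_{x_{j}}=\mathrm{R}_{i}\mathrm{R}_{j}\), where \(\mathrm{R}_{i}=\partial_{x_{i}}(-\Delta)^{-1/2}\) denotes the Riesz transform, the boundedness of \(\mathrm{R}_{i}\) on \(\mathbb{L}^{q}(\mathbb{R}^{d})\) for \(1<q<\infty\) yields
\[\big{\|}(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}\nabla\cdot\big{(}\boldsymbol{u}\otimes\boldsymbol{u}\big{)}\big{]}\big{]}\big{\|}_{\mathbb{L}^{q}(\mathbb{R}^{d})}\leq C_{q}\|\boldsymbol{u}\otimes\boldsymbol{u}\|_{\mathbb{L}^{q}(\mathbb{R}^{d})}\leq C_{q}\|\boldsymbol{u}\|_{\mathbb{L}^{2q}}^{2},\qquad 1<q<\infty,\]
which is the bound used, for instance, in (3.61) and (4.48) below.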
Let us now estimate each term on the right hand side of (3.43). Integration by parts and divergence free condition of \(\boldsymbol{u}(\cdot)\) help us to obtain
\[\mu\int_{\mathbb{R}^{d}}(\Delta\boldsymbol{u})\mathsf{\rho}\bigg{(} \frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{u}\mathrm{d}x+\mu\int_{\mathbb{R}^{d} }|\nabla\boldsymbol{u}|^{2}\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)} \mathrm{d}x\] \[=-\mu\int_{\mathbb{R}^{d}}\mathsf{\rho}^{\prime}\bigg{(}\frac{|x|^ {2}}{k^{2}}\bigg{)}\frac{2}{k^{2}}(x\cdot\nabla)\boldsymbol{u}\cdot\boldsymbol {u}\mathrm{d}x \tag{3.44}\] \[\leq\frac{2\sqrt{2}\mu}{k}\int\limits_{k\leq|x|\leq\sqrt{2}k}| \boldsymbol{u}||\mathsf{\rho}^{\prime}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)} \bigg{|}|\nabla\boldsymbol{u}|\mathrm{d}x\leq\frac{C}{k}\int_{\mathbb{R}^{d}} |\boldsymbol{u}||\nabla\boldsymbol{u}|\mathrm{d}x\leq\frac{C}{k}\big{[}\| \boldsymbol{u}\|_{\mathbb{H}}^{2}+\|\nabla\boldsymbol{u}\|_{\mathbb{H}}^{2} \big{]},\]
and
\[-e^{y(\vartheta_{t}\omega)}b\bigg{(}\boldsymbol{u},\boldsymbol{u},\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{u}\bigg{)} =e^{y(\vartheta_{t}\omega)}\int_{\mathbb{R}^{d}}\mathsf{\rho}^{ \prime}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\frac{x}{k^{2}}\cdot\boldsymbol{u} |\boldsymbol{u}|^{2}\mathrm{d}x \tag{3.45}\] \[\leq\frac{C}{k}e^{|y(\vartheta_{t}\omega)|}\|\boldsymbol{u}\|_{ \mathbb{H}}^{\frac{6-d}{2}}\|\nabla\boldsymbol{u}\|_{\mathbb{H}}^{\frac{d}{2}} \leq\frac{C}{k}\bigg{[}\|\nabla\boldsymbol{u}\|_{\mathbb{H}}^{2}+e^{\frac{4|y (\vartheta_{t}\omega)|}{4-d}}\|\boldsymbol{u}\|_{\mathbb{H}}^{\frac{2(6-d)}{4- d}}\bigg{]},\]
where we have used Ladyzhenskaya's (for both \(d=2,3\)) and Young's inequalities in the penultimate and final inequalities, respectively. Using integration by parts, divergence free condition and (3.42), we obtain
\[-e^{-y(\vartheta_{t}\omega)}\int_{\mathbb{R}^{d}}(\nabla p)\mathsf{ \rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{u}\mathrm{d}x=e^{-y( \vartheta_{t}\omega)}\int_{\mathbb{R}^{d}}p\mathsf{\rho}^{\prime}\bigg{(}\frac{ |x|^{2}}{k^{2}}\bigg{)}\frac{2}{k^{2}}(x\cdot\boldsymbol{u})\mathrm{d}x\] \[\quad\leq\frac{Ce^{|y(\vartheta_{t}\omega)|}}{k}\int\limits_{ \mathbb{R}^{d}}|(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}\nabla\cdot\big{(} \boldsymbol{u}\otimes\boldsymbol{u}\big{)}\big{]}\big{]}|\cdot|\boldsymbol{u}| \mathrm{d}x\]
\[\leq Ce^{|y(\vartheta_{t}\omega)|}\|\mathbf{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}\|\mathbf{u}\|_{\mathbb{H}}^{\frac{4-d}{2}}\|\nabla\mathbf{u}\|_{\mathbb{H}}^{\frac{d-2}{2}} \tag{3.49}\] \[\leq Ce^{2|y(\vartheta_{t}\omega)|}\|\mathbf{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}^{2}+C\|\mathbf{u}\|_{\mathbb{H}}^{2}+C\|\nabla\mathbf{u}\|_{\mathbb{H}}^{2}.\]
Finally, we estimate the penultimate term on the right hand side (RHS) of (3.43) by using Hölder's and Young's inequalities as follows:
\[e^{-y(\vartheta_{t}\omega)}\int_{\mathbb{R}^{d}}\mathbf{f}(x) \rho\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\mathbf{u}\mathrm{d}x\leq\frac{\alpha}{4} \int_{\mathbb{R}^{d}}\rho\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\mathbf{u}|^{2} \mathrm{d}x+\frac{e^{2|y(\vartheta_{t}\omega)|}}{\alpha}\int_{\mathbb{R}^{d} }\rho\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\mathbf{f}(x)|^{2}\mathrm{d}x. \tag{3.50}\]
Combining (3.43)-(3.50), we get
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\mathbf{u}\|_{\mathbb{L}^{2}(\mathcal{O}_{k}^{c})}^{2}+(\alpha-2\sigma y(\vartheta_{t}\omega))\|\mathbf{u}\|_{\mathbb{L}^{2}(\mathcal{O}_{k}^{c})}^{2}\] \[\leq\frac{C}{k}\Big{[}\|\mathbf{u}\|_{\mathbb{H}}^{2}+\|\nabla\mathbf{u}\|_{\mathbb{H}}^{2}+e^{(r-1)y(\vartheta_{t}\omega)}\|\mathbf{u}\|_{\mathbb{L}^{r+1}}^{r+1}+e^{2|y(\vartheta_{t}\omega)|}\|\mathbf{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}^{2}\Big{]} \tag{3.51}\] \[\quad+\frac{C}{k}\Big{[}e^{\frac{4|y(\vartheta_{t}\omega)|}{4-d}}\|\mathbf{u}\|_{\mathbb{H}}^{\frac{2(6-d)}{4-d}}+e^{(r-1)|y(\vartheta_{t}\omega)|}\|\mathbf{u}\|_{\mathbb{H}}^{r+1}\Big{]}+\frac{2e^{2|y(\vartheta_{t}\omega)|}}{\alpha}\int_{|x|\geq k}|\mathbf{f}(x)|^{2}\mathrm{d}x.\]
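In the next step, the following elementary variation of constants (Grönwall-type) inequality is used: if an absolutely continuous function \(X\geq 0\) satisfies \(\frac{\mathrm{d}X}{\mathrm{d}\zeta}+a(\zeta)X(\zeta)\leq b(\zeta)\) for a.e. \(\zeta\in(s-t,s)\), with \(a\) locally integrable and \(b\geq 0\), then
\[X(s)\leq e^{-\int_{s-t}^{s}a(\eta)\mathrm{d}\eta}X(s-t)+\int_{s-t}^{s}e^{-\int_{\zeta}^{s}a(\eta)\mathrm{d}\eta}b(\zeta)\mathrm{d}\zeta.\]
It is applied here with \(a(\zeta)=\alpha-2\sigma y(\vartheta_{\zeta-s}\omega)\) (after replacing \(\omega\) by \(\vartheta_{-s}\omega\)) and with \(b\) given by the right hand side of (3.51).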
Applying the variation of constants formula to the above equation (3.51) on \((s-t,s)\) and replacing \(\omega\) by \(\vartheta_{-s}\omega\), for \(s\leq\tau\), \(t\geq 0\) and \(\omega\in\Omega\), we find
\[\|\mathbf{u}(s,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{L}^{2} (\mathcal{O}_{k}^{c})}^{2}\] \[\leq e^{-\alpha t+2\sigma\int_{-t}^{0}y(\vartheta_{n}\omega) \mathrm{d}n}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2}+\frac{C}{k}\bigg{[}\int_{s-t}^{s} e^{\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{n-s}\omega)\mathrm{d}n}\| \mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[\quad+\int_{s-t}^{s}e^{\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y( \vartheta_{n-s}\omega)\mathrm{d}n}\|\nabla\mathbf{u}(\zeta,s-t,\vartheta_{-s} \omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[\quad+\int_{s-t}^{s}e^{(r-1)y(\vartheta_{\zeta-s}\omega)+\alpha( \zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{n-s}\omega)\mathrm{d}n}\|\mathbf{u} (\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}^{r+1}}^{r+1}\mathrm{ d}\zeta\] \[\quad+\int_{-t}^{0}e^{\alpha\zeta+2|y(\vartheta_{\zeta}\omega)| +2\sigma\int_{\zeta}^{0}y(\vartheta_{n}\omega)\mathrm{d}n}\|\mathbf{f}(\zeta+s)\| _{\mathbb{L}^{1}(\mathbb{R}^{d})}^{2}\mathrm{d}\zeta\bigg{]}\] \[\quad+\frac{C}{k}\bigg{[}\int_{s-t}^{s}e^{\frac{4|y(\vartheta_{ \zeta-s}\omega)|}{4-d}+\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{n -s}\omega)\mathrm{d}n}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{ \mathbb{H}}^{\frac{2(6-d)}{4-d}}\mathrm{d}\zeta\] \[\quad+\int_{s-t}^{s}e^{(r-1)|y(\vartheta_{\zeta-s}\omega)|+\alpha (\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{n-s}\omega)\mathrm{d}n}\|\mathbf{u} (\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{r+1}\mathrm{d}\zeta \bigg{]} \tag{3.52}\] \[\quad+C\int_{-t}^{0}e^{\alpha\zeta+2|y(\vartheta_{\zeta}\omega)| +2\sigma\int_{\zeta}^{0}y(\vartheta_{n}\omega)\mathrm{d}n}\int_{|x|\geq k}| \mathbf{f}(x,\zeta+s)|^{2}\mathrm{d}x\mathrm{d}\zeta.\]
Now using (2.9), the definition of backward-uniform temperedness (2.12) (for the first term on RHS of (3.52)), Lemma 3.6 ((3.32) and (3.33) for the second and third terms on RHS of (3.52), respectively) and (1.8) (for the final term on RHS of (3.52)), we immediately complete the proof.
The following lemma provides the backward flattening-property for the solution of the system (3.1). For each \(k\geq 1\), we let
\[\varrho_{k}(x):=1-\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)},\ \ x\in\mathbb{R}^{d}.\]
Let \(\bar{\mathbf{u}}:=\varrho_{k}\mathbf{u}\) for \(\mathbf{u}:=\mathbf{u}(s,s-t,\omega,\mathbf{u}_{\tau})\in\mathbb{H}\). Then \(\bar{\mathbf{u}}\in\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})\), which has the orthogonal decomposition:
\[\bar{\mathbf{u}}=\mathrm{P}_{i}\bar{\mathbf{u}}\oplus(\mathrm{I}-\mathrm{P}_{i})\bar{\mathbf{u}}=:\bar{\mathbf{u}}_{i,1}+\bar{\mathbf{u}}_{i,2},\quad\text{for each }i\in\mathbb{N}, \tag{3.53}\]
where, \(\mathrm{P}_{i}:\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})\to\mathbb{H}_{i}:=\mathrm{ span}\{e_{1},e_{2},\cdots,e_{i}\}\subset\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})\) is a canonical projection and \(\{e_{j}\}_{j=1}^{\infty}\) is the family of eigenfunctions for \(-\Delta\) in \(\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})\) with corresponding positive eigenvalues \(\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{j}\to\infty\) as \(j\to\infty\). We also have that
\[\varrho_{k}\Delta\mathbf{u}=\Delta\bar{\mathbf{u}}-\mathbf{u}\Delta\varrho_{k}-2\nabla \varrho_{k}\cdot\nabla\mathbf{u}.\]
Furthermore, for \(\mathbf{\psi}\in\mathbb{H}_{0}^{1}(\mathcal{O}_{\sqrt{2}k})\), we have
\[\mathrm{P}_{i}\mathbf{\psi} =\sum_{j=1}^{i}(\mathbf{\psi},e_{j})e_{j},\ \nabla\mathrm{P}_{i}\mathbf{\psi}= \mathrm{A}^{1/2}\mathrm{P}_{i}\mathbf{\psi}=\sum_{j=1}^{i}\lambda_{j}^{1/2}(\mathbf{ \psi},e_{j})e_{j},\] \[(\mathrm{I}-\mathrm{P}_{i})\mathbf{\psi} =\sum_{j=i+1}^{\infty}(\mathbf{\psi},e_{j})e_{j},\ \nabla(\mathrm{I}-\mathrm{P}_{i})\mathbf{\psi}= \mathrm{A}^{1/2}(\mathrm{I}-\mathrm{P}_{i})\mathbf{\psi}=\sum_{j=i+1}^{\infty} \lambda_{j}^{1/2}(\mathbf{\psi},e_{j})e_{j}, \tag{3.54}\] \[\|\nabla(\mathrm{I}-\mathrm{P}_{i})\mathbf{\psi}\|_{\mathrm{L}^{2}( \mathcal{O}_{\sqrt{2}k})}^{2} =\sum_{j=i+1}^{\infty}\lambda_{j}|(\mathbf{\psi},e_{j})|^{2}\geq \lambda_{i+1}\sum_{j=i+1}^{\infty}|(\mathbf{\psi},e_{j})|^{2}=\lambda_{i+1}\|( \mathrm{I}-\mathrm{P}_{i})\mathbf{\psi}\|_{\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k}) }^{2}.\]
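Two elementary observations are used repeatedly in the proof of the next lemma: the last inequality in (3.54), in the equivalent form \(\|(\mathrm{I}-\mathrm{P}_{i})\boldsymbol{\psi}\|_{\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k})}\leq\lambda_{i+1}^{-1/2}\|\nabla(\mathrm{I}-\mathrm{P}_{i})\boldsymbol{\psi}\|_{\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k})}\), is what produces the factors \(\lambda_{i+1}^{-\theta}\) (\(\theta>0\)) in the estimates (3.58)-(3.63) below, and the divergence \(\lambda_{i+1}\to\infty\) is guaranteed, for instance, by Weyl's asymptotic law for the Laplacian on the bounded domain \(\mathcal{O}_{\sqrt{2}k}\),
\[\lambda_{j}\sim c_{d}\bigg{(}\frac{j}{|\mathcal{O}_{\sqrt{2}k}|}\bigg{)}^{2/d}\quad\text{as }\ j\to\infty,\]
where \(c_{d}>0\) depends only on the dimension \(d\); only this divergence, and not the precise rate, is needed below.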
**Lemma 3.9**.: _For all the cases given in Table 1, suppose that Assumption 1.1 is satisfied. Let \((\tau,\omega,D)\in\mathbb{R}\times\Omega\times\mathfrak{D}\) and \(k\geq 1\) be fixed. Then_
\[\lim_{i,t\to+\infty}\sup_{s\leq\tau}\sup_{\mathbf{u}_{0}\in D(s-t, \vartheta_{-t}\omega)}\|(\mathrm{I}-\mathrm{P}_{i})\bar{\mathbf{u}}(s,s-t,\vartheta _{-s}\omega,\bar{\mathbf{u}}_{0,2})\|_{\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{ 2}=0, \tag{3.55}\]
_where \(\bar{\mathbf{u}}_{0,2}=(\mathrm{I}-\mathrm{P}_{i})(\varrho_{k}\mathbf{u}_{0})\)._
Proof.: Multiplying the first equation of (3.1) by \(\varrho_{k}\), we rewrite it as:
\[\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t}-\mu\Delta\bar{\mathbf{u}}+ e^{y(\vartheta_{t}\omega)}\varrho_{k}(\mathbf{u}\cdot\nabla)\mathbf{u}+\alpha\bar{\mathbf{u}} +\beta e^{(r-1)y(\vartheta_{t}\omega)}\varrho_{k}|\mathbf{u}|^{r-1}\mathbf{u} \tag{3.56}\] \[=-e^{-y(\vartheta_{t}\omega)}\varrho_{k}\nabla p+e^{-y(\vartheta_ {t}\omega)}\varrho_{k}\mathbf{f}+\sigma y(\vartheta_{t}\omega)\bar{\mathbf{u}}-\mu\mathbf{ u}\Delta\varrho_{k}-2\mu\nabla\varrho_{k}\cdot\nabla\mathbf{u}.\]
Applying \((\mathrm{I}-\mathrm{P}_{i})\) to the equation (3.56) and taking the inner product of the resulting equation with \(\bar{\mathbf{u}}_{i,2}\) in \(\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})\) gives
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\bar{\mathbf{u}}_{i,2}\|_ {\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}+\mu\|\nabla\bar{\mathbf{u}}_{i,2}\|_ {\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}+(\alpha-\sigma y(\vartheta_{t} \omega))\|\bar{\mathbf{u}}_{i,2}\|_{\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}+ \beta e^{(r-1)y(\vartheta_{t}\omega)}\|\mathbf{u}|^{\frac{r-1}{2}}\bar{\mathbf{u}}_{i,2 }\|_{\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2} \tag{3.57}\] \[=-\underbrace{e^{y(\vartheta_{t}\omega)}\sum_{q,q^{\prime}=1}^{d} \int_{\mathcal{O}_{\sqrt{2}k}}(\mathrm{I}-\mathrm{P}_{i})\bigg{[}u_{q}\frac{ \partial u_{q^{\prime}}}{\partial x_{q}}\{\varrho_{k}(x)\}^{2}u_{q^{\prime}} \bigg{]}\mathrm{d}x}_{=:J_{1}}-\underbrace{\big{(}e^{-y(\vartheta_{t}\omega)} \varrho_{k}\nabla p,\bar{\mathbf{u}}_{i,2}\big{)}}_{=:J_{2}}\] \[+\underbrace{\big{\{}\big{(}e^{-y(\vartheta_{t}\omega)}\varrho_ {k}\mathbf{f},\bar{\mathbf{u}}_{i,2}\big{)}-\mu\big{(}\mathbf{u}\Delta\varrho_{k},\bar{\mathbf{ u}}_{i,2}\big{)}-\mu\big{(}2\nabla\varrho_{k}\cdot\nabla\mathbf{u},\bar{\mathbf{u}}_{i,2} \big{)}\Big{\}}}_{=:J_{3}}.\]
Next, we estimate each term of (3.57) as follows: Using integration by parts, the divergence free condition of \(\mathbf{u}(\cdot)\), (3.54) (without loss of generality (WLOG), one may assume that \(\lambda_{i}\geq 1\)), Hölder's and Young's inequalities, we get
\[|J_{1}| =e^{|y(\vartheta_{t}\omega)|}\bigg{|}\int_{\mathcal{O}_{\sqrt{2}k }}(\mathrm{I}-\mathrm{P}_{i})\bigg{[}\rho^{\prime}\bigg{(}\frac{|x|^{2}}{k^{2}} \bigg{)}\frac{x}{k^{2}}\cdot\varrho_{k}(x)\mathbf{u}|\mathbf{u}|^{2}\bigg{]}\mathrm{d}x \bigg{|} \tag{3.58}\] \[\leq Ce^{|y(\vartheta_{t}\omega)|}\|\bar{\mathbf{u}}_{i,2}\|_{\mathrm{ L}^{2}(\mathcal{O}_{\sqrt{2}k})}\|\mathbf{u}\|_{\mathrm{L}^{4}}^{2}\] \[\leq C\lambda_{i+1}^{-\frac{(4-d)}{8}}e^{|y(\vartheta_{t}\omega)|} \|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathrm{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{d- d}{4}}\|\nabla\mathbf{u}\|_{\mathrm{H}}^{\frac{d}{2}}\|\mathbf{u}\|_{\mathrm{H}}^{ \frac{8-d}{4}}\] \[\leq\frac{\mu}{20}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathrm{L}^{2}( \mathcal{O}_{\sqrt{2}k})}^{2}+C\lambda_{i+1}^{-\frac{4-d}{4+d}}\bigg{[}\|\nabla \mathbf{u}\|_{\mathrm{H}}^{2}+e^{\frac{8|y(\vartheta_{t}\omega)|}{4-d}}\|\mathbf{u}\|_{ \mathrm{H}}^{\frac{2(8-d)}{4-d}}\bigg{]},\]
\[|J_{3}| \leq C\bigg{[}e^{|y(\vartheta_{t}\omega)|}\|\mathbf{f}\|_{\mathbb{L}^{2}( \mathbb{R}^{d})}+\|\mathbf{u}\|_{\mathbb{H}}+\|\nabla\mathbf{u}\|_{\mathbb{H}}\bigg{]} \|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})} \tag{3.59}\] \[\leq C\lambda_{i+1}^{-\frac{1}{2}}\bigg{[}\|\mathbf{u}\|_{\mathbb{H}}+ \|\nabla\mathbf{u}\|_{\mathbb{H}}+e^{|y(\vartheta_{t}\omega)|}\|\mathbf{f}\|_{\mathbb{ L}^{2}(\mathbb{R}^{d})}\bigg{]}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}( \mathcal{O}_{\sqrt{2}k})}\] \[\leq\frac{\mu}{20}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}( \mathcal{O}_{\sqrt{2}k})}^{2}+C\lambda_{i+1}^{-1}\bigg{[}\|\mathbf{u}\|_{\mathbb{ H}}^{2}+\|\nabla\mathbf{u}\|_{\mathbb{H}}^{2}+e^{2|y(\vartheta_{t}\omega)|}\|\mathbf{f}\|_{ \mathbb{L}^{2}(\mathbb{R}^{d})}^{2}\bigg{]}.\]
Using integration by parts, divergence free condition and (3.42), we obtain
\[|J_{2}|\] \[=\bigg{|}e^{-y(\vartheta_{t}\omega)}\int_{\mathcal{O}_{\sqrt{2}k} }(\mathrm{I}-\mathrm{P}_{i})p\mathsf{\rho}^{\prime}\bigg{(}\frac{|x|^{2}}{k^{2 }}\bigg{)}\frac{4}{k^{2}}(x\cdot\bar{\mathbf{u}})\mathrm{d}x\bigg{|}\] \[\leq Ce^{|y(\vartheta_{t}\omega)|}\int_{\mathcal{O}_{\sqrt{2}k}} \big{|}(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}\nabla\cdot\big{(}\mathbf{u}\otimes \mathbf{u}\big{)}\big{]}\big{]}\big{|}\cdot|\bar{\mathbf{u}}_{i,2}|\mathrm{d}x\] \[\quad+Ce^{(r-1)y(\vartheta_{t}\omega)}\int_{\mathcal{O}_{\sqrt{2} k}}\big{|}(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}|\mathbf{u}|^{r-1}\mathbf{u}\big{]} \big{]}\big{|}\cdot|\bar{\mathbf{u}}_{i,2}|\mathrm{d}x+Ce^{|y(\vartheta_{t}\omega )|}\int_{\mathcal{O}_{\sqrt{2}k}}|(-\Delta)^{-1}[\nabla\cdot\mathbf{f}]|\cdot|\bar {\mathbf{u}}_{i,2}|\mathrm{d}x \tag{3.60}\] \[=:C\Big{[}\widetilde{S}_{1}(d,r)+\widetilde{S}_{2}(d,r)+ \widetilde{S}_{3}(d,r)\Big{]}.\]
**Estimate of \(\widetilde{S}_{1}(d,r)\):** Using Hölder's inequality, Fourier transformation, Ladyzhenskaya's and Young's inequalities, respectively, we get for \(d=2,3\) (similar to (3.58) above),
\[|\widetilde{S}_{1}(d,r)| \leq e^{|y(\vartheta_{t}\omega)|}\big{\|}(-\Delta)^{-1}\big{[} \nabla\cdot\big{[}\nabla\cdot\big{(}\mathbf{u}\otimes\mathbf{u}\big{)}\big{]}\big{]} \big{\|}_{\mathbb{L}^{2}(\mathcal{R}^{d})}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2 }(\mathcal{O}_{\sqrt{2}k})} \tag{3.61}\] \[\leq e^{|y(\vartheta_{t}\omega)|}\|\mathbf{u}\|_{\mathbb{L}^{4}}^{2} \|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}\] \[\leq\frac{\mu}{20}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}( \mathcal{O}_{\sqrt{2}k})}^{2}+C\lambda_{i+1}^{-\frac{4-d}{4+d}}\bigg{[}\| \nabla\mathbf{u}\|_{\mathbb{H}}^{2}+e^{\frac{8|y(\vartheta_{t}\omega)|}{4-d}} \|\mathbf{u}\|_{\mathbb{H}}^{\frac{2(8-d)}{4-d}}\bigg{]}.\]
**Estimate of \(\widetilde{S}_{2}(d,r)\):** The divergence free condition gives \(\widetilde{S}_{2}(d,r)=0\) for \(r=1\). Therefore, we will consider \(r\in[2,\infty)\) for \(d=2\) and \(r\in[3,\infty)\) for \(d=3\). Applying Hölder's, Gagliardo-Nirenberg's, interpolation and Young's inequalities, we obtain
\[|\widetilde{S}_{2}(d,r)| \leq e^{(r-1)|y(\vartheta_{t}\omega)|}\times\begin{cases}\|(- \Delta)^{-1}\big{[}\nabla\cdot\big{[}|\mathbf{u}|^{r-1}\mathbf{u}\big{]}\big{]}\|_{ \mathbb{L}^{2}(\mathcal{R}^{d})}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}( \mathcal{O}_{\sqrt{2}k})},&\text{for $d=2$ and $r\in[2,\infty)$},\\ \|(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}|\mathbf{u}|^{r-1}\mathbf{u}\big{]}\big{]}\|_{ \mathbb{L}^{2}(\mathcal{R}^{d})}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}( \mathcal{O}_{\sqrt{2}k})},&\text{for $d=3$ and $r\in[3,5]$},\\ \|(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}|\mathbf{u}|^{r-1}\mathbf{u}\big{]}\big{]}\|_{ \mathbb{L}^{\frac{3(r+1)}{2r-1}}(\mathcal{R}^{d})}\|\bar{\mathbf{u}}_{i,2}\|_{ \mathbb{L}^{\frac{3(r+1)}{r+4}}(\mathcal{O}_{\sqrt{2}k})},&\text{for $d=3$ and $r\in(5,\infty)$},\\ &\text{for $d=3$ and $r\in(5,\infty)$},\end{cases}\] \[\leq Ce^{(r-1)|y(\vartheta_{t}\omega)|}\times\begin{cases}\|\mathbf{u} \|_{\mathbb{L}^{r}}^{r}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt {2}k})},&\text{for $d=2$ and $r\in[2,\infty)$},\\ \|\mathbf{u}\|_{\mathbb{L}^{r}\frac{\theta_{t}}{\omega}}^{r}\|\bar{\mathbf{u}}_{i,2}\|_{ \mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})},&\text{for $d=3$ and $r\in[3,5]$},\\ \|\mathbf{u}\|_{\mathbb{L}^{r+1}}^{r}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{\frac{3(r+1)} {r+4}}(\mathcal{O}_{\sqrt{2}k})},&\text{for $d=3$ and $r\in(5,\infty)$},\end{cases}\]
\[\leq Ce^{(r-1)|y(\vartheta_{t}\omega)|}\times\begin{cases}\| \boldsymbol{u}\|_{\widetilde{\Gamma}^{r-1}}^{\frac{(r+1)(r-2)}{r-1}}\|\boldsymbol{u} \|_{\widetilde{\Gamma}^{r-1}}^{\frac{2}{(r-1)}}\|\bar{\boldsymbol{u}}_{i,2}\|_ {\mathbb{L}^{2}(\mathbb{O}\sqrt{2k})},&\text{for $d=2$ and $r\in[2,\infty)$},\\ \|\boldsymbol{u}\|_{\widetilde{\Gamma}^{r+1}}^{\frac{3(r-5)}{3(r-1)}}\| \boldsymbol{u}\|_{\widetilde{\Gamma}^{3(r-1)}}^{\frac{5-r}{3(r-1)}}\| \bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathbb{O}\sqrt{2k})},&\text{for $d=3$ and $r\in[3,5]$},\\ \|\boldsymbol{u}\|_{\widetilde{\Gamma}^{r+1}}^{\frac{(r+3)(r-5)}{3(r-1)}}\| \bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathbb{O}\sqrt{2k})}^{\frac{2(r+ 1)}{3(r-1)}},&\text{for $d=3$ and $r\in(5,\infty)$},\\ \end{cases}\] \[\leq Ce^{(r-1)|y(\vartheta_{t}\omega)|}\times\begin{cases}\lambda_{i +1}^{-\frac{1}{r^{2}}}\|\boldsymbol{u}\|_{\widetilde{\Gamma}^{r+1}}^{\frac{(r +1)(r-2)}{r-1}}\|\boldsymbol{u}\|_{\widetilde{\Gamma}^{r}}^{\frac{r}{r-1}}\| \nabla\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathbb{O}\sqrt{2k})}^{ \frac{1}{r-1}},&\text{for $d=2$ and $r\in[2,\infty)$},\\ \lambda_{i+1}^{-\frac{1}{3(r-1)}}\|\boldsymbol{u}\|_{\widetilde{\Gamma}^{r+1 }}^{\frac{(r+3)(r-1)}{3(r-1)}}\|\boldsymbol{u}\|_{\widetilde{\Gamma}^{3(r-1) }}^{\frac{2r}{3(r-1)}}\|\nabla\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}( \mathbb{O}\sqrt{2k})}^{\frac{2}{3(r-1)}},&\text{for $d=3$ and $r\in[3,5]$},\\ \lambda_{i+1}^{-\frac{1}{3(r-1)}}\|\boldsymbol{u}\|_{\widetilde{\Gamma}^{r+1 }}^{\frac{(r+3)(r-5)}{3(r-1)}}\|\boldsymbol{u}\|_{\widetilde{\Gamma}^{3(r-1) }}^{\frac{2r}{3(r-1)}}\|\nabla\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}( \mathbb{O}\sqrt{2k})}^{\frac{2}{3(r-1)}},&\text{for $d=3$ and $r\in(5,\infty)$},\\ \end{cases} \tag{3.62}\] \[\leq\frac{\mu}{20}\|\nabla\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^ {2}(\mathbb{O}\sqrt{2k})}^{2}+C\lambda_{i+1}^{-\frac{1}{r^{2}}}\Big{[}e^{(r-1 )|y(\vartheta_{t}\omega)|}\|\boldsymbol{u}\|_{\widetilde{\Gamma}^{r+1}}^{r+1}+ e^{2(r-1)|y(\vartheta_{t}\omega)|}\|\boldsymbol{u}\|_{\mathbb{H}}^{2r}\Big{]},\]
where we have used the fact that \(\lambda_{i}\geq 1\).
**Estimate of \(\widetilde{S}_{3}(d,r)\)**: Similar to (3.49), we find (for \(d=2,3\))
\[|\widetilde{S}_{3}(d,r)| \leq Ce^{|y(\vartheta_{t}\omega)|}\|(-\Delta)^{-1}[\nabla\cdot \boldsymbol{f}]\|_{\mathbb{L}\frac{d}{d-1}}\|\bar{\boldsymbol{u}}_{i,2}\|_{ \mathbb{L}^{d}(\mathbb{O}\sqrt{2k})}\] \[\leq Ce^{|y(\vartheta_{t}\omega)|}\|\boldsymbol{f}\|_{\mathbb{L}^ {1}(\mathbb{R}^{d})}\|\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathbb{O} \sqrt{2k})}^{\frac{4-d}{2}}\|\nabla\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2} (\mathbb{O}\sqrt{2k})}^{\frac{d-2}{2}}\] \[\leq C\lambda_{i+1}^{-\frac{4-d}{4}}e^{|y(\vartheta_{t}\omega)|} \|\boldsymbol{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}\|\nabla\bar{\boldsymbol{u} }_{i,2}\|_{\mathbb{L}^{2}(\mathbb{O}\sqrt{2k})} \tag{3.63}\] \[\leq\frac{\mu}{20}\|\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}( \mathbb{O}\sqrt{2k})}^{2}+C\lambda_{i+1}^{-\frac{4-d}{2}}e^{2|y(\vartheta_{t} \omega)|}\|\boldsymbol{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}^{2}.\]
Now, combining (3.57)-(3.63), we arrive at
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\bar{\boldsymbol{u}}_{i,2}\|_{ \mathbb{L}^{2}(\mathbb{O}\sqrt{2k})}^{2}+(\alpha-2\sigma y(\vartheta_{t}\omega)) \|\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathbb{O}\sqrt{2k})}^{2}\] \[\leq C\lambda_{i+1}^{-\frac{1}{r^{2}}}\bigg{[}\|\boldsymbol{u}\| _{\mathbb{H}}^{2}+e^{\frac{8|y(\vartheta_{t}\omega)|}{4-d}}\|\boldsymbol{u}\|_{ \mathbb{H}}^{\frac{2(8-d)}{4-d}}+e^{2(r-1)|y(\vartheta_{t}\omega)|}\| \boldsymbol{u}\|_{\mathbb{H}}^{2r}+\|\nabla\boldsymbol{u}\|_{\mathbb{H}}^{2}+e ^{(r-1)y(\vartheta_{t}\omega)}\|\boldsymbol{u}\|_{\widetilde{\Gamma}^{r+1}}^{r+1} \tag{3.64}\] \[+e^{2|y(\vartheta_{t}\omega)|}\|\boldsymbol{f}\|_{\mathbb{L}^{1}( \mathbb{R}^{d})}^{2}+e^{2|y(\vartheta_{t}\omega)|}\|\boldsymbol{f}\|_{\mathbb{L}^ {2}(\mathbb{R}^{d})}^{2}\bigg{]}.\]
In view of the variation of constants formula, we find
\[\|(\mathrm{I}-\mathrm{P}_{i})\bar{\boldsymbol{u}}(s,s-t,\vartheta_{-s} \omega,\bar{\boldsymbol{u}}_{0,2})\|_{\mathbb{L}^{2}(\mathbb{O}\sqrt{2k})}^{2}\] \[\leq e^{-\alpha t+2\sigma\int_{-t}^{0}y(\vartheta_{\eta}\omega) \mathrm{d}\eta}\|(\mathrm{I}-\mathrm{P}_{i})(\varrho_{k}\boldsymbol{u}_{0})\|_{ \mathbb{L}^{2}(\mathbb{O}\sqrt{2k})}^{2}\] \[\quad+C\lambda_{i+1}^{-\frac{1}{r^{2}}}\bigg{[}\int_{s-t}^{s}e^{ \alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d} \eta}\|\boldsymbol{u}(\zeta,s-t,\vartheta_{-s}\omega,\boldsymbol{u}_{0})\|_{ \mathbb{H}}^{2}\mathrm{d}\zeta\] \[\quad+\int_{s-t}^{s}e^{\frac{8|y(\vartheta_{\zeta-s}\omega)|}{4-d} +\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d} \eta}\|\boldsymbol{u}(\zeta,s-t,\vartheta_{-s}\omega,\boldsymbol{u}_{0})\|_{ \mathbb{H}}^{\frac{2(8-d)}{4-d}}\mathrm{d}\zeta\] \[\quad+\int_{s-t}^{s}e^{2(r-1)|y(\vartheta_{\zeta-s}\omega)|+\alpha (\zeta-s)-2\sigma\int_{s}^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d}\eta}\| \boldsymbol{u}(\zeta,s-t,\vartheta_{-s}\omega,\boldsymbol{u}_{0})\|_{ \mathbb{H}}^{2r}\mathrm{d}\zeta\] \[\quad+\int_{s-t}^{s}e^{\alpha(\zeta-s)-2\sigma\int_{s}^{\zeta}y( \vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\nabla\boldsymbol{u}(\zeta,s-t, \vartheta_{-s}\omega,\boldsymbol{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\]
\[+\int_{s-t}^{s}e^{(r-1)y(\vartheta_{\zeta-s}\omega)+\alpha(\zeta-s)- 2\sigma\int_{s}^{\zeta}y(\vartheta_{\eta-s}\omega)\mathrm{d}\eta}\|\mathbf{u}( \zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+ 1}\mathrm{d}\zeta \tag{3.65}\] \[+\int_{-t}^{0}e^{\alpha\zeta+2|y(\vartheta_{\zeta}\omega)|+2 \sigma\int_{\zeta}^{0}y(\vartheta_{\eta}\omega)\mathrm{d}\eta}\big{[}\|\mathbf{f}( \zeta+s)\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}^{2}+\|\mathbf{f}(\zeta+s)\|_{\mathbb{L }^{2}(\mathbb{R}^{d})}^{2}\big{]}\mathrm{d}\zeta\bigg{]}.\]
Since \(\|(\mathrm{I}-\mathrm{P}_{i})(\varrho_{k}\mathbf{u}_{0})\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}\leq C\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2}\) for all \(\mathbf{u}_{0}\in D(s-t,\vartheta_{-t}\omega)\) and \(s\leq\tau\), using the definition of backward temperedness (2.12), (2.9), (1.7), Lemma 3.6 (in particular, (3.32) and (3.33)) and the fact that \(\lambda_{i}\to\infty\) as \(i\to\infty\), we obtain (3.55), as desired, which completes the proof.
### Proof of Theorem 1.2
This subsection is devoted to the main result of this section, that is, the existence of pullback \(\mathfrak{D}\)-random attractors and their asymptotic autonomy for the solution of the system (2.6) with \(S(\mathbf{v})=\mathbf{v}\). For all the cases given in Table 1, the existence of pullback random attractors for non-autonomous SCBF equations driven by multiplicative noise on the whole space was established in [37]. Since a unique pullback random attractor is known to exist for each \(\tau\), one also obtains the existence of a unique random attractor for the corresponding autonomous SCBF equations driven by multiplicative noise on the whole space (cf. [37]).
In view of Propositions 3.5 and 3.7, and Lemmas 3.8-3.9, the proof of Theorem 1.2 can be completed by applying arguments similar to those in the proof of [73, Theorem 1.6] ([73, Subsection 3.5]) and [9, Theorem 5.2].
## 4. 2D and 3D SCBF equations: Additive noise
In this section, we consider SCBF equations driven by additive white noise, that is, \(S(\mathbf{v})\) is independent of \(\mathbf{v}\), and establish the asymptotic autonomy of the pullback random attractors. Let us consider the following SCBF equations:
\[\left\{\begin{aligned} \frac{\mathrm{d}\mathbf{v}(t)}{\mathrm{d}t}+ \mu\mathrm{A}\mathbf{v}(t)+\mathrm{B}(\mathbf{v}(t))+\alpha\mathbf{v}(t)+\beta\mathcal{C} (\mathbf{v}(t))&=\mathscr{P}\mathbf{f}(t)+\mathbf{g}(x)\frac{\mathrm{d} \mathrm{W}(t)}{\mathrm{d}t},& t>\tau,\ \tau\in\mathbb{R},\\ \mathbf{v}(x)|_{t=\tau}&=\mathbf{v}_{\tau}(x),& x\in \mathbb{R}^{d},\end{aligned}\right. \tag{4.1}\]
where \(\mathbf{g}\in\mathrm{D}(\mathrm{A})\) and \(\mathrm{W}(t,\omega)\) is the standard scalar Wiener process on the probability space \((\Omega,\mathscr{F},\mathbb{P})\) (see Section 3 above). Let us define \(\mathbf{u}(t,\tau,\omega,\mathbf{u}_{\tau}):=\mathbf{v}(t,\tau,\omega,\mathbf{v}_{\tau})-\mathbf{g} (x)y(\vartheta_{t}\omega)\), where \(y\) is given in (2.7) and satisfies (2.8), and \(\mathbf{v}\) is the solution of (1.1) with \(S(\mathbf{v})=\mathbf{g}(x)\). Then \(\mathbf{u}\) satisfies:
\[\left\{\begin{aligned} \frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}-\mu \Delta\mathbf{u}&+\big{(}(\mathbf{u}+\mathbf{g}y(\vartheta_{t}\omega))\cdot \nabla\big{)}(\mathbf{u}+\mathbf{g}y(\vartheta_{t}\omega))+\alpha\mathbf{u}+\beta|\mathbf{u}+ \mathbf{g}y(\vartheta_{t}\omega)|^{r-1}(\mathbf{u}+\mathbf{g}y(\vartheta_{t}\omega))\\ &=-\nabla p+\mathbf{f}+(\sigma-\alpha)\mathbf{g}y(\vartheta_{t}\omega)+ \mu y(\vartheta_{t}\omega)\Delta\mathbf{g},&\text{in}\ \ \mathbb{R}^{d}\times(\tau,\infty),\\ \nabla\cdot\mathbf{u}&=0,&\text{in}\ \ \ \mathbb{R}^{d}\times(\tau,\infty),\\ \mathbf{u}(x)|_{t=\tau}&=\mathbf{u}_{\tau}(x)=\mathbf{v}_{\tau}(x )-\mathbf{g}(x)y(\vartheta_{\tau}\omega),& x\in\mathbb{R}^{d}\ \ \text{and}\ \ \tau\in\mathbb{R},\\ \mathbf{u}(x)|_{t=\tau}&\to 0,&\text{as}\ \ |x|\to\infty,\end{aligned}\right. \tag{4.2}\]
as well as (projected form)
(4.3) \[\left\{\begin{aligned} \frac{\mathrm{d}\boldsymbol{u}}{\mathrm{d}t}+ \mu\mathrm{A}\boldsymbol{u}&+\mathrm{B}(\boldsymbol{u}+\boldsymbol{ g}y(\vartheta_{t}\omega))+\alpha\boldsymbol{u}+\beta\mathcal{C}(\boldsymbol{u}+ \boldsymbol{g}y(\vartheta_{t}\omega))\\ &=\mathscr{P}\boldsymbol{f}+(\sigma-\alpha)\boldsymbol{g}y( \vartheta_{t}\omega)+\mu y(\vartheta_{t}\omega)\Delta\boldsymbol{g},\ \ \ \ \ t>\tau,\ \ \tau\in\mathbb{R},\\ \boldsymbol{u}(x)_{|t=\tau}&=\boldsymbol{u}_{\tau}( x)=\boldsymbol{v}_{0}(x)-\boldsymbol{g}(x)y(\vartheta_{\tau}\omega),\
Proof.: One can prove the existence and uniqueness of solutions by a standard Faedo-Galerkin approximation method; see the works [31, 34, 47], etc. For the continuity with respect to the initial data \(\mathbf{u}_{\tau}\), see the proof of Theorem 3.9 in [34].
The next result shows the Lusin continuity in the sample points of the solution mapping of the system (4.3).
**Proposition 4.3**.: _For all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)), suppose that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d }))\). For each \(N\in\mathbb{N}\), the mapping \(\omega\mapsto\mathbf{u}(t,\tau,\omega,\mathbf{u}_{\tau})\) (solution of (4.3)) is continuous from \((\Omega_{N},d_{\Omega_{N}})\) to \(\mathbb{H}\), uniformly in \(t\in[\tau,\tau+T]\) with \(T>0\)._
Proof.: Let \(\omega_{k},\omega_{0}\in\Omega_{N}\) be such that \(d_{\Omega_{N}}(\omega_{k},\omega_{0})\to 0\) as \(k\to\infty\). Let \(\mathscr{U}^{k}(\cdot):=\mathbf{u}^{k}(\cdot)-\mathbf{u}^{0}(\cdot)\), where \(\mathbf{u}^{k}(\cdot)=\mathbf{u}(\cdot,\tau,\omega_{k},\mathbf{u}_{\tau})\) and \(\mathbf{u}^{0}(\cdot)=\mathbf{u}(\cdot,\tau,\omega_{0},\mathbf{u}_{\tau})\). Then, \(\mathscr{U}^{k}(\cdot)\) satisfies:
\[\frac{\mathrm{d}\mathscr{U}^{k}}{\mathrm{d}t} =-\mu\mathrm{A}\mathscr{U}^{k}-\alpha\mathscr{U}^{k}-\big{[}\mathrm{B}\big{(}\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g}\big{)}-\mathrm{B}\big{(}\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g}\big{)}\big{]}\] \[\quad-\beta\big{[}\mathcal{C}\big{(}\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g}\big{)}-\mathcal{C}\big{(}\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g}\big{)}\big{]}+\{(\sigma-\alpha)\mathbf{g}+\mu\Delta\mathbf{g}\}[y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})], \tag{4.10}\]
in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\). Taking the inner product with \(\mathscr{U}^{k}(\cdot)\) in (4.10), using (2.4) and rearranging the terms, we obtain
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\mathscr{U}^{k}\|_{ \mathbb{H}}^{2}+\mu\|\nabla\mathscr{U}^{k}\|_{\mathbb{H}}^{2}+\alpha\|\mathscr{ U}^{k}\|_{\mathbb{H}}^{2}\] \[=-b\big{(}\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})-y( \vartheta_{t}\omega_{0})]\mathbf{g},\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})- y(\vartheta_{t}\omega_{0})]\mathbf{g},\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g} \big{)}\] \[\quad+[y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})] \Big{\{}b\big{(}\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g},\mathbf{u}^{k},\mathbf{g} \big{)}-b\big{(}\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g},\mathbf{u}^{0},\mathbf{g} \big{)}\Big{\}}\] \[\quad+\beta[y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0} )]\Big{\langle}\mathcal{C}\big{(}\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g} \big{)}-\mathcal{C}\big{(}\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g}\big{)},\mathbf{g}\Big{\rangle}\] \[\quad+[y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})] \big{(}(\sigma-\alpha)\mathbf{g}+\mu\Delta\mathbf{g},\mathscr{U}^{k}\big{)}. \tag{4.11}\]
From (2.5), we get
\[-\beta\Big{\langle}\mathbb{C}\big{(}\mathbf{u}^{k}+y(\vartheta_{t} \omega_{k})\mathbf{g}\big{)}-\mathbb{C}\big{(}\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0 })\mathbf{g}\big{)},\Big{(}\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g}\Big{)}- \big{(}\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g}\big{)}\Big{\rangle}\] \[\quad\leq-\frac{\beta}{2}\bigg{\|}\Big{|}\Big{(}\mathbf{u}^{k}+y( \vartheta_{t}\omega_{k})\mathbf{g}\Big{)}\Big{|}^{\frac{r-1}{2}}\Big{[}\mathscr{U} ^{k}+(y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0}))\mathbf{g}\Big{]} \bigg{\|}_{\mathbb{H}}^{2}\] \[\quad-\frac{\beta}{2}\bigg{\|}\big{|}\big{(}\mathbf{u}^{0}+y( \vartheta_{t}\omega_{0})\mathbf{g}\big{)}\big{|}^{\frac{r-1}{2}}\Big{[}\mathscr{U} ^{k}+(y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0}))\mathbf{g}\Big{]} \bigg{\|}_{\mathbb{H}}^{2}. \tag{4.12}\]
For \(r>1\) and \(\mathbf{g}\in\mathrm{D}(\mathrm{A})\), in view of Hölder's and Young's inequalities, we obtain
\[\Big{\{}b\big{(}\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g},\mathbf{ u}^{k},\mathbf{g}\big{)}-b\big{(}\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g},\mathbf{u}^{0}, \mathbf{g}\big{)}\Big{\}}\] \[\leq\bigg{\{}\|\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g}\|_{ \widetilde{\mathbb{L}}^{r+1}}\|\nabla\mathbf{u}^{k}\|_{\mathbb{H}}+\|\mathbf{u}^{0}+y( \vartheta_{t}\omega_{0})\mathbf{g}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\nabla\mathbf{u}^{0} \|_{\mathbb{H}}\bigg{\}}\|\mathbf{g}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}\] \[\leq C\bigg{\{}\|\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g}\|_{ \widetilde{\mathbb{L}}^{r+1}}^{r+1}+\|\nabla\mathbf{u}^{k}\|_{\mathbb{H}}^{2}+\|\mathbf{u}^ {0}+y(\vartheta_{t}\omega_{0})\mathbf{g}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\| \nabla\mathbf{u}^{0}\|_{\mathbb{H}}^{2}+1\bigg{\}}. \tag{4.13}\]
Moreover, we have
\[\leq C\bigg{\{}\|\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g}\|_{ \widetilde{\mathbb{L}}^{r+1}}^{r+1}+\|\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g }\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+1\bigg{\}}, \tag{4.15}\] \[\leq C\|\mathbf{u}^{k}-\mathbf{u}^{0}\|_{\mathbb{H}}\leq C\bigg{\{}\|\mathbf{ u}^{k}\|_{\mathbb{H}}^{2}+\|\mathbf{u}^{0}\|_{\mathbb{H}}^{2}+1\bigg{\}}. \tag{4.14}\]
Next, we estimate the remaining terms of (4.11) separately.
**Case I:** _\(d=2\) and \(r\geq 2\)._ Applying (2.2), (2.3) and Young's inequality, we estimate
\[\Big{|}b\big{(}\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})-y( \vartheta_{t}\omega_{0})]\mathbf{g},\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})-y (\vartheta_{t}\omega_{0})]\mathbf{g},\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g} \big{)}\Big{|}\] \[\leq C\|\mathscr{U}^{k}+(y(\vartheta_{t}\omega_{k})-y(\vartheta_ {t}\omega_{0}))\mathbf{g}\|_{\mathbb{H}}\|\nabla\mathscr{U}^{k}+(y(\vartheta_{t} \omega_{k})-y(\vartheta_{t}\omega_{0}))\nabla\mathbf{g}\|_{\mathbb{H}}\] \[\qquad\times\|\nabla\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\nabla \mathbf{g}\|_{\mathbb{H}}\] \[\leq C\|\nabla\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\nabla\mathbf{g} \|_{\mathbb{H}}^{2}\|\mathscr{U}^{k}\|_{\mathbb{H}}^{2}+C|y(\vartheta_{t} \omega_{k})-y(\vartheta_{t}\omega_{0})|^{2}\bigg{\{}\|\nabla\mathbf{u}^{0}\|_{ \mathbb{H}}^{2}+|y(\vartheta_{t}\omega_{0})|^{2}+1\bigg{\}} \tag{4.16}\] \[\qquad+\frac{\mu}{2}\|\nabla\mathscr{U}^{k}\|_{\mathbb{H}}^{2}.\]
**Case II:** _\(d=3\) and \(r>3\)._ Using Hölder's and Young's inequalities, we infer
\[\Big{|}b\big{(}\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})-y( \vartheta_{t}\omega_{0})]\mathbf{g},\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})-y (\vartheta_{t}\omega_{0})]\mathbf{g},\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g} \big{)}\Big{|}\] \[\leq\frac{\mu}{2}\|\nabla\mathscr{U}^{k}\|_{\mathbb{H}}^{2}+C\| \mathscr{U}^{k}\|_{\mathbb{H}}^{2}+C|y(\vartheta_{t}\omega_{k})-y(\vartheta_ {t}\omega_{0})|^{2} \tag{4.17}\] \[\quad+\frac{\beta}{4}\bigg{\|}\Big{|}\mathbf{u}^{0}+y(\vartheta_{t} \omega_{0})\mathbf{g}\Big{|}^{\frac{r-1}{2}}\Big{[}\mathscr{U}^{k}+(y(\vartheta_{t }\omega_{k})-y(\vartheta_{t}\omega_{0}))\mathbf{g}\Big{]}\bigg{\|}_{\mathbb{H}}^{ 2}.\]
**Case III:** _\(d=r=3\) with \(2\beta\mu\geq 1\)._ Applying (2.2), Hölder's and Young's inequalities, we obtain
\[\Big{|}b\big{(}\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})-y( \vartheta_{t}\omega_{0})]\mathbf{g},\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})-y (\vartheta_{t}\omega_{0})]\mathbf{g},\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g} \big{)}\Big{|}\] \[\leq\Big{|}b\big{(}\mathscr{U}^{k},\mathscr{U}^{k}+[y(\vartheta_{ t}\omega_{k})-y(\vartheta_{t}\omega_{0})]\mathbf{g},\mathbf{u}^{0}+y(\vartheta_{t} \omega_{0})\mathbf{g}\big{)}\Big{|} \tag{4.18}\] \[\qquad+|y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})| \Big{|}b\big{(}\mathbf{g},\mathscr{U}^{k}+[y(\vartheta_{t}\omega_{k})-y( \vartheta_{t}\omega_{0})]\mathbf{g},\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g} \big{)}\Big{|}\] \[\leq\frac{1}{2\beta}\|\nabla\mathscr{U}^{k}\|_{\mathbb{H}}+\frac {\beta}{2}\Big{\|}\big{|}\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g}\big{|} \Big{[}\mathscr{U}^{k}+(y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})) \mathbf{g}\Big{]}\Big{\|}_{\mathbb{H}}^{2}\] \[\qquad+C|y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})|^{ 2}+\frac{\beta}{2}\Big{\|}\Big{|}\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g} \Big{|}\Big{[}\mathscr{U}^{k}+(y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{ 0}))\mathbf{g}\Big{]}\Big{\|}_{\mathbb{H}}^{2}.\]
Combining (4.11)-(4.18), we arrive at
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\mathscr{U}^{k}(t)\|_{\mathbb{H}}^{2}\leq C\Big{[} \widetilde{P}(t)\|\mathscr{U}^{k}(t)\|_{\mathbb{H}}^{2}+\widetilde{Q}(t)\Big{]}, \tag{4.19}\]
for a.e. \(t\in[\tau,\tau+T]\), \(T>0\), where
\[\widetilde{P} =\begin{cases}\|\nabla\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\nabla \mathbf{g}\|_{\mathbb{H}}^{2},&\text{for $d=2$ and $r>1$},\\ 1,&\text{for $d=3$ and $r>3$},\\ 0,&\text{for $d=r=3$ and $2\beta\mu\geq 1$},\end{cases}\] \[\widetilde{Q} =|y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})|\bigg{\{} \|\mathbf{u}^{k}+y(\vartheta_{t}\omega_{k})\mathbf{g}\|_{\mathbb{L}^{r+1}}^{r+1}+\| \mathbf{u}^{k}\|_{\mathbb{V}}^{2}+\|\mathbf{u}^{0}+y(\vartheta_{t}\omega_{0})\mathbf{g}\|_ {\mathbb{L}^{r+1}}^{r+1}+\|\mathbf{u}^{0}\|_{\mathbb{V}}^{2}+1\bigg{\}}\] \[\quad+|y(\vartheta_{t}\omega_{k})-y(\vartheta_{t}\omega_{0})|^{2 }\times\begin{cases}\|\nabla\mathbf{u}^{0}\|_{\mathbb{H}}^{2}+|y(\vartheta_{t} \omega_{0})|^{2}+1,&\text{for $d=2$ and $r>1$},\\ 1,&\text{for $d=3$ and $r>3$},\\ 1,&\text{for $d=r=3$ and $2\beta\mu\geq 1$}.\end{cases}\]
We infer from (4.4) that
\[\int_{\tau}^{\tau+T}\bigg{\{}\|\mathbf{u}^{k}(t)+y(\vartheta_{t} \omega_{k})\mathbf{g}\|_{\mathbb{L}^{r+1}}^{r+1}+\|\mathbf{u}^{k}(t)\|_{\mathbb{H}}^{2} +\|\nabla\mathbf{u}^{k}(t)\|_{\mathbb{H}}^{2}\bigg{\}}\mathrm{d}t\] \[\leq\|\mathbf{u}_{\tau}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}+C\int_ {\tau}^{\tau+T}\biggl{[}\|\mathbf{f}(t)\|_{\mathbb{H}}^{2}+|y(\vartheta_{t}\omega_ {k})|^{2}+|y(\vartheta_{t}\omega_{k})|^{r+1}+|y(\vartheta_{t}\omega_{k})|^{ \frac{2(r+1)}{r-1}}\biggr{]}\mathrm{d}t,\]
which gives
\[\sup_{k\in\mathbb{N}}\int_{\tau}^{\tau+T}\bigg{\{}\|\mathbf{u}^{k}(t)+y(\vartheta_ {t}\omega_{k})\mathbf{g}\|_{\mathbb{L}^{r+1}}^{r+1}+\|\mathbf{u}^{k}(t)\|_{\mathbb{H} }^{2}+\|\nabla\mathbf{u}^{k}(t)\|_{\mathbb{H}}^{2}\bigg{\}}\mathrm{d}t\leq C(\tau, T,\omega_{0}), \tag{4.20}\]
where we have used (2.11) and the fact that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\). It follows from (2.11) and \(\mathbf{u}^{0}\in\mathrm{L}^{2}_{\mathrm{loc}}(\tau,+\infty;\mathbb{V})\) that
\[\int_{\tau}^{\tau+T}\widetilde{P}(t)\mathrm{d}t\leq C(\tau,T,\omega_{0}). \tag{4.21}\]
Now, from \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{ d}))\), \(\mathbf{u}^{0}\in\mathrm{C}([\tau,+\infty);\mathbb{H})\cap\mathrm{L}^{2}_{\mathrm{ loc}}(\tau,+\infty;\mathbb{V})\cap\mathrm{L}^{r+1}_{\mathrm{loc}}(\tau,+ \infty;\widetilde{\mathbb{L}}^{r+1})\), Lemma 2.4 and (4.20), we conclude that
\[\lim_{k\to+\infty}\int_{\tau}^{\tau+T}\widetilde{Q}(t)\mathrm{d}t=0. \tag{4.22}\]
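Note that \(\mathscr{U}^{k}(\tau)=\mathbf{0}\), since \(\mathbf{u}^{k}\) and \(\mathbf{u}^{0}\) evolve from the same initial datum \(\mathbf{u}_{\tau}\). Hence Grönwall's inequality applied to (4.19) on \([\tau,\tau+T]\) gives the explicit bound
\[\sup_{t\in[\tau,\tau+T]}\|\mathscr{U}^{k}(t)\|_{\mathbb{H}}^{2}\leq Ce^{C\int_{\tau}^{\tau+T}\widetilde{P}(t)\mathrm{d}t}\int_{\tau}^{\tau+T}\widetilde{Q}(t)\mathrm{d}t.\]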
In view of this Grönwall estimate for (4.19) and making use of (4.21)-(4.22), one can complete the proof.
Lemma 4.2 ensures that we can define a mapping \(\widetilde{\Phi}:\mathbb{R}^{+}\times\mathbb{R}\times\Omega\times\mathbb{H}\to\mathbb{H}\) by
\[\widetilde{\Phi}(t,\tau,\omega,\mathbf{v}_{\tau}):=\mathbf{v}(t+\tau,\tau,\vartheta_{- \tau}\omega,\mathbf{v}_{\tau})=\mathbf{u}(t+\tau,\tau,\vartheta_{-\tau}\omega,\mathbf{u}_ {\tau})+\mathbf{g}y(\vartheta_{t}\omega). \tag{4.23}\]
The Lusin continuity in Proposition 4.3 provides the \(\mathscr{F}\)-measurability of \(\widetilde{\Phi}\). Consequently, \(\widetilde{\Phi}\) defined by (4.23) is an NRDS on \(\mathbb{H}\).
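In the standard formulation of non-autonomous random dynamical systems, \(\widetilde{\Phi}\) satisfies, in addition to the measurability above, \(\widetilde{\Phi}(0,\tau,\omega,\cdot)=\mathrm{id}_{\mathbb{H}}\) and the cocycle property
\[\widetilde{\Phi}(t+s,\tau,\omega,\mathbf{v}_{\tau})=\widetilde{\Phi}\big{(}t,\tau+s,\vartheta_{s}\omega,\widetilde{\Phi}(s,\tau,\omega,\mathbf{v}_{\tau})\big{)},\qquad t,s\geq 0,\ \tau\in\mathbb{R},\ \omega\in\Omega,\]
which follows from the uniqueness of solutions guaranteed by Lemma 4.2.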
### Backward convergence of NRDS
Consider the autonomous SCBF equations driven by the additive white noise:
\[\begin{cases}\frac{\mathrm{d}\widetilde{\mathbf{v}}(t)}{\mathrm{d}t}+\mu\mathrm{A}\widetilde{\mathbf{v}}(t)+\mathrm{B}(\widetilde{\mathbf{v}}(t))+\alpha\widetilde{\mathbf{v}}(t)+\beta\mathcal{C}(\widetilde{\mathbf{v}}(t))=\mathscr{P}\mathbf{f}_{\infty}+\mathbf{g}(x)\frac{\mathrm{d}\mathrm{W}(t)}{\mathrm{d}t},&t>0,\\ \widetilde{\mathbf{v}}(x,0)=\widetilde{\mathbf{v}}_{0}(x),&x\in\mathbb{R}^{d}.\end{cases} \tag{4.24}\]
Let \(\widetilde{\mathbf{u}}(t,\omega)=\widetilde{\mathbf{v}}(t,\omega)-\mathbf{g}(x)y(\vartheta_{t}\omega)\). Then, the system (4.24) can be rewritten as the following pathwise deterministic system:
\[\left\{\begin{aligned} \frac{\mathrm{d}\widetilde{\mathbf{u}}(t)}{ \mathrm{d}t}+\mu\mathrm{A}\widetilde{\mathbf{u}}(t)+\mathrm{B}(\widetilde{\mathbf{u}}( t)+\mathbf{g}y(\vartheta_{t}\omega))+\alpha\widetilde{\mathbf{u}}(t)+\beta\mathcal{C}( \widetilde{\mathbf{u}}(t)+\mathbf{g}y(\vartheta_{t}\omega))\\ =\mathscr{P}\mathbf{f}_{\infty}+(\sigma-\alpha)\mathbf{g}y(\vartheta_{t} \omega)-\mu y(\vartheta_{t}\omega)\mathrm{A}\mathbf{g},\quad t>0,\\ \widetilde{\mathbf{u}}(x,0)=\widetilde{\mathbf{u}}_{0}(x)=\widetilde{\mathbf{v }}_{0}(x)-\mathbf{g}(x)y(\omega),\hskip 72.27ptx\in\mathbb{R}^{d},\end{aligned}\right. \tag{4.25}\]
in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\).
**Proposition 4.4**.: _For all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)), suppose that Assumption 1.1 is satisfied and \(\lim\limits_{\tau\to-\infty}\|\mathbf{u}_{\tau}-\widetilde{\mathbf{u}}_{0}\|_{\mathbb{ H}}=0\). Then, the solution \(\mathbf{u}\) of the system (4.3) backward converges to the solution \(\widetilde{\mathbf{u}}\) of the system (4.25), that is,_
\[\lim\limits_{\tau\to-\infty}\|\mathbf{u}(T+\tau,\tau,\vartheta_{-\tau}\omega,\mathbf{u}_{\tau})-\widetilde{\mathbf{u}}(T,\omega,\widetilde{\mathbf{u}}_{0})\|_{\mathbb{H}}=0,\quad\text{for all }\ T>0\ \text{and}\ \omega\in\Omega. \tag{4.26}\]
Proof.: Let \(\mathscr{U}^{\tau}(\cdot):=\mathbf{u}(\cdot+\tau,\tau,\vartheta_{-\tau}\omega,\bm {u}_{\tau})-\widetilde{\mathbf{u}}(\cdot,\omega,\widetilde{\mathbf{u}}_{0})\). From (4.3) and (4.25), we get
\[\frac{\mathrm{d}\mathscr{U}^{\tau}}{\mathrm{d}t} =-\mu\mathrm{A}\mathscr{U}^{\tau}-\alpha\mathscr{U}^{\tau}-\left[ \mathrm{B}\big{(}\mathbf{u}+\mathbf{g}y(\vartheta_{t}\omega)\big{)}-\mathrm{B}\big{(} \widetilde{\mathbf{u}}+\mathbf{g}y(\vartheta_{t}\omega)\big{)}\right]\] \[\quad-\beta\big{[}\mathcal{C}\big{(}\mathbf{u}+\mathbf{g}y(\vartheta_{t} \omega)\big{)}-\mathcal{C}\big{(}\widetilde{\mathbf{u}}+\mathbf{g}y(\vartheta_{t} \omega)\big{)}\big{]}+[\mathscr{P}\mathbf{f}(t+\tau)-\mathscr{P}\mathbf{f}_{\infty}], \tag{4.27}\]
in \(\mathbb{V}^{\prime}+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}\). In view of (4.27), we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\mathscr{U}^{\tau}\|_{\mathbb{H}}^ {2} =-\mu\|\nabla\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}-\alpha\|\mathscr{ U}^{\tau}\|_{\mathbb{H}}^{2}-\big{\langle}\mathrm{B}\big{(}\mathbf{u}+\mathbf{g}y( \vartheta_{t}\omega)\big{)}-\mathrm{B}\big{(}\widetilde{\mathbf{u}}+\mathbf{g}y( \vartheta_{t}\omega)\big{)},\mathbf{u}-\widetilde{\mathbf{u}}\big{\rangle}\] \[\quad-\beta\big{\langle}\mathcal{C}\big{(}\mathbf{u}+\mathbf{g}y( \vartheta_{t}\omega)\big{)}-\mathcal{C}\big{(}\widetilde{\mathbf{u}}+\mathbf{g}y( \vartheta_{t}\omega)\big{)},\mathbf{u}-\widetilde{\mathbf{u}}\big{\rangle}+(\mathbf{f}(t+ \tau)-\mathbf{f}_{\infty},\mathscr{U}^{\tau}). \tag{4.28}\]
From (2.5), one can rewrite
\[-\beta\big{\langle}\mathcal{C}\big{(}\mathbf{u}+\mathbf{g}y(\vartheta_{t}\omega)\big{)}-\mathcal{C}\big{(}\widetilde{\mathbf{u}}+\mathbf{g}y(\vartheta_{t}\omega)\big{)},(\mathbf{u}+\mathbf{g}y(\vartheta_{t}\omega))-(\widetilde{\mathbf{u}}+\mathbf{g}y(\vartheta_{t}\omega))\big{\rangle}\] \[\quad\leq-\frac{\beta}{2}\big{\|}|\mathbf{u}+\mathbf{g}y(\vartheta_{t}\omega)|^{\frac{r-1}{2}}\mathscr{U}^{\tau}\big{\|}_{\mathbb{H}}^{2}-\frac{\beta}{2}\big{\|}|\widetilde{\mathbf{u}}+\mathbf{g}y(\vartheta_{t}\omega)|^{\frac{r-1}{2}}\mathscr{U}^{\tau}\big{\|}_{\mathbb{H}}^{2}. \tag{4.29}\]
Applying (2.2), (2.4), Hölder's and Young's inequalities, we infer
\[\begin{aligned} &\big{|}\big{\langle}\mathrm{B}\big{(}\mathbf{u}+\mathbf{g}y( \vartheta_{t}\omega)\big{)}-\mathrm{B}\big{(}\widetilde{\mathbf{u}}+\mathbf{g}y( \vartheta_{t}\omega)\big{)},(\mathbf{u}+\mathbf{g}y(\vartheta_{t}\omega))-(\widetilde {\mathbf{u}}+\mathbf{g}y(\vartheta_{t}\omega))\big{\rangle}\big{|}\\ &=|b(\mathscr{U}^{\tau},\mathscr{U}^{\tau},\widetilde{\mathbf{u}}+\mathbf{ g}y(\vartheta_{t}\omega))|\\ &\leq\begin{cases}C\|\nabla\widetilde{\mathbf{u}}+\nabla\mathbf{g}y( \vartheta_{t}\omega)\|^{2}_{\mathbb{H}}\|\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}+ \frac{\mu}{2}\|\nabla\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2},\ \text{for}\ d=2\ \text{and}\ r\geq 1,\\ \frac{\mu}{2}\|\nabla\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}+\frac{\beta}{4}\| \widetilde{\mathbf{u}}+\mathbf{g}y(\vartheta_{t}\omega)\|^{\frac{r-1}{2}}|\mathscr{U}^{ \tau}|\|_{\mathbb{H}}^{2},\ \text{for}\ d=3\ \text{and}\ r>3,\\ \frac{1}{2\beta}\|\nabla\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}+\frac{\beta}{2}\| \widetilde{\mathbf{u}}+\mathbf{g}y(\vartheta_{t}\omega)\|\mathscr{U}^{\tau}\|_{\mathbb{H}}^ {2},\ \text{for}\ d=r=3\ \text{and}\ 2\beta\mu\geq 1,\end{cases}\end{aligned} \tag{4.30}\]
and
\[|(\mathbf{f}(t+\tau)-\mathbf{f}_{\infty},\mathscr{U}^{\tau})|\leq C\|\mathbf{f}(t+\tau)-\mathbf{f}_{ \infty}\|^{2}_{\mathbb{L}^{2}(\mathbb{R}^{d})}+\frac{\alpha}{2}\|\mathscr{U}^{ \tau}\|_{\mathbb{H}}^{2}. \tag{4.31}\]
Combining (4.28)-(4.31), we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}\leq C\times\begin{cases}\|\nabla\widetilde{\mathbf{u}}+\nabla\mathbf{g}y(\vartheta_{t}\omega)\|_{\mathbb{H}}^{2}\|\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}+\|\mathbf{f}(t+\tau)-\mathbf{f}_{\infty}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2},&\text{for $d=2$ and $r\geq 1$},\\ \|\mathscr{U}^{\tau}\|_{\mathbb{H}}^{2}+\|\mathbf{f}(t+\tau)-\mathbf{f}_{\infty}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2},&\text{for $d=3$ and $r>3$},\\ \|\mathbf{f}(t+\tau)-\mathbf{f}_{\infty}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2},&\text{for $d=r=3$ and $2\beta\mu\geq 1$}.\end{cases} \tag{4.32}\]
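In particular, since \(\mathscr{U}^{\tau}(0)=\mathbf{u}_{\tau}-\widetilde{\mathbf{u}}_{0}\), integrating (4.32) and applying Grönwall's inequality on \([0,T]\) yields, in all three cases, a bound of the form
\[\|\mathscr{U}^{\tau}(T)\|_{\mathbb{H}}^{2}\leq Ce^{C\int_{0}^{T}\big{(}1+\|\nabla\widetilde{\mathbf{u}}(t)+\nabla\mathbf{g}y(\vartheta_{t}\omega)\|_{\mathbb{H}}^{2}\big{)}\mathrm{d}t}\bigg{[}\|\mathbf{u}_{\tau}-\widetilde{\mathbf{u}}_{0}\|_{\mathbb{H}}^{2}+\int_{0}^{T}\|\mathbf{f}(t+\tau)-\mathbf{f}_{\infty}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}\mathrm{d}t\bigg{]},\]
where the exponential factor is finite for each fixed \(T>0\) and \(\omega\in\Omega\) (using that, as for (4.3), the solution of (4.25) satisfies \(\widetilde{\mathbf{u}}\in\mathrm{L}^{2}_{\mathrm{loc}}(0,+\infty;\mathbb{V})\) and \(\mathbf{g}\in\mathrm{D}(\mathrm{A})\)), and both terms in the bracket vanish as \(\tau\to-\infty\) under the hypotheses of Proposition 4.4.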
Applying steps similar to those in the proof of Proposition 3.5, we complete the proof.
### Increasing random absorbing sets
This subsection provides the existence of an increasing \(\mathfrak{D}\)-random absorbing set for the non-autonomous SCBF equations (4.1).
**Lemma 4.5**.: _For all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)) and for each \((\tau,\omega,D)\in\mathbb{R}\times\Omega\times\mathfrak{D},\) there exists a time \(\widetilde{\mathcal{T}}:=\widetilde{\mathcal{T}}(\tau,\omega,D)>0\) such that_
\[\sup_{s\leq\tau}\sup_{t\geq\widetilde{\mathcal{T}}}\sup_{\mathbf{u}_{0}\in D(s-t,\vartheta_{-t}\omega)}\bigg{[}\|\mathbf{u}(s,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}+\frac{\alpha}{2}\int_{s-t}^{s}e^{\alpha(\zeta-s)}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[\quad+\mu\int_{s-t}^{s}e^{\alpha(\zeta-s)}\|\nabla\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta \tag{4.33}\] \[\quad+\beta\int_{s-t}^{s}e^{\alpha(\zeta-s)}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})+\mathbf{g}y(\vartheta_{\zeta-s}\omega)\|_{\mathbb{L}^{r+1}}^{r+1}\mathrm{d}\zeta\bigg{]}\leq 2R\sup_{s\leq\tau}\widetilde{K}(s,\omega),\]
_where \(R\) is the same as in (4.4) and \(\widetilde{K}(s,\omega)\) is given by_
\[\widetilde{K}(s,\omega):=\int_{-\infty}^{0}e^{\alpha\zeta}\bigg{[}\|\mathbf{f}( \zeta+s)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}+|y(\vartheta_{\zeta}\omega)|^ {2}+|y(\vartheta_{\zeta}\omega)|^{r+1}+|y(\vartheta_{\zeta}\omega)|^{\frac{2( r+1)}{r-1}}\bigg{]}\mathrm{d}\zeta. \tag{4.34}\]
_Furthermore, for \(2<k_{1}<\infty\), there exists a time \(\widetilde{\mathcal{T}}^{*}:=\widetilde{\mathcal{T}}^{*}(\tau,\omega,D,k_{1})>0\) such that_
\[\sup_{s\leq\tau}\sup_{t\geq\widetilde{\mathcal{T}}^{*}}\sup_{\mathbf{ u}_{0}\in D(s-t,\vartheta_{-t}\omega)}\int_{s-t}^{s}e^{\alpha(\zeta-s)}\|\mathbf{u}( \zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{k_{1}}\mathrm{d}\zeta \tag{4.35}\] \[\leq\frac{Ck_{1}}{\alpha}\bigg{(}\int_{-\infty}^{0}e^{\frac{2(k_{ 1}-1)\alpha}{k_{1}^{2}}\zeta}\bigg{[}\|\mathbf{f}(\zeta+s)\|_{\mathbb{L}^{2}( \mathbb{R}^{d})}^{2}+|y(\vartheta_{\zeta}\omega)|^{2}+|y(\vartheta_{\zeta} \omega)|^{r+1}+|y(\vartheta_{\zeta}\omega)|^{\frac{2(r+1)}{r-1}}\bigg{]} \mathrm{d}\zeta\bigg{)}^{\frac{k_{1}}{2}}.\]
Proof.: Let us consider the energy inequality (4.4) for \(\mathbf{u}(\zeta)=\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\), that is,
\[\frac{\mathrm{d}}{\mathrm{d}\zeta}\|\mathbf{u}(\zeta)\|_{\mathbb{H}}^ {2}+\alpha\|\mathbf{u}(\zeta)\|_{\mathbb{H}}^{2}+\frac{\alpha}{2}\|\mathbf{u}(\zeta)\|_ {\mathbb{H}}^{2}+\mu\|\nabla\mathbf{u}(\zeta)\|_{\mathbb{H}}^{2}+\beta\|\mathbf{u}( \zeta)+\mathbf{g}y(\vartheta_{\zeta-s}\omega)\|_{\mathbb{L}^{r+1}}^{r+1}\] \[\leq R\bigg{[}\|\mathbf{f}(\zeta)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})} ^{2}+|y(\vartheta_{\zeta-s}\omega)|^{2}+|y(\vartheta_{\zeta-s}\omega)|^{r+1}+|y( \vartheta_{\zeta-s}\omega)|^{\frac{2(r+1)}{r-1}}\bigg{]},\]
In view of the variation of constants formula with respect to \(\zeta\in(s-t,\xi)\), we obtain
\[\|\mathbf{u}(\xi,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{ 2}+\frac{\alpha}{2}\int_{s-t}^{\xi}e^{\alpha(\zeta-\xi)}\|\mathbf{u}(\zeta,s-t, \vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+\mu\int_{s-t}^{\xi}e^{\alpha(\zeta-\xi)}\|\nabla\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta+\beta\int_{s-t }^{\xi}e^{\alpha(\zeta-\xi)}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})+ \mathbf{g}y(\vartheta_{\zeta-s}\omega)\|_{\mathbb{L}^{r+1}}^{r+1}\mathrm{d}\zeta\] \[\leq e^{-\alpha(\xi-s+t)}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2}\]
\[+R\int_{-t}^{\xi-s}e^{\alpha(\zeta+s-\xi)}\bigg{[}\|\mathbf{f}(\zeta+s)\|_{ \mathbb{L}^{2}(\mathbb{R}^{d})}^{2}+|y(\vartheta_{\zeta}\omega)|^{2}+|y( \vartheta_{\zeta}\omega)|^{r+1}+|y(\vartheta_{\zeta}\omega)|^{\frac{2(r+1)}{r- 1}}\bigg{]}\mathrm{d}\zeta. \tag{4.36}\]
Putting \(\xi=s\) in (4.36), we find
\[\|\mathbf{u}(s,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2 }+\frac{\alpha}{2}\int_{s-t}^{s}e^{\alpha(\zeta-s)}\|\mathbf{u}(\zeta,s-t,\vartheta _{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+\mu\int_{s-t}^{s}e^{\alpha(\zeta-s)}\|\nabla\mathbf{u}(\zeta,s-t, \vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\mathrm{d}\zeta\] \[+\beta\int_{s-t}^{s}e^{\alpha(\zeta-s)}\|\mathbf{u}(\zeta,s-t, \vartheta_{-s}\omega,\mathbf{u}_{0})+\mathbf{g}y(\vartheta_{\zeta-s}\omega)\|_{ \mathbb{L}^{r+1}}^{r+1}\mathrm{d}\zeta \tag{4.37}\] \[\leq e^{-\alpha t}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2}+R\int_{-\infty }^{0}e^{\alpha\zeta}\bigg{[}\|\mathbf{f}(\zeta+s)\|_{\mathbb{L}^{2}(\mathbb{R}^{d })}^{2}+|y(\vartheta_{\zeta}\omega)|^{2}+|y(\vartheta_{\zeta}\omega)|^{r+1}+| y(\vartheta_{\zeta}\omega)|^{\frac{2(r+1)}{r-1}}\bigg{]}\mathrm{d}\zeta,\]
for all \(s\leq\tau\). Since \(\mathbf{u}_{0}\in D(s-t,\vartheta_{-t}\omega)\) and \(D\) is backward tempered, the definition of backward temperedness (2.12) ensures that there exists a time \(\widetilde{\mathcal{T}}=\widetilde{\mathcal{T}}(\tau,\omega,D)\) such that for all \(t\geq\widetilde{\mathcal{T}}\),
\[e^{-\alpha t}\sup_{s\leq\tau}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2}\leq R\int_{- \infty}^{0}e^{\alpha\zeta}\bigg{[}\|\mathbf{f}(\zeta+s)\|_{\mathbb{L}^{2}(\mathbb{ R}^{d})}^{2}+|y(\vartheta_{\zeta}\omega)|^{2}+|y(\vartheta_{\zeta}\omega)|^{r+1}+ |y(\vartheta_{\zeta}\omega)|^{\frac{2(r+1)}{r-1}}\bigg{]}\mathrm{d}\zeta. \tag{4.38}\]
Hence, using (4.38) and taking the supremum over \(s\in(-\infty,\tau]\) in (4.37), we arrive at (4.33). Furthermore, the inequality (4.35) can be obtained by using (4.36) and following arguments similar to those in (3.38).
**Proposition 4.6**.: _For all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)), suppose that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\). For \(R\) and \(\widetilde{K}(s,\omega),\) the same as in (4.4) and (4.34), respectively, we have_
(i) _There is an increasing pullback \(\mathfrak{D}\)-random absorbing set \(\mathcal{R}\) given by_
\[\mathcal{R}(\tau,\omega):=\bigg{\{}\mathbf{v}\in\mathbb{H}:\|\mathbf{v}\|_{\mathbb{H} }^{2}\leq 4R\sup_{s\leq\tau}\widetilde{K}(s,\omega)+2\|\mathbf{g}\|_{\mathbb{H}}^{2}|y( \omega)|^{2}\bigg{\}},\text{ for all }\tau\in\mathbb{R}\text{ and }\omega\in\Omega. \tag{4.39}\]
_Moreover, \(\mathcal{R}\) is backward-uniformly tempered with arbitrary rate, that is, \(\mathcal{R}\in\mathfrak{D}\)._
(ii) _There is a \(\mathfrak{B}\)-pullback random absorbing set \(\widetilde{\mathcal{R}}\) given by_
\[\widetilde{\mathcal{R}}(\tau,\omega):=\Big{\{}\mathbf{v}\in\mathbb{H}:\|\mathbf{v}\|_{ \mathbb{H}}^{2}\leq 4R\widetilde{K}(\tau,\omega)+2\|\mathbf{g}\|_{\mathbb{H}}^{2}|y( \omega)|^{2}\Big{\}}\in\mathfrak{B},\text{ for all }\tau\in\mathbb{R}\text{ and }\omega\in\Omega. \tag{4.40}\]
Proof.: See the proof of [73, Proposition 3.6].
### Backward uniform tail-estimates and backward flattening-property
In this subsection, we prove the backward tail-estimates and backward flattening-property for the solution of (4.2) for all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)). These estimates help us to prove the backward uniform pullback \(\mathfrak{D}\)-asymptotic compactness of the solution of (4.3). We will use the cut-off function (same as in Lemma 3.8) to obtain these estimates. The following lemma provides the backward uniform tail-estimates for the solution of the system (4.2).
**Lemma 4.7**.: _For all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)), suppose that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{L}^{2}(\mathbb{R}^{d}))\). Then, for any \((\tau,\omega,D)\in\mathbb{R}\times\Omega\times\mathfrak{D},\) the solution of (4.1) satisfies_
\[\lim_{k,t\to+\infty}\sup_{s\leq\tau}\sup_{\mathbf{u}_{0}\in D(s-t,\vartheta_{-t} \omega)}\|\mathbf{u}(s,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{L}^{2}( \mathcal{O}^{c}_{k})}^{2}=0, \tag{4.41}\]
_where \(\mathcal{O}_{k}=\{x\in\mathbb{R}^{d}:|x|\leq k\}\)._
Proof.: Let \(\mathsf{\rho}\) be the smooth function defined in Lemma 3.8. Taking the divergence of the first equation in (4.2), we formally obtain (see the proof of Lemma 3.8 for the detailed calculations)
\[p=(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}\nabla\cdot\big{(}(\mathbf{u}+\mathbf{g}y) \otimes(\mathbf{u}+\mathbf{g}y)\big{)}\big{]}+\beta\nabla\cdot\big{[}|\mathbf{u}+\mathbf{g}y|^ {r-1}(\mathbf{u}+\mathbf{g}y)\big{]}-\nabla\cdot\mathbf{f}\big{]}. \tag{4.42}\]
Taking the inner product of the first equation of (4.2) with \(\mathsf{\rho}\Big{(}\frac{|x|^{2}}{k^{2}}\Big{)}\mathbf{u}\), we have
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{d}} \mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\mathbf{u}|^{2}\mathrm{d}x\] \[=\mu\int_{\mathbb{R}^{d}}(\Delta\mathbf{u})\mathsf{\rho}\bigg{(}\frac {|x|^{2}}{k^{2}}\bigg{)}\mathbf{u}\mathrm{d}x-\alpha\int_{\mathbb{R}^{d}}\mathsf{ \rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\mathbf{u}|^{2}\mathrm{d}x-b\bigg{(}\bm {u}+\mathbf{g}y,\mathbf{u}+\mathbf{g}y,\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}( \mathbf{u}+\mathbf{g}y)\bigg{)}\] \[\quad+b\bigg{(}\mathbf{u}+\mathbf{g}y,\mathbf{u}+\mathbf{g}y,\mathsf{\rho}\bigg{(} \frac{|x|^{2}}{k^{2}}\bigg{)}\mathbf{g}y\bigg{)}-\beta\int_{\mathbb{R}^{d}}\mathsf{ \rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\mathbf{u}+\mathbf{g}y|^{r+1}\mathrm{d}x\] \[\quad+\beta\int_{\mathbb{R}^{d}}\lvert\mathbf{u}+\mathbf{g}y\rvert^{r-1}( \mathbf{u}+\mathbf{g}y)\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\mathbf{g}y \mathrm{d}x-\int_{\mathbb{R}^{d}}(\nabla p)\mathsf{\rho}\bigg{(}\frac{|x|^{2}} {k^{2}}\bigg{)}\mathbf{u}\mathrm{d}x+\int_{\mathbb{R}^{d}}\mathbf{f}\mathsf{\rho}\bigg{(} \frac{|x|^{2}}{k^{2}}\bigg{)}\mathbf{u}\mathrm{d}x \tag{4.43}\]
Let us now estimate each term on the right hand side of (4.43). Using integration by parts, the divergence free condition of \(\mathbf{u}(\cdot)\) and \(\mathbf{g}\in\mathrm{D}(\mathrm{A})\), we infer (see inequalities (3.44)-(3.50))
\[\mu\int_{\mathbb{R}^{d}}(\Delta\mathbf{u})\mathsf{\rho}\bigg{(}\frac {|x|^{2}}{k^{2}}\bigg{)}\mathbf{u}\mathrm{d}x \leq-\mu\int_{\mathbb{R}^{d}}\lvert\nabla\mathbf{u}\rvert^{2}\mathsf{ \rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\mathrm{d}x+\frac{C}{k}\big{[}\lVert \mathbf{u}\rVert_{\mathbb{H}}^{2}+\lVert\nabla\mathbf{u}\rVert_{\mathbb{H}}^{2}\big{]}, \tag{4.45}\] \[y^{2}\ b\bigg{(}\mathbf{u}+\mathbf{g}y,\mathbf{g},\mathsf{\rho}\bigg{(}\frac {|x|^{2}}{k^{2}}\bigg{)}\mathbf{g}\bigg{)} \leq\frac{C}{k}\Big{[}\lVert\mathbf{u}+\mathbf{g}y\rVert_{\mathbb{H}}^{2} +\lvert y\rvert^{4}\lVert\mathbf{g}\rVert_{\mathbb{L}^{4}}^{4}\Big{]}\leq\frac{C} {k}\Big{[}\lVert\mathbf{u}\rVert_{\mathbb{H}}^{2}+\lvert y\rvert^{2}+\lvert y \rvert^{4}\Big{]},\] \[-b\bigg{(}\mathbf{u}+\mathbf{g}y,\mathbf{u}+\mathbf{g}y,\mathsf{\rho}\bigg{(} \frac{|x|^{2}}{k^{2}}\bigg{)}(\mathbf{u}+\mathbf{g}y)\bigg{)} \leq\frac{C}{k}\|\mathbf{u}+\mathbf{g}y\|_{\mathbb{L}^{3}}^{3}\leq\frac{C} {k}\bigg{[}\lVert\mathbf{u}+\mathbf{g}y\rVert_{\mathbb{H}}^{2}+\lVert\mathbf{u}+\mathbf{g}y \rVert_{\mathbb{L}^{r+1}}^{r+1}\bigg{]}\] (4.46) \[\leq\frac{C}{k}\Big{[}\lVert\mathbf{u}\rVert_{\mathbb{H}}^{2}+\lvert y \rvert^{2}+\lVert\mathbf{u}+\mathbf{g}y\rVert_{\mathbb{L}^{r+1}}^{r+1}\bigg{]},\ \ \text{for}\ \ r\geq 2, \tag{4.44}\]
where we have used interpolation and Young's inequalities. Using integration by parts, the divergence free condition and (4.42), we obtain for \(r\geq 2\),
\[-\int_{\mathbb{R}^{d}}(\nabla p)\mathsf{\rho}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\mathbf{u}\mathrm{d}x=\int_{\mathbb{R}^{d}}p\mathsf{\rho}^{\prime}\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\frac{2}{k^{2}}(x\cdot\mathbf{u})\mathrm{d}x\] \[\leq\frac{C}{k}\int_{\mathbb{R}^{d}}\big{|}(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}\nabla\cdot\big{(}(\mathbf{u}+\mathbf{g}y)\otimes(\mathbf{u}+\mathbf{g}y)\big{)}\big{]}\big{]}\big{|}\cdot|\mathbf{u}|\mathrm{d}x+\frac{C}{k}\int_{\mathbb{R}^{d}}\big{|}(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}|\mathbf{u}+\mathbf{g}y|^{r-1}(\mathbf{u}+\mathbf{g}y)\big{]}\big{]}\big{|}\cdot|\mathbf{u}|\mathrm{d}x\] \[\quad+\frac{C}{k}\int_{\mathbb{R}^{d}}\big{|}(-\Delta)^{-1}[\nabla\cdot\mathbf{f}]\big{|}\cdot|\mathbf{u}|\mathrm{d}x=:\frac{C}{k}[Q_{1}(d,r)+Q_{2}(d,r)+Q_{3}(d,r)]. \tag{4.47}\]
**Estimate of \(Q_{1}(d,r)\)**: Using \(\boldsymbol{g}\in\mathrm{D}(\mathrm{A})\), Hölder's inequality, the Fourier transform, Ladyzhenskaya's and Young's inequalities, respectively, we get for \(d=2,3\),
\[|Q_{1}(d,r)| \leq\big{\|}(-\Delta)^{-1}\big{[}\nabla\cdot\big{[}\nabla\cdot \big{(}\boldsymbol{u}+\boldsymbol{g}y)\otimes(\boldsymbol{u}+\boldsymbol{g}y )\big{)}\big{]}\big{]}\big{\|}_{\mathbb{L}^{2}(\mathbb{R}^{d})}\|\boldsymbol{u} \|_{\mathbb{H}}\leq\|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{4}}^{2}\| \boldsymbol{u}\|_{\mathbb{H}}\] \[\leq C\|\boldsymbol{u}\|_{\mathbb{H}}^{2}\|\boldsymbol{u}\|_{ \mathbb{H}}+C|y|^{2}\|\boldsymbol{g}\|_{\mathbb{L}^{4}}^{2}\|\boldsymbol{u}\| _{\mathbb{H}}\] \[\leq C\|\boldsymbol{u}\|_{\mathbb{H}}^{\frac{6-d}{2}}\|\nabla \boldsymbol{u}\|_{\mathbb{H}}^{\frac{d}{2}}+C|y|^{4}+C\|\boldsymbol{u}\|_{ \mathbb{H}}^{2} \tag{4.48}\] \[\leq C\bigg{[}\|\nabla\boldsymbol{u}\|_{\mathbb{H}}^{2}+\| \boldsymbol{u}\|_{\mathbb{H}}^{2}+\|\boldsymbol{u}\|_{\mathbb{H}}^{\frac{2(6-d )}{4-d}}+|y|^{4}\bigg{]}.\]
**Estimate of \(Q_{2}(d,r)\):** Applying Hölder's (see (3.48)), Gagliardo-Nirenberg's (see (3.48)), interpolation and Young's inequalities, we obtain
\[|Q_{2}(d,r)| \leq C\times\begin{cases}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r}}^{r}\|\boldsymbol{u}\|_{\mathbb{H}},&\text{for $d=2$ and $r\in[2,\infty)$},\\ \|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r}\frac{6r}{5}}^{r}\| \boldsymbol{u}\|_{\mathbb{H}},&\text{for $d=3$ and $r\in[3,5]$},\\ \|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1}}^{r}\|\boldsymbol{u}\|_{ \mathbb{L}}\frac{3(r+1)}{r+4},&\text{for $d=3$ and $r\in(5,\infty)$},\\ \end{cases}\] \[\leq C\times\begin{cases}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r+1}}^{\frac{(r+1)(r-2)}{r-1}}\|\boldsymbol{u}+\boldsymbol{g}y\|_ {\mathbb{L}^{r-1}}^{\frac{2}{r-1}}\|\boldsymbol{u}\|_{\mathbb{H}},&\text{for $d=2$ and $r\in[2,\infty)$},\\ \|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1}}^{\frac{(r+1)(3r-5)}{3(r- 1)}}\|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{\frac{5-r}{(r-1)}}}^{ \frac{5-r}{(r-1)}}\|\boldsymbol{u}\|_{\mathbb{H}},&\text{for $d=3$ and $r\in[3,5]$},\\ \|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1}}^{\frac{(r+1)(3r-5)}{3(r- 1)}}\|\boldsymbol{u}\|_{\mathbb{H}}^{\frac{3(r+1)}{3(r-1)}},&\text{for $d=3$ and $r\in(5,\infty)$},\end{cases} \tag{4.49}\] \[\leq C\Big{[}\|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1 }}^{r+1}+\|\boldsymbol{u}\|_{\mathbb{H}}^{r+1}+|y|^{r+1}\Big{]},\]
where we have used interpolation and Young's inequalities.
**Estimate of \(Q_{3}(d,r)\)**: Similar to (3.49), we find (for \(d=2,3\))
\[|Q_{3}(d,r)|\leq C\|(-\Delta)^{-1}[\nabla\cdot\boldsymbol{f}]\|_{\mathbb{L}^{ \frac{d}{d-1}}(\mathbb{R}^{d})}\|\boldsymbol{u}\|_{\mathbb{L}^{d}(\mathbb{R}^ {d})}\leq C\Big{[}\|\boldsymbol{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}^{2}+\| \boldsymbol{u}\|_{\mathbb{H}}^{2}+\|\nabla\boldsymbol{u}\|_{\mathbb{H}}^{2} \Big{]}. \tag{4.50}\]
Finally, we estimate the remaining terms of (4.43) by using Hölder's and Young's inequalities as follows,
\[yb\bigg{(}\boldsymbol{u}+\boldsymbol{g}y,\boldsymbol{u},\rho \bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{g}\bigg{)}+\beta y\int_{ \mathbb{R}^{d}}\lvert\boldsymbol{u}+\boldsymbol{g}y\rvert^{r-1}(\boldsymbol {u}+\boldsymbol{g}y)\rho\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}\boldsymbol{g} \mathrm{d}x\] \[\leq\frac{\beta}{2}\int_{\mathbb{R}^{d}}\rho\bigg{(}\frac{|x|^{2} }{k^{2}}\bigg{)}|\boldsymbol{u}+\boldsymbol{g}y\rvert^{r+1}\mathrm{d}x+\frac{ \mu}{2}\int_{\mathbb{R}^{d}}\rho\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}| \nabla\boldsymbol{u}|^{2}\mathrm{d}x+\frac{\alpha}{2}\int_{\mathbb{R}^{d}} \rho\bigg{(}\frac{|x|^{2}}{k^{2}}\bigg{)}|\boldsymbol{u}|^{2}\mathrm{d}x \tag{4.51}\] \[\quad+C\int_{\mathbb{R}^{d}}\rho\bigg{(}\frac{|x|^{2}}{k^{2}} \bigg{)}\bigg{[}\lvert\boldsymbol{y}\rvert^{\frac{2(r+1)}{r-1}}\lvert \boldsymbol{g}\rvert^{\frac{2(r+1)}{r-1}}+\lvert y\rvert^{r+1}\lvert \boldsymbol{g}\rvert^{r+1}+\lvert\boldsymbol{f}\rvert^{2}+\lvert y\rvert^{2} \lvert\boldsymbol{g}\rvert^{2}+\lvert y\rvert^{2}\lvert\Delta\boldsymbol{g} \rvert^{2}\bigg{]}\mathrm{d}x.\]
Combining (4.43)-(4.51), we get
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\boldsymbol{u}\|_{\mathbb{L}^{2}( \mathscr{O}_{k}^{c})}^{2} \leq-\alpha\|\boldsymbol{u}\|_{\mathbb{L}^{2}(\mathscr{O}_{k}^{c})}^{2}+ \frac{C}{k}\bigg{[}\|\boldsymbol{u}\|_{\mathbb{H}}^{2}+\|\nabla\boldsymbol{u}\|_{ \mathbb{H}}^{2}+\|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1}}^{r+1}+\| \boldsymbol{u}\|_{\mathbb{H}}^{\frac{2(6-d)}{4-d}}+\|\boldsymbol{u}\|_{\mathbb{ H}}^{r+1}+\|\boldsymbol{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}^{2}\] \[\quad+\lvert y\rvert^{2}+\lvert y\rvert^{4}+\lvert y\rvert^{r+1 }\bigg{]}+C\lvert y\rvert^{\frac{2(r+1)}{r-1}}\int_{\lvert x\rvert\geq k} \lvert\boldsymbol{g}(x)\rvert^{\frac{2(r+1)}{r-1}}\mathrm{d}x+C\lvert y\rvert^{r+1 }\int_{\lvert x\rvert\geq k}\lvert\boldsymbol{g}(x)\rvert^{r+1}\mathrm{d}x\]
\[+C\int_{|x|\geq k}|\mathbf{f}(x)|^{2}\mathrm{d}x+C|y|^{2}\int_{|x|\geq k}|\mathbf{g}(x)|^{ 2}\mathrm{d}x+C|y|^{2}\int_{|x|\geq k}|\Delta\mathbf{g}(x)|^{2}\mathrm{d}x. \tag{4.52}\]
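(For clarity, the elementary inequality applied to (4.52) in the next step is the following: if an absolutely continuous function \(\Phi\geq 0\) satisfies \(\frac{\mathrm{d}\Phi}{\mathrm{d}\zeta}\leq-\alpha\Phi+h(\zeta)\) on \((s-t,s)\), then multiplying by \(e^{\alpha\zeta}\) and integrating yields
\[\Phi(s)\leq e^{-\alpha t}\Phi(s-t)+\int_{s-t}^{s}e^{\alpha(\zeta-s)}h(\zeta)\mathrm{d}\zeta.\]
The auxiliary notation \(\Phi\) and \(h\) is used only in this remark; below, \(\Phi\) stands for the left hand side of (4.52) and \(h\) for the remaining terms on its right hand side.)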
Applying the variation of constants formula to the above inequality (4.52) on \((s-t,s)\) and replacing \(\omega\) by \(\vartheta_{-s}\omega\), we find that, for \(s\leq\tau\), \(t\geq 0\) and \(\omega\in\Omega\),
\[\|\mathbf{u}(s,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{L}^{2}(\mathcal{O}_{k}^{c})}^{2}\] \[\leq e^{-\alpha t}\|\mathbf{u}_{0}\|_{\mathbb{H}}^{2}+\frac{C}{k}\bigg{[}\int_{s-t}^{s}e^{\alpha(\zeta-s)}\bigg{\{}\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}+\|\nabla\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{2}\] \[\quad+\|\mathbf{u}(\zeta,s-t,\vartheta_{-s}\omega,\mathbf{u}_{0})\|_{\mathbb{H}}^{r+1}\bigg{\}}\mathrm{d}\zeta+\int_{-\infty}^{0}e^{\alpha\zeta}\bigg{\{}\|\mathbf{f}(\zeta+s)\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}+|y(\vartheta_{\zeta}\omega)|^{2}+|y(\vartheta_{\zeta}\omega)|^{4}\] \[\quad+|y(\vartheta_{\zeta}\omega)|^{r+1}\bigg{\}}\mathrm{d}\zeta\bigg{]}+C\int_{-\infty}^{0}e^{\alpha\zeta}|y(\vartheta_{\zeta}\omega)|^{\frac{2(r+1)}{r-1}}\mathrm{d}\zeta\int\limits_{|x|\geq k}|\mathbf{g}(x)|^{\frac{2(r+1)}{r-1}}\mathrm{d}x\] \[\quad+C\int_{-\infty}^{0}e^{\alpha\zeta}|y(\vartheta_{\zeta}\omega)|^{r+1}\mathrm{d}\zeta\int\limits_{|x|\geq k}|\mathbf{g}(x)|^{r+1}\mathrm{d}x+C\int_{-\infty}^{0}e^{\alpha\zeta}|y(\vartheta_{\zeta}\omega)|^{2}\mathrm{d}\zeta\int\limits_{|x|\geq k}|\mathbf{g}(x)|^{2}\mathrm{d}x \tag{4.53}\] \[\quad+C\int_{-\infty}^{0}e^{\alpha\zeta}|y(\vartheta_{\zeta}\omega)|^{2}\mathrm{d}\zeta\int\limits_{|x|\geq k}|\Delta\mathbf{g}(x)|^{2}\mathrm{d}x+C\int_{-\infty}^{0}e^{\alpha\zeta}\int\limits_{|x|\geq k}|\mathbf{f}(x,\zeta+s)|^{2}\mathrm{d}x\mathrm{d}\zeta.\]
Now, using the definition of backward temperedness (2.12), (2.9), (1.8), \(\mathbf{g}\in\mathrm{D}(\mathrm{A})\) and Lemma 4.5 (both (4.33) and (4.35)), one can complete the proof.
**Lemma 4.8**.: _For all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)), suppose that \(\mathbf{f}\in\mathrm{L}^{2}_{\mathrm{loc}}(\mathbb{R};\mathbb{H})\). Let \((\tau,\omega,D)\in\mathbb{R}\times\Omega\times\mathfrak{D}\) and \(k\geq 1\) be fixed. Then_
\[\lim_{i,t\to+\infty}\sup_{s\leq\tau}\sup_{\mathbf{u}_{0}\in D(s-t, \vartheta_{-t}\omega)}\|(\mathrm{I}-\mathrm{P}_{i})\bar{\mathbf{u}}(s,s-t, \vartheta_{-s}\omega,\bar{\mathbf{u}}_{0,2})\|_{\mathbb{L}^{2}(\mathcal{O}_{ \sqrt{2}k})}^{2}=0, \tag{4.54}\]
_where \(\bar{\mathbf{u}}_{0,2}=(\mathrm{I}-\mathrm{P}_{i})(\varrho_{k}\mathbf{u}_{0})\)._
Proof.: The first equation of (4.2) can be rewritten as (multiplying by \(\varrho_{k}\)):
\[\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t}-\mu\Delta\bar{\mathbf{u} }+\varrho_{k}\big{(}(\mathbf{u}+\mathbf{g}y)\cdot\nabla\big{)}(\mathbf{u}+\mathbf{g}y)+\alpha \bar{\mathbf{u}}+\varrho_{k}|\mathbf{u}+\mathbf{g}y|^{r-1}(\mathbf{u}+\mathbf{g}y)+\varrho_{k} \nabla p \tag{4.55}\] \[=-\mu\mathbf{u}\Delta\varrho_{k}-2\mu\nabla\varrho_{k}\cdot\nabla\mathbf{ u}+\varrho_{k}\mathbf{f}+(\sigma-\alpha)\varrho_{k}\mathbf{g}y+\mu y\varrho_{k} \Delta\mathbf{g}.\]
Applying the projection \((\mathrm{I}-\mathrm{P}_{i})\) to equation (4.55) and taking the inner product with \(\bar{\mathbf{u}}_{i,2}\) in \(\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})\), we get
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}+\mu\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}+\alpha\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}+\beta\big{\|}|\mathbf{u}+\mathbf{g}y|^{\frac{r-1}{2}}(\bar{\mathbf{u}}_{i,2}+\bar{\mathbf{g}}_{i,2}y)\big{\|}_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}\] \[=-\underbrace{\sum_{q,m=1}^{2}\int_{\mathcal{O}_{\sqrt{2}k}}(\mathrm{I}-\mathrm{P}_{i})\Big{[}(u_{q}+g_{q}y)\frac{\partial(u_{m}+g_{m}y)}{\partial x_{q}}\{\varrho_{k}(x)\}^{2}(u_{m}+g_{m}y)\Big{]}\mathrm{d}x}_{:=L_{1}}-L_{2}-L_{3}-L_{4}-L_{5}, \tag{4.56}\]
where \(L_{2}\) and \(L_{3}\) denote the remaining convective and Forchheimer terms containing \(\mathbf{g}y\) (defined analogously to the corresponding terms in (4.43), with the weight \(\{\varrho_{k}(x)\}^{2}\) in place of \(\mathsf{\rho}\big{(}\frac{|x|^{2}}{k^{2}}\big{)}\)), \(L_{4}\) is the term containing the pressure, and \(L_{5}\) collects the terms arising from the right hand side of (4.55). The terms \(L_{1},L_{2}\) and \(L_{3}\) are estimated as in the proof of Lemma 3.9, which yields the bounds (4.57)-(4.58). For the term \(L_{4}\) containing the pressure, the representation (4.42) of \(p\) yields \(|L_{4}|\)
\[\leq C\int_{\mathcal{O}_{\sqrt{2}k}}\bigl{|}(-\Delta)^{-1}\bigl{[} \nabla\cdot\bigl{[}\nabla\cdot\bigl{(}(\boldsymbol{u}+\boldsymbol{g}y)\otimes( \boldsymbol{u}+\boldsymbol{g}y)\bigr{)}\bigr{]}\bigr{]}\bigr{|}\cdot|\bar{ \boldsymbol{u}}_{i,2}|\mathrm{d}x \tag{4.59}\] \[\qquad+C\int_{\mathcal{O}_{\sqrt{2}k}}\bigl{|}(-\Delta)^{-1} \bigl{[}\nabla\cdot\bigl{[}|\boldsymbol{u}+\boldsymbol{g}y|^{r-1}(\boldsymbol{u }+\boldsymbol{g}y)\bigr{]}\bigr{]}\bigr{|}\cdot|\bar{\boldsymbol{u}}_{i,2}| \mathrm{d}x+C\int_{\mathcal{O}_{\sqrt{2}k}}|(-\Delta)^{-1}[\nabla\cdot \boldsymbol{f}]|\cdot|\bar{\boldsymbol{u}}_{i,2}|\mathrm{d}x\] \[=:C\Bigl{[}\widetilde{Q}_{1}(d,r)+\widetilde{Q}_{2}(d,r)+ \widetilde{Q}_{3}(d,r)\Bigr{]}.\]
**Estimate of \(\widetilde{Q}_{1}(d,r)\):** Using Hölder's inequality, the Fourier transform, Ladyzhenskaya's and Young's inequalities, we get for \(d=2,3\),
\[|\widetilde{Q}_{1}(d,r)| \leq\bigl{\|}(-\Delta)^{-1}\bigl{[}\nabla\cdot\bigl{[}\nabla \cdot\bigl{(}(\boldsymbol{u}+\boldsymbol{g}y)\otimes(\boldsymbol{u}+ \boldsymbol{g}y)\bigr{)}\bigr{]}\bigr{]}\bigr{\|}_{\mathbb{L}^{2}(\mathcal{O} _{\sqrt{2}k})}\|\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{ \sqrt{2}k})}\leq\|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{4}}^{2}\|\bar{ \boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}\] \[\leq C\lambda_{i+1}^{-\frac{4-d}{8}}\bigl{\|}\nabla(\boldsymbol{u }+\boldsymbol{g}y)\|_{\mathbb{H}}^{\frac{d}{2}}\|\boldsymbol{u}+\boldsymbol{g}y \|_{\mathbb{H}}^{\frac{4-d}{2}}\|\nabla\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{ L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{4}{2}}\|\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{ L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{4}{2}} \tag{4.60}\] \[\leq C\lambda_{i+1}^{-\frac{4-d}{8}}\bigl{[}\|\boldsymbol{u}+ \boldsymbol{g}y\|_{\mathbb{H}}^{2}+\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{H}}^{\frac{2(4-d)}{2-d}}+\|\boldsymbol{u}\|_{\mathbb{H}}^{2}+\| \boldsymbol{u}\|_{\mathbb{H}}^{2}\bigr{]}\] \[\leq C\lambda_{i+1}^{-\frac{4-d}{8}}\bigl{[}\|\boldsymbol{u}\|_{ \mathbb{H}}^{2}+\|\nabla\boldsymbol{u}\|_{\mathbb{H}}^{2}+\|\boldsymbol{u}\|_{ \mathbb{H}}^{\frac{2(4-d)}{2-d}}+|y|^{2}+|y|^{\frac{2(4-d)}{2-d}}\bigr{]}.\]
**Estimate of \(\widetilde{Q}_{2}(d,r)\):** Applying Hölder's (see (3.62)), Gagliardo-Nirenberg's (see (3.62)), interpolation and Young's inequalities, we find
\[|\widetilde{Q}_{2}(d,r)| \leq C\times\begin{cases}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r}}^{r}\|\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_ {\sqrt{2}k})},&\text{for $d=2$ and $r\in[2,\infty)$},\\ \|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r}}^{r}\|\bar{\boldsymbol{u }}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})},&\text{for $d=3$ and $r\in[3,5]$},\\ \|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1}}^{r}\|\bar{\boldsymbol{u }}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})},&\text{for $d=3$ and $r\in(5,\infty)$},\\ \end{cases}\] \[\leq C\times\begin{cases}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r+1}}^{\frac{(r+1)(r-2)}{r-1}}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r-1}}^{\frac{2}{(r-1)}}\|\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^ {2}(\mathcal{O}_{\sqrt{2}k})},&\text{for $d=2$ and $r\in[2,\infty)$},\\ \|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1}}^{\frac{(r+1)(3r-5)}{3(r-1 )}}\|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{H}}^{\frac{5-r}{3(r-1)}}\| \bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})},&\text{ for $d=3$ and $r\in[3,5]$},\\ \|\boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1}}^{r}\|\boldsymbol{u}\|_{ \mathbb{L}^{r+1}}^{\frac{r-5}{3(r-1)}}\|\bar{\boldsymbol{u}}_{i,2}\|_{ \mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{2(r+1)}{3(r-1)}},&\text{for $d=3$ and $r\in(5,\infty)$},\\ \lambda_{i+1}^{-\frac{1}{3r(r-1)}}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r+1}}^{r}\|\boldsymbol{u}\|_{\mathbb{L}^{r+1}}^{\frac{r-5}{3(r-1)}}\| \boldsymbol{u}\|_{\mathbb{H}}^{\frac{2(r+1)}{3r}}\|\nabla\bar{\boldsymbol{u}}_{i,2} \|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{2(r+1)}{3(r-1)}},&\\ \lambda_{i+1}^{-\frac{1}{3r}}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r+1}}^{r}\|\boldsymbol{u}\|_{\mathbb{L}^{r+1}}^{\frac{r-5}{3(r-1)}}\| \boldsymbol{u}\|_{\mathbb{H}}^{\frac{2(r+1)}{3r}}\|\nabla\bar{\boldsymbol{u}}_{i,2} \|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{2(r+1)}{3(r-1)}},&\\ \end{cases}\] \[\leq C\times\begin{cases}\lambda_{i+1}^{-\frac{1}{3r(r-1)}}\| \boldsymbol{u}+\boldsymbol{g}y\|_{\mathbb{L}^{r+1}}^{r}\|\boldsymbol{u}\|_{ \mathbb{L}^{r+1}}^{\frac{r-5}{3(r-1)}}\|\boldsymbol{u}\|_{\mathbb{H}}^{\frac{2 (r+1)}{3r}}\|\nabla\bar{\boldsymbol{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{ 2}k})}^{\frac{2(r+1)}{3(r-1)}},&\\ \lambda_{i+1}^{-\frac{1}{3r}}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r+1}}^{r}\|\boldsymbol{u}\|_{\mathbb{L}^{r+1}}^{\frac{r-5}{3(r-1)}}\| \boldsymbol{u}\|_{\mathbb{H}}^{\frac{2(r+1)}{3r}}\|\nabla\bar{\boldsymbol{u}}_{i,2} \|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{2(r+1)}{3(r-1)}},&\\ \lambda_{i+1}^{-\frac{1}{3r}}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r+1}}^{r}\|\boldsymbol{u}\|_{\mathbb{L}^{r+1}}^{\frac{r-5}{3(r-1)}}\| \boldsymbol{u}\|_{\mathbb{H}}^{\frac{2(r+1)}{3r}}\|\nabla\bar{\boldsymbol{u}}_{i,2} \|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{2(r+1)}{3(r-1)}},&\\ \lambda_{i+1}^{-\frac{1}{3r(r-1)}}\|\boldsymbol{u}+\boldsymbol{g}y\|_{ \mathbb{L}^{r+1}}^{r}\|\boldsymbol{u}\|_{\mathbb{L}^{r+1}}^{\frac{r-5}{3(r-1)}}\| \boldsymbol{u}\|_{\mathbb{H}}^{\frac{2(r+1)}{3r}}\|\nabla\bar{\boldsymbol{u}}_{i,2} \|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{2(r+1)}{3(r-1)}},&\\ \end{cases}\]
\[\leq C\times\begin{cases}\lambda_{i+1}^{-\frac{1}{2(r-1)}}\Big{[}\|\mathbf{u}+\mathbf{g}y \|_{\mathbb{L}^{r+1}}^{r+1}+\|\mathbf{u}+\mathbf{g}y\|_{\mathbb{H}}^{4(r-1)}+\|\mathbf{u}\|_{ \mathbb{H}}^{2(r-1)}+\|\mathbf{u}\|_{\mathbb{V}}^{2}\Big{]},\\ &\text{ for }d=2\text{ and }r\in[2,\infty),\\ \lambda_{i+1}^{-\frac{12}{12}}\Big{[}\|\mathbf{u}+\mathbf{g}y\|_{\mathbb{L}^{r+1}}^{r +1}+\|\mathbf{u}+\mathbf{g}y\|_{\mathbb{H}}^{2}+\|\mathbf{u}\|_{\mathbb{H}}^{10}+\|\mathbf{u}\| _{\mathbb{V}}^{2}\Big{]},\\ \lambda_{i+1}^{-\frac{r+1}{3r(r-1)}}\Big{[}\|\mathbf{u}+\mathbf{g}y\|_{\mathbb{L}^{r+1 }}^{r+1}+\|\mathbf{u}\|_{\mathbb{L}^{r+1}}^{r+1}+\|\mathbf{u}\|_{\mathbb{H}}^{r+1}+\| \mathbf{u}\|_{\mathbb{V}}^{2}\Big{]},\text{ for }d=3\text{ and }r\in(5,\infty).\end{cases} \tag{4.61}\]
**Estimate of \(\widetilde{Q}_{3}(d,r)\):** Similar to (3.63), we find (for \(d=2,3\))
\[|\widetilde{Q}_{3}(d,r)| \leq C\|(-\Delta)^{-1}[\nabla\cdot\mathbf{f}]\|_{\mathbb{L}^{\frac{d}{d-1}}(\mathbb{R}^{d})}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{d}(\mathcal{O}_{\sqrt{2}k})}\] \[\leq C\|\mathbf{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{4-d}{2}}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{\frac{d-2}{2}}\] \[\leq C\lambda_{i+1}^{-\frac{4-d}{4}}\|\mathbf{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}\] \[\leq\frac{\mu}{4}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}+C\lambda_{i+1}^{-\frac{4-d}{2}}\|\mathbf{f}\|_{\mathbb{L}^{1}(\mathbb{R}^{d})}^{2}. \tag{4.62}\]
**Estimate of \(L_{5}\):** Applying Hölder's and Young's inequalities, we deduce
\[|L_{5}| \leq C\Big{[}\|\mathbf{u}\|_{\mathbb{H}}+\|\nabla\mathbf{u}\|_{\mathbb{H}}+\|\mathbf{f}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}+|y|\Big{]}\|\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}\] \[\leq C\lambda_{i+1}^{-1/2}\bigg{[}\|\mathbf{u}\|_{\mathbb{H}}+\|\nabla\mathbf{u}\|_{\mathbb{H}}+\|\mathbf{f}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}+|y|\bigg{]}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}\] \[\leq\frac{\mu}{4}\|\nabla\bar{\mathbf{u}}_{i,2}\|_{\mathbb{L}^{2}(\mathcal{O}_{\sqrt{2}k})}^{2}+C\lambda_{i+1}^{-1}\bigg{[}\|\mathbf{u}\|_{\mathbb{H}}^{2}+\|\nabla\mathbf{u}\|_{\mathbb{H}}^{2}+\|\mathbf{f}\|_{\mathbb{L}^{2}(\mathbb{R}^{d})}^{2}+|y|^{2}\bigg{]}. \tag{4.63}\]
Now, combining (4.56)-(4.63), applying the variation of constants formula, using Lemma 4.5 (both (4.33) and (4.35)) and passing to the limit \(i\to\infty\) (note that \(\lambda_{i+1}^{-1}\to 0\) as \(i\to\infty\)), we obtain (4.54), as desired (see the proof of Lemma 3.9), which completes the proof.
### Proof of Theorem 1.3
This subsection is devoted to the proof of the main result of this section, that is, the existence of pullback \(\mathfrak{D}\)-random attractors and their asymptotic autonomy for the solution of the system (2.6) with \(S(\mathbf{v})=\mathbf{g}\in\mathrm{D}(\mathrm{A})\). For all the cases given in Table 1 (excluding \(d=2\) with \(r=1\)), the existence of pullback \(\mathfrak{D}\)-random attractors for the non-autonomous SCBF equations driven by additive noise on the whole space is established in [38]. Moreover, since the existence of a unique pullback random attractor is known for each \(\tau\), one obtains the existence of a unique random attractor for the corresponding autonomous SCBF equations driven by additive noise on the whole space (cf. [38]).
In view of Propositions 4.4 and 4.6, and Lemmas 4.7 and 4.8, the proof of Theorem 1.3 can be obtained by arguments similar to those in the proof of [73, Theorem 1.6] (Subsection 3.5 in [73]) and [9, Theorem 5.2].
**Acknowledgments:** The first author would like to thank the Council of Scientific & Industrial Research (CSIR), India for financial assistance (File No. 09/143(0938)/2019-EMR-I). M. T. Mohan would like to
thank the Department of Science and Technology (DST), Govt of India for Innovation in Science Pursuit for Inspired Research (INSPIRE) Faculty Award (IFA17-MA110). Renhai Wang was supported by China Postdoctoral Science Foundation under grant numbers 2020TQ0053 and 2020M680456.
**Declarations: Ethical Approval:** Not applicable
**Competing interests:** The authors declare no competing interests.
**Authors' contributions:** All authors have contributed equally.
**Funding:** CSIR, India, 09/143(0938)/2019-EMR-I (K. Kinra), DST, India, IFA17-MA110 (M. T. Mohan).
**Availability of data and materials:** Not applicable.
|
2309.14446 | Geometric frustration of hard-disk packings on cones | Conical surfaces pose an interesting challenge to crystal growth: a crystal
growing on a cone can wrap around and meet itself at different radii. We use a
disk-packing algorithm to investigate how this closure constraint can
geometrically frustrate the growth of single crystals on cones with small
opening angles. By varying the crystal seed orientation and cone angle, we find
that -- except at special commensurate cone angles -- crystals typically form a
seam that runs along the axial direction of the cone, while near the tip, a
disordered particle packing forms. We show that the onset of disorder results
from a finite-size effect that depends strongly on the circumference and not on
the seed orientation or cone angle. This finite-size effect occurs also on
cylinders, and we present evidence that on both cylinders and cones, the defect
density increases exponentially as circumference decreases. We introduce a
simple model for particle attachment at the seam that explains the dependence
on the circumference. Our findings suggest that the growth of single crystals
can become frustrated even very far from the tip when the cone has a small
opening angle. These results may provide insights into the observed geometry of
conical crystals in biological and materials applications. | Jessica H. Sun, Abigail Plummer, Grace H. Zhang, David R. Nelson, Vinothan N. Manoharan | 2023-09-25T18:04:36Z | http://arxiv.org/abs/2309.14446v1 | # Geometric frustration of hard-disk packings on cones
###### Abstract
Conical surfaces pose an interesting challenge to crystal growth: a crystal growing on a cone can wrap around and meet itself at different radii. We use a disk-packing algorithm to investigate how this closure constraint can geometrically frustrate the growth of single crystals on cones with small opening angles. By varying the crystal seed orientation and cone angle, we find that--except at special commensurate cone angles--crystals typically form a seam that runs along the axial direction of the cone, while near the tip, a disordered particle packing forms. We show that the onset of disorder results from a finite-size effect that depends strongly on the circumference and not on the seed orientation or cone angle. This finite-size effect occurs also on cylinders, and we present evidence that on both cylinders and cones, the defect density increases exponentially as circumference decreases. We introduce a simple model for particle attachment at the seam that explains the dependence on the circumference. Our findings suggest that the growth of single crystals can become frustrated even very far from the tip when the cone has a small opening angle. These results may provide insights into the observed geometry of conical crystals in biological and materials applications.
## I Introduction
The growth of a crystal can be frustrated by interactions with a curved surface such as a spherical or hyperbolic substrate [1; 2; 3; 4]. When the surface has nonzero Gaussian curvature, the frustration stems from variations in the surface metric, which lead to stretching of the crystal lattice. This type of geometrical frustration has been well studied, particularly in colloidal systems [5; 6; 7; 8; 9; 10; 11; 12; 13].
Less well studied is frustration arising on surfaces with no Gaussian curvature but on which crystals can form loops, such as cylinders. Although such surfaces do not stretch the lattice, they can nonetheless frustrate a crystal by imposing a closure constraint. As observed in experiments and simulations on colloidal crystals on cylindrical fibers [14], crystals with orientations that are incommensurate with the closure constraint form seams. These seams, which are stable on cylinders but not on flat surfaces unconstrained by periodic boundary conditions, break the translational symmetry of the crystal.
Here we examine how the closure constraint affects crystallization on a cone, which, unlike a cylinder, has a spatially varying circumference. As a consequence, a seam must form with a width that varies in the axial direction whenever the cone angle does not permit the crystal to wrap perfectly around the cone (for a triangular lattice, such commensurate wrappings can be achieved by placing, for example, a 60deg disclination at the cone apex). The seam is similar to a tilt grain boundary between two misoriented crystals on a flat substrate [15; 16], except that it is a boundary between the misoriented edges of a _single_ crystal that has wrapped around the cone. This seam can break both the translational and rotational symmetry of the crystal. We seek to understand how the closure constraint geometrically frustrates crystal growth on a cone.
There are few previous studies of crystallization on a cone. Basin-hopping simulations of colloidal crystals showed that interacting particles on a cone form seams or scar-like defects [17; 18]. The aim of these simulations was to understand the defect structure and how it changes with the cone geometry. An experimental study of an atomic system, WS\({}_{2}\), showed that crystals on a cone form a distinct seam [19]. The cone in this work had a large opening angle, and the size of the WS\({}_{2}\) crystal was orders of magnitude larger than the size of the atoms. The main aim of this study was to demonstrate the existence of the seam, which the authors refer to as a tilt grain boundary.
Our aim is to examine the growth process, and in particular how conical crystals grow after closure. In contrast to the basin-hopping simulations [17; 18], which reveal energy-minimizing crystal structures, we aim to determine how an out-of-equilibrium growth process leads to disorder. Furthermore, we focus on particles with short-ranged attractions, as seen in previous experimental studies of crystallization on a flat surface [20], sphere [11], and cylinder [14].
To isolate the consequences of geometrical frustration on the crystal structure--and avoid complications associated with multiple nucleation sites and kinetics--we use a greedy algorithm to simulate the idealized growth of a crystal on a conical surface. Our approach, based on an algorithm developed by Bennett to study metallic glasses [21], is inspired by previous work on understanding the effects of geometrical frustration in metallic glasses, and on spherical or hyperbolic surfaces [22; 23]. Because we aim to understand effects in the quasi-two-dimensional colloidal systems realized experimentally, we simulate disk packings rather than sphere packings to simplify the computation.
Our simulation is designed to model the slow, reaction-limited growth of a single crystal from a fixed nucleus. Briefly, we initialize the simulation with three disks placed in a triangular configuration with a defined orientation and position on the surface (Fig. 1A). At each subsequent step, the algorithm places a single disk in a position that maximally reduces the energy of the crystal interface. We do not allow the particles to rearrange following placement. On a flat surface, this algorithm produces a perfect crystal. Therefore any deviation from a perfect crystal on a conical surface is the direct result of geometric frustration, due to a \(\delta\)-function of Gaussian curvature at the cone apex [24]. Although this algorithm does not account for the effects of temperature or kinetics, it is a simple and effective way to model the effects of the closure constraint on crystal growth. The details of the method are given in Sec. II.
By modeling crystal growth in this manner, we find a proliferation of defects (defined here as particles with anomalous coordination numbers) for a crystal growing towards the tip of a cone with a small opening angle, as shown in Sec. III. As we shall show, this onset of disorder results from a finite-size effect that depends strongly on the local circumference and is insensitive to the seed orientation and cone angle. Intuitively, the disordered regions appear when a significant fraction of the growing interface consists of the seam of the crystal. We develop a theoretical model that explains the results in Sec. IV.1 and discuss the influence of the crystal seed location in Sec. IV.2 (the appendices provide additional context, including discussions about corrections due to the three-dimensional (3D) nature of the particles, commensurate packings on cylinders, and a particularly interesting alternative seed composed of a ring of particles). We conclude by noting that the transition to a disordered packing can occur surprisingly far from the tip, which may give some insights into the morphology of crystals seen in biology and materials (Sec. V).
## II Methods
Our algorithm is designed to simulate an idealized crystallization process in which a crystal grows from a single nucleus in a reaction-limited fashion. To create the nucleus, we place three close-packed particles with diameter \(a_{0}\) in a tangent plane at a radius \(R=C/\phi\) from the sector vertex and with an orientation \(\theta\) with respect to the cone circumference \(C\) (Fig. 1A). We then add particles one by one, such that each particle contacts the greatest number of other particles, or, equivalently, forms the greatest number of "bonds." A bond is formed when the centers of two particles are less than \(a_{0}\) apart with a tolerance of \(10^{-4}a_{0}\), representing an interaction potential with a narrow attractive well, as is found in a number of colloidal adsorption experiments [11; 14]. If there are several degenerate options for placing the particle, we randomly select one option. We do not place particles such that they form only one bond with the existing assembly because the position of a dangling bond is not
Figure 1: Geometric parameters in the simulation. (**A**) Diagram of a triangular crystal seed. The angle \(\theta\) describes the orientation of the lattice vector relative to the red cone circumference curve \(C\). (**B**) Rendering showing results of a Bennett-type simulation [21] for a 2D packing of disks. We initialize the crystal seed (marked in black) on a 3D cone of angle \(\beta\). The color of each particle indicates its coordination number \(N_{j}\). (**C**) Rendering showing a mapping of the 3D cone in **B** to a 2D unrolled sector of angle \(\phi=2\pi\sin(\beta/2)\), where the seed is at a sector radius \(R\) from the apex. In this example, \(\theta=30^{\circ}\), \(\phi=20^{\circ}\), \(C=12a_{0}\) and \(R=54.4a_{0}\). A seam consisting of particles with coordination number \(N_{j}<6\) runs in the axial direction.
well-defined, and a rotation of the dangling bond quickly leads to contact with two particles. We also do not allow any previously placed particles to move, nor do we let the entire crystal translate or rotate.
We choose 2D circular disks of diameter \(a_{0}\) to represent the effective shapes of 3D spherical particles adsorbed onto a conical surface, and we neglect the anisotropic, position-dependent stretching of particle projections onto a conical surface. This approximation allows us to map the three-dimensional (3D) cone of cone angle \(\beta\) into a flat two-dimensional (2D) circular sector with a periodic boundary condition and a sector angle \(\phi=2\pi\sin(\beta/2)\). The mapping is one-to-one because the circumference \(C\) of a circular cross-section of the 3D cone (Fig. 1B) is equivalent to the arc length \(C\) of the 2D sector at a given \(R\) (Fig. 1C). The resulting 2D algorithm is computationally simpler, yet is still able to capture the effects of the closure condition on frustration (see Appendix A for a discussion of the limitations of this approach).
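As a concrete illustration of this geometry (a minimal sketch in Python; it is not part of the published analysis code, and the function names are ours), the relations \(\phi=2\pi\sin(\beta/2)\) and \(C=R\phi\) can be encoded directly:

```python
import numpy as np

def sector_angle(beta):
    """Unrolled sector angle phi (radians) for a cone of opening angle beta (radians)."""
    return 2 * np.pi * np.sin(beta / 2)

def circumference(R, phi):
    """Arc length C of the unrolled sector at radius R (equal to the 3D cone circumference)."""
    return R * phi

def seed_radius(C, phi):
    """Radial distance R = C / phi of a seed placed at circumference C."""
    return C / phi

# Example: a phi = 5 degree sector with the seed at C = 10 a0
phi = np.deg2rad(5.0)
print(seed_radius(10.0, phi))   # ~114.6 a0, the seed radius quoted in Fig. 2
```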
We truncate the sector at a radial distance of \(20a_{0}\) from the seed position to encourage growth into the sector tip, where we expect to see interesting structures. A simulation terminates when no more particles can be placed into the sector or 1000 particles have been placed. For comparison, the maximum number of particles that can pack into a \(\phi=5^{\circ}\) sector is around 800.
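The following sketch illustrates the greedy placement rule described above. It is not the authors' released code (that is linked in the Data Availability Statement); the candidate-site generation and the seam-aware distance are simplified assumptions, and positions are taken in the polar coordinates \((r,\psi)\) of the unrolled sector with the identification \(\psi\sim\psi+\phi\).

```python
import numpy as np

A0 = 1.0          # hard-core diameter (lengths in units of a0)
TOL = 1e-4 * A0   # bond tolerance used in the paper

def distance(p, q, phi):
    """Distance between points p = (r1, psi1) and q = (r2, psi2) on the unrolled
    sector, identifying psi with psi + phi (the closure/seam condition);
    for small sector angles the nearest periodic images suffice."""
    r1, a1 = p
    r2, a2 = q
    return min(
        np.hypot(r1 * np.cos(a1) - r2 * np.cos(a2 + k * phi),
                 r1 * np.sin(a1) - r2 * np.sin(a2 + k * phi))
        for k in (-1, 0, 1)
    )

def count_bonds(site, particles, phi):
    """A bond is formed when two centres are a0 apart, within the tolerance."""
    return sum(abs(distance(site, p, phi) - A0) <= TOL for p in particles)

def place_one(particles, candidate_sites, phi, rng):
    """Greedy step: add one particle at a non-overlapping candidate site that
    forms the largest number of bonds (at least two); ties are broken at random."""
    best, best_bonds = [], 2
    for site in candidate_sites:
        if any(distance(site, p, phi) < A0 - TOL for p in particles):
            continue                       # overlaps an existing particle
        b = count_bonds(site, particles, phi)
        if b < best_bonds:
            continue
        if b > best_bonds:
            best, best_bonds = [], b
        best.append(site)
    if not best:
        return None                        # no admissible site: growth terminates
    chosen = best[rng.integers(len(best))]
    particles.append(chosen)
    return chosen
```

In this sketch `candidate_sites` is assumed to be generated elsewhere (for example, contact positions at distance \(a_{0}\) from pairs of boundary particles) and `rng` is a `numpy.random.Generator`; the seed construction and the termination conditions described above are not reproduced here.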
Since our aim is to understand the effects of geometrical frustration, we select conditions under which perfect triangular crystals cannot form. Perfect crystals lack seams and can form only when the sector angle is a "magic" angle \(\phi=60^{\circ}P\), where \(P\) is an integer [24, 25, 26]. For example, a cylinder is a \(P=0\) magic cone with \(\phi=0^{\circ}\). We therefore restrict our study to cones with \(\phi\neq 60^{\circ}P\). A seam on such a cone is shown in Fig. 1B and C. Particles at the crystal self-boundary have coordination number \(N_{j}<6\). For particles with Lennard-Jones-like pair potentials where the range of attractive interaction is comparable to the defect hard-core diameter, one might expect a grain boundary to form [5]. Our simulations, however, have an interaction range of order \(10^{-4}\) times the hard-core diameter, and the resulting seam, like a stacking fault, maintains its integrity.
We choose to study near-cylindrical cones with small opening angles of \(\phi\leq 30^{\circ}\) to facilitate comparison with results of previous studies of crystals on cylinders [27, 14]. When simulating growth on a surface with \(\phi=0^{\circ}\), corresponding to a true cylinder, we select a seed orientation \(\theta\) such that seams are still geometrically required.
To characterize the structures, we calculate the bond orientational order parameter \(\psi_{6,j}\) for each particle \(j\) with nearest neighbors indexed by \(k\)[28, 29]:
\[\psi_{6,j}=\frac{1}{N_{j}}\sum_{k=1}^{N_{j}}e^{i6\theta_{jk}}, \tag{1}\]
where \(\theta_{jk}\) is the angle between the circumferential axis and the vector from particle \(j\) to nearest neighbor \(k\), and \(N_{j}\) is the number of nearest neighbors, or coordination number, of particle \(j\). We consider only particles that are separated by \(a_{0}\pm 10^{-4}a_{0}\) as nearest neighbors.
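A direct transcription of Eq. (1) is given below, assuming positions are Cartesian coordinates in the unrolled plane with the circumferential direction along \(x\) (a coordinate convention we adopt here), and that the neighbor list has already been built with the \(a_{0}\pm 10^{-4}a_{0}\) criterion; bond vectors crossing the seam would additionally need the periodic image, which is omitted for brevity.

```python
import numpy as np

def psi6(j, positions, neighbors):
    """Bond-orientational order parameter of Eq. (1) for particle j.

    positions : (N, 2) array of particle centres in the unrolled plane
    neighbors : dict mapping particle index -> list of nearest-neighbour indices
    """
    nbrs = neighbors[j]
    if len(nbrs) == 0:
        return 0j
    bonds = positions[nbrs] - positions[j]          # vectors to the neighbours
    theta = np.arctan2(bonds[:, 1], bonds[:, 0])    # angle from the circumferential (x) axis
    return np.exp(6j * theta).mean()
```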
We also calculate the defect density \(\rho\) as a function of distance \(R\) from the vertex. We define defects as particles that have \(|\psi_{6,j}|<0.9\). This definition allows us to distinguish defects, which disrupt the order of the crystal, from particles on the boundary of the seam, which are part of an ordered crystal. Since the same defect trends are preserved for different cutoff values of \(|\psi_{6,j}|\), we choose a high cutoff value to obtain a sensitive measure of the defect density (see Appendix B). To calculate the defect density, we first bin the particles by \(R\). We then calculate the number of defects per number of particles within each bin of width \(a_{0}\), averaged over 100 trials.
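A sketch of the binning just described, assuming arrays `R` (radial distance of each particle from the vertex) and `psi6_abs` (the corresponding \(|\psi_{6,j}|\) values); the bin width \(a_{0}\) and the 0.9 cutoff follow the text, and the averaging over 100 trials is left out.

```python
import numpy as np

def defect_density_profile(R, psi6_abs, a0=1.0, cutoff=0.9):
    """Fraction of defect particles (|psi6| < cutoff) in annular bins of width a0."""
    edges = np.arange(R.min(), R.max() + a0, a0)
    which = np.digitize(R, edges)                 # bin index for each particle
    centers, density = [], []
    for b in np.unique(which):
        in_bin = which == b
        centers.append(edges[b - 1] + a0 / 2)
        density.append(np.mean(psi6_abs[in_bin] < cutoff))
    return np.array(centers), np.array(density)
```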
Figure 2: Variation of particle packings with seed orientation \(\theta\) on cones with sector angle \(\phi\). (**A**) Rendering of simulation results at unrolled cone angle \(\phi=5^{\circ}\) show that particles in the vicinity of the seed placed at \(R=114.59a_{0}\) (\(C=10a_{0}\)) are ordered. As the crystal approaches the vertex, the packing becomes disordered. (**B**) Plot of the defect density \(\rho\) as a function of radial distance \(R\) from the tip, where \(\rho\) is the number fraction of particles with \(|\psi_{6,j}|<0.9\) in a given \(R\) bin averaged over 100 trials at each seed orientation \(\theta\). The defect density curves are similar for all \(\theta\) at \(\phi=5^{\circ}\). Note that the defect density rises from the seed toward the tip and then drops to zero at \(R\approx 28.5a_{0}\) because the particles are squeezed out of the tip.
## III Results
We first explore the effects of seed orientation on the structure of the system. We find that the crystal initially grows from the seed as an ordered packing of particles, represented by grey particles with \(N_{j}=6\) in Fig. 2A. As the crystal wraps around the cone, it meets itself to form a seam consisting of particles with \(N_{j}<6\). However, as the seam approaches smaller circumferences, these defects begin to dominate the growth interface, leading to the formation of a disordered region consisting primarily of defect particles with \(N_{j}<6\) (see Fig. 2A, regions near tips of cones).
For all seed orientations, the defect density follows a similar curve (Fig. 2B), tending to increase as \(R\) decreases. The local minimum at approximately \(R=40a_{0}\) corresponds to the location where a crystalline cluster three particles wide can form. The defect density then increases again for smaller \(R\), until it reaches a maximum value and falls rapidly to \(\rho=0\) at approximately \(R=28.5a_{0}\). Note that \(R\approx 23a_{0}\) represents the limit for packing two particles side by side on unrolled cones of angle \(\phi=5^{\circ}\). Overall, the tendency of the defect density to increase as the cone narrows suggests that finite-size effects are responsible for the disorder near the tip.
When we initialize crystals on cones with different sector angles with fixed seed orientation \(\theta=40^{\circ}\), we find that the position at which the disordered region emerges also varies, as seen for unrolled cones in Fig. 3A. For small sector angles, we find more disorder farther from the tip, while for larger sector angles, the disordered region is found nearer the tip (Fig. 3B). These results show that cones with small angles can have long disordered regions.
By rescaling \(R\) by \(R\phi=C\), we find that the defect density collapses as a function of cone circumference (Fig. 3C). Defect proliferation therefore depends on the circumference and not strongly on the sector angle or seed orientation. Although the precise sector angle and seed orientation might alter the details of the closure constraint at the single-particle scale, the circumference at which closure occurs has a greater effect on crystal growth.
The density curves do not perfectly collapse because of our discrete binning procedure, which results in a systematic variation with the sector angle. As \(\phi\) increases, the gradient of C also increases. An annular bin of fixed width centered at C will not only access larger circumferences but also have more particles at the larger circumference, which are less likely to be defects. Therefore, at a given C, the defect density for the annular bin is biased towards a lower \(\rho\) as \(\phi\) increases. It is possible that the gradient of \(C\) could also affect the defect density in other ways. Nonetheless, the near-collapse of the density curves upon rescaling shows that the circumference, rather than the gradient, is the most important parameter to consider.
To show that defect proliferation can be understood as a function of circumference, we examine crystal growth on cylinders, which have a constant circumference. We find that cylinders with large circumferences have lower defect densities than cylinders with small circumferences (Fig. 4A). For large cylinders, the crystal forms a seam, as expected, and is ordered with few defects. As the circumference decreases, however, the crystal becomes increasingly fragmented as more defects are incorporated into the packing. For thin cylinders, the packing is predominantly disordered.
We find that the defect densities on cylinders as a function of circumference follow the same trend as the defect densities of cones (Fig. 4B), provided the orientation \(\theta\) of the triangular seed cluster is not tuned to the special phyllotactic value that allows a commensurate tiling by a triangular crystal [30; 31; 32]. Because our focus here is on seeds with random orientations, we neglect the interesting regular tilings that occur for commensurate seed orientations at fixed cylinder radius, or commensurate cylinder radius at fixed seed orientation (Appendix C). For seed orientations \(\theta=40^{\circ}\), we find that the defect density distributions appear exponential for small \(R\), regardless of whether the surfaces are conical or cylindrical (Fig. 4C). Crystals on cylinders therefore reproduce the finite-size effects seen in crystals on cones, if special commensurate tilings are ignored [30; 31].
## IV Discussion
### Circumference determines defect density
To provide some intuition for exponential defect proliferation observed in our simulations at small cone circumferences, we introduce a simple theoretical model for slow, reaction-limited growth on a _cylindrical_ substrate. We expect our algorithm to simulate slow, reaction-limited growth because particles attach one-by-one to the growing crystal, and each added particle minimizes the local energy of the crystal interface. We use a cylindrical substrate to simplify our theoretical arguments because, as shown above, cylinders reproduce the finite-size effects seen on cones.
Given a crystal with a seam and smooth facets as in Fig. 5, we consider the types of lattice sites that an additional particle can diffuse to in the context of a 2D terrace-ledge-kink model, used to describe ideal surface crystal growth [33]. Particles sitting at the crystalline edges form the ledges where new particles can adsorb. Kinks describe missing particles along the ledge. In our simulation of slow ideal growth, kinks, which have three or more dangling bonds by definition, are higher-energy sites than the smooth ledges, which have two dangling bonds. Therefore, a particle diffusing to the crystal will adsorb to kinks first.
Once the kink sites have been filled, a particle can attach to two types of energetically equivalent lattice sites. The first, a ledge site, is continuous with the preexisting crystal (Fig. 5A, dark blue circle). The second, a seam
site, is incompatible with the preexisting crystal (Fig. 5B, dark red circle).
Particle attachment to a ledge site results in on-lattice growth, meaning that the symmetry of the preexisting crystal is preserved. Particle attachment to a seam site results in off-lattice growth, meaning that the symmetry of the preexisting crystal is broken. We expect the density of disordered defects to be related to the proportion of seam sites to ledge sites. Crucially, off-lattice growth increases the probability of further off-lattice growth because adsorption of a seam particle increases the number of off-lattice candidate states overall (light red circles in Fig. 5). Thus, if off-lattice growth is likely, this argument predicts that it will only become more likely as growth proceeds, initiating the formation of a disordered region
Figure 3: Variation of particle packings with unrolled sector angle \(\phi\) at fixed seed orientation \(\theta\). (**A**) Renderings of simulation results show a long region of \(N_{j}<6\) particles at \(\phi=5^{\circ}\), while the disordered regions are concentrated closer to the tip at \(\phi=10^{\circ}\) and \(\phi=15^{\circ}\). The seed crystals are at \(C=10a_{0}\) with \(\theta=40^{\circ}\). (**B**) Plot of the defect density \(\rho\) as a function of \(R\), averaged over 100 trials. The distribution is broad for \(\phi=5^{\circ}\) and becomes narrower for increasing sector angles. (**C**) For each \(\phi\), \(\rho\) is mapped to \(C\) by \(C=R\phi\). \(\rho\) collapses as a function of \(C\).
Figure 4: Particle packings on 2D cylinders of different circumferences, measured in units of the hard core diameter \(a_{0}\). (**A**) Rendering of representative results for simulations. At \(C=10a_{0}\), the crystal consists of primarily \(N_{j}=6\) particles. As \(C\) decreases, the crystal becomes fragmented and disordered. The seed orientation is fixed at \(\theta=40^{\circ}\). (**B**) Plot of defect density of cylinders (circles) superimposed on the plot for cones from Fig. 3C (lines). The cylinder defect densities are calculated as the number fraction of defects relative to the total number of particles, averaged over 100 trials at each circumference. (**C**) As the circumference decreases, the defect density grows exponentially at small circumferences for both cones and cylinders. The chosen circumferential values \(C/a_{0}=\)2, 4, 6, 8, and 10 in this figure do not include any of the special values that would result in perfect packings for \(\theta=40^{\circ}\) (Appendix C) [32].
as we see in our simulations.
To further develop this simplified picture, we estimate the probability of disordered growth. A perfect crystal with rows composed of \(N_{\text{row}}\) particles and a single seam as in Fig. 5 has of order \(N_{\text{row}}\) candidate sites that lead to on-lattice growth, and only order one candidate sites that lead to off-lattice growth. Therefore, the probability of on-lattice growth occurring initially is \(P_{1}(t=0)\sim(1-1/N_{\text{row}})\), and the probability of off-lattice growth is \(P_{2}(t=0)\sim 1/N_{\text{row}}\). If on-lattice growth occurs, \(P_{1}\) and \(P_{2}\) do not change. If off-lattice growth occurs, we expect that \(P_{2}\) increases by an amount that scales with \(1/N_{\text{row}}\). If \(P_{2}\) reaches some threshold value--say \(P_{2}=1/2\), at which half of the candidate sites lead to off-lattice growth--runaway off-lattice growth results, leading to the formation of a disordered region.
What is the probability of \(P_{2}(t)\) increasing to this threshold value? If on-lattice growth occurs following off-lattice growth, there may be some healing of the disordered seam region, and \(P_{2}\) may decrease. We therefore make the simplifying approximation that \(P_{2}\) increases only when off-lattice growth occurs many times in a row, and \(P_{2}\) increases by \(1/N_{\text{row}}\) every time off-lattice growth occurs consecutively. With these assumptions, the probability of off-lattice growth occurring \(s\) consecutive times scales as
\[\begin{split} P_{s}&=P_{2}(t=0)\,P_{2}(t=1)\dots P_{ 2}(t=s)\\ &\sim\frac{1}{N_{\text{row}}}\times\frac{2}{N_{\text{row}}}\times \dots\times\frac{s}{N_{\text{row}}}=\frac{s!}{\left(N_{\text{row}}\right)^{s}}.\end{split} \tag{2}\]
To find the probability that \(P_{2}\) reaches \(1/2\), we let \(s=N_{\text{row}}/2\) and make Stirling's approximation:
\[P_{N_{\text{row}}/2}=\frac{(N_{\text{row}}/2)!}{\left(N_{\text{row}}\right)^{ N_{\text{row}}/2}}\sim\frac{\sqrt{N_{\text{row}}}}{2^{N_{\text{row}}/2}}e^{-N_{ \text{row}}/2}. \tag{3}\]
Therefore, this simple model predicts that the probability of a disordered region initiating increases exponentially as the number of particles in a crystal row encircling the cylinder decreases. The predicted exponential scaling is consistent with the simulation results, which show that the defect density increases exponentially with decreasing circumference (Fig. 4C). The same argument can explain defect formation on a cone, albeit with some subtleties, which we discuss in Appendix D.
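A quick numerical reading of Eqs. (2) and (3) illustrates this scaling: evaluating the consecutive-attachment probability at the threshold \(s=N_{\text{row}}/2\) shows the roughly exponential drop with row size, and the Stirling estimate tracks the factorial expression to within an order of magnitude. This is only an illustration of the scaling argument, not a simulation.

```python
from math import factorial, sqrt, exp

for n_row in (4, 6, 8, 10, 14, 20):
    s = n_row // 2
    p_exact = factorial(s) / n_row**s                             # Eq. (2) with s = N_row / 2
    p_stirling = sqrt(n_row) * exp(-n_row / 2) / 2**(n_row / 2)   # Eq. (3)
    print(f"N_row = {n_row:2d}   P = {p_exact:.2e}   Stirling ~ {p_stirling:.2e}")
```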
### Seeds close to the tip
These results, all of which concern seeds placed far from the tip of the cone, raise the question of whether placing seeds closer to the tip of the cone might help prevent disorder near the tip. We therefore examine simulations of crystallization with seeds placed at different circumferences.
We find that while crystals seeded far from the tip (at \(C=9a_{0}\)) are able to grow normally towards the tip until the onset of disorder, crystals that are seeded closer to the tip (at \(C=7a_{0}\) and \(C=5a_{0}\)) first form a small crystal before disorder emerges (Fig. 6A). The formation of these small crystals is reflected in the defect density, which shows a dip at the circumference corresponding to the seed location for all seeds placed at \(C<9a_{0}\) (Fig. 6B).
We can explain these results using the model from Section IV.1. A crystal grows until the closure constraint demands formation of a seam. But when the crystal is seeded near the tip, the circumference is small, and hence \(N_{\text{row}}\) is small. Therefore off-lattice growth is more
Figure 5: Illustration of the defect growth process at a seam. Particles in the preexisting crystal are shown as filled gray circles. When all candidate sites result in the formation of two bonds, a particle can attach randomly at (**A**) a ledge site that initiates a new crystal row or (**B**) a seam site that creates more sites that break the symmetry of the preexisting crystal. Dark open circles show new particle locations, with lighter open circles emphasizing candidate sites created by attachment of the new particle. Note that this schematic depicts growth on a cylinder—the crystal rows on either side of the seam are parallel.
Figure 6: Particle packings on sectors initialized at different seed positions. (**A**) Rendering of simulation results for \(\phi=5^{\circ}\) with seed positions at \(C=5a_{0}\), \(C=7a_{0}\), and \(C=9a_{0}\) and a seed orientation of \(\theta=40^{\circ}\). (**B**) Plot of the defect density \(\rho\) as a function of \(C\), averaged over 100 trials. Crystals seeded at small circumferences have dips in the defect density that correspond to the seed location.
probable at these small circumferences, and the crystal becomes frustrated a short distance from the seed.
Interestingly, in some cases we see that new crystals can form at the wider part of the cone, as shown in the \(C=7a_{0}\) example. However, these new crystals quickly become frustrated again as new seams and grain boundaries form. Consequently, the defect density dips at the circumference of the seed and then rises with increasing circumference, until it exceeds the defect density for a seed placed at \(C=10a_{0}\) (Fig. 6B). We conclude that crystals that are seeded near the tip can temporarily escape the finite-size effect leading to disorder near the tip, though at the expense of increased disorder farther from the seed site.
## V Conclusion
We have shown that crystal growth on a cone is geometrically frustrated. For any non-magic cone angle, a seam is required. A disordered region forms near the tip because defects tend to appear at the seam, and the probability of these defects proliferating increases exponentially as the circumference decreases.
This type of frustration has implications for slow, reaction-limited crystal growth on cones. Near-cylindrical cones have long sections in which defects form with high probability, resulting in large areas of potentially disrupted crystallization. In wider cones, the increase in defect probability is concentrated at the tip and can block tip closure, leading to holes at the tips of conical shells.
These results may help explain tip-closure problems observed in experimental systems. For example, the conical capsids of HIV often exhibit large holes at the tip [26]. Also, crystals of WS\({}_{2}\) have been found to terminate unexpectedly far from the tip [19]. Future experiments on colloidal systems, such as the system described in Chapter 6 of Ref. [34], might shed light on whether tip-closure failures are the result of the frustration mechanism revealed by our simulations.
Our results also show that control over nucleation may be crucial to fabricating conical crystals for applications. Disordered regions form near the tip of the crystal, regardless of whether the seed is far or close to the tip. But a crystal seeded near the tip can temporarily bypass the finite-size effect, resulting in a locally reduced defect density. Therefore, if the surface or interactions can be controlled such that nucleation occurs close to the tip, at least small crystals can be formed in this region. Furthermore, if the geometry of nucleation can be controlled, new crystalline structures might be realized experimentally. In Appendix E, we discuss how a nucleus consisting of a ring of particles might grow, and how the size of the resulting crystal depends on the elastic modulus.
Our simulations used a greedy algorithm because our aim was to reveal the geometric frustration faced by crystals growing on a conical surface. Our simulations do not account for kinetics, thermal fluctuations, or vibrational entropy. Future simulations and experiments are therefore needed to develop a more complete physical understanding of conical crystal growth. Nonetheless, our results show that, apart from the special case of magic-angle cones, any conical crystal is subject to geometrical frustration that promotes disorder at small circumferences.
## Data Availability Statement
Data for the simulations are openly available on the Harvard Dataverse [35]. Code for the simulations is available under the GNU General Public License v3 at [https://github.com/manoharan-lab/cone-disk-packings](https://github.com/manoharan-lab/cone-disk-packings).
###### Acknowledgements.
We thank Lara Braverman for insightful conversations. This research was primarily supported by the National Science Foundation through the Harvard University Materials Research Science and Engineering Center under grant number DMR-2011754. Additional support was provided by the National Science Foundation Graduate Research Fellowship Program under grant numbers DGE-2140743 and DGE-1745303. |
2309.06042 | The Curious Early History of CKM Matrix -- miracles happen! | The 1973 Kobayashi Maskawa paper proposed a compelling link between Cabibbo's flavor-mixing scheme and CP violation but, since it required the existence of six quarks at a time when the physics community was happy with only three, it received zero attention. However, two years after the paper appeared -- at which time it had received a grand total of two citations -- the charmed quark was discovered and it finally got some notice and acceptance. After this stumbling start, it subsequently emerged as the focal point of an enormous amount of experimental and theoretical research activity. In an invited talk at a KEK symposium to celebrate the 50th anniversary of the KM paper, I reviewed some of the less well known circumstances that occurred in the years preceding and following the paper's appearance. | Stephen Lars Olsen | 2023-09-12T08:23:33Z | http://arxiv.org/abs/2309.06042v3 | # The Curious Early History of CKM Matrix
###### Abstract
The 1973 Kobayashi Maskawa paper proposed a compelling link between Cabibbo's flavor-mixing scheme and \({\cal CP}\)violation but, since it required the existence of six quarks at a time when the physics community was happy with only three, it received zero attention. However, two years after the paper appeared--at which time it had received a grand total of two citations--the charmed quark was discovered and it finally got some notice and acceptance. After this stumbling start, it subsequently emerged as the focal point of an enormous amount of experimental and theoretical research activity. In an invited talk at a KEK symposium to celebrate the 50\({}^{\rm th}\) anniversary of the KM paper, I reviewed some of the less well known circumstances that occurred in the years preceding and following the paper's appearance.
Some spoilers:
-- Kobayashi and Maskawa (and a number of other Japanese physicists) were convinced about the existence of the charmed quark nearly three years before its "discovery" at Brookhaven and SLAC.
-- The matrix provided in their seminal 1973 paper was mathematically incorrect. Another version that was in common use for the following twelve years was technically correct, but not really a rotation matrix.
-- The CKM matrix \({\cal CP}\) phase was only measurable because of the very specific hierarchy of the flavor mixing angles and meson masses.
-- Similarly, the neutrino mixing discovery, and the PMNS-matrix measurability were only possible because of favorable values of the neutrino mass differences and mixing angles.
In addition I include some speculations about what may be in store for the future.
Quark flavor mixing, Neutrino flavor mixing, \({\cal CP}\) violation
## 1 Introduction
The challenge of reviewing a subject that is fifty years old to a community of experts is to find something to say that isn't already well known to everyone in the audience. However, this obvious truth didn't occur to me when I was invited by the organizers to speak at the KEK special symposium to celebrate the fiftieth anniversary of the Kobayashi-Maskawa six-quark model, an invitation that, in a reckless capitulation to my vanity, I immediately accepted. Upon subsequent reflection, I realized my dilemma: there was precious little that I could say about the hundreds of CKM-related published Belle results--which I expect the organizers had in mind when they offered this invitation--that wasn't already very familiar to the symposium participants. So, instead, I decided to exploit the one advantage I might have over most other participants, and that was that I would be the oldest, or at least one of the oldest, person in attendance and reminisce about the early days of the KM era, including some of its pre-history. So, with the forewarning that all historical accounts suffer from mistakes and oversimplifications, and are varnished to match the preconceptions and prejudices of the chronicler, here goes:
## 2 Prehistory: Cabibbo flavor-mixing and the discovery of \(\mathcal{CP}\) violation
The prehistory started sixty years ago during the 1963-64 academic year1 when there were three major discoveries that all played major roles in the Kobayashi-Maskawa story: flavor-mixing, quarks, and the observation of \(\mathcal{CP}\) violation in \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) decays.
Footnote 1: This happened to coincide with my first year as a graduate student at the University of Wisconsin.
### Cabibbo flavor mixing
In their classic paper that identified the \(V\!-\!A\) coupling of the weak interaction [1], Feynman and Gell-Mann proposed that the weak interaction was a current-current interaction where the hadron current has the form
\[J_{\mu}=g\big{[}\alpha\big{(}V_{\mu}^{\Delta\mathcal{S}=0}-A_{\mu}^{\Delta \mathcal{S}=0}\big{)}+\beta\big{(}V_{\mu}^{\Delta\mathcal{S}=1}-A_{\mu}^{ \Delta\mathcal{S}=1}\big{)}\big{]}, \tag{1}\]
where \(g\) is a coupling constant, \(V_{\mu}^{\Delta\mathcal{S}=0}\) and \(A_{\mu}^{\Delta\mathcal{S}=0}\) are the vector and axial vector currents for strangeness conserving processes and \(V_{\mu}^{\Delta\mathcal{S}=1}\) and \(A_{\mu}^{\Delta\mathcal{S}=1}\) are corresponding currents for \(\Delta\mathcal{S}=\pm 1\) transitions. They also made two additional conjectures. One was _universality_, the notion that
the currents for the \(\Delta{\cal S}=0\) and \(\Delta{\cal S}=\pm 1\) hadronic transitions and the
\[g_{W}\bigl{(}\bar{\nu}_{e}\gamma_{\mu}(1-\gamma_{5})e^{-}\bigr{)}\quad{\rm and} \quad g_{W}\bigl{(}\bar{\nu}_{\mu}\gamma_{\mu}(1-\gamma_{5})\mu^{-}\bigr{)} \tag{2}\]
lepton currents all have a common coupling strength, _i.e._, \(g=g_{W}\), and \(\alpha\) = \(\beta\) = 1 in eqn. 1, where \(g_{W}\) is related to the square root of the Fermi constant \(G_{F}\) by
\[G_{F}=\frac{\sqrt{2}}{8}\biggl{(}\frac{g_{W}}{M_{W}}\biggr{)}^{2}. \tag{3}\]
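As a quick numerical sanity check of eqn. 3 (not something taken from the original papers), one can plug in rough modern values; the inputs \(g_{W}\approx 0.65\) and \(M_{W}\approx 80.4\) GeV below are assumed present-day numbers, not values quoted in this article.

```python
import math

# Rough check of eqn. 3 with assumed present-day inputs (not from this article):
g_W = 0.65      # SU(2) weak coupling, dimensionless
M_W = 80.4      # W-boson mass in GeV
G_F = (math.sqrt(2) / 8.0) * (g_W / M_W) ** 2
print(f"G_F ~ {G_F:.3e} GeV^-2")   # ~1.16e-5 GeV^-2, within ~1% of the measured 1.166e-5
```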
The other one was the so-called _Conserved Vector Current_ (CVC) hypothesis that says that the hadronic matrix elements for the vector component of the weak interaction current are the same as those for the electromagnetic interactions. This has the consequence that vector form-factors for weak decays of hadrons at zero squared momentum-transfers are unity, \(f_{V}(q^{2}\)=0) = 1. These two conjectures translated into a prediction that the coupling strength extracted from the vector-mediated semileptonic process \(K^{+}\)\(\rightarrow\)\(\pi^{0}e^{+}\nu_{e}\), _i.e._, \(g_{V}^{\Delta{\cal S}=1}\) shown in Fig.1a) should be the same as \(g_{W}\) in \(\mu^{+}\)\(\rightarrow\)\(e^{+}\nu_{e}\bar{\nu}_{\mu}\).
In a paper that appeared in June 1963 [2], Cabibbo pointed out that the Feynman-Gell-Mann universality conjecture failed miserably. His comparison of experimental measurements of the partial width for the \(\Delta{\cal S}=1\) vector weak-interaction process \(K^{+}\)\(\rightarrow\)\(\pi^{0}\ell^{+}\nu\) [3] to the well known width for muon decay found
\[\frac{g_{V}^{\Delta{\cal S}=1}}{g_{W}}\approx 0.26 \tag{4}\]
which is about a factor of four below expectations. He also found a similar deviation from universality in the ratio of the axial-vector-mediated partial decay widths \(\Gamma(K^{+}\)\(\rightarrow\)\(\mu^{+}\nu)\)/\(\Gamma(\pi^{+}\)\(\rightarrow\)\(\mu^{+}\nu)\):
\[\frac{g_{A}^{\Delta{\cal S}=1}}{g_{A}^{\Delta{\cal S}=0}}\approx 0.26. \tag{5}\]
(Although the axial-vector currents are not "protected" by CVC, corrections to them were expected to be small [4], and certainly not large enough to account for a factor of four.)
Cabibbo proposed modifying the Feynman-Gell-Mann \(\alpha\) = \(\beta\) = 1 conjecture to \(\alpha^{2}\)+\(\beta^{2}\)= 1, in which case
\[g_{V}^{\Delta{\cal S}=1}=\beta g_{W}\quad{\rm and}\quad\frac{g_{A}^{\Delta{ \cal S}=1}}{g_{A}^{\Delta{\cal S}=0}}=\frac{\beta}{\alpha}, \tag{6}\]
where \(\beta\) \(\approx\) \(0.25\) could accommodate the abovementioned experimental results. In his paper, Cabibbo proposed his eponymous angle \(\theta_{C}\), which he estimated to be \(\theta_{C}\)\(\approx\) \(14.9^{\circ}\), as a convenient way to express the two parameters \(\alpha\) and \(\beta\) that were subject to the constraint \(\alpha^{2}\)+\(\beta^{2}\)=1, and he didn't mention anything about rotations. The earliest experiments that addressed Cabibbo's hypothesis [5] were focused on testing the validity of Cabibbo's relation, \(\alpha^{2}\) + \(\beta^{2}\) = 1.
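The arithmetic behind these numbers is easy to reproduce; the following sketch (an illustration only, not Cabibbo's actual fit) simply evaluates \(\beta=\sin\theta_{C}\) and \(\alpha=\cos\theta_{C}\) for the angle he quoted.

```python
import math

theta_C = 14.9                              # degrees, Cabibbo's 1963 estimate
beta = math.sin(math.radians(theta_C))      # strangeness-changing coupling fraction
alpha = math.cos(math.radians(theta_C))     # strangeness-conserving coupling fraction
print(f"beta = {beta:.3f}, alpha = {alpha:.3f}")        # ~0.257 and ~0.966
print(f"alpha^2 + beta^2 = {alpha**2 + beta**2:.3f}")   # = 1 by construction
```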
The notion that this might represent a rotation didn't become apparent until the 1970 GIM paper [6] that proposed the \(c\)-quark as a way to suppress flavor-changing neutral currents. If one accepts the existence of two quark doublets,
\[\begin{pmatrix}u\\ d\end{pmatrix}s\ \Longrightarrow\ \begin{pmatrix}u\\ d\end{pmatrix}\begin{pmatrix}c\\ s\end{pmatrix}, \tag{7}\]
the Cabibbo \(d\)-\(s\) mixed quark state \(d^{\prime}\) = \(d\cos\theta_{C}\) + \(s\sin\theta_{C}\) is produced by the application of a 2x2 unitary rotation matrix:
\[\begin{pmatrix}d^{\prime}\\ s^{\prime}\end{pmatrix}=\begin{pmatrix}\cos\theta_{C}&\sin\theta_{C}\\ -\sin\theta_{C}&\cos\theta_{C}\end{pmatrix}\begin{pmatrix}d\\ s\end{pmatrix}=\begin{pmatrix}d\cos\theta_{C}+s\sin\theta_{C}\\ -d\sin\theta_{C}+s\cos\theta_{C}\end{pmatrix}, \tag{8}\]
and has an orthogonal partner, \(s^{\prime}\) =\(-d\sin\theta_{C}\) + \(s\cos\theta_{C}\). In this formulation, it is apparent that Cabibbo's form of weak universality is the same as Feynman-Gell-Mann universality applied to the rotated \(d^{\prime}\) and \(s^{\prime}\) quarks.2
Footnote 2: In addition to suppressing \(\Delta{\cal S}\) = \(\pm 1\) weak interaction couplings relative to that for muon decay by a factor of \(\sin\theta_{C}\) = \(0.2245\), Cabibbo’s weak universality predicts that \(\Delta{\cal S}\) = 0 couplings are suppressed by a factor of \(\cos\theta_{C}\) = \(0.974\). In fact, nuclear physicists had known since 1955 that the half-life for \({}^{14}\)O\(\rightarrow^{14}\)N\(\beta^{+}\)\(\nu\), a vector-mediated \(0^{+}\)\(\rightarrow\)\(0^{+}\) nuclear \(\beta^{+}\) decay transition, was \(\sim\)3% longer than the value that was predicted using the \(g_{W}\) value determined from muon decay [7, 8]. In 1960, three years before Cabibbo’s paper, this discrepancy was noted in the introductory remarks of a Nuovo Cimento article on the axial-vector current by Gell-Mann and Levy [4], together with a footnote that suggested that this might be because the unitarity condition might be, in fact, \(\alpha^{2}\)+\(\beta^{2}\)= 1, and not the \(\alpha\) = \(\beta\) = 1 condition that was conjectured in the Feynman-Gell-Mann \(V\)-\(A\) paper. The footnote includes an estimate of the mixing that translates into \(\theta\) \(\approx\) 14\({}^{\circ}\), consistent with—and three years before—Cabibbo’s estimate for \(\theta_{C}\) based on \(\Delta{\cal S}\) = 1 transitions. This may explain why K and M, but not C, were awarded the Nobel prize in 2008.
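A minimal numerical sketch of the eqn. 8 rotation (using numpy purely for convenience; nothing here is taken from refs. [2] or [6]) makes the picture explicit: the matrix is unitary, and its rows give the \(d^{\prime}\) and \(s^{\prime}\) combinations quoted above.

```python
import numpy as np

theta_C = np.radians(13.0)   # Cabibbo angle, approximate modern value
V = np.array([[ np.cos(theta_C), np.sin(theta_C)],
              [-np.sin(theta_C), np.cos(theta_C)]])

assert np.allclose(V @ V.T, np.eye(2))   # unitarity: probability is conserved
print("d' = %.3f d + %.3f s" % (V[0, 0], V[0, 1]))   # d' =  cos(theta_C) d + sin(theta_C) s
print("s' = %.3f d + %.3f s" % (V[1, 0], V[1, 1]))   # s' = -sin(theta_C) d + cos(theta_C) s
```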
### Gell-Mann Zweig quarks
During this same year Gell-Mann [9] and Zweig [10] proposed the quark model in which hadrons were composed of fractionally charged fermionic constituents (Zweig called them "aces"). Gell-Mann's paper was published in January 1964; Zweig's paper was never published.3 With rotated quarks, the short-distance weak interaction hadronic currents are the same as those for leptons:
Footnote 3: The story here is that the head of the CERN theory group in 1964, when Zweig was there on a visiting appointment, thought Zweig’s proposal of fractionally charged particles was a crackpot idea and refused to provide him with the clerical and drafting support that was needed to prepare a journal-worthy manuscript in the pre-Latex & computer-graphics era. Gell-Mann won the 1969 Nobel physics prize and by 1976, when the head of the theory group became the Director-General of CERN, Zweig was doing biological research and no longer involved in particle physics.
\[J_{\mu}^{q} = g_{W}(\bar{u}\gamma_{\mu}(1-\gamma^{5})d^{\prime})\ +\ g_{W}(\bar{c}\gamma_{\mu}(1-\gamma_{5})s^{\prime})\] \[= g_{W}\sum_{i,j}(\bar{u}_{i}\gamma_{\mu}(1-\gamma^{5})V_{ij}d_{j}),\]
where \((u_{1},u_{2})\) = \((u,c)\) & \((d_{1},d_{2})\) = \((d,s)\), and \(V_{ij}\) is the eqn. 8 quark mixing matrix. The long distance quark-to-hadron processes are described by form factors.
### Discovery of \({\cal CP}\) violation
The Christenson, Cronin, Fitch and Turlay discovery of the \({\cal CP}\) violating decay mode \(K_{L}\)\(\rightarrow\)\(\pi^{+}\pi^{-}\) was reported in the summer of 1964 [11]. This was a relatively low priority experiment that was not aimed at investigating \({\cal CP}\) violation but, instead, was designed to investigate some anomalies in coherent \(K_{2}\)\(\rightarrow\)\(K_{1}\) regeneration measurements that had been reported during the previous year [12]. It failed to qualify for a spot in the main experimental hall of the then almost-new AGS synchrotron, which was occupied by spectrometers specialized for total cross section determinations, and \(\pi\), \(K\), \(\bar{p}\) and \(\mu\)-proton elastic scattering measurements. Instead, the experimental apparatus was located in a relatively inaccessible area inside the AGS magnet ring that the laboratory technical staff referred to as "Inner Mongolia,"4 in a neutral particle line that was essentially a hole in the AGS shielding wall that was pointed at a target located in the accelerator's vacuum chamber, as illustrated in Fig. 2a. The high flux of \(\gamma\)-rays emerging from the target was attenuated by a 3.8 cm-thick lead block followed by a collimator and a bending magnet that swept charged particles out of the beam aperture. A double-arm spectrometer consisting of tracking spark chambers before and after two vertically bending magnets measured the directions and momenta of charged particles that were produced by \(K_{L}\) meson decays that occurred in a 2 m-long decay volume that was a plastic bag filled with atmospheric pressure helium--a low-budget approximation of a vacuum chamber--as shown in Fig. 2b.
Footnote 4: In the 1960s diplomatic relations between the U.S. and China were non-existent, and mainland China, including Inner Mongolia, was considered by most Americans to be about as accessible as the far side of the Moon.
Most of the detected events were due to \({\cal CP}\)-allowed \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\pi^{0}\) decays and \(K_{L}\rightarrow\pi^{\pm}\ell^{\mp}\nu\) (\(\ell\!=\!e,\mu\)) _semileptonic_ decays. In these decays, the \(\pi^{0}\) or \(\nu\) was not detected and, as a result, the invariant mass of the two detected charged particles was not, in general, equal to the \(K_{L}\)-meson mass (\(m_{K_{L}}\!=\!498\) MeV). For \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\pi^{0}\) decays where the \(\pi^{0}\) is undetected, the \(\pi^{+}\pi^{-}\) invariant mass is always below 363 MeV; for \(K_{L}{\rightarrow}\pi^{\pm}\ell^{\mp}\nu\), where the \(\nu\) is missed and the \(\ell^{\mp}\) track is assigned a pion mass, the two charged track invariant mass distribution ranges from 280 to 546 MeV, with no peak near \(m_{K_{L}}\). Although the energies of the decaying \(K_{L}\) mesons were not known, their three-momentum directions were confined to be within an rms spread of \(\pm 3.4\) mrad (\(\pm 0.2^{\circ}\)) around the beamline. A consequence of the missed particle in the three-body decay channels was that the vector sums of the two charge track's momentum vectors did not usually point along the well defined \(K_{L}\) beamline.
Thus, the experimental signature for \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) decays was a pair of oppositely charged tracks that, when assigned pion masses, had an invariant mass that was within \(\pm 5\) MeV of \(m_{K_{L}}\) and with a summed vector momentum that is directed along the \(K_{L}\) beam direction. Results for these two quantities are shown in Fig. 2c, where the horizontal axis is the cosine of the angle between the \(\vec{p}_{\pi^{+}}+\vec{p}_{\pi^{-}}\) direction and the \(K_{L}\) beamline, and the upper, central and lower panels show the experimental distributions for \(M(\pi^{+}\pi^{-})\) below, centered on, and above \(m_{K_{L}}\), respectively. In the central panel there is a pronounced peak totally contained within \(\cos\theta\!>\!0.99996\) (\(\theta\!<\!9\) mrad), a feature that is absent in the distributions for \(M(\pi^{+}\pi^{-})\) below or above \(m_{K_{L}}\) shown in the upper and lower panels. The ratio of the branching fractions for \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) to the sum of all (\({\cal CP}\)-conserving) decays to charged particles was \((2.0\pm 0.4)\times 10^{-3}\).
The signal peak in the central panel of Fig. 2c contained an excess of \(45\pm 10\) events, but these were not all \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) events. About ten of them were due to the coherent \(K_{L}{\rightarrow}K_{S}{\rightarrow}\pi^{+}\pi^{-}\) regeneration process on the helium nuclei in the gas bag decay region, and were indistinguishable
Figure 2: **a)** The neutral \(K_{L}\)-beamline at the AGS that was used for the \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) search experiment. **b)** The two-arm \(\pi^{+}\pi^{-}\) spectrometer consisted of a helium-filled decay volume followed by optical tracking spark chambers before and after momentum analyzing magnets **c)** The distribution of events _versus_ the cosine of the angle between the direction of the two-track momentum sum and the beamline. The upper, middle and lower panels are for events with two-track invariant masses that are below, centered on, and above \(m_{K_{L}}\), respectively.
from the \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) signal events. Nature was kind. If the branching fraction had been much smaller or the regeneration cross section had been higher, the interpretation of the observed signal peak would have been ambiguous. As mentioned above, this was a low-priority experiment. If it had ended up simply setting an upper limit on the \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) branching fraction, who knows when, if ever, a follow-up experiment with higher sensitivity would have occurred.
## 3 The Kobayashi-Maskawa paper
The famous Kobayashi-Maskawa paper [13] was written in mid-1972, and published in the February 1973 issue of the Japanese journal _Progress of Theoretical Physics_, where it was basically ignored; during the following two and a half years, it received all of two citations. The paper's title is \({\cal CP}\)_Violation in the Renormalizable Theory of Weak Interaction_, where the Renormalizable Theory of Weak Interaction is the term they used for what we now call the Standard Model.
Simply put, \({\cal CP}\) violation means that the amplitude for a process in which initial-state particles convert into final-state particles is not the same as the amplitude for the corresponding antiparticle process, _e.g._,
\[{\cal M}(a\to bc)=\left\langle bc\right|H_{w}\left|a\right\rangle \neq\bar{\cal M}(\bar{a}\rightarrow\bar{b}\bar{c})=\left\langle\bar{b}\bar{c }\right|H_{w}^{\dagger}\left|\bar{a}\right\rangle. \tag{10}\]
But the hermiticity of the Hamiltonian requires that the squares of the amplitudes are equal:
\[\left|{\cal M}(a\to bc)\right|^{2}=\left|\left\langle bc\right|H_{w} \left|a\right\rangle\right|^{2}=\left|\bar{\cal M}(\bar{a}\rightarrow\bar{b} \bar{c})\right|^{2}=\left|\left\langle\bar{b}\bar{c}\right|H_{w}^{\dagger} \left|\bar{a}\right\rangle\right|^{2}, \tag{11}\]
and the only way these two conditions can be satisfied is if \({\cal M}\) and \(\bar{\cal M}\) differ by a phase, _i.e._,
\[{\cal M}=\left|{\cal M}\right|e^{i\delta_{{\cal CP}}}\quad{\rm and}\quad\bar{ \cal M}=\left|{\cal M}\right|e^{-i\delta_{{\cal CP}}}. \tag{12}\]
So, to incorporate \({\cal CP}\) violation into the Standard Model, all you have to do is find a way to insert a complex phase in it somewhere, which, at first glance, wouldn't seem to be so difficult. However, this \({\cal CP}\)-violating phase is special and, unlike all of the other phases that show up in quantum theories, including it in the theory is not at all trivial.
There are countless phases that occur in quantum mechanics; both the Schrodinger and Dirac equations have an imaginary coefficient and their solutions are complex wave functions. But all of these phases, with, so far, only one exception, have the same sign for particles and antiparticles. Only a \({\cal CP}\)-violating phase has opposite signs for particles and antiparticles.
The KM paper examines various possible ways that a complex \({\cal CP}\) phase might be incorporated into the Standard Model. In the following I discuss the first five pages and the last page separately.
### The KM paper: pages 1\(\rightarrow\)5
In the first five pages, various possibilities were examined and the authors concluded that "no realistic models of \({\cal CP}\)-violation exist in the _quartet scheme_ without introducing any other new fields." Here, by the "quartet scheme" they meant the four-quark model that included the charmed quark. Note that this was written in 1972, nearly three years before the \(c\)-quark discovery in the "November 1974 revolution." This, and their page-five conclusion that no realistic model for \({\cal CP}\)-violation exists with four quarks, raise two questions:
_i_) Why were Kobayashi and Maskawa so sure of the existence of the \(c\)-quark at such an early date?
_ii_) Why can't a \({\cal CP}\)-violating phase be introduced together with the Cabibbo angle into the eqn. 8 2\(\times\)2 quark-flavor mixing matrix?
#### 3.1.1 The discovery of charm: Japanese version
In 1970, a small team of experimenters in Japan, led by Kiyoshi Niu, exposed a stack of photographic emulsions to cosmic rays in a high altitude commercial cargo airliner [14]. Upon subsequent inspection they found a remarkable event, shown in Fig. 3, in which an ultra-high energy (multi-TeV) cosmic ray particle interaction produced four charged tracks and two very high energy, closely spaced \(\gamma\)-rays that, when attributed to a \(\pi^{0}\rightarrow\gamma\gamma\) decay, had a total energy of \(3.2\pm 0.4\) TeV. Two of the charged tracks, labeled B & C in the figure, have kinks within \(\sim\)5 cm of the production point that are quite distinct in both the \(X\) and \(Y\) projections shown in Figs. 3a & b, indicating that they decayed to charged daughters (tracks B' & C'). When the event is viewed along the flight direction of track B (Fig. 3c), its daughter charged track (B') and the high energy \(\pi^{0}\) are very close to being back-to-back. The transverse momentum of the \(\pi^{0}\) relative to the direction of track B was \(627\pm 90\) MeV, and much higher than was possible for the decay of any known particle at that time. With the \(\pi^{0}\) setting the energy scale and assuming two-body decays at each kink, \(\pi^{\pm}\pi^{0}\) for B\(\rightarrow\)B' and \(\pi^{0}p\) for C\(\rightarrow\)C' with the secondary \(\pi^{0}\) missed, transverse momentum balance was used to estimate the masses and lifetimes of B and C:

\begin{tabular}{c|c|c}
\hline
Assumed decay mode & Mass (GeV) & Lifetime (sec) \\
\hline
B\(\rightarrow\)\(\pi^{+}\pi^{0}\) & 1.78 & 2.2\(\times 10^{-14}\) \\
C\(\rightarrow(\pi^{0})p\) & 2.95 & 3.6\(\times 10^{-14}\) \\
\hline
\end{tabular}

The estimated B mass and the proper time intervals are consistent with the GIM estimate of \(\sim\)2 GeV for the charmed-quark mass (and in reasonable agreement with the now very well determined \(D^{-}\) mass (1.869 GeV) and \(\Lambda_{c}\) mass (2.286 GeV)). The lifetimes were much shorter than that of any known weakly decaying particle, as well as the \({\cal O}(10^{-13}\,\rm{s})\) estimate that was given in the GIM paper. But the latter fact is perhaps not too surprising since emulsion measurements are
biased towards shorter lifetimes. For these reasons, Nagoya theorist Shuzo Ogawa interpreted Niu's event as the associated production of an anticharmed meson and a charmed baryon, followed by their subsequent decays. Although whether Niu's event and Ogawa's interpretation amounted to a Nobel-prize-worthy claim of a discovery might be a subject of dispute, what matters for our story here is that many people in the Japanese theoretical physics community, especially those in Nagoya, including Kobayashi and Maskawa, were convinced that the charmed quark had been discovered and that four quarks existed in nature, a scenario that they called the "quartet model."
#### 3.1.2 A \({\cal CP}\) phase in the four-quark mixing matrix?
In general, a 2\(\times\)2 matrix like that in eqn. 8 has four complex elements that correspond to eight distinct real numbers. In the four-quark model, flavor mixing is completely described by the single real number \(\theta_{C}\); why can't one of the other seven numbers be used to specify \(\delta_{\cal CP}\), a \({\cal CP}\)-violating phase?
Figure 3: The **a)**\(X\) and **b)**\(Y\) projections of the Niu event. Here the tracks labeled B and C have kinks at depths of 1.38 cm and 5.14 cm, respectively, that are evident in both views. **c)** The same event viewed along track B, where the direction of a high energy \(\pi^{0}\), inferred from two detected \(\gamma\)-rays, is very nearly opposite the direction of B’, the daughter track that emerges from the track B kink. (The figures are taken from ref. [14].)
The flavor-mixing matrix describes a rotation and, thus, has to conserve probability. This means it should be unitary: _i.e._\(\boldsymbol{VV^{\dagger}}\)=\(\boldsymbol{\mathcal{I}}\), where \(\boldsymbol{\mathcal{I}}\) is the identity matrix:
\[\begin{pmatrix}V_{ud}&V_{us}\\ V_{cd}&V_{cs}\end{pmatrix}\times\begin{pmatrix}V_{ud}^{*}&V_{cd}^{*}\\ V_{us}^{*}&V_{cs}^{*}\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}. \tag{13}\]
This corresponds to four relations
\[|V_{ud}|^{2}+|V_{us}|^{2}=1\qquad{\rm and}\qquad|V_{cd}|^{2}+|V_{ cs}|^{2}=1 \tag{14}\] \[V_{ud}V_{cd}^{*}=-V_{us}V_{cs}^{*}\qquad{\rm and}\qquad V_{cd}V_{ ud}^{*}=-V_{cs}V_{us}^{*}, \tag{15}\]
that reduce the number of independent parameters from eight to four.
In the weak interaction quark currents (eqn. 9), the total number of quarks is conserved: \(q_{j}\), which annihilates a \(q_{j}\)-quark, is always accompanied by \(\bar{q}_{i}\), which creates a \(q_{i}\)-quark. The theory has a subtle property: if each quark field is multiplied by an arbitrary phase factor,
\[d_{j}\longrightarrow e^{i\phi_{j}}d_{j}\quad{\rm and}\quad\bar{u}_{i} \longrightarrow e^{-i\phi_{i}}\bar{u}_{i}, \tag{16}\]
and the interactions are modified by the same phases,
\[V\longrightarrow\begin{pmatrix}e^{i\phi_{u}}&0\\ 0&e^{i\phi_{c}}\end{pmatrix}\begin{pmatrix}V_{ud}&V_{us}\\ V_{cd}&V_{cs}\end{pmatrix}\begin{pmatrix}e^{-i\phi_{d}}&0\\ 0&e^{-i\phi_{s}}\end{pmatrix}, \tag{17}\]
there is no net effect on the \(J_{\mu}^{q}\) current:
\[(\bar{u}_{i}\gamma_{\mu}(1-\gamma_{5})V_{ij}d_{j}) \longrightarrow (\bar{u}_{i}e^{-i\phi_{i}}\gamma_{\mu}(1-\gamma_{5})e^{i(\phi_{i}- \phi_{j})}V_{ij}e^{i\phi_{j}}d_{j}) \tag{18}\] \[= (\bar{u}_{i}\gamma_{\mu}(1-\gamma_{5})V_{ij}d_{j}).\]
This process is called _rephasing_ and the four phases can be expressed as three independent phase differences plus one overall phase that has no effect on the physics. Thus, of the eight real numbers we started with, four are removed by the unitarity constraint, and three can have any value with no net effect. That leaves only one number to define the matrix, and it is needed for the rotation angle \(\theta_{C}\). There is no freedom to add a \(\mathcal{CP}\) phase, and this is what led Kobayashi and Maskawa to conclude that there was no way to incorporate \(\mathcal{CP}\) violation into the four-quark model.
### The KM paper: page 6
In his 2008 Nobel prize lecture [15], Maskawa recalled that he and Kobayashi had completed the work that was covered in pages 1-5 of their paper and were disappointed with their failure to find any way to incorporate a \(\mathcal{CP}\) phase into the four quark model, and were reconciled to the unhappy
likelihood that they would have to publish the negative result. Then, one evening, while--as is customary in Japan--he was taking his after-dinner bath, he mentally went through the calculation described in the previous paragraph, except this time for a six-quark scenario with a 3\(\times\)3 flavor-mixing rotation matrix in three dimensions. In this case, there are 9 complex elements that are described by 18 real numbers. Of these, 9 are constrained by the unitarity requirement, 5 are taken up by rephasing, and 3 are needed to specify the 3-dimensional rotation,5 which left one number available.
Footnote 5: In general, for \(N\) quark generations with an \(N\times N\) mixing matrix, there are \(N^{2}\) elements characterized by \(2N^{2}\) real numbers. Of these, \(N^{2}\) are used for unitarity, \(2N\)-\(1\) for rephasing and \(N(N\text{-}1)/2\) are needed to define the rotation angles. The remaining degrees of freedom that can show up as \(\mathcal{CP}\) phases are \((N\text{-}1)(N\text{-}2)/2\), which vanishes for \(N\)=2, but is 1 for \(N\)=3 (and would be 3 if \(N\)=4).
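The counting in this footnote is easy to automate. The short function below is an illustrative sketch of that bookkeeping (the function name and structure are mine, not anything from the KM paper):

```python
def mixing_parameters(n):
    """Parameter counting for an n x n unitary quark-mixing matrix."""
    real_numbers = 2 * n**2          # n^2 complex elements
    unitarity = n**2                 # constraints from V V^dagger = identity
    rephasing = 2 * n - 1            # removable quark-field phases
    angles = n * (n - 1) // 2        # rotation angles
    phases = real_numbers - unitarity - rephasing - angles
    return angles, phases            # phases = (n - 1)(n - 2)/2

for n in (2, 3, 4):
    angles, phases = mixing_parameters(n)
    print(f"N = {n}: {angles} rotation angle(s), {phases} CP phase(s)")
# N = 2: 1 rotation angle(s), 0 CP phase(s)  -> no room for CP violation (pages 1-5)
# N = 3: 3 rotation angle(s), 1 CP phase(s)  -> the KM observation (page 6)
# N = 4: 6 rotation angle(s), 3 CP phase(s)
```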
_Eureka!_ With six-quarks there is room for a \(\mathcal{CP}\)-violating phase!
A sixth page was added to the manuscript that included the remarks:
Next we consider a 6-plet model, another interesting model of \(\mathcal{CP}\) violation,... with a 3\(\times\)3 instead of 2\(\times\)2 unitary matrix. In this case we cannot absorb all phases of matrix elements into the phase convention and can take, for example, the following expression:
\[\begin{pmatrix}\cos\theta_{1}&-\sin\theta_{1}\cos\theta_{3}&-\sin \theta_{1}\sin\theta_{3}\\ \sin\theta_{1}\cos\theta_{2}&\cos\theta_{1}\cos\theta_{2}\cos\theta_{3}-\sin \theta_{2}\sin\theta_{3}e^{i\delta}&\cos\theta_{1}\cos\theta_{2}\sin\theta_{3 }+\sin\theta_{2}\cos\theta_{3}e^{i\delta}\\ \sin\theta_{1}\sin\theta_{2}&\cos\theta_{1}\sin\theta_{2}\cos\theta_{3}+\cos \theta_{2}\sin\theta_{3}e^{i\delta}&\cos\theta_{1}\sin\theta_{2}\sin\theta_{3 }-\cos\theta_{2}\sin\theta_{3}e^{i\delta}\end{pmatrix}. \tag{19}\]
Then we have \(\mathcal{CP}\)-violating effects... that appear only in the \(\Delta\mathcal{S}\neq\)0 non-leptonic processes and semi-leptonic decay of neutral strange mesons (we are not concerned with higher states with the new quantum number)...
(Here, \(\theta_{1}\) is (approximately) the Cabibbo angle, \(\theta_{2}\) is the mixing angle between the 2\({}^{\text{nd}}\)- & 3\({}^{\text{rd}}\)-generation quarks, \(\theta_{3}\) mixes the 1\({}^{\text{st}}\)- & 3\({}^{\text{rd}}\)-generations, and \(\delta\) is the \(\mathcal{CP}\)-violating phase.) And that was it. Discussion about the six-quark model was confined to one paragraph that, together with the expression for the matrix, occupied only about half of the page.
Thus, Kobayashi and Maskawa discovered the way to incorporate a \(\mathcal{CP}\) violating phase into the Standard Model, and established a deep theoretical connection between quark-flavor mixing and \(\mathcal{CP}\) violation, two subjects that had previously been considered to be unrelated. But this came at a cost: you need to have six-quarks. In 1971, thanks to Kiyoshi Niu, this was not such a big stretch for Kobayashi and Maskawa, who were among the fortunate few who were already convinced that there were (at least) four quarks. On the other hand, most of the world-wide particle physics community outside of Japan was quite satisfied with three quarks.
#### 3.2.1 The first proposal for (and the naming of) charm
The distinction between electron- and muon-neutrinos had been established in 1962 [16], and, in what was at that time the beginnings of the Standard Model, the electron and muon and their neutrinos occupied two weak-isospin=1/2 spinors. In a 1964 paper, Bjorken and Glashow [17] discussed an expansion of \(SU(3)\) to \(SU(4)\) with the addition of another strangeness-like flavor quantum number. They formulated their model in terms of the Sakata model, the predecessor of the quark model that used the proton-neutron isospin doublet and isoscalar Lambda as basic constituents, with the Lambda replaced by a doublet that had a fourth baryonic constituent with a non-zero value of the new quantum number. When reformulated in the context of the quark model, which had just emerged at that time with three quarks, this was equivalent to adding a fourth quark with matching patterns for the leptons and quarks:
\[{\rm leptons}\qquad\qquad{\rm quarks} \tag{20}\] \[\begin{pmatrix}e^{-}\\ \nu_{e}\end{pmatrix}\begin{pmatrix}\mu^{-}\\ \nu_{\mu}\end{pmatrix}\qquad\begin{pmatrix}u\\ d\end{pmatrix}\begin{pmatrix}\mathbf{c}\\ s\end{pmatrix}.\]
While their proposal was not very different from schemes that other authors had proposed around that time (see, for example, refs. [18, 19, 20]), Bjorken and Glashow gave the new flavor the charismatic name "charm," and that's the one that stuck; the fourth quark was known as the "charmed quark" (and not the grammatically awkward "charm quark") even before it was discovered. In retrospect, this four-quark scheme seems like a pretty sound--almost obvious--idea;6 how could anyone dismiss such a simple and sensible suggestion? Nevertheless, for the next six years this idea didn't go very far. By the time the GIM paper [6] appeared in 1970, the Bjorken-Glashow paper had received a grand total of six citations. For some reason, there seems to have been a special affinity in the physics community for the number three and an aversion to the number four.7 Moreover, even the GIM paper that proposed a four-quark scenario as a compelling explanation for the suppression of flavor-changing neutral-currents, an important theoretical issue at that time, didn't experience a boom in citations until after the \(J/\psi\) discovery, which finally convinced the world-wide physics community that there were (at least) four quarks.
Footnote 6: In a passing comment in his original paper on quarks [9], Gell-Mann mentioned a four-quark scheme that was “parallel with the leptons” as an interesting possibility.
Footnote 7: In east Asian cultures the number four is considered to be very unlucky because the Chinese pronunciations of their words for “four” and “death” are homophonous. As far as I know there is no such taboo in Western cultures.
In 1975, soon after the \(J/\psi\) discovery, Maiani (the "M" in GIM), who, at that time, was unaware of the KM result,8 wrote an interesting paper [21] that contained all of the KM model and then
some. But, thanks to their three-year-long head start, it was Kobayashi and Maskawa--and not Maiani--that got to go to Sweden in December 2008.
The six-quark era started in earnest in 1977, after the discovery of the \(\tau\)-lepton [22] and the \(b\)-quark [23], and the KM model was elevated to the role of being the most likely mechanism for \({\cal CP}\) violation. Although the \(b\)-quark's charge=2/3 partner, the \(t\)-quark, wasn't discovered until 1995 [24, 25], there was very little doubt about its existence.9 The only question was its mass; based on the mass pattern of the known quarks, the general consensus was that it was probably in the \(\sim\)25-35 GeV range [30].
Footnote 9: There were some papers on “topless models,” [26, 27, 28] including one with the title: “Does bare-bottom rule out the topless E6 model?” [29], but these did not get much attention.
### "Discovery" of the KM paper
As mentioned above, for over two years the KM paper remained completely unnoticed outside of Japan (and only received limited attention inside Japan). It was finally brought to the attention of the world-wide physics community in a curious set of circumstances that are recounted here.
But first it should be noted that the version of the quark-mixing matrix that appeared in the KM paper and is reproduced above in eqn. 19 contains a pretty obvious typographical (or transcription) error. Since the KM matrix nominally describes a rotation, if all three mixing angles and the \({\cal CP}\) phase are set to zero, it should revert to the identity matrix, _i.e._,
\[\mathbf{V}\xrightarrow{(\theta_{1,2,3},\delta)\to 0}\mathbf{\mathcal{I}}=\begin{pmatrix} 1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix} \tag{21}\]
However in the matrix that appears in the KM paper, the zero-angle limit for the lower-right diagonal element \(V_{tb}\), which should be \(V_{tb}\)\(\rightarrow\)1, is incorrect:
\[V_{tb}^{\rm KM}=\cos\theta_{1}\sin\theta_{2}\sin\theta_{3}-\cos\theta_{2}\sin \theta_{3}e^{i\delta}\xrightarrow{(\theta_{1,2,3},\delta)\to 0}0. \tag{22}\]
The first paper to establish that the KM model could account for all that was known about \({\cal CP}\) violations at that time was by Pakvasa and Sugawara and was published in the July 1976 issue of Physical Review D [31]. In their paper, they pointed out that in the six-quark model, a non-zero value of \(\varepsilon\), the neutral \(K\)-meson mass-matrix \({\cal CP}\) violation parameter, of the correct magnitude would be produced by the interference between the virtual \(c\)- and \(t\)-quark contributions to the \(K^{0}\)-\(\bar{K}^{0}\) mixing box-diagram shown in Fig. 4a. They also pointed out that in the KM picture, the penguin diagram10 shown in Fig. 4b, which would mediate direct-\({\cal CP}\) violating \(K_{2}\)\(\rightarrow\)\(\pi^{+}\pi^{-}\) decays,
_i.e._, the \(\varepsilon^{\prime}\) parameter, would be small and consistent with the then existing experimental limits. Since PRD is a widely distributed physics journal, this paper provided the first awareness of the KM paper to the international particle physics community.
In their paper, Pakvasa and Sugawara made no mention of the typo in the KM paper and included an expression for the matrix that they attributed to KM but that, in fact, was different. This paper also had a typo that mistakenly identified Toshihide Maskawa as K. Maskawa in their citation to the KM paper. Interestingly, many of the papers that immediately followed the Pakvasa-Sugawara paper used the Pakvasa-Sugawara version of the CKM matrix with no mention of the mistake, and with citations to the KM paper that had T. Maskawa incorrectly listed as K. Maskawa (see, e.g., refs. [32, 33, 34, 35]). This recurrence of the typo in the citations provided pretty clear evidence that the Pakvasa-Sugawara PRD article was the source that researchers used for both the matrix and the citation, and that the PTP paper itself was probably not very widely read.11
Footnote 11: But why didn’t Pakvasa and Sugawara alert their readers about the problem with the matrix in the KM paper? Sugawara’s recollection is that when he first learned about the KM paper at a University of Tokyo physics seminar, he was impressed by their six-quark scheme and “reconstructed it in [his] own way, without reading their paper carefully.” When he and Pakvasa subsequently did their analysis and wrote up their results, they included Sugawara’s version of the matrix, which was the KM version without the error, in their paper. In fact, Pakvasa and Sugawara remained unaware of the KM paper’s typo. According to Sugawara, “I never realized that the paper had this typo until it was recently pointed out to me.”
But, in addition to the typo in their citation to the KM paper, the Pakvasa-Sugawara version of the KM matrix had a problem of its own. In their paper, the KM expression for \(V_{tb}\), given above in eqn. 22, was replaced by
\[V_{tb}^{\rm PS}=\cos\theta_{1}\sin\theta_{2}\sin\theta_{3}-\cos\theta_{2}\cos \theta_{3}e^{i\delta}, \tag{23}\]
which doesn't go to zero in the limit of zero mixing angles. But neither does it go to 1; instead,
\[V_{tb}^{\rm PS}\xrightarrow{(\theta_{1,2,3},\delta)\to 0}-1, \tag{24}\]
Figure 4: **a)** The \(W\)-exchange Standard Model box diagram for \(K^{0}\)-\(\bar{K}^{0}\) mixing (not shown is the one with heavy quark exchange). In the KM six-quark model, the kaon’s mass-matrix \({\cal CP}\)-violating parameter \(\varepsilon\) is produced by interference between the \(c\)- and \(t\)-quark contributions. **b)** The penguin diagram for direct-\({\cal CP}\)-violating \(K_{2}{\rightarrow}\pi\pi\) decays.
and for \(\delta=0\), has a negative determinant. Nevertheless, this version of the matrix is unitary, which is the only essential requirement for a quark-flavor mixing matrix, and this form was used in much (but not all) of the literature until 1984, when the currently widely accepted Chau-Keung parameterization was proposed and soon thereafter endorsed by the Particle Data Group.
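These statements about the two printed forms of \(V_{tb}\) are straightforward to verify numerically. The sketch below (numpy, random test angles; an illustration, not code from either original paper) builds the eqn. 19 matrix with the KM-paper \(V_{tb}\) and with the Pakvasa-Sugawara \(V_{tb}\), and checks the zero-angle limits, unitarity, and the \(\delta=0\) determinant.

```python
import numpy as np

def km_matrix(t1, t2, t3, delta, v_tb="PS"):
    """Eqn. 19 with either the printed KM V_tb ("KM") or the Pakvasa-Sugawara V_tb ("PS")."""
    c1, s1 = np.cos(t1), np.sin(t1)
    c2, s2 = np.cos(t2), np.sin(t2)
    c3, s3 = np.cos(t3), np.sin(t3)
    e = np.exp(1j * delta)
    tb = c1 * s2 * s3 - c2 * (s3 if v_tb == "KM" else c3) * e
    return np.array([
        [c1,      -s1 * c3,                     -s1 * s3],
        [s1 * c2,  c1 * c2 * c3 - s2 * s3 * e,   c1 * c2 * s3 + s2 * c3 * e],
        [s1 * s2,  c1 * s2 * c3 + c2 * s3 * e,   tb],
    ])

# Zero-angle limit of V_tb: 0 for the matrix as printed in the KM paper, -1 for the PS form
print(km_matrix(0, 0, 0, 0, "KM")[2, 2], km_matrix(0, 0, 0, 0, "PS")[2, 2])

# At generic angles the printed KM matrix is not unitary; the PS form is
angles = np.random.uniform(0.1, 0.5, size=4)
for tag in ("KM", "PS"):
    V = km_matrix(*angles, v_tb=tag)
    print(tag, "max |V V^dag - I| =", np.max(np.abs(V @ V.conj().T - np.eye(3))))

# For delta = 0 the PS matrix is real and unitary but has determinant -1 (improper rotation)
print("det(PS, delta=0) =", np.linalg.det(km_matrix(*angles[:3], 0.0, "PS")).real)
```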
## 4 Reparameterizing the CKM matrix
A rotation in three dimensions can be accomplished by three successive rotations: first by an angle \(\theta_{1}\) around the \(z\) axis that mixes \(x\) and \(y\) [\((x,y)\)\(\rightarrow\)\((x^{\prime},y^{\prime})\)], then by \(\theta_{2}\) about the new \(x^{\prime}\) axis that mixes \(y^{\prime}\) and \(z\) [\((y^{\prime},z)\)\(\rightarrow\)\((y^{\prime\prime},z^{\prime})\)] and, finally, by \(\theta_{3}\) around the \(y^{\prime\prime}\) axis that mixes \(x^{\prime}\) and \(z^{\prime}\) [\((x^{\prime},z^{\prime})\)\(\rightarrow\)\((x^{\prime\prime},z^{\prime\prime})\)]. This is just one of many ways that can be used to specify a given rotation. For example, the order of the three rotations can be changed, and there is freedom in the choice of the axes that are used to define the rotations. For these different choices, the values of the individual rotation angles are different, as are the expressions for each matrix element in terms of these rotation angles. Ultimately, however, the numerical value of the magnitude of each matrix element for any of these choices has to be the same, and independent of the choice of the individual rotations. In addition, in the case of the CKM matrix, which is complex, rephasing invariance provides five independent arbitrary phase parameters that can be attached to the various matrix elements to establish whatever phase convention may seem convenient. The physics content is independent of these parameterization choices.
On what basis should a parameterization be selected? In answer to this, Haim Harari suggested some criteria for what he would consider to be a "good" parameterization. These included [36]:
* There should be a simple relation between the most directly measurable matrix elements \(V_{ij}\) and the quark mixing angles.
* The matrix elements above the diagonal, which correspond to kinematically allowed decay processes that are directly measurable, should have the simplest possible expressions.
* If possible, the \({\cal CP}\) violating phase should be linked to only one angle, and preferably the sine of that angle.
During the years immediately following the wide recognition of the KM paper, there was considerable effort aimed at finding a suitable parameterization. This was aided by concurrent experimental measurements of the relative magnitudes of the \(V_{cb}\) and \(V_{ub}\) matrix elements using \(B\)-mesons that were produced via \(e^{+}e^{-}\) annihilations at two colliders that existed at that time: PEP at SLAC and CESR at Cornell.
### Experimental information about quark transitions
The PEP project was initially conceived as a two-ring proton-electron-positron collider with an electron-positron ring that could support \(E_{\rm cm}\)= 30 GeV \(e^{+}e^{-}\) collisions primarily to search for the top-quark (if its mass was less than 15 GeV), and a second 150 GeV proton ring that could support \(e^{\pm}p\) collisions with \(E_{\rm cm}\) \(\approx\) 100 GeV for measurements of deep inelastic scattering at high energies and \(Q^{2}\) values. The CESR collider was conceived during 1974 as a follow-up to the Cornell fixed-target \(e^{-}p\) scattering program, and proposed to the U.S. National Science Foundation in 1975, soon after the \(J/\psi\) discovery, as an \(E_{\rm cm}\)\(\approx\) 16 GeV \(e^{+}e^{-}\) collider primarily aimed at studies of charmed particles. The PEP and CESR projects were both well underway when the \(b\)-quark was discovered in 1977. Fortuitously, the initial CESR design could comfortably operate in the \(E_{\rm cm}\)= 9\(-\)11 GeV range and cover the three narrow \(\Upsilon(1S,2S,3S)\) resonances and the threshold region for \(e^{+}e^{-}\to B\bar{B}\) meson-pair production, which was expected to be around \(E_{\rm cm}\)= \(10.5\) GeV.12
Footnote 12: The CESR facility spent its life operating at \(E_{\rm cm}\)\(\sim\)10–11 GeV.
The two projects started running in 1979 and soon thereafter provided convincing experimental evidence for a strong hierarchy among the weak interaction mixing angles for the three quark generations. It was already well established that transitions _within_ the first quark generation, _e.g._, \(u\)\(\rightarrow\)\(d\) and _within_ the second generation (\(c\)\(\rightarrow\)\(s\)) were strongly favored over transitions _between_ the first and second generations (\(s\)\(\rightarrow\)\(u\) and \(c\)\(\rightarrow\)\(d\)), _i.e._, the Cabibbo angle. Experiments at PEP found that transitions _between_ the second and third generation (\(c\)\(\rightarrow\)\(b\)) are more suppressed than those between the first and second generations, and the CESR experiment determined that transitions between the first and third generations (\(u\)\(\rightarrow\)\(b\)) are the least favored of all.
1st\(\leftrightarrow\) 2nd generation: The suppression of strangeness-changing decays that was noted by Cabibbo in 1963 led to the realization that the relative strengths of the \(u\)\(\rightarrow\)\(d\) and the strangeness-changing \(u\)\(\rightarrow\)\(s\) transitions are modulated by factors of \(\cos\theta_{C}\) and \(\sin\theta_{C}\), respectively, where \(\theta_{C}\) = 13\({}^{\circ}\) is the Cabibbo angle. Thus the diagonal element \(V_{ud}\) = \(0.974\) is nearly unity and almost five times larger than its adjacent entry \(V_{us}\) = 0.225.
2nd\(\leftrightarrow\) 3rd generation: In 1983, the MAC experiment at the PEP collider used \(b\)-flavored hadrons (mainly \(B^{\pm}\) and \(B^{0}\) mesons) produced via the \(e^{+}e^{-}\)\(\rightarrow\)\(b\bar{b}\) annihilation process (about 1/10th of the total annihilation cross section) to determine the \(b\)-quark lifetime by measuring the impact parameters of charged leptons from \(B\)\(\rightarrow\)\(X\mu\nu\) and \(Xe\nu\) inclusive semileptonic decays, where \(X\) is a hadronic system [37]. (The definition of the impact parameter is indicated in Fig. 5a.) This was a difficult measurement: the parent \(B\)-meson's energy and direction were not precisely known on an event-by-event basis; there was contamination from semileptonic decays
of charmed mesons; and the mean value of the impact parameter that was eventually determined (\(\approx\)170 \(\mu\)m) was substantially smaller than the experimental resolution (\(\sim\)600 \(\mu\)m) as well as the horizontal size of the beam-beam interaction region (\(\sim\)400 \(\mu\)m). Nonetheless, the measured impact parameter distributions for muons and electrons shown in the upper and lower panels of Fig. 5b, respectively, both had excesses at positive values and these translated into a \(b\)-quark lifetime of \(\tau_{b}\) = \(1.8\pm 0.7\) ps. Since this was about a factor of four longer than the lifetime of the much lighter \(c\)-quark, it was a big surprise at that time. As discussed below, \(b\)\(\rightarrow\)\(c\) transitions are the \(b\)-quark's dominant decay mechanism, and the measured lifetime could be used to determine a value for \(|V_{cb}|\) using the expression
\[\Gamma_{b}=1/\tau_{b}=|V_{cb}|^{2}\frac{G_{F}^{2}m_{b}^{5}c^{4}}{192\pi^{3}\hbar^{7}}(2_{\rm quarks}\times 3_{\rm colors}+3_{\rm leptons}), \tag{25}\]
which is the (textbook) expression for the muon decay rate tailored to the \(b\)-quark mass, modified to include the effect of \(|V_{cb}|\) on the coupling strength, and multiplied by the number of accessible final states: two quark flavors, three quark colors, and three types of leptons, as illustrated in Fig. 5c. The result was \(|V_{cb}|\) \(\approx\) 0.04, about a factor of five smaller than \(|V_{us}|\), a difference that is similar to (but not exactly the same as) the factor of five Cabibbo suppression13 between \(V_{us}\) and \(V_{ud}\). The MAC results were confirmed by the MarkII and DELCO experiments at PEP [39, 40] and the TASSO experiment [41] at PETRA, an \(E_{\rm cm}\)=\(40\) GeV \(e^{+}e^{-}\) collider at the DESY laboratory in Hamburg.
Footnote 13: The PDG 2020 [38] value for the \(B\) meson lifetime is \(\tau_{b}\)= \(1.519\pm 0.004\) ps and that for the \(V_{cb}\) matrix element is \(|V_{cb}|\) = \(0.0410\pm 0.0014\), which are both within the error ranges of the MAC measurements.
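To see how eqn. 25 turns a lifetime into a matrix element, here is a rough back-of-the-envelope evaluation. It is an illustration only: the effective \(b\)-quark mass and the charm phase-space suppression factor are assumed values that are not quoted in the text, and QCD corrections are ignored.

```python
import math

hbar  = 6.582e-25    # GeV s
G_F   = 1.166e-5     # GeV^-2 (natural units)
tau_b = 1.8e-12      # s, the MAC lifetime measurement
m_b   = 4.8          # GeV, assumed effective b-quark mass
N_f   = 2 * 3 + 3    # 2 quark flavors x 3 colors + 3 lepton species = 9

Gamma_b = hbar / tau_b                                    # total width in GeV
rate_per_Vcb2 = N_f * G_F**2 * m_b**5 / (192 * math.pi**3)
Vcb = math.sqrt(Gamma_b / rate_per_Vcb2)
print(f"|V_cb| (no phase-space correction) ~ {Vcb:.3f}")  # ~0.026

# An assumed, illustrative phase-space suppression f ~ 0.5 for the charm mass
# in the final state pushes the extracted value toward the quoted ~0.04:
print(f"|V_cb| (with f = 0.5)              ~ {Vcb / math.sqrt(0.5):.3f}")
```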
\(1^{\rm st}\)\(\leftrightarrow\)\(3^{\rm rd}\) generation: The CLEO experiment studied semileptonic \(B\)\(\rightarrow\)\(X\ell\nu\) decays of \(B\) mesons that were produced at \(E_{\rm cm}\) = \(10.58\) GeV, the peak14 of the \(\Upsilon(4S)\)\(\rightarrow\)\(B\bar{B}\) resonance [42]. Since this energy is only 20 MeV above the \(B\bar{B}\) threshold, the \(B\) mesons are produced very nearly at rest (the boost factor is \(\gamma\beta\) = \(0.062\)) and the energy of a decay lepton (\(\ell\) = \(e,\mu\)) in the laboratory frame is very nearly equal to what it is in the \(B\) meson rest frame. In \(b\)\(\rightarrow\)\(c\ell\nu\)-mediated decays, the minimum mass of the hadronic system is \(M_{X}^{\rm min}\)= \(m_{D}\) = \(1.86\) GeV, and this translates into a maximum lepton momentum of \(p_{c\ell\nu}^{\rm max}\) = \(2.31\) GeV/\(c\); in \(b\)\(\rightarrow\)\(u\ell\nu\)-mediated decays, the mass of the hadronic system can be as light as \(M_{X}^{\rm min}\)= \(m_{\pi}\), and the end-point momentum is \(p_{u\ell\nu}^{\rm max}\) = \(2.60\) GeV/\(c\). Thus, measurements of the lepton momentum spectra in the end-point region can be used to determine the relative strengths of the \(b\)\(\rightarrow\)\(c\) and \(b\)\(\rightarrow\)\(u\) transitions and extract the value of \(|V_{ub}|^{2}/|V_{cb}|^{2}\). Figure 6 shows the measured momentum distributions for electrons (_upper_) and muons (_lower_) together with expectations for \(b\)\(\rightarrow\)\(c\ell\nu\) (dashed curves)
and \(b\!\!\to\!u\ell\nu\) (dotted curves). There are no events in the 2.31\(<\!\!p_{\rm lepton}\!\!<\)2.60 GeV/\(c\) range that could be unambiguously attributed to \(b\!\!\to\!u\ell\nu\) decays and the shapes of the spectra are consistent with expectations for \(\sim\)100% \(b\!\!\to\!c\ell\nu\) with no significant contribution from \(b\!\!\to\!u\ell\nu\). From these data, the CLEO group established a 90% CL upper limit15 of \(|V_{ub}|/|V_{cb}|\!\!<\)0.14.
Footnote 15: The PDG 2020 [38] value is \(|V_{ub}|/|V_{cb}|\!=\!0.093\pm 0.004\).
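The endpoint momenta quoted above follow from simple kinematics in the \(B\) rest frame; the sketch below (with assumed PDG-type masses) reproduces the charm endpoint and gives roughly 2.6 GeV/\(c\) for the charmless case, before the small smearing from the \(B\)-meson motion and detector resolution.

```python
# Maximum lepton momentum in B -> X l nu, reached when the recoiling (X + nu)
# system has its minimum invariant mass M_X (massless lepton assumed):
m_B, m_D, m_pi = 5.279, 1.865, 0.140      # GeV, assumed PDG-type masses

def p_max(m_X):
    return (m_B**2 - m_X**2) / (2 * m_B)

print(f"b -> c l nu endpoint: {p_max(m_D):.2f} GeV/c")    # 2.31, as quoted
print(f"b -> u l nu endpoint: {p_max(m_pi):.2f} GeV/c")   # ~2.64, versus the quoted 2.60
```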
### The Chau-Keung and Wolfenstein parameterizations
Since the early parameterizations fail on all three of the Harari criteria, a better one was needed. Of a number of proposed replacements[21, 36, 43, 44, 45], two continue to be widely used today: one by Chau and Keung [43] and the other by Wolfenstein [44].
#### 4.2.1 Chau-Keung parameterization
The parameterization proposed by Chau and Keung was specifically motivated by the occurrence of the \({\cal CP}\) violating phase in the \(V_{tb}\!=\!\cos\theta_{1}\sin\theta_{2}\sin\theta_{3}\!-\!\cos\theta_{2}\cos\theta_{3}e^{i\delta}\) term in the Pakvasa-Sugawara version of the original KM parameterization that seemed to suggest that there is a large \({\cal CP}\) violation that is confined to the (\(t,b\)) quark sector. This was in sharp contrast to the results of detailed calculations of measurable \({\cal CP}\) violating effects that invariably resulted in small numbers that involved factors of \(\sin\theta_{2}\sin\theta_{3}e^{i\delta}\).
Figure 5: **a)** A sketch of the projections of tracks from the semileptonic decay of a \(B\) meson onto a plane perpendicular to the \(e^{+}e^{-}\) beamline, together with indications of the size of the beam-beam interaction region, and the definition of the impact parameter, _i.e._, the distance of closest approach of the charged lepton track to the \(e^{+}e^{-}\) interaction point. The unknown \(B\) meson’s direction was assigned with reasonably good accuracy to be along the event’s thrust axis. **b)** Impact parameter distributions for muons (upper) and electrons (lower) (from the MAC experiment [37]). Here each entry is weighted by the inverse square of its measurement error. **c)** A quark-line diagram for \(b\)-quark decays. The subscript \(\alpha\) indicates the quark color.
(For clarity, in the following we follow the more transparent PDG notation that uses \(\theta_{ij}\) to denote the mixing angle around an axis that is perpendicular to the \((i,j)\) plane. The translation between \(\theta_{1,2,3}\) and \(\theta_{ij}\) is: \(\theta_{1}\) = \(\theta_{12}\), \(\theta_{2}\) = \(\theta_{23}\), and \(\theta_{3}\) = \(\theta_{13}\). In addition we abbreviate \(\cos\theta_{ij}\) by \(c_{ij}\) and \(\sin\theta_{ij}\) by \(s_{ij}\).)
The Chau-Keung parameterization, which is exactly unitary, is given by the (right-to-left) sequence of two-dimensional rotations:
\[V_{\rm CKM} = \overbrace{\begin{pmatrix}1&0&0\\ 0&c_{23}&s_{23}\\ 0&-s_{23}&c_{23}\end{pmatrix}}^{\theta_{23\ {\rm about}\ d^{\prime\prime}}} \overbrace{\begin{pmatrix}c_{13}&0&s_{13}e^{-i\delta}\\ 0&1&0\\ -s_{13}e^{i\delta}&0&c_{13}\end{pmatrix}}^{\theta_{13\ {\rm about}\ s^{\prime}\ ({\rm incl.\ }\delta)}} \overbrace{\begin{pmatrix}c_{12}&s_{12}&0\\ -s_{12}&c_{12}&0\\ 0&0&1\end{pmatrix}}^{\theta_{12\ {\rm about}\ b}}\] \[= \begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^ {i\delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^ {i\delta}&c_{23}c_{13}\end{pmatrix},\]
where it is apparent that this version passes the requirement that \(\boldsymbol{V}\)\(\rightarrow\)\(\boldsymbol{\mathcal{I}}\) when \(\theta_{ij}\)\(\rightarrow\) 0. Here Harari's three criteria are satisfied: the \(\theta_{12},\theta_{23},\theta_{13}\) mixing angles correspond to experimentally distinct \(u\)\(\leftrightarrow\)\(s\), \(c\)\(\leftrightarrow\)\(b\), & \(u\)\(\leftrightarrow\)\(b\) transitions, respectively; the \(\mathcal{CP}\) phase factor is always multiplied by a factor containing \(s_{13}\,(=\,|V_{ub}|)\); and the three above-diagonal terms have simple forms. Moreover, in this parameterization, \(\theta_{12}\), \(\theta_{23}\) and \(\theta_{13}\) are all in the first quadrant, which means that \(s_{ij}\) and \(c_{ij}\) are all positive, the \(\mathcal{CP}\) phase \(\delta\) is also positive and in the range \(0\leq\delta<2\pi\), and the matrix reduces to the
\(2\times 2\) Cabibbo matrix for \(\theta_{23}\!=\!\theta_{13}\!=\!0\). This has been the PDG parameterization of choice since 1986 [46] and, according to recent measurements [38],
\[\theta_{12}=13.09^{\circ}\pm 0.03^{\circ}\qquad\theta_{23}=2.32^{\circ}\pm 0.04^{\circ} \tag{27}\] \[\theta_{13}=0.207^{\circ}\pm 0.007^{\circ}\qquad\delta=68.53^{\circ}\pm 0.51^{\circ}.\]
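As a concrete cross-check of eqns. 26 and 27 (a numerical sketch, not anything taken from refs. [43] or [38]), one can multiply the three rotation factors with these measured angles and confirm that the product is exactly unitary and strongly hierarchical.

```python
import numpy as np

# Measured angles and phase from eqn. 27 (degrees)
t12, t23, t13, delta = map(np.radians, (13.09, 2.32, 0.207, 68.53))
c12, s12 = np.cos(t12), np.sin(t12)
c23, s23 = np.cos(t23), np.sin(t23)
c13, s13 = np.cos(t13), np.sin(t13)
ph = np.exp(1j * delta)

R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
R13 = np.array([[c13, 0, s13 / ph], [0, 1, 0], [-s13 * ph, 0, c13]], dtype=complex)
R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)

V = R23 @ R13 @ R12                              # the right-to-left product in eqn. 26
assert np.allclose(V @ V.conj().T, np.eye(3))    # exactly unitary by construction
print(np.round(np.abs(V), 3))
# Strongly hierarchical: diagonal elements ~1, |V_us| ~ |V_cd| ~ 0.23,
# |V_cb| ~ |V_ts| ~ 0.04, |V_ub| ~ 0.004, |V_td| ~ 0.009
```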
#### 4.2.2 Wolfenstein parameterization
The Wolfenstein parameterization is an approximation that employs a polynomial expansion in terms of \(\lambda\!\equiv\!\sin\theta_{C}\)= 0.2265 that reflects the hierarchical character of the CKM matrix. With an accuracy up to \(\mathcal{O}(\lambda^{3})\) it has the form:
\[V_{\rm CKM}=\begin{pmatrix}1-\frac{1}{2}\lambda^{2}&\lambda&A\lambda^{3}( \rho-i\eta)\\ -\lambda&1-\frac{1}{2}\lambda^{2}&A\lambda^{2}\\ A\lambda^{3}(1-\rho-i\eta)&-A\lambda^{2}&1\end{pmatrix}+\mathcal{O}(\lambda^{ 4}), \tag{28}\]
where the parameter \(A\!\approx\!0.8\) accounts for the fact that the Cabibbo-like suppression between \(V_{cb}\) and \(V_{cd}\) is about 20% more severe than that between \(V_{cd}\) and \(V_{ud}\). In this parameterization, all of the \({\cal CP}\) violation resides in the single parameter \(\eta\) that is confined to the \(V_{td}\) and \(V_{ub}\) corners of the matrix where it is multiplied by \(\lambda^{3}A\), a small number.
This parameterization is very convenient and is widely used, but in a somewhat modified form that was suggested by Buras and colleagues [47]. In the Buras version, the Wolfenstein \(\lambda\), \(A\), \(\rho\) and \(\eta\) parameters are redefined in terms of the Chau-Keung angles to be exactly
\[\lambda \equiv s_{12}=\frac{|V_{us}|}{\sqrt{|V_{ud}|^{2}+|V_{us}|^{2}}} \tag{29}\] \[A\lambda^{2} \equiv s_{23}=\lambda\frac{|V_{cb}|}{|V_{us}|} \tag{30}\] \[A\lambda^{3}(\rho+i\eta) \equiv s_{13}e^{i\delta}=V_{ub}^{*}. \tag{31}\]
With these parameter definitions, \(V_{ub}\) is the same as in the Chau and Keung parameterization, and the higher-order corrections to \(V_{us}\) and \(V_{cb}\) start at \(\mathcal{O}(\lambda^{7})\) and \(\mathcal{O}(\lambda^{8})\), respectively. In addition, Buras _et al._ defined new parameters \(\bar{\rho}\) and \(\bar{\eta}\) as
\[\bar{\rho}+i\bar{\eta}=-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}, \tag{32}\]
in which case
\[\bar{\rho}=\rho\big{(}1-\frac{\lambda^{2}}{2}\big{)}+\mathcal{O}(\lambda^{4}) \quad\text{and}\quad\bar{\eta}=\eta\big{(}1-\frac{\lambda^{2}}{2}\big{)}+ \mathcal{O}(\lambda^{4}) \tag{33}\]
and
\[V_{td}=A\lambda^{3}(1-\bar{\rho}-i\bar{\eta})+\mathcal{O}(\lambda^{7}). \tag{34}\]
Although the distinction between Wolfenstein's \(\rho,\ \eta\) and (the nearly equal) \(\bar{\rho},\ \bar{\eta}\) parameters may seem to be confusing and unnecessary, the latter parameters are preferred and are more generally used (for reasons that are discussed in detail in ref. [48]). The PDG review [38] only provides values for the four (redefined) "Wolfenstein parameters" that, in 2020, were
\[\lambda=0.22650\pm 0.00048 A=0.790^{+0.017}_{-0.012} \tag{35}\] \[\bar{\rho}=0.141^{+0.016}_{-0.017} \bar{\eta}=0.357\pm 0.011.\]
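The translation between the two parameter sets is easy to check numerically. The sketch below (an illustrative cross-check using the central values quoted above, nothing more) converts the eqn. 27 angles and phase into the Buras-redefined Wolfenstein parameters via eqns. 29-33 and reproduces the eqn. 35 values.

```python
# Minimal sketch: Chau-Keung angles -> Buras-Wolfenstein parameters (eqns. 29-33).
import numpy as np

theta12, theta23, theta13, delta = np.radians([13.09, 2.32, 0.207, 68.53])
s12, s23, s13 = np.sin([theta12, theta23, theta13])

lam = s12                                    # eqn. 29
A = s23 / lam**2                             # eqn. 30
z = s13 * np.exp(1j * delta) / (A * lam**3)  # eqn. 31: z = rho + i*eta
rho_bar = z.real * (1 - lam**2 / 2)          # eqn. 33
eta_bar = z.imag * (1 - lam**2 / 2)          # eqn. 33

print(f"lambda = {lam:.5f}, A = {A:.3f}, rho_bar = {rho_bar:.3f}, eta_bar = {eta_bar:.3f}")
# -> lambda ~ 0.2265, A ~ 0.79, rho_bar ~ 0.14, eta_bar ~ 0.36, matching eqn. 35
```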
The eqn. 28 form of the matrix is carefully tuned for \({\cal CP}\) violations in the \(b\)-quark sector, which are produced by the imaginary parts of \(V_{ub}\) and \(V_{td}\), and, since all the matrix elements in the second row and column, _i.e._, the ones that involve the strange and charmed quarks, are real, it is not applicable to descriptions of \({\cal CP}\) violation in the \(c\)- and \(s\)-quark sectors. For this, Wolfenstein provided a version of the matrix that is expanded to include \({\cal CP}\)-violating terms of \({\cal O}(\lambda^{4})\) in \(V_{ts}\) and \({\cal O}(\lambda^{5})\) in \(V_{cd}\):
\[V_{\rm CKM}=\begin{pmatrix}1-\frac{1}{2}\lambda^{2}&\lambda&A\lambda^{3}(\bar{\rho}-i\bar{\eta})\\ -\lambda(1+iA^{2}\lambda^{4}\bar{\eta})&1-\frac{1}{2}\lambda^{2}&A\lambda^{2}\\ A\lambda^{3}(1-\bar{\rho}-i\bar{\eta})&-A\lambda^{2}(1+i\lambda^{2}\bar{\eta})&1\end{pmatrix}+{\cal O}(\lambda^{6}). \tag{36}\]
An expression involving the same parameters that is exactly unitary is given by Kobayashi in ref. [49].
## 5 \({\cal CP}\) violation in \(b\)-quark decays?
In the KM model, \({\cal CP}\) violations in neutral \(K\)-meson decays, other than the \(\varepsilon\) mass-matrix parameter, are mainly produced by complex phases in the upper-left 2\(\times\)2 corner of the KM matrix16; in Wolfenstein's \({\cal O}(\lambda^{5})\) parameterization (eqn. 36) this is confined to the phase of \(V_{cd}\), and is tiny:
Footnote 16: \(V_{ub}\) can contribute to kaon decays via penguin diagrams, but these contributions are suppressed by similar \({\cal O}(A\lambda^{4})\) factors.
\[\arg(V_{cd})=A^{2}\lambda^{4}\bar{\eta}=5.9\times 10^{-4}\approx 0.03^{\circ}. \tag{37}\]
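This number follows directly from the eqn. 35 central values; a one-line check:

```python
# Quick arithmetic check of eqn. 37 using the eqn. 35 parameter values.
import numpy as np
A, lam, eta_bar = 0.790, 0.2265, 0.357
arg_Vcd = A**2 * lam**4 * eta_bar                              # O(lambda^5) phase of V_cd in eqn. 36
print(f"{arg_Vcd:.1e} rad = {np.degrees(arg_Vcd):.3f} deg")    # ~5.9e-4 rad ~ 0.03 deg
```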
In contrast, the CKM matrix element for charmless decays of \(B\)-mesons that proceed via \(b\!\rightarrow\!u\) transitions is \(V_{ub}\), with a \({\cal CP}\) phase \(\delta\) that (we now know) is \(\delta\!\approx\!70^{\circ}\). If the kaon's direct-\({\cal CP}\) parameter \(\varepsilon^{\prime}\), caused by a tiny, fraction-of-a-degree phase, could be measured in the 20\({}^{\rm th}\) century, the observation of \({\cal CP}\) violations produced by a 70\({}^{\circ}\) phase in \(B\)-meson decays in the 21\({}^{\rm st}\) century should be easy.
_Wrong!_ There are a number of important differences between the neutral kaon and \(B\)-meson systems that make the types of measurements that were used to discover and elucidate the properties of \({\cal CP}\) violations in the kaon system inapplicable to the \(B\)-meson system.
_\(B\)-mesons have a huge number of different decay channels._
In contrast to the \(K\)-meson system, where 99.98% of \(K_{S}\) decays are to either \(\pi^{+}\pi^{-}\) or \(\pi^{0}\pi^{0}\) final states, and 99.7% of \(K_{L}\) decays are to either \(\pi\ell\nu\) or \(\pi\pi\pi\) final states, \(B\)-mesons have hundreds of different decay modes, almost all of which have, at best, fraction-of-a-percent-level branching ratios.

_The \(B^{0}\)-\(\bar{B}^{0}\) mass eigenstates have very short, and nearly equal, lifetimes._

In the \(K\)-meson system, the \(K_{S}\) and \(K_{L}\) mass eigenstates have large lifetime differences (0.1 ns _vs._ 52 ns, respectively), and an essentially pure \(K_{L}\) beam can be achieved by simply making a neutral beam line that is longer than several \(K_{S}\) proper decay lengths. In comparison, the equivalent \(B_{H}\) and \(B_{L}\) mass eigenstates have very nearly equal lifetimes of a mere 1.5 ps (\(c\tau_{B}\) = 0.45 mm), and there is no possibility for making a beamline of \({\cal CP}\)-tagged \(B\)-mesons, much less one that distinguishes between the two different \({\cal CP}\) values.

_\(B\)-meson decays to final states that are eigenstates of \({\cal CP}\) are infrequent._
\(K_{S}\) mesons decay almost exclusively to the \({\cal CP}\)-even \(\pi^{+}\pi^{-}\) and \(\pi^{0}\pi^{0}\) eigenstates; \(K_{L}\) decays to the \({\cal CP}\)-odd \(\pi^{0}\pi^{0}\pi^{0}\) and \(\pi^{+}\pi^{-}\pi^{0}\) eigenstates occur with branching fractions of 19.5% and 12.5%, respectively. In contrast, the most prominent \({\cal CP}\)-eigenstate decay mode for neutral \(B\)-mesons is \(B^{0}\)\(\rightarrow\)\(J/\psi K_{S}\), with a meager 0.045% branching fraction.
Moreover, as noted above, \(V_{cb}\), which has no \({\cal CP}\)-violating phase, has a magnitude that is an order of magnitude larger than \(V_{ub}\) and, thus, branching fractions for \(b\)\(\rightarrow\)\(u\) mediated "non-charmed" decays of \(B\) mesons are strongly suppressed. As a result the prospects for finding and studying \({\cal CP}\) violations in the \(B\)-meson system looked pretty hopeless.
### Prospects for testing the KM \({\cal CP}\) mechanism: pre 1980
Sometime around 1979-80, Abraham Pais who, along with Murray Gell-Mann was responsible for many of the fundamental theoretical discoveries in the early days of flavor physics, discussed the prospects for \({\cal CP}\) measurements with charmed and beauty mesons in a seminar at Rockefeller University in New York City, where he had this to say about \({\cal CP}\) violations with heavy quarks [50]:
"There is good news and bad news. The good news is that \({\cal CP}\) violation in a heavy meson system is quite similar to that of the \(K\)-meson system. The bad news is that there is little distinction like the \(K_{S}\)-\(K_{L}\) mass eigenstates. For heavy meson systems, both lifetimes are short."
In the audience was a young theorist Ichiro (Tony) Sanda, who recalls thinking at that time [50]:
"\({\cal CP}\) _violation in a heavy meson system is quite similar to that of the \(K\) meson system?_--How could anything as interesting as \({\cal CP}\) violation be so uninteresting."
and he resolved to find a way to prove that Pais was wrong.
### Tony Sanda's great idea
At this same time, I was one of the founding members of the CLEO experiment that was located on the Cornell University campus in upstate New York, about a two-hour drive from New York City.17 The CESR \(e^{+}e^{-}\) collider was in its infancy and had a maximum instantaneous luminosity of \({\cal L}\)\(\sim\)\(5\times 10^{30}\)cm\({}^{-2}\)s\({}^{-1}\). We had just discovered the \(\Upsilon(4S)\) resonance [51] and while running at \(E_{\rm cm}\) = 10.58 GeV, its peak energy, we could collect about 30 \(\Upsilon(4S)\)\(\rightarrow\)\(B\bar{B}\) events/day (see Fig. 7a). This was, at that time, the most prolific source of \(B\) mesons in the world and we were anxious to make good use of them. To this end, my Cornell colleagues invited Sanda for some seminars. During his first seminar, the best strategy that Sanda had to offer was a vague plan to search for a \(\ell^{+}\ell^{+}\)_vs._\(\ell^{-}\ell^{-}\) asymmetry in events of the type
Footnote 17: At that time I was at the University of Rochester, a two-hour drive in the opposite direction.
\[e^{+}e^{-}\to B^{0}\bar{B}^{0}\rightarrow\ell^{\pm}\ell^{\pm}+{\rm anything}, \tag{38}\]
with the faint hope that somehow a measurable \({\cal CP}\) violation would show up. But this was very much like the frequently performed \(K^{0}(\tau)\)\(\rightarrow\)\(\pi^{-}\ell^{+}\nu\)_vs._\(K^{0}(\tau)\)\(\rightarrow\)\(\pi^{+}\ell^{-}\bar{\nu}\) asymmetry measurements, but without any of the above-listed advantages that make the neutral kaon system so special. The experimenters in the audience, who were all hyped up to do great and wondrous things with the \(B\) mesons that they had worked so hard to produce, were noticeably disappointed. When Sanda got back to New York City, he felt under strong pressure to come up with something that was new and unique to \(B\) mesons.
A few months later, in his second seminar at Cornell, he did just that. He proposed a scheme that he developed in collaboration with Ashton Carter [52, 53] for using interference between the \(B^{0}\)\(\rightarrow\)\(K_{S}J/\psi\) & \(B^{0}\)\(\rightarrow\)\(\bar{B}^{0}\)\(\rightarrow\)\(K_{S}J/\psi\) decay amplitudes that eventually became the primary motivation for the BaBar and Belle asymmetric \(B\)-factory experiments and was the basis for Kobayashi and Maskawa's Nobel prize.
The idea, which is illustrated in Fig. 7 b, was very elegant. You start with a flavor-tagged \(B^{0}\) (or \(\bar{B}^{0}\)--here I use a tagged \(B^{0}\) to illustrate the idea) that can directly decay via the \(B^{0}\)\(\rightarrow\)\(K^{0}J/\psi\) diagram shown in the top panel, or decay via the indirect route where it first mixes into a \(\bar{B}^{0}\) that then decays via \(\bar{B}^{0}\)\(\rightarrow\)\(\bar{K}^{0}J/\psi\). But experiments don't distinguish between \(K^{0}\) and \(\bar{K}^{0}\) decays; instead they measure \(K_{S}\) and \(K_{L}\) decays. Thus, in events where a \(K_{S}\) is detected, the direct and indirect decay routes access identical final states and interfere. (Likewise for events where a \(K_{L}\) is detected, except here the interference term has an opposite sign.)
The direct amplitude has no \({\cal CP}\) phase (at least not at leading order), but the indirect amplitude has an extra factor of \(V_{td}^{2}\) (not \(|V_{td}|^{2}\)!) and so, a \({\cal CP}\) phase of \(-2\beta\). For tagged \(\bar{B}^{0}\) decays,
the CKM factor is \(V_{td}^{*2}\) and the \({\cal CP}\) phase is \(+2\beta\). Thus, the \(\bar{B}^{0}(\tau){\rightarrow}f_{{\cal CP}}\)_vs._\(B^{0}(\tau){\rightarrow}f_{{\cal CP}}\) time-dependent asymmetry, where \(f_{{\cal CP}}\) is any \({\cal CP}\)-eigenstate is
\[{\cal A}_{B\rightarrow{f_{{\cal CP}}}}^{{\cal CP}}(\tau)=\frac{\bar{\Gamma}_{ \bar{B}^{0}\to f_{{\cal CP}}}(\tau)-\Gamma_{B^{0}\to f_{{\cal CP}}}( \tau)}{\bar{\Gamma}_{\bar{B}^{0}\to f_{{\cal CP}}}(\tau)+\Gamma_{B^{0} \to f_{{\cal CP}}}(\tau)}=-\xi_{{\cal CP}}\sin(2\beta)\sin(\Delta M_{B} \tau), \tag{39}\]
where \(\xi_{{\cal CP}}\) (=\(-1\) for \(K_{S}J/\psi\) and +1 for \(K_{L}J/\psi\)) is the \({\cal CP}\) eigenvalue of \(f_{{\cal CP}}\), \(\Delta M_{B}\) = \(M_{H}\)-\(M_{L}\) is the mass difference between the neutral \(B_{H}\) and \(B_{L}\) mass eigenstates (_i.e._, the \(B^{0}\)-\(\bar{B}^{0}\) mixing frequency), and \(\tau\) is the proper time between the \(B^{0}{\rightarrow}KJ/\psi\) (\(B_{{\cal CP}}\)) decay and the flavor-specific decay of the accompanying \(\bar{B}^{0}\) meson (\(B_{\rm tag}\)), whose decay products are used to tag the \(B^{0}\) meson's flavor. Note that in \(e^{+}e^{-}\) colliders, \(\tau\) can be positive (when the \(B_{{\cal CP}}\) decay occurs after the \(B_{\rm tag}\) decay), or negative (when the \(B_{{\cal CP}}\) decay occurs first), and the time-integrated asymmetry is zero.
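To make the shape of this asymmetry concrete, the following sketch evaluates eqn. 39 for \(\xi_{\cal CP}=-1\) at a few proper-time values. The inputs \(\sin 2\beta\approx 0.70\), \(\Delta M_{B}\approx 0.51~{\rm ps}^{-1}\) and \(\tau_{B}\approx 1.5\) ps are assumed present-day values used only for illustration; they are not quoted in the text above.

```python
# Illustrative evaluation of the Carter-Sanda asymmetry, eqn. 39, for xi_CP = -1 (K_S J/psi).
import numpy as np

sin2beta, dM, tauB = 0.70, 0.51, 1.5         # assumed values: sin(2beta), Delta M_B [1/ps], tau_B [ps]
xi_CP = -1

t = np.linspace(-6.0, 6.0, 13)               # proper-time difference tau, in ps
asym = -xi_CP * sin2beta * np.sin(dM * t)    # eqn. 39

# Unnormalised decay-time distributions for the two tags (eqn. 39 rearranged):
rate_B0bar_tag = np.exp(-np.abs(t) / tauB) * (1 + asym)
rate_B0_tag    = np.exp(-np.abs(t) / tauB) * (1 - asym)

for ti, a, nb, n in zip(t, asym, rate_B0bar_tag, rate_B0_tag):
    print(f"tau = {ti:+.1f} ps   A_CP = {a:+.2f}   (B0bar-tag : B0-tag ~ {nb:.2f} : {n:.2f})")
```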
#### 5.2.1 Great idea! but is it practical?
The idea was new, and the mechanism was unique to \(B\) mesons, but there were many pieces that had to fall into just the right places for Sanda's proposal to have any chance of being practical. Since at that time there was no experimental information about the \(b\)-quark-related CKM elements or \(B^{0}\)-\(\bar{B}^{0}\) mixing, there was no way to form any opinion about the prospects for their favorability.
Tens of millions of tagged \(B{\rightarrow}f_{{\cal CP}}\) decays would be required:
In 1980, the world's best source of \(B\)-mesons was CESR, with a production rate of \({\sim}30\)\(B\bar{B}\) events/day, of which only half were the desired \(B^{0}\bar{B}^{0}\) pairs. Sanda's golden mode was
\(B^{0}(\bar{B}^{0})\)\(\rightarrow\)\(K_{S}J/\psi\), which he estimated to have an \({\cal O}(10^{-3})\) branching fraction, and this implied that the fractional probability of usable events would be
\[{\cal F}_{K_{S}J/\psi}<\underbrace{{\cal B}(B^{0}\to K_{S}J/\psi)}_{ \sim 10^{-3}}\underbrace{{\cal B}(K_{S}\rightarrow\pi^{+}\pi^{-}){\cal B}(J/ \psi\rightarrow\ell\ell)}_{\sim 10^{-1}}\underbrace{(\epsilon_{\rm trk})^{4} \epsilon_{\rm eff}^{\rm tag}}_{\sim 10^{-1}}\approx 10^{-5}, \tag{40}\]
where \(\epsilon_{\rm trk}\) is the efficiency for charged track detection that, even in a nearly perfect detector, cannot be much higher than \(\epsilon_{\rm trk}\) \(\approx\) 0.85, and \(\epsilon_{\rm eff}^{\rm tag}\) \(\approx\) 0.3 is an estimate of the maximum possible effective \({\cal B}\)-flavor tagging efficiency. Thus, an \({\cal A}_{KJ/\psi}^{\cal CP}\) measurement with even modest precision would require \(\sim\)30 M \(B\bar{B}\) events (and a million days of operation at 1980 state of the art collider and detector performance levels).
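The arithmetic behind this estimate is simple enough to spell out explicitly; the inputs below are just the rough values quoted in eqn. 40 and the CESR rate mentioned above.

```python
# Back-of-the-envelope version of the eqn. 40 estimate and the resulting run time.
br_signal  = 1e-3      # B0 -> K_S J/psi, Sanda's 1980 guess
br_visible = 1e-1      # B(K_S -> pi+ pi-) x B(J/psi -> l+ l-)
efficiency = 1e-1      # (track efficiency)^4 x effective flavour-tagging efficiency

frac_usable = br_signal * br_visible * efficiency   # ~1e-5, eqn. 40
n_BB = 30e6                                         # ~30 M BBbar pairs for modest precision
print(frac_usable, n_BB * frac_usable, n_BB / 30.0) # ~1e-5, ~300 usable events, ~1e6 days at 30/day
```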
\(B^{0}\)-\(\bar{B}^{0}\) mixing had to be substantial, \(|V_{cb}|\) had to be small, and \(|V_{ub}|\) even smaller:
An essential part of the Sanda-Carter scheme is that the fraction of \(B^{0}\) mesons that oscillate into a \(\bar{B}^{0}\) before they decay has to be reasonably large. This meant that \(\Delta M_{B}\) would have to be similar in magnitude to \(\Gamma_{B}\) = \(1/\tau_{B}\), which was then known to be \(\Gamma_{B}\)\(\approx\) \(4.4\times 10^{-10}\) MeV. In the 1980s, when the \(t\)-quark mass was (almost universally) expected to be \(m_{t}\) \(\sim\) 35 GeV, calculations [54] found \(\Delta M_{B}\) \(\approx\) \(1.2\times 10^{-10}\) MeV. If this were the case, only \(\sim\)6% of the tagged \(B^{0}\)-mesons would oscillate into a \(\bar{B}^{0}\) before decaying. (After three lifetimes, the \(\sin\Delta M_{B}\tau\) factor in eqn. 39 would have barely reached 0.5.) In addition the \(B\) lifetime had to be relatively long: _i.e._, \(|V_{cb}|\)\(<\)0.1, and \(|V_{ub}|\)\(<\)\(|V_{cb}|\).
The time sequence between the tag- and \(f_{\cal CP}\)-decay has to be distinguished:
The asymmetry in eqn. 39 has opposite signs for negative and positive values of \(\tau\), which makes it essential to distinguish events in which the \(B_{\cal CP}\) decay occurred first from those in which it occurred last. The \(B\) mesons that are produced in \(\Upsilon(4S)\)\(\rightarrow\)\(B\bar{B}\) decays have c.m. momenta \(p_{B}\) = 327 MeV/\(c\), corresponding to \(\gamma\)\(\beta\) = \(0.062\), and a mean decay distance of \(\beta\gamma c\tau_{B}\) = 28\(\mu\)m, which is unmeasurably small in a c.m. \(e^{+}e^{-}\) collider environment. For a collider operating at the \(\Upsilon(4S)\), it would be impossible to distinguish the time sequence of the \(B_{\cal CP}\) and \(B_{\rm tag}\) decays.
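For completeness, the quoted boost and decay distance follow from \(p_{B}\) = 327 MeV/\(c\) and \(c\tau_{B}\) = 0.45 mm; the \(B\)-meson mass used below (5.28 GeV) is an assumed PDG value, not taken from the text.

```python
# Quick check of the 0.062 boost and the 28 micron decay length quoted above.
p_B, m_B, ctau_um = 0.327, 5.28, 450.0       # GeV/c, GeV/c^2 (assumed PDG mass), microns
beta_gamma = p_B / m_B
print(beta_gamma, beta_gamma * ctau_um)      # ~0.062 and ~28 microns
```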
Since the existence of six quarks was pretty well established, the KM mechanism provided a compelling and almost obvious explanation for the existence of \({\cal CP}\) violation. However, the prospects for a conclusive experimental test of this idea seemed hopeless. The fortuitous set of circumstances that made studies of \({\cal CP}\) violation in the neutral kaon system possible seemed unlikely to be repeated.
### Three miracles
Nevertheless, in spite of these obstacles, Sanda maintained a nearly mystical belief that "_Mother Nature has gone out of Her way to show us \({\cal CP}\) violation, and She will also show us the way to the
_fundamental theory_" [50], and forcefully advocated an aggressive program of experimental investigations of \({\cal CP}\) violation in the decays of \(B\) mesons. However, for the reasons itemized above, Sanda's advocacy was initially met with considerable skepticism from his colleagues in both the theoretical and experimental physics communities.
And then three miracles occurred:
Miracle 1: \(\mathbf{B^{0}}\)-\(\mathbf{\bar{B}^{0}}\) mixing was discovered at DESY:
The most exciting event in flavor physics during the 1980s was the 1987 discovery of a large signal for \(B^{0}\)-\(\bar{B}^{0}\) mixing by the ARGUS experiment at DESY [55]. The strength of the mixing was clear evidence that the top-quark mass, now known to be 173 GeV, was nearly an order of magnitude larger than expected, which was shocking news to almost everyone. This discovery, coupled with the 1.5 ps \(B\)-meson lifetime measurements from PEP and PETRA that translated into \(|V_{cb}|\)\(\approx\) 0.04, and the suppression of \(b\)\(\rightarrow\)\(u\) relative to \(b\)\(\rightarrow\)\(c\) transitions, meant that \(|V_{ub}|\) was about a factor of ten smaller than \(|V_{cb}|\). These measurements confirmed Sanda's strong belief that Mother Nature would indeed help us "find the way to the fundamental theory."
Miracle 2: Three-order-of-magnitude improvement in \(\mathbf{e^{+}e^{-}}\) collider luminosity:
Advances in the understanding and modeling of beam dynamics, the use of separate magnet rings that enabled multibunch collisions, and major advances in RF feedback systems provided the huge increases in the \(e^{+}e^{-}\)\(\rightarrow\)\(\Upsilon(4S)\) production rate that were required by the experiment [56, 57].
Miracle 3: The invention of asymmetric \(e^{+}e^{-}\) colliders and innovations in detector technology:
Pierre Oddone realized that an \(e^{+}e^{-}\) collider operating at the \(\Upsilon(4S)\) resonance with a modest (_i.e._, factor of \(\sim\)2) difference between the \(e^{+}\) and \(e^{-}\) beam energies would produce boosted \(B\)-mesons with \({\cal O}(100\;\mu{\rm m})\) separation distances between the \(B_{\rm tag}\) and \(B_{\cal CP}\) vertices [58]. This idea, coupled with the concurrent development of high resolution silicon-strip vertex detectors that could measure such small displacements in a collider environment [59, 60, 61, 62], offered a realistic solution to the decay-time-sequence determination problem. Parallel improvements in detector performance levels in areas of particle identification [63, 64], and \(\gamma\)-ray & \(K_{L}\) detection [65, 66, 67] advanced the state of the art levels of detection efficiencies and \({\cal B}\)-flavor-tagging quality.
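The size of the effect Oddone exploited is easy to estimate. The sketch below uses nominal PEP-II (9.0 GeV on 3.1 GeV) and KEKB (8.0 GeV on 3.5 GeV) beam energies, which are assumed values not quoted in the text, to compute the \(\Upsilon(4S)\) boost and the typical longitudinal separation of the two \(B\) vertices.

```python
# Rough illustration of the asymmetric-collider boost (assumed nominal beam energies).
import numpy as np

ctau_B_um = 450.0                                  # c*tau_B = 0.45 mm, as quoted above

def upsilon_boost(E_minus, E_plus):
    """c.m. energy and beta*gamma of the Upsilon(4S) for head-on, ultrarelativistic beams."""
    E_cm = np.sqrt(4.0 * E_minus * E_plus)
    return E_cm, abs(E_minus - E_plus) / E_cm

for name, (Em, Ep) in [("PEP-II (BaBar)", (9.0, 3.1)), ("KEKB (Belle)", (8.0, 3.5))]:
    E_cm, bg = upsilon_boost(Em, Ep)
    dz = bg * ctau_B_um                            # typical z-separation of the two B vertices
    print(f"{name:15s}  E_cm = {E_cm:5.2f} GeV  beta*gamma = {bg:.2f}  <dz> ~ {dz:3.0f} microns")
# Both give O(100 microns), compared with the unmeasurable ~28 microns at a symmetric collider.
```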
## 6 First measurements of the KM angle \(\beta\)
At leading order, measurements of the eqn. 39 Carter-Sanda asymmetry determine \(\sin 2\beta\), where \(\beta\) = \(\tan^{-1}\left(\bar{\eta}/(1-\bar{\rho})\right)\) and \(\bar{\eta}\) & \(\bar{\rho}\) are the modified Wolfenstein parameters described in Section 4.2.2. Prior to the summer of 2001, the best measurements of \(\sin 2\beta\) were from CDF [68]
(\(0.79\pm 0.44\)), BaBar [69] (\(0.34\pm 0.21\)) and Belle [70] (\(0.58\pm 0.34\)). The BaBar result was based on a sample of 23 M \(\Upsilon(4S)\)\(\rightarrow\)\(B\bar{B}\) events and Belle, which was struggling with electron cloud effects in the KEKB positron ring [71], had a smaller data sample of 11 M \(\Upsilon(4S)\)\(\rightarrow\)\(B\bar{B}\) events. Each of the three measurements was about \(1.5\sigma\) from zero, and their weighted average, \(0.46\pm 0.17\), indicated a non-zero \({\cal CP}\) violation at the \(\sim\)2.5\(\sigma\) level. The situation was tantalizing, but not conclusive.
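For reference, the value implied by the eqn. 35 Wolfenstein parameters can be computed in one line (a side calculation added here, not a fit result):

```python
# sin(2 beta) implied by the eqn. 35 values of rho_bar and eta_bar.
import numpy as np
rho_bar, eta_bar = 0.141, 0.357
beta = np.arctan2(eta_bar, 1.0 - rho_bar)
print(np.degrees(beta), np.sin(2.0 * beta))   # beta ~ 22.6 deg, sin(2 beta) ~ 0.71
```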
This changed in August 2001 when, in back-to-back articles in Physical Review Letters, BaBar, now with a sample of 32 M \(B\bar{B}\) pairs, reported [72]
\[\sin 2\beta=0.59\pm 0.14\pm 0.05\quad{\rm BaBar}\ (2001), \tag{41}\]
and Belle, with a 31 M \(B\bar{B}\) pair data sample, reported [73]
\[\sin 2\beta=0.99\pm 0.14\pm 0.06\quad{\rm Belle}\ (2001). \tag{42}\]
The BaBar data sample contained 803 \(B_{{\cal CP}}\) event candidates with a signal purity of about 80%. The top three panels in Fig. 8a show the BaBar experiment's measured time distributions for \(\xi_{{\cal CP}}\)=\(-1\)\(B_{{\cal CP}}\) decays. The uppermost plot shows the number of \(B^{0}\) tags where it is evident that there are more events with \(\tau\)\(>\)0 than with \(\tau\)\(<\)0, while the \(\bar{B}^{0}\) tags in the panel beneath it display an opposite pattern. The third panel shows the bin-by-bin asymmetry where there is a clear indication of the sine-like behavior that is expected for a \({\cal CP}\) violation as given in eqn. 39. The three lowest panels show the corresponding results for \(B\rightarrow\)\(K_{L}J/\psi\) decays with \(\xi_{{\cal CP}}\)=\(+1\), where the \(\tau\)-dependent asymmetries have opposite signs, again as expected.
Figure 8b shows the results from the Belle experiment [73] that used 747 \(\xi_{{\cal CP}}\)=\(-1\)\(B\)-decay candidates (mostly \(B\)\(\rightarrow\)\(K_{S}J/\psi\)) with 92% purity and 569 \(\xi_{{\cal CP}}\)=\(+1\)\(B\)\(\rightarrow\)\(K_{L}J/\psi\) decay candidates with 61% purity. The top plot on the left side of the panel shows the proper-time distribution for the \(\xi_{{\cal CP}}\)=\(-1\) modes _minus_ that for \(\xi_{{\cal CP}}\)=\(+1\) modes. The 2\({}^{\rm nd}\) and 3\({}^{\rm rd}\) panels from the top show the \(\xi_{{\cal CP}}\)=\(-1\) and \(\xi_{{\cal CP}}\)=\(+1\) modes separately, where the different \({\cal CP}\) modes have opposite-sign asymmetries as expected. The curves show the results of fits to the data. The bottom panel shows the asymmetry for a large sample of self-tagged (non-\({\cal CP}\) eigenstate) \(B\) decays (\(B^{0}\)\(\rightarrow\)\(D^{(*)-}\pi^{+}\), \(D^{*-}\rho^{+}\), \(K^{*0}(K^{+}\pi^{-})J/\psi\), and \(D^{*-}\ell^{+}\nu\)), where a non-zero asymmetry would have to be due to instrumental effects; the fit to this sample returned an asymmetry amplitude of \(0.05\pm 0.04\).
The open circles in the plot on the right side of Fig. 8b show the time distribution for \(\bar{B}^{0}\)-tags (\(q\)=1) in \(\xi_{{\cal CP}}\)=\(-1\)\(B_{{\cal CP}}\) decays plus \(B^{0}\)-tags (\(q\)=-1) in \(\xi_{{\cal CP}}\)=\(+1\) decays (_i.e._, \(q\xi_{{\cal CP}}\)=\(-1\)), with the fit results shown as a dashed curve. The black dots and solid curve show the sum of the opposite combinations (\(q\xi_{{\cal CP}}\)=\(+1\)) and their fit result.
The BaBar and Belle results excluded a zero value for \(\sin 2\beta\) by \(4\sigma\), and \(6\sigma\), respectively, and their combined significance established conclusively that \({\cal CP}\) symmetry is violated in the \(B\)-\(\bar{B}\)
mixing process, as predicted by the KM six-quark model. Makoto Kobayashi and Toshihide Maskawa shared half of the 2008 Nobel Prize in Physics "_for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature._" The Nobel committee's remarks that accompanied their announcement included the following:
"[Kobayashi and Maskawa] explained broken symmetry within the framework of the Standard Model, but required that Model be extended to three families of quarks. These predicted, hypothetical new quarks have recently appeared in physics experiments. As late as 2001, the two particle detectors BaBar at Stanford, USA and Belle at Tsukuba, Japan, both detected broken symmetries independently of each other. The results were exactly as Kobayashi and Maskawa had predicted almost three decades earlier."
During the two decades following the Belle and BaBar reports, there have been many hundreds of measurements of non-zero \(\mathcal{CP}\) violating symmetries in \(B\) meson decays, mostly by BaBar and Belle, which continued operating until 2008 and 2010, respectively. This program is being continued by the LHCb [74], an experiment specialized for heavy flavor physics at the CERN Large Hadron Collider that began operating in 2010, and Belle II [75], an upgraded version of Belle at KEK that began operating in 2018. As is discussed in more detail in other contributions to this symposium, all of these hundreds of measured \(\mathcal{CP}\) violations are well explained as being due to the effects of the single KM \(\mathcal{CP}\)-phase angle \(\delta\).
Figure 8: **a)** BaBar results: the top three plots show the proper time distributions and fit results for \(B^{0}\)-tags, \(\bar{B}^{0}\)-tags and their asymmetry for \(\xi_{\mathcal{CP}}\)=\(-1\)\(B_{\mathcal{CP}}\) decays. The bottom three plots show the same distributions for \(\xi_{\mathcal{CP}}\)=\(+1\)\(B_{\mathcal{CP}}\) decays. The shaded areas indicate the background levels. (From ref. [72]). **b)** Belle results: the top three plots on the left show the time-dependent asymmetry and fit results for: (from top down) the combined (\(\xi_{\mathcal{CP}}\)=\(-1\) events _minus_\(\xi_{\mathcal{CP}}\)=\(+1\) events) samples; the \(\xi_{\mathcal{CP}}\)=\(-1\) (mostly \(K_{S}J/\psi\)) events; and the \(\xi_{\mathcal{CP}}\)=\(+1\) (\(K_{L}J/\psi\)) events. The bottom plot shows results for non-\(\mathcal{CP}\)-eigenstate decay modes where no asymmetry is expected. The open circles and dashed curve on the right show background-subtracted time distributions for \(q\xi_{\mathcal{CP}}\)=\(-1\) events and the fit results. The solid circles and curve show the same quantities for \(q\xi_{\mathcal{CP}}\)=\(+1\) events. (From ref. [73]).
The KM model has been a remarkable success.
## 7 A few comments on mixing and \(\mathcal{CP}\) violation in the neutrino sector
As mentioned above, the two-doublet nature of the leptons was identified in 1961 [16], a decade before it was established for quarks. The notion of neutrino mixing was first proposed four years earlier by Pontecorvo [76] in 1957, and the PMNS neutrino mixing matrix was suggested by Maki, Nakagawa and Sakata [77] in 1962, a year before Cabibbo's paper appeared. When the \(\tau\)-lepton was discovered in 1975, the six-lepton picture was established and the PMNS matrix for neutrinos was expanded to the same 3\(\times\)3 structure as the CKM matrix for quarks. If neutrinos are Dirac particles, the mathematics of the neutrino-flavor mixing matrix is the same as for the KM matrix, with three mixing angles \(\theta_{ij}\) and one \(\mathcal{CP}\)-violating phase \(\delta_{\mathcal{CP}}\), the so-called "Dirac" phase. If neutrinos are Majorana particles, lepton number is not conserved, the matrix's number of degrees of freedom increases by two, and there are two additional \(\mathcal{CP}\)-violating "Majorana" phases [78] that have no measurable effects on neutrino mixing experiments (which makes them hard to measure). The commonly used parameterization of the PMNS matrix that doesn't include the Majorana phases uses the same mixing angle definitions as the eqn. 26 Chau-Keung version of the CKM matrix, but with the sequence of rotations reversed, _i.e._,
\[\begin{pmatrix}\nu_{e}\\ \nu_{\mu}\\ \nu_{\tau}\end{pmatrix}=\!\begin{pmatrix}1&0&0\\ 0&c_{23}&s_{23}\\ 0&-s_{23}&c_{23}\end{pmatrix}\begin{pmatrix}c_{13}&0&e^{-i\delta_{\mathcal{CP}}}s_{13}\\ 0&1&0\\ -e^{i\delta_{\mathcal{CP}}}s_{13}&0&c_{13}\end{pmatrix}\begin{pmatrix}c_{12}&s_{12}&0\\ -s_{12}&c_{12}&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}\nu_{1}\\ \nu_{2}\\ \nu_{3}\end{pmatrix}, \tag{43}\]
where \(\nu_{1},\nu_{2},\nu_{3}\) denote the three neutrino mass eigenstates and \(\theta_{12}\), \(\theta_{23}\), and \(\theta_{13}\) are the "solar," "atmospheric," and "reactor" neutrino mixing angles. The explicit form of the matrix is:18
Footnote 18: \(\mathcal{CP}\)-violating Majorana phases \(\psi_{1}\) & \(\psi_{2}\) can be included by \(U^{\rm Maj}_{\rm PMNS}\) = \(U^{\rm Dirac}_{\rm PMNS}\mathbb{P}\), where \(\mathbb{P}\) = \(\begin{pmatrix}e^{i\psi_{1}}&0&0\\ 0&e^{i\psi_{2}}&0\\ 0&0&1\end{pmatrix}\).
\[U^{\rm Dirac}_{\rm PMNS} = \left(\begin{array}{ccc}U_{e1}&U_{e2}&U_{e3}\\ U_{\mu 1}&U_{\mu 2}&U_{\mu 3}\\ U_{\tau 1}&U_{\tau 2}&U_{\tau 3}\end{array}\right)\] \[= \left(\begin{array}{ccc}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{- i\delta_{\mathcal{CP}}}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta_{\mathcal{CP}}}&c_{12}c_{23}-s_{12} s_{23}s_{13}e^{i\delta_{\mathcal{CP}}}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta_{\mathcal{CP}}}&-c_{12}s_{23}-s_{12} c_{23}s_{13}e^{i\delta_{\mathcal{CP}}}&c_{23}c_{13}\end{array}\right),\]
and the PDG 2020 [38] world averages for the three rotation angles are:
\[\sin^{2}\theta_{12} = 0.307\pm 0.013\quad\Rightarrow\theta_{12}=33.6^{\circ}\pm 0.8^{\circ} \tag{45}\] \[\sin^{2}\theta_{23} = 0.545\pm 0.021\quad\Rightarrow\theta_{23}=47.6^{\circ}\pm 1.2^{\circ}\] \[\sin^{2}\theta_{13} = 0.0218\pm 0.0007\quad\Rightarrow\theta_{13}=8.48^{\circ}\pm 0.14^{\circ}.\]
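The angle conversions above, and the "flat" structure of the resulting matrix, can be checked with a few lines of Python. The value \(\delta_{\cal CP}=0\) used below is an arbitrary illustrative choice, since the Dirac phase is not yet well measured.

```python
# Check of the eqn. 45 angles and a look at |U_PMNS| (eqn. 44) for an assumed delta_CP = 0.
import numpy as np

s2 = {"12": 0.307, "23": 0.545, "13": 0.0218}                    # eqn. 45
th = {k: np.arcsin(np.sqrt(v)) for k, v in s2.items()}
print({k: round(np.degrees(v), 2) for k, v in th.items()})       # ~33.6, 47.6, 8.5 degrees

def pmns(th12, th23, th13, delta=0.0):
    s12, c12, s23, c23, s13, c13 = (np.sin(th12), np.cos(th12), np.sin(th23),
                                    np.cos(th23), np.sin(th13), np.cos(th13))
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * np.conj(e)],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13]])

print(np.round(np.abs(pmns(th["12"], th["23"], th["13"])), 2))   # every entry is O(0.1)-O(0.8)
```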
Although mixing in the quark- and lepton-sectors has the same mathematical structure, there are important differences in their practical applications, including:
_Strikingly different hierarchies:_ The three PMNS mixing angles listed above differ in values by at most a factor of five, in contrast with the CKM matrix, where the corresponding mixing angles are smaller and span a two-order-of-magnitude range in magnitudes:
\[\theta_{12}^{\rm CKM}\approx 13.0^{\circ}\quad\theta_{23}^{\rm CKM}\approx 2.4^{ \circ}\quad\theta_{13}^{\rm CKM}\approx 0.20^{\circ}. \tag{46}\]
Differences between the two matrices are illustrated in Fig. 9a, where the areas of the squares are proportional to the magnitudes of the matrix elements. Here Nature has been helpful again; if the PMNS matrix had a hierarchy that was similar to that of the CKM matrix, neutrino oscillations and neutrino masses would likely not have been discovered.
_They operate in opposite "directions:"_ In the CKM case, the quark mass-eigenstates are well known and the matrix is used to determine the flavor states. For the PMNS case, the neutrino flavor states are well known and the matrix is used to determine the mass eigenstates. Oscillation experiments have determined the mass-difference hierarchies shown in Fig. 9b, where [38]
\[{\rm atmos:}\Delta m_{32}^{2}=\pm(2.44\pm 0.03)\times 10^{-3}({\rm eV})^{2}\quad{ \rm and}\quad{\rm solar:}\Delta m_{21}^{2}=(7.53\pm 0.18)\times 10^{-5}({\rm eV})^{2}.\]
The still unknown sign of \(\Delta m_{32}^{2}\) (= \(m_{3}^{2}\)-\(m_{2}^{2}\)) leaves two possible hierarchies as shown in Fig. 9b.
Figure 9: **a)** The area of the squares illustrates the magnitudes of the CKM (_left_) and PMNS (_right_) matrix elements. **b)** A not-to-scale illustration of the normal (_left_) and inverted (_right_) neutrino mass hierarchies. The flavor content of each of the \(\nu_{1},\nu_{2},\nu_{3}\) mass eigenstates is indicated by different shades of gray.
_Quarks decay, neutrinos (probably) don't:_ CKM-related measurements are almost entirely based on decay processes, and the formalism for oscillations and \({\cal CP}\) violations is set up in the hadron's restframe. In contrast, PMNS-related measurements always involve production processes that produce pure flavor states, together with detection experiments located at some baseline distance that determine how the neutrino flavor content changed during propagation to the detector. Since the neutrino's restframe is ill defined,19 the formalism is usually done in the laboratory frame and expressed in terms of \(E_{\nu}\)- and \(\Delta m^{2}\)-dependent oscillation lengths, _i.e._,
Footnote 19: If the lightest neutrino has zero mass it doesn’t have a restframe.
\[l(E_{\nu})\equiv\lambda/2\pi=E_{\nu}/1.27\Delta m^{2}, \tag{47}\]
where the factor of 1.27 is specific to (\(l,E_{\nu},\Delta m^{2}\)) units of (m, MeV, eV\({}^{2}\)), or (km, GeV, eV\({}^{2}\)). For the atmospheric and solar mass differences given above, these are
\[l^{\rm atm}(E_{\nu})\approx 320\ ({\rm km}/{\rm GeV})\times E_{\nu}\ \ \ \ {\rm and}\ \ \ l^{\rm sol}(E_{\nu})\approx 10,500\ ({\rm m}/{\rm MeV})\times E_{\nu},\]
where the units km/GeV and m/MeV are equivalent (and interchangeable), with km/GeV usually attached to atmospheric and m/MeV to solar for historical reasons.
The 320 km atmospheric length is long, but not impossibly long. It is nearly the same as the 295 km distance between J-PARC and Super Kamiokande (and the soon-to-be-completed HyperK detector), and corresponds to a 90\({}^{\circ}\) phase-change-induced oscillation maximum for 600 MeV muon-neutrinos that can be copiously produced by the J-PARC synchrotron. The 10.5 km solar length corresponds to an oscillation maximum for \(E_{\nu}\)= 3 MeV reactor anti-electron-neutrinos at a baseline of 50 km, which is the baseline for the JUNO reactor neutrino experiment that will soon be operating in China. The HyperK [79] and JUNO [80] experiments will certainly not be easy, but they will be done. If the neutrino oscillation lengths were factors of two or more longer, both experiments would be much more difficult, if not impossible.
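The oscillation lengths and baselines quoted in this section follow directly from eqn. 47 and the mass-squared differences given above:

```python
# Oscillation lengths (eqn. 47) and first-maximum baselines for the quoted Delta m^2 values.
import math

dm2_atm, dm2_sol = 2.44e-3, 7.53e-5              # eV^2

l_atm = 1.0 / (1.27 * dm2_atm)                   # km per GeV of neutrino energy
l_sol = 1.0 / (1.27 * dm2_sol)                   # m per MeV of neutrino energy
print(l_atm, l_sol)                              # ~320 km/GeV and ~10,500 m/MeV

# The first oscillation maximum sits at a 90-degree phase, i.e. at L = (pi/2) * l(E):
print(math.pi / 2 * l_atm * 0.6)                 # ~300 km for 600 MeV nu_mu (J-PARC -> SK: 295 km)
print(math.pi / 2 * l_sol * 3.0 / 1000.0)        # ~50 km for 3 MeV reactor anti-nu_e (JUNO)
```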
## 8 Conclusions
The related subjects of \({\cal CP}\)-violation and quark & neutrino flavor mixing provide peeks into some of Nature's most intimate secrets. When Fitch and collaborators measured the \(\sim\)0.2% branching fraction for \(K_{L}\)\(\rightarrow\)\(\pi^{+}\pi^{-}\) in 1963, they were seeing the influence of the \(t\)-quark that wasn't discovered until thirty years later (see Fig. 4). Thanks to Kobayashi and Maskawa, the existence and most of the properties of the \(t\)-quark (other than its mass) were pretty well understood more than twenty years before it was discovered.
### Nature has been kind
(Einstein famously said "The Lord God is subtle, but not malicious.")
A common thread that characterizes this entire story is that we have been able to probe these subjects at considerable depth, even though it didn't _a priori_ have to be that way. The 0.2% \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) branching fraction is as large as it is because of the phase-space suppression of the partial decay width for the \({\cal CP}\)-allowed \(K_{L}{\rightarrow}\,3\pi\) mode that has a \(Q\)-value of only 83 MeV. This is a consequence of the relative masses of the \(K\)- and \(\pi\)-mesons: if the kaon mass were higher and/or the pion mass were lighter, the \(K_{L}{\rightarrow}\,3\pi\) partial width would be larger, and the \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) branching fraction would be reduced. It wouldn't take very large changes in these masses to push the \(\pi^{+}\pi^{-}\) branching fraction down to a value that was below the Fitch experiment's sensitivity level. Unlike parity violation, which is a huge effect that is difficult to miss, the particle physics consequences of \({\cal CP}\)-violations in processes other than \(K_{L}\) decays have only been seen in elaborate, highly focused experiments that, absent the \(K_{L}\) measurements, very likely wouldn't have occurred.
As described above, the KM phase was only measurable because Nature's parameters are aligned in a way that meet the stringent requirements that were first identified in the Carter-Sanda papers, including a strong \(|V_{us}|{>}|V_{cb}|{>}|V_{ub}|\) hierarchy and a large enough \(t\)-quark mass to produce a \(B\)-\(\bar{B}\) mixing rate that is close to the \(B\)-meson lifetime, but not so large that the mixing rate was much faster than the lifetime. As a result, the phase could be measured, as could all three of the CKM matrix's mixing angles.
Remarkably, the same thread extends into the neutrino sector in which the PMNS matrix has a very different, almost flat hierarchy that facilitated the discovery of neutrino oscillations. This hierarchy has also enabled precise measurements of all three of its mixing angles and will likely allow for a determination of its Dirac phase in the not-so-distant future. Moreover, and as mentioned above, Nature's choices of the differences between the \(\nu_{1},\nu_{2},\nu_{3}\) eigenstate masses have made these measurements realizable.
### What's next?... \({\cal CPT}\)?
Although \({\cal CP}\) violations only show up as small, subtle effects in particle physics experiments, the influence of \({\cal CP}\) violations on the evolution of the Universe is glaringly obvious [81]. However, the \({\cal CP}\) violations that we measure in \(K\)-, \(B\)- and (recently [82]) \(D\)-meson decays, which are produced by the KM mechanism for quarks and, likely, leptons, cannot come close to accounting for the matter-antimatter asymmetry of the Universe (see, _e.g._, ref. [83]). There must be other sources of \({\cal CP}\) violation that have yet to be discovered, and these are the primary motivations for the LHCb and Belle II experiments. With the KM phase, Nature has given us a glimpse of \({\cal CP}\) violation, but not the whole story.
But what about \({\cal CPT}\)? The \({\cal CPT}\) theorem [84] states that any quantum field theory that is _Lorentz invariant_, has _local point-like interaction vertices_, and is _Hermitian_ (_i.e._, conserves probability) is invariant under the combined operations of \({\cal C}\), \({\cal P}\) and \({\cal T}\). Since the three QFTs that make up the Standard Model--QED, QCD, and Electroweak theory--all satisfy these criteria, \({\cal CPT}\) symmetry has been elevated to a kind of hallowed status in particle physics. But the non-renormalizability of quantum gravity calls into question the validity of the locality assumption [85], and suggests that at some scale, \({\cal CPT}\) has to be violated. Strictly speaking, this violation only has to occur at impossibly high energies near the \({\cal O}(10^{19}\,{\rm GeV})\) Planck scale, but maybe the same thread of Nature that gives us a taste of \({\cal CP}\) violations at energy scales well below the scale needed to explain the baryon asymmetry of the Universe will give us a hint of \({\cal CPT}\) violation at a scale below the one that's needed to rescue quantum gravity. In any case, since it is a fundamental feature of the Standard Model, \({\cal CPT}\) invariance should be routinely challenged at the highest feasible experimental sensitivities.
To date the most stringent test of the \({\cal CPT}\) prediction that particle-antiparticle masses are equal comes from kaon physics20[86, 87] and sets a 90% C.L. limit on the \(K^{0}\)-\(\bar{K}^{0}\) mass difference of
Footnote 20: The \({\cal CPT}\) test in kaon physics involves a comparison of the phase of the \(\eta_{+-}\) \({\cal CP}\)-violation parameter in \(K_{L}{\rightarrow}\pi^{+}\pi^{-}\) decays with the “superweak phase” \(\phi_{\rm SW}{\equiv}{\tan}^{-1}\left(2(M_{K_{L}}{-}M_{K_{S}})/(\Gamma_{K_{S}}{-}\Gamma_{K_{L}})\right)\).
\[|M_{\bar{K}^{0}}-M_{K^{0}}|<5\times 10^{-19}\ {\rm GeV}, \tag{48}\]
which is seven orders-of-magnitude more stringent than that for \(m_{\bar{e}}{-}m_{e}\) and nine orders-of-magnitude more stringent than the \(m_{\bar{p}}{-}m_{p}\) limit. This high sensitivity is because of the magic of the virtual processes in the Fig. 4a box diagram and the unique properties of the neutral kaons.
The kaon result is based on experiments done nearly thirty years ago with data samples containing tens of millions of \(K{\rightarrow}\pi^{+}\pi^{-}\) decays. One of the reasons these measurements have not been updated since then is that technologies for improved sources of flavor-tagged neutral kaons have not been pursued. However, if the above-described three-order-of-magnitude improvements in \(e^{+}e^{-}\) collider luminosity that were developed for the \(B\)-factories were applied to a dedicated collider operating at the \(J/\psi\) mass peak, multi-billion-event/year samples of flavor-tagged neutral kaons, produced via \(J/\psi{\rightarrow}K^{\mp}\pi^{\pm}K^{0}(\bar{K}^{0})\) decays, would be available to support \({\cal CPT}\) tests with more than an order-of-magnitude improved sensitivity. More modest improvements in \({\cal CPT}\) sensitivity would be provided by new colliders in the \(\tau\)-charm energy range that are being proposed in China [88] and Russia [89], if they spend sufficient time operating at the \(J/\psi\) peak.
Maybe during the next sixty years, \({\cal CPT}\) violation studies will prove to be as interesting and provocative as \({\cal CP}\) studies have been during the past sixty years.
## Acknowledgment
I thank the organizers of KM50 for inviting me to participate in this interesting symposium, the editors of PTEP for inviting this manuscript, and this paper's referee, whomever he or she may be, for many important corrections and helpful suggestions. This work was supported in part by the National Research Foundation of Korea under Contract No. NRF-2022R1A2C1092335.
|
2302.14734 | Langlands duality for skein modules of 3-manifolds | I introduce new Langlands duality conjectures concerning skein modules of
3-manifolds, which we have made recently with David Ben-Zvi, Sam Gunningham,
and Pavel Safronov. I recount some historical motivation and some recent
special cases where the conjecture is confirmed. The proofs in these cases
combine the representation theory of double affine Hecke algebras and a new
1-form symmetry structure on skein modules related to electric-magnetic
duality. This note is an expansion of my talk given at String Math 2022 in
Warsaw, and is submitted to the String Math 2022 Proceedings publication. | David Jordan | 2023-02-28T16:37:51Z | http://arxiv.org/abs/2302.14734v1 | # Langlands duality for skein modules of 3-manifolds
###### Abstract.
I introduce new Langlands duality conjectures concerning skein modules of 3-manifolds, which we have made recently with David Ben-Zvi, Sam Gunningham, and Pavel Safronov. I recount some historical motivation and some recent special cases where the conjecture is confirmed. The proofs in these cases combine the representation theory of double affine Hecke algebras and a new 1-form symmetry structure on skein modules related to electric-magnetic duality. This note is an expansion of my talk given at String Math 2022 in Warsaw, and is submitted to the String Math 2022 Proceedings publication.
## 1. Introduction
In his 1967 letter to Andre Weil, Robert Langlands proposed a mysterious conjectural correspondence - now known as Langlands reciprocity - between what are now called **automorphic** representations of a simple algebraic group \(G\) and **Galois** representations to its _Langlands dual_ group \({}^{L}G\). Let us fix a number field \(\mathbb{F}\) with ring of adeles \(\mathbb{A}_{\mathbb{F}}\). Automorphic representations of \(G\) are, loosely speaking, certain \(G(\mathbb{A}_{\mathbb{F}})\)-representations which may be realised inside \(L^{2}(G(\mathbb{A}_{\mathbb{F}})/G(\mathbb{F}))\), and hence are closely related to automorphic forms. The Langlands dual group \({}^{L}G\) is another simple algebraic group obtained from \(G\) by interchanging root data. Galois representations to \({}^{L}G\) are group homomorphisms from the absolute Galois group \(\Gamma(\overline{\mathbb{F}}/\mathbb{F})\) to \({}^{L}G(\mathbb{C})\). Among the many remarkable features of Langlands reciprocity already evident is that the objects it relates are of _a priori_ very different nature. Langlands reciprocity and its many consequences and relatives - including variants for function fields and local fields - became collectively known as **arithmetic Langlands duality**.
Decades after Langlands circulated his conjectures, Beilinson and Drinfeld discovered a new kind of Langlands duality - also conjectural - taking place between certain moduli spaces of bundles over a smooth projective curve \(X\). Their proposal, now known as the **de Rham geometric Langlands duality**, asserts an equivalence between different algebro-geometric categories living over two such moduli spaces. On the automorphic side lies a certain category of \(\mathcal{D}\)-modules - i.e. systems of polynomial differential equations - on a moduli space of holomorphic \(G\)-bundles on \(X\). On the Galois side lies a category of coherent sheaves - i.e., finite quotients of maps between vector bundles - on a moduli space of flat \({}^{L}G\)-bundles (i.e. \({}^{L}G\)-bundles equipped with a flat connection 1-form) on \(X\). As in the arithmetic setting, the geometric Langlands duality relates objects of _a priori_ very different nature, and bearing no elementary relationship.
The thread connecting the arithmetic and geometric conjectures passes through a deep series of analogies - known as Weil's Rosetta stone - which relate the arithmetic of function fields to the geometry of complex curves. One imagines the ring of adeles \(\mathbb{A}_{\mathbb{F}}\) of a function field \(\mathbb{F}\) to be a complex curve, with the primes as points, and hence regards \(G(\mathbb{A}_{\mathbb{F}})\) as giving "local coordinates" of a \(G\)-bundle in the formal neighborhood of each prime; on the geometric side this corresponds to the specification of a \(G\)-bundle by its Taylor expansion at each point of a curve. One thereby situates the category of \(\mathcal{D}\)-modules on \(\mathrm{Bun}_{G}\) on the geometric side opposite to a category of \(\ell\)-adic sheaves on the arithmetic side; these in turn are a kind of categorification of automorphic forms, via the "sheaf-to-function" correspondence for function fields. On the other side of Langlands duality, the absolute Galois group is well-understood to be an arithmetic analog of the fundamental group of the curve.
Beilinson and Drinfeld's conjectures were later deformed in the language of twisted \(\mathcal{D}\)-modules by Feigin, Frenkel and Gaitsgory, following a proposal of Stoyanovsky. In their **quantum de Rham geometric Langlands duality** a new symmetry emerges which is not present in the arithmetic or geometric settings: the distinction between automorphic and Galois sides evaporates, and both moduli spaces which appear are those of holomorphic \(G\)-bundles (respectively, \({}^{L}G\)-bundles), while the categories appearing are both categories of (\(\Psi\)- or \({}^{L}\Psi\)-twisted, respectively) \(\mathcal{D}\)-modules. Here the twisting depends on a possibly infinite complex parameter \(\Psi\in\mathbf{CP}^{1}\), and \({}^{L}\Psi=-1/(n_{G}\Psi)\), where \(n_{G}\in\{1,2,3\}\) is the lacing number of \(G\). The classical geometric Langlands conjecture is the special case \(\Psi=\infty,{}^{L}\Psi=0\).
A physical manifestation of quantum geometric Langlands duality was discovered by Kapustin and Witten, as an instance of \(S\)**-duality**, also known as Montonen-Olive duality. This duality relates certain low-energy approximations called "twists" of \(\mathcal{N}=4\) super Yang-Mills quantum field theory in four dimensions. \(S\)-duality is a sort of non-abelian Fourier transform, whose existence is predicted by \(\mathcal{M}\)-theory, and which can be understood as a generalisation of electric-magnetic duality in Maxwell theory. In this telling, \(G\) and \({}^{L}G\) each appear as gauge groups of distinct theories, and the parameters \(\Psi,{}^{L}\Psi\) each give \(\mathbb{CP}^{1}\) charts on the space of twists. \(S\)-duality asserts an equivalence of physical theories,
\[\mathcal{Z}_{G,\Psi}\simeq\mathcal{Z}_{{}^{L}G,{}^{L}\Psi},\]
and Kapustin and Witten explained how to derive the categories appearing in both the classical and quantum geometric Langlands conjectures as categories of surface operators \(\mathcal{Z}_{G,\Psi}(X),\mathcal{Z}_{{}^{L}G,{}^{L}\Psi}(X)\). This reformulation has been very influential in both mathematics and physics communities, and motivates much of the discussion which follows.
Ben-Zvi and Nadler proposed a further combinatorial incarnation of the geometric Langlands duality, now known as the **Betti geometric Langlands duality**. The key idea is to apply the Riemann-Hilbert correspondence on both sides of the de Rham conjectures. On the Galois side, to a de Rham local system on \(X\) is associated its monodromy homomorphism \(\pi_{1}(X)\to{}^{L}G\). On the automorphic side, to a \(\mathcal{D}\)-module on \(\mathrm{Bun}_{G}(X)\) is associated its sheaf of flat sections over \(\mathrm{Bun}_{G}(X)\). The Betti geometric Langlands duality conjectures assert an equivalence between a suitable category of coherent sheaves on the character variety - the moduli space
of homomorphisms \(\pi_{1}(X)\to{}^{L}G\) - and a category of sheaves of vector spaces on \(\operatorname{Bun}_{G}(X)\). In particular the latter is conjectured, like the former, to depend only on the topological surface \(\Sigma\) underlying the algebraic curve \(X\).
The Galois side of the Betti correspondence was subsequently \(q\)-deformed in the framework of quantum groups in our works with Brochier, Ben-Zvi, Snyder and Safronov, where it was called the quantum character theory TFT, and given the structure of a fully extended four-dimensional TFT (with divergent partition function, what is sometimes called a once-categorified 3-dimensional TQFT, or a (3+1)-TQFT). The essential idea in these constructions was that the Betti Galois moduli spaces may be regarded as mapping stacks \(\operatorname{Maps}(\Sigma,BG)\) into the classifying stack \(BG\), with \(\operatorname{Coh}(BG)=Rep(G)\). The stack \(BG\) has a canonical 2-shifted symplectic structure, whose deformation quantization \(BG_{q}\) is given algebraically by the braided tensor category \(\operatorname{Rep}_{q}(G)\). Using the language of higher Morita theory and factorization homology, it was possible to make formal sense of a functor "\(\operatorname{Maps}(\Sigma,BG_{q})\)", and to establish that this extends to a 4-dimensional TQFT with divergent partition function.
The purpose of this note is to share a recent conjecture we have made jointly with David Ben-Zvi, Sam Gunningham and Pavel Safronov, which envisions Langlands duality between the skein modules of closed oriented 3-manifolds. The \(G\)-skein module \(\operatorname{Sk}_{G,q}(M)\) of a closed, oriented 3-manifold is the formal linear span of \(G\)-labelled ribbon graphs, modulo local relations modelled on the ribbon braided tensor category \(\operatorname{Rep}_{q}(G)\) of representations of the quantum group \(U_{q}\mathfrak{g}\); skein categories are defined in a similar spirit (see Section 2 for precise definitions and some examples). In its simplest form, our conjecture states:
**Conjecture 1.1**.: _Let \(G\) be a semisimple algebraic group, and let \({}^{L}G\) denote its Langlands dual group. Let \(M\) be a closed, oriented 3-manifold, suppose that \(\Psi\in\mathbb{C}^{\times}\) is transcendental, and let \(q=e^{\mathrm{i}\Psi}\) and \({}^{L}q=e^{\mathrm{i}^{L}\Psi}\). Then we have a linear isomorphism,_
\[\operatorname{Sk}_{G,q}(M)\cong\operatorname{Sk}_{{}^{L}G,{}^{L}q}(M).\]
See Section 4 for the statement of various refinements and strengthenings of the conjecture, and Section 5 for proofs in a few special families of examples.
**Remark 1.2**.: It was conjectured by Edward Witten and proved in our work with Gunningham and Safronov that the skein modules are in fact finite-dimensional under the above assumptions, so that our conjecture can be rephrased as an equality of natural numbers:
\[\dim\operatorname{Sk}_{G,q}(M)=\dim\operatorname{Sk}_{{}^{L}G,{}^{L}q}(M).\]
Moreover our method of proof established that the dimension is independent of \(q\) so long as it is taken to be transcendental (it is expected to suffice to require only that \(q\) is not root of unity). Hence the precise specification of parameters \(q\) and \({}^{L}q\) in the conjecture is somewhat superfluous, and is included only to give the reader a sense for the parallel with de Rham quantum geometric Langlands conjectures.
In particular, to confirm the conjecture as stated it is enough to compute the generic dimension of the skein module for \(G\) and for \({}^{L}G\). In fact, in all the cases where we can confirm the conjecture, we do so by independently computing both dimensions. Because of the transcendental relationship between \(q\) and \({}^{L}q\), a canonical isomorphism of vector spaces (e.g. one intertwining the action of the diffeomorphism group on each side) is likely to depend analytically, rather than algebraically, on the parameter \(q\).
The following table summarises the many instances of Langlands duality recounted so far. Note the symmetry shared between the quantum de Rham and quantum skein module formulations.
| Shorthand | Object of study | Automorphic/A side | | Galois/B side |
|---|---|---|---|---|
| Arithmetic | Number field \(\mathbb{F}\) | \(\{V\subseteq L^{2}(G(\mathbb{A}_{\mathbb{F}})/G(\mathbb{F}))\}\) | \(\leftrightarrow\) | \(\{\rho:\Gamma(\overline{\mathbb{F}}/\mathbb{F})\rightarrow{}^{L}G\}\) |
| **Classical** | | | | |
| de Rham | Complex curve \(X\) | \(\mathcal{D}(\mathrm{Bun}_{G}(X))\) | \(\leftrightarrow\) | \(\mathrm{Coh}(\mathrm{Loc}_{{}^{L}G}(X))\) |
| Betti | Real surface \(\Sigma\) | \(\mathrm{Shv}(\mathrm{Bun}_{G}(\Sigma))\) | \(\leftrightarrow\) | \(\mathrm{Coh}(\mathrm{Ch}_{{}^{L}G}(\Sigma))\) |
| **Quantum** | | | | |
| de Rham | Complex curve \(X\) | \(\mathcal{D}_{\Psi}(\mathrm{Bun}_{G}(X))\) | \(\leftrightarrow\) | \(\mathcal{D}_{{}^{L}\Psi}(\mathrm{Bun}_{{}^{L}G}(X))\) |
| Betti | Real surface \(\Sigma\) | \(\mathrm{Shv}_{q}(\mathrm{Bun}_{G}(\Sigma))\) | \(\leftrightarrow\) | \(\mathrm{Coh}_{q}(\mathrm{Ch}_{G}(\Sigma))\) |
| Skein | 3-manifold \(M\) | \(\mathrm{Sk}_{G,q}(M)\) | \(\cong\) | \(\mathrm{Sk}_{{}^{L}G,{}^{L}q}(M)\) |
The most basic motivation for the conjecture is that the \({}^{L}G\)-skein module of a 3-manifold \(M\) specialises at \(q=1\) to the algebra of functions on the \({}^{L}G\)-character variety of \(M\), and the skein category of a surface \(\Sigma\) specialises at \(q=1\) to the \({}^{L}G\)-character stack of \(\Sigma\). Indeed, a \({}^{L}G\)-labelled ribbon graph defines in a straightforward way a holomorphic function on the \({}^{L}G\)-character variety, essentially by sending a \({}^{L}G\)-local system \(E\) to the trace of an associated bundle living along the support of the graph: these functions, and their deformations, are called Wilson loop observables. It is then natural to hope - though far from automatic - that the one-parameter family of quantum deformations given by the skein module indeed coincides with the one-parameter family of twists in the Kapustin-Witten theories. The rich structure of extended topological field theory enjoyed by skein modules - they are expected to coincide with the value of the quantum character theory TFT on 3-manifolds, and hence to describe the Kapustin-Witten state space - as well as their natural interpretation as \(q\)-deformed Wilson loop operators lends support to this hope.
Another source of motivation for our conjecture comes from the deep analogies of Mazur, Kapranov, and Reznikov, relating 3-manifold topology to algebraic number theory, and from the subsequent development of _arithmetic quantum field theory_. In this "MKR dictionary" - which builds on Weil's Rosetta stone - one imagines a number field (more precisely its ring \(\mathbb{O}\) of integers) to be a closed 3-manifold; one treats ideals in \(\mathbb{O}\) as links, and prime ideals as knots. Units in \(\mathbb{O}\) are treated as embedded surfaces, field extensions are treated as branched covers, homology groups as ideal class groups, etc. Building on this dictionary, the impetus of arithmetic field theory is to compute arithmetic invariants of number fields as if they were partition functions for a quantum field theory. This idea was explored first by Kim, who formulated an arithmetic analog of 3D Chern-Simons TQFT as a mechanism for constructing \(L\)-functions, and taken up more recently by Ben-Zvi, Sakellaridis, and Venkatesh who have proposed to study various aspects of arithmetic Langlands duality using ideas from Kapustin and Witten's 4D TQFT.
It is perhaps remarkable in hindsight given the MKR dictionary that our conjecture so significantly post-dates the geometric Langlands conjectures. Just as
skein theory attaches a vector space to each 3-manifold, arithmetic Langlands duality attaches vector spaces - the space of automorphic forms, and the space of algebraic functions on the arithmetic character variety, on each side of the duality - to each number field. The geometric Langlands dualities differ in two key respects: they involve complex curves (\(\leftrightarrow\) function fields) as opposed to 3-manifolds (\(\leftrightarrow\) number fields), and they replace vector space-valued invariants with categorical ones. This of course makes Langlands duality for skein modules more elementary to formulate, and to falsify, than geometric Langlands duality, since it reduces to a statement about equality of integer dimensions rather than about equivalences of \(\infty\)-categories. From a more physical perspective, it is interesting to note that Kapustin and Witten expressed quantum geometric Langlands duality for _2-dimensional manifolds_ using a _four-dimensional_ QFT, but largely left aside the intervening question of mathematically describing the Hilbert spaces attached to _3-manifolds_.
This sense of anachronism can be partly explained by posing the following two natural and (to our knowledge) unanswered questions concerning Langlands duality in the arithmetic and physical settings. The answers to each question would fill in some more of the dots between our conjectures and the canon of Langlands duality which I have surveyed above. On the side of arithmetic:
**Question 1.3**.: What sense, if any, can be made of a **quantum arithmetic Langlands duality**?
In particular, what role does the symplectic structure on character varieties, and its deformation quantization, play in number theory, and what is the arithmetic meaning of the deformation parameter \(q\)? On the physical side:
**Question 1.4**.: What is the Hilbert space attached to a closed oriented 3-manifold by the Kapustin-Witten \(A\)-side twist at \(\Psi=0\)?
The answer to this question would sit across Langlands duality from the \({}^{L}G\)-character variety of \(M\), and should have an \(A\)-side flavour involving symplectic/contact geometry, Fukaya categories, Floer theory, etc. One may contemplate possible answers either by degenerating our understanding for generic \(\Psi\) (as a skein module), or by extrapolating up in dimension from the case of surfaces, where the \(A\)-side twist at \(\Psi=0\) is a category of \(\mathcal{D}\)-modules on \(\operatorname{Bun}_{G}(X)\).
One among many complications which arise when contemplating this \(A\)-side twist at \(\Psi=0\) for 3-manifolds is that while \(S\)-duality predicts an equivalence of theories \(\mathcal{Z}_{G,\Psi}\simeq\mathcal{Z}_{{}^{L}G,{}^{L}\Psi}\), it nevertheless _interchanges_ the Dirichlet and Nahm pole boundary conditions. The skein module is, in some sense by definition, the orbit of the Dirichlet boundary condition by the Wilson loop operators of the theory; and for irrational non-zero \(\Psi\) (equivalently, \(q\) not a root of unity) we anticipate that these orbits will coincide. We however do not expect such a coincidence at \(\Psi=0,\infty\), and therefore one must formulate the Nahm pole boundary condition on 3-manifolds in mathematical terms.
|
2309.06724 | Deep Nonparametric Convexified Filtering for Computational Photography,
Image Synthesis and Adversarial Defense | We aim to provide a general framework for computational photography that
recovers the real scene from imperfect images, via the Deep Nonparametric
Convexified Filtering (DNCF). It consists of a nonparametric deep network to
resemble the physical equations behind the image formation, such as denoising,
super-resolution, inpainting, and flash. DNCF has no parameterization dependent
on training data, therefore has a strong generalization and robustness to
adversarial image manipulation. During inference, we also encourage the network
parameters to be nonnegative and create a bi-convex function on the input and
parameters, and this adapts to second-order optimization algorithms with
insufficient running time, having 10X acceleration over Deep Image Prior. With
these tools, we empirically verify its capability to defend image
classification deep networks against adversary attack algorithms in real-time. | Jianqiao Wangni | 2023-09-13T04:57:12Z | http://arxiv.org/abs/2309.06724v2 | Deep Nonparametric Convexified Filtering for Computational Photography, Image Synthesis and Adversarial Defense
###### Abstract
We aim to provide a general framework for computational photography that recovers the real scene from imperfect images via the Deep Nonparametric Convexified Filtering (DNCF). It consists of a nonparametric deep network that resembles the physical equations behind image formation, such as denoising, super-resolution, inpainting, and flash. DNCF has no parameterization dependent on training data and therefore has strong generalization and robustness to adversarial image manipulation. During inference, we also encourage the network parameters to be non-negative and create a bi-convex function of the input and parameters, which adapts to second-order optimization algorithms under limited running time, achieving a 10X acceleration over DIP. With these tools, we empirically verify its capability to defend image classification deep networks against adversarial attack algorithms in real time.
## 1 Introduction
Computational photography aims to recover real scenes from imperfect images captured by cameras. Understanding the physics behind image formation, such as gain control, aperture, exposure time, and depth of focus, is fundamental work in the area. Besides the physical principles, which are clear and universal, many factors, such as object depth, are treated as random variables. For computational photography, we usually solve an inverse problem with a statistical prior over these factors. The characterization of the prior is thus an important task and contributes significantly to the selection of a suitable algorithm. Take image denoising, for example: the Wiener filter works better for Gaussian white noise, while the median filter fits salt-and-pepper noise [6]. Besides, any estimation or pre-calibration of the noise level function [20][15] helps with denoising. Meanwhile, some other imaging factors are more difficult to estimate from limited images. Take the single-image deblurring task as an example: there is no proper prior knowledge of essential variables like object depth, which directly affects the point spread function (PSF) of the out-of-focus blur kernel; object and camera movement are also unknown. There are works in the line of deep learning that learn a statistical prior from massive training data for photography tasks, e.g., super-resolution [9], image dehazing [3], deblurring [26], and denoising [32][30]. However, rigorously, such approaches have to rely on the assumption that testing images resemble training images to ensure generalization.
We view the aforementioned computational photography algorithms as _nonparametric_, in that their parameters are specific to each image and do not depend on training data. On a very different track of applications, where images are used in semantic tasks like classification with deep neural networks, researchers have found that the networks are extremely easy to fool, and the images seem to be deceptive in the eyes of deep learning [2][25][14][17][4]. Without precaution, an adversarial party can manually craft images that make a deep network produce incorrect predictions with high confidence, even when those images are only manipulated in details that human eyes are unable to notice. We understand this from the intrinsic nature of deep networks, which are mostly over-parameterized and whose operations are mostly differentiable. It is easier to get better performance with deep learning, as these parameters are practical to optimize. This can be a double-edged sword: if one party can easily train the parameters of the network, then an adversarial party can easily manipulate the images to the wrong side. This inspires us to think that a deep network with no training-specific parameters leaves no chance of attack to the adversary, being exactly the opposite of conventional deep learning approaches.
A recent method named Deep Image Prior (DIP) [27][7][19][13] addresses the problem by describing an image prior implicitly through the network structure, so it no longer needs any training. DIP assumes that each image has an underlying parameterization that is specific to the individual image itself. The target image \(I_{t}\) is synthesized by a neural network \(f\) with parameters \(\theta\), and it is
generated from a random vector variable \(z\).
\[I_{t}=f_{\theta^{*}}(z). \tag{1}\]
The network may consist of convolution, downsampling, deconvolution, and upsampling layers chosen specifically for different tasks. By simply defining the structure of \(f\), the target image \(I_{t}\) is reconstructed through gradient-based optimization w.r.t. \(\theta\), based on the observed source image \(I_{s}\),
\[\theta^{*}=\arg\min_{\theta}\mathcal{L}\left(I_{s},f_{\theta}(z)\right). \tag{2}\]
where the loss function \(\mathcal{L}\) can be the squared \(\ell_{2}\) distance or total variation. The interpretation behind the formulation is that the target image should be an optimal point of a regularized energy function \(E(I;I_{s})+\mathcal{S}(I)\), where the regularization satisfies \(\mathcal{S}(I)=0\) if \(\exists z,I=f_{\theta}(z)\) and \(\mathcal{S}(I)=+\infty\) otherwise; that is, \(I\) is constrained by the network structure, and the prior is defined by its expressiveness.
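As a rough illustration of Eqs. (1)-(2), the following PyTorch-style sketch fits a small network to a single observed image; the tiny architecture, iteration count, and optimizer here are placeholder assumptions for exposition, not the implementation used in the paper.

```python
import torch
import torch.nn as nn

def dip_reconstruct(I_s, num_iters=400, lr=1e-2):
    """Fit a network to a single observed image I_s of shape (1, 3, H, W)."""
    f_theta = nn.Sequential(                      # placeholder architecture
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )
    z = torch.randn(1, 32, I_s.shape[-2], I_s.shape[-1])  # fixed random code
    opt = torch.optim.Adam(f_theta.parameters(), lr=lr)
    for _ in range(num_iters):
        opt.zero_grad()
        loss = ((f_theta(z) - I_s) ** 2).mean()   # squared l2 data term, Eq. (2)
        loss.backward()
        opt.step()
    return f_theta(z).detach()                    # reconstructed target image I_t
```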
We study a network structure for \(f\) that better fits the physical process, together with a faster inference pipeline. We start by considering the simplicity of loss functions driven by physics; e.g., consider the following deblurring model:
\[\widehat{I}_{t},\widehat{\mathcal{K}}=\min_{I_{t},\mathcal{K}}\|I_{t}* \mathcal{K}-I_{s}\|^{2}, \tag{3}\]
where \(\mathcal{K}\) is the unknown point spread function (PSF). Although this blind deconvolution model is nonconvex w.r.t. the unknown variables \(\{I_{t},\mathcal{K}\}\), once the PSF is known, the loss function is convex w.r.t. the target image. Neural networks are typically composed of many layers, are inherently highly nonlinear and nonconvex w.r.t. the network parameters and inputs, and may consist of millions of parameters, so they lose the simplicity of physical equations, many of which are linear or convex. In this paper, we explore a framework that is as expressive as deep networks while being as explainable and nonparametric as blind deconvolution.
## 2 Approach
This section presents our nonparametric deep network approach to computational photography. Our first consideration is to decompose the neural network into two meaningful parts.
Figure 1: Image denoising. Left to right: clean, noisy, TV [5], DIP (92.7 sec), DeepRED[19] +TV (68.1 sec), DNCF (11.9 sec).
A physical interference model \(f\) sequentially connects to the generative model \(g\), so that \(I_{s}=f_{\theta}(g_{\theta}(z))\). Here \(f\) and \(g\) have different sets of network parameters, but for convenience we use \(\theta\) to represent the concatenation of both, which also indicates that the two networks can be jointly trained. Ideally, the intermediate feature \(g_{\theta}(z)\) simulates the real scene \(I_{t}\), and \(f_{\theta}\) simulates the physical interference, such as blur and noise, to approximate the observed image \(I_{s}\), though \(I_{t}\) may not directly appear in any layer of \(f\). Our first formulation is the following objective with an explicit convolution kernel \(\mathcal{K}\):
\[I_{t},\theta^{\star}=\arg\min_{y,\theta}\|y*\mathcal{K}-I_{s}\|+\beta\|y-g_{ \theta}(z)\|. \tag{4}\]
We wish to preserve the strong expressivity of \(g\) to simulate a sufficiently complex scene while keeping \(f\) a simple function that mimics a physical corruption process. To allow the physical interference to be more complex, we substitute the convolution by \(f\) and obtain:
\[I_{t},\theta^{\star}=\arg\min_{y,\theta}\|f_{\theta}(y)-I_{s}\|+\beta\|y-g_{ \theta}(z)\|. \tag{5}\]
Here we pursue a smart initialization of \(g_{\theta}(z)\), such as \(y=G_{\rho}(I_{s})+\epsilon\), where \(G\) can be a Gaussian smoothing filter with standard deviation \(\rho\) or bicubic interpolation, depending on the task. This ensures that, in the degenerate case of \(f\) being an identity mapping, the gradient of the objective function is nonzero. We initialize the intermediate feature at \(y_{0}=G_{\rho}(I_{s})\) to approximate a feasible image, but not so close to \(I_{s}\) as to yield a degenerate solution with \(I_{t}=I_{s}\) and \(f\) an identity mapping. We therefore add an additional regularization \(\mathcal{R}\) and further refine the results by gradient descent, jointly optimizing the following:
\[\mathcal{L}(\theta,y)=\|f_{\theta}(y)-I_{s}\|^{2}+\beta\|y-G_{\rho}(I_{s})\|^{ 2}+\mathcal{R}(\theta,y).\]
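A minimal sketch of this objective is given below, assuming a crude average-pooling stand-in for the smoothing filter \(G_{\rho}\) and leaving the regularizer \(\mathcal{R}\) as a pluggable callable; the function and variable names are illustrative only, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def smooth(I, k=5):
    # crude stand-in for the Gaussian smoothing filter G_rho
    return F.avg_pool2d(I, k, stride=1, padding=k // 2)

def dncf_loss(f_theta, y, I_s, beta=0.1, reg=None):
    data_term = ((f_theta(y) - I_s) ** 2).mean()            # ||f_theta(y) - I_s||^2
    anchor_term = beta * ((y - smooth(I_s)) ** 2).mean()     # beta * ||y - G_rho(I_s)||^2
    reg_term = reg(f_theta, y) if reg is not None else 0.0   # R(theta, y)
    return data_term + anchor_term + reg_term

# Joint optimization of theta and the explicit intermediate image y, e.g.:
#   y = smooth(I_s).clone().requires_grad_(True)
#   opt = torch.optim.Adam(list(f_theta.parameters()) + [y], lr=1e-2)
```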
We refer to our method as Deep Nonparametric Convexified Filtering (DNCF). The method can degenerate to the simple filter \(G\) as a trivial solution, and if \(\beta\) is small enough, it can lead to a fixed-point solution with \(y=f_{\theta}(y)=I_{s}\). We generate the real scene by choosing the better result (OPT) according to a photometric measurement, via heuristic selection.
Figure 2: Visualization of Super-resolution with limited running time. Left to right: high resolution, low resolution, DIP (353 sec), DeepRED + TV (112.1 sec), DNCF (12.0 sec).
This prevents the hard cases where DIP degenerates, e.g., when the current result \(f_{\theta}(y)\) within a restricted running time still resembles a random signal (as \(y\) is randomly initialized); in this case we take the passive prediction \(y\), or try the more aggressive prediction \(f_{\theta}(f_{\theta}(y^{\star}))\):
\[I_{t}=OPT(y^{\star},f_{\theta}(y^{\star}),f_{\theta}(f_{\theta}(y^{\star}))),\quad y^{\star}=\arg\min_{y}\min_{\theta}\mathcal{L}(y,\theta).\]
Inspired by previous work on regularization for blind deconvolution, we also seek a proper formulation of \(\mathcal{R}\), ideally with convexity and easier inference. We note that the second quadratic term \(\|y-G(I_{s})\|^{2}\) contributes strong convexity to the objective function and maintains consistency between the two branches of prediction. We also use the fact that a non-negative combination of convex functions is still convex. By induction, a sequential network \(f\) is convex w.r.t. the random vector \(z\), given input data \(I_{s}\), if all weights \(\theta\) have non-negative elements and the activation function, like ReLU, is convex and non-decreasing. The pioneering work on input-convex networks [1] uses this rule and puts a hard non-negativity constraint on the parameters, which also limits the expressiveness of the network and therefore the final performance. In this paper, we use an interpolation between the convex network and an arbitrary one, through a soft regularization on the parameter weights. We use \(\mathcal{R}(\theta)\) to denote the additional regularization on the negative weights:
\[\mathcal{R}(\theta)=\gamma\|\max(-\theta,0)\|. \tag{6}\]
The norm may be \(\ell_{1}\) or \(\ell_{2}\). In the extreme case where \(\gamma\) goes to infinity, this becomes a constrained problem that requires all weights to be non-negative, as in [1]. Another middle way between rigorous input convexity and nonconvexity is to limit only some channels, or some layers, to have non-negative weights:
\[\mathcal{R}^{\prime}(\theta)=\sum\gamma\|\max(-\theta[indx],0)\|, \tag{7}\]
where \(indx\) denotes the indices of a subset of parameters. We typically only regularize layers above a predefined depth, keeping the lower layers expressive.
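The soft non-negativity penalty of Eqs. (6)-(7) can be sketched as follows; the depth threshold and norm choice are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

def nonneg_penalty(model, gamma=1e-2, min_depth=0, norm_p=1):
    """Soft penalty gamma * ||max(-theta, 0)|| over selected layers (Eqs. 6-7)."""
    layers = [m for m in model.modules() if getattr(m, "weight", None) is not None]
    penalty = 0.0
    for depth, layer in enumerate(layers):
        if depth >= min_depth:                               # only layers above a chosen depth
            neg_part = torch.clamp(-layer.weight, min=0.0)   # max(-theta, 0)
            penalty = penalty + neg_part.norm(p=norm_p)
    return gamma * penalty
```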
We use alternating optimization between the network parameters \(\theta\) and the intermediate variables \(y\), similar to Expectation-Maximization (EM). Although EM is nonconvex in general and hard to optimize, part of it might have an analytical solution. For example, gradient descent can achieve a linear convergence rate on a strongly convex and smooth function; that is, the suboptimality gap decreases at an exponential speed [22]. However, without strong convexity, the decrease slows down to an inversely linear rate (for smooth and convex functions) [21]. Unfortunately, the (strong) convexity property does not hold for popular approaches based on neural networks.
## 3 Computational photography
**Denoising.** Blind denoising recovers the underlying clean image from a noisy observation without knowledge of the noise formation process. We assume the additive noise model \(I_{s}=I_{t}+\epsilon\), where \(I_{s}\) is the source image for this task, i.e., the noisy image, \(I_{t}\) is the target image, i.e., the clean image, and \(\epsilon\) follows a Gaussian distribution. DNCF optimizes the following objective function (with \(G\) as Gaussian smoothing):
\[\mathcal{L}(\theta,y)=\|f_{\theta}(y)-I_{s}\|^{2}+\beta\|y-G(I_{s})\|^{2}+ \mathcal{R}(\theta,y).\]
**Texture synthesis and inpainting.** This technique [24][16][11][10] is complementary to computational photography.
Figure 3: Visualization of Super-resolution with limited running time. Left to right: high resolution, low resolution, DIP (353 sec), DeepRED + TV (112.1 sec), DNCF (12.0 sec).
If there is a defective region in a picture, synthesis and inpainting repair it by filling in textures from visually similar images. Further applications include artistic manipulations such as removing an object or individual from a picture while keeping the other pixels as photo-realistic as possible. There are several lines of classical approaches. For the inpainting problem with a picture \(I_{s}\) whose pixels are removed according to a mask \(M\in\{0,1\}^{H\times W}\), DIP optimizes the parameterization of the following objective function,
\[\mathcal{L}_{inpaint}(\theta,y)=\|M\odot f_{\theta}(y)-M\odot I_{s}\|^{2}+ \mathcal{R}(\theta,y)\]
**Single image super-resolution (SR).** This task generates higher-resolution images from a lower-resolution image limited by the camera sensor. If the resolution degradation is known to come from physical factors like blur, i.e., the observed image is a simple smoothed interpolation of the high-resolution target image, then SR algorithms can resemble image deblurring by alternating the estimation of the blur kernel and deconvolution, as in blind image deconvolution, Eq. (3). Denoting \(I_{s}\) as the lower-resolution source image, \(D\) as the downsampling filter, and \(G\) as a one-step SR filter, e.g., bicubic interpolation, DNCF-SR optimizes the following objective:
\[\mathcal{L}_{SR}(\theta,y)=\|D(f_{\theta}(y))-I_{s}\|^{2}+\beta\|y-G(I_{s})\|^ {2}+\mathcal{R}(\theta,y).\]
**Flash/no-flash photography.** Under poor lighting conditions, there is a challenging trade-off in flash lighting control. On the one hand, flash images contain unrealistic colors, since photons from ambient illumination are comparatively under-represented, which reduces the brightness of the natural colors of objects; on the other hand, no-flash images have to rely on strong camera gains since photons are insufficient, making stronger noise inevitable as a byproduct. A representative algorithm [23] uses a joint bilateral filter on the no-flash image, where the flash image serves to provide edge information thanks to its lower noise. To apply our framework to this task, we denote the flash image as \(I_{f}\) and the no-flash image as \(I_{nf}\), let \(\epsilon\) follow a multivariate Gaussian distribution, and take \(\gamma\) as the noise regularization and \(G\) as the Gaussian smoothing filter; DNCF-Flash then optimizes the following:
\[\mathcal{L}_{flash}(\theta,y)=\|f_{\theta}(y)-I_{nf}\|^{2}+\beta\|y-G(I_{f}+ \epsilon)\|^{2}+\mathcal{R}(\theta,y).\]
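To summarize this section, the sketch below shows how the task-specific objectives differ essentially only in their data and anchor terms; the smoothing, downsampling, mask, and flash inputs are assumed helpers, and the noise term \(\epsilon\) of the flash objective is omitted for brevity. It is an illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def make_dncf_loss(task, f_theta, I_s, beta=0.1, M=None, I_flash=None, scale=8):
    G = lambda x: F.avg_pool2d(x, 5, stride=1, padding=2)       # smoothing stand-in
    D = lambda x: F.interpolate(x, scale_factor=1.0 / scale)     # downsampling stand-in

    def loss(y):
        if task == "denoise":                   # I_s is the noisy image
            data = ((f_theta(y) - I_s) ** 2).mean()
            anchor = ((y - G(I_s)) ** 2).mean()
        elif task == "super_resolution":        # I_s is the low-resolution image
            data = ((D(f_theta(y)) - I_s) ** 2).mean()
            anchor = ((y - F.interpolate(I_s, scale_factor=scale)) ** 2).mean()
        elif task == "inpaint":                 # M masks the observed pixels
            data = ((M * (f_theta(y) - I_s)) ** 2).mean()
            anchor = torch.zeros(())
        elif task == "flash":                   # I_s is the no-flash image
            data = ((f_theta(y) - I_s) ** 2).mean()
            anchor = ((y - G(I_flash)) ** 2).mean()
        return data + beta * anchor             # plus R(theta, y) in the full model
    return loss
```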
## 4 Adversarial defense for machine learning
There are several ways to categorize adversarial attack methods. One perspective is based on the final goal of the adversary. For example, poisoning attacks add fake images and labels to degrade the performance of the overall trained models [2]; this, of course, requires access to the training sets and is plausible for web-crawled data.
Figure 4: Visualization of image inpainting. From left to right: the original image, masked image, recovered images with DIP, and the results with DNCF. (_inpaint1_ on the top and _inpaint2_ on the bottom).
Figure 5: Visualization of the optimization procedure. In left-to-right, top-down order: flash image, initialized \(I_{t}\), \(5\) intermediate results at every 200 iterations, and then the original no-flash image.
Evasion attacks, in contrast, do not mislead the trained models through the training data but try to generate samples that are hard to recognize, which resembles an extreme version of data augmentation, e.g., putting a mask on a traffic sign [12]. Some adversarial attacks target fixed classes of samples \(I\), say with label \(t\), and try to generate visually similar samples \(I^{\prime}\) with a false label \(t^{\prime}\); these are referred to as targeted attacks, while generalized attacks without a specific label designation are non-targeted attacks. From another perspective, we can categorize attacks by whether the adversary has access to the machine learning model's configuration and parameters, whose leakage boosts the capability of attacks. White-box attacks refer to methods with knowledge of the machine learning models, such as L-BFGS [25], FGSM [14], BIM [17], and CW [4].
**Fast gradient sign method (FGSM)** This was proposed in [14]. We denote the classifier model as \(C\), and the misleading images \(I^{adv}\) are generated from real samples \(I\) and the true label \(t\) as
\[I^{adv}=I+\epsilon\operatorname{sign}(\nabla_{I}\mathcal{L}(C(I),t)). \tag{8}\]
**Basic iterative method (BIM)** This method [17] can be viewed as a multi-step variant of FGSM with a smaller step size \(\alpha\) and clipping to the \(\epsilon\)-ball at each iteration, i.e., a projection operation that restricts the adversarial examples to lie within \(\epsilon\) distance from the original image \(I_{0}\):
\[I_{n}^{adv}=\textit{Proj}_{\epsilon}(I_{n-1}+\alpha\operatorname{sign}( \nabla_{I}\mathcal{L}(C(I_{n-1}),t))). \tag{9}\]
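For concreteness, a minimal sketch of FGSM (Eq. 8) and BIM (Eq. 9) against a differentiable classifier `C` is given below; clamping images to \([0,1]\) is an assumption about the input range, and cross-entropy is used as the loss \(\mathcal{L}\).

```python
import torch
import torch.nn.functional as F

def fgsm(C, I, t, eps):
    """One-step FGSM (Eq. 8) for classifier C, image I, true label t."""
    I = I.clone().requires_grad_(True)
    F.cross_entropy(C(I), t).backward()
    return (I + eps * I.grad.sign()).clamp(0, 1).detach()

def bim(C, I0, t, eps, alpha, n_steps):
    """Iterative FGSM with projection onto the eps-ball around I0 (Eq. 9)."""
    I_adv = I0.clone()
    for _ in range(n_steps):
        I_adv.requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(C(I_adv), t), I_adv)
        I_adv = I_adv + alpha * grad.sign()
        I_adv = I0 + (I_adv - I0).clamp(-eps, eps)   # Proj_eps
        I_adv = I_adv.clamp(0, 1).detach()
    return I_adv
```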
**Carlini \(\&\) Wagner** The L-BFGS attack was proposed in [25], which formulated the attack as an optimization problem of finding an optimal adversarial example \(I\) that is predicted as a designated label \(t\); it can be relaxed to a regularized optimization problem
\[\min_{I}\|I_{t}-I\|+\mathcal{L}(I,t),\quad s.t.\quad I\in[0,1]^{m}. \tag{10}\]
which is solved with L-BFGS. On top of this objective function, Carlini and Wagner proposed the following margin loss in [4] to maximize the relative confidence of the label \(t\) compared to all other labels,
\[\min_{I}\|I_{t}-I\|+\max(\max_{t^{\prime}}\mathcal{L}(I,t^{\prime})-\mathcal{ L}(I,t),0), \tag{11}\]
**DNCF as adversarial defense** The difference between physical interference and adversarial attacks is that the latter can be as damaging as possible for classification. However, DNCF is insensitive to such pixel-wise changes and generates a different random reconstruction each time. DNCF optimizes the following function with \(G\) as Gaussian smoothing and \(\epsilon\) as a random perturbation:
\[\mathcal{L}(\theta,y)=\|I^{adv}-f_{\theta}(y)\|^{2}+\beta\|G(I^{adv})+ \epsilon-y\|^{2},\]
and we then apply the classification model to the denoising result \(f_{\theta^{*}}(y^{*})\). Compared to computational photography, removing adversarial noise may be easier for DNCF, since there is no need to fixate on pixel-wise accuracy for visual effects.
## 5 Optimization perspectives
The partial channel-wise convexification in Eq. (7) aims to provide a weaker notion of strong convexity (SC), e.g., constant nullspace strong convexity (CNSC) [31][33], and to pursue a faster optimization algorithm under these conditions. CNSC arises naturally from the notion of general convexity: while a function may not be strongly convex on the entire linear space \(\mathcal{Y}\), it is still possible that the function is strongly convex on a subspace \(\Phi\) of \(\mathcal{Y}\), although it may not be convex on the orthogonal complement \(\Phi^{\perp}\). For notation, we use \(\textit{Proj}_{\Phi}\) to represent the projection onto a subspace \(\Phi\).
**Definition 1**.: _A function \(f(y)\) that is twice-differentiable w.r.t. \(y\in\mathbb{R}^{K}\) is said to have constant nullspace strong convexity with constant \(m\geq 0\) if there exists a subspace \(\Phi\subseteq\mathbb{R}^{K}\) on which the function \(f(y)\) depends only through \(z=\textit{Proj}_{\Phi}(y)\), i.e., \(f^{\prime}(z)=f(y)\), and the Hessian matrix \(H(z)\) of \(f^{\prime}(z)\) has the following property:_
\[v^{\top}H(z)v\geq m\|v\|^{2},\quad\forall v\in\Phi,\qquad H(z)u=0,\quad\forall u\in\Phi^{\perp}. \tag{12}\]
We notice that if the subspace \(\Phi\) equals \(\mathbb{R}^{K}\), then CNSC reduces to SC (strong convexity). Generally speaking, for problems that use far fewer data samples (technically, only one sample during inference) than the number of elements in each sample, CNSC is a more reasonable condition to satisfy during the inference procedure, since the gradient vector w.r.t. the input lies in a much lower-dimensional subspace. For the last layer in the decomposition,
\[\mathcal{L}(y)=\|\sigma(\theta_{l}^{\top}y+\theta_{l}^{\top}f_{h}(z)+b)-I_{s} \|_{2}^{2}+\lambda\|y\|^{2}, \tag{13}\]
the gradient vectors \(\partial\mathcal{L}(y)\) lie in the subspace spanned by \(\theta_{l}\), whose rank is likely much lower than the dimension of \(y\), which is typically high-dimensional by design.
**Lemma 2**.: _If the objective function depends only on the subspace \(\Phi\), i.e., there exists an alternative expression \(\mathcal{L}^{\prime}\) depending only on \(\Phi\) such that \(\mathcal{L}(y)=\mathcal{L}^{\prime}(\textit{Proj}_{\Phi}(y))\), then the gradient vector \(p(y)\) and the Hessian matrix \(H\) also lie in the subspace \(\Phi\)._
We propose to use a quasi-Newton optimization algorithm to update the intermediate features \(y\). The basic idea of such second-order algorithms is to minimize a second-order expansion around each iterate. Based on the last iterate \(y_{t}\), the gradient vector \(p_{t}=\partial\mathcal{L}/\partial y^{t}\), and an approximation \(B_{t}\in\mathbb{R}^{D\times D}\) to the Hessian matrix, we obtain the descent direction \(d_{t}\) of this iteration by
\[d_{t}=\arg\min_{d}p_{t}^{\top}d+\frac{1}{2}d^{\top}B_{t}d. \tag{14}\]
The step size \(\alpha_{t}\) for this iteration is obtained by line search, which iteratively tries values \(\{b^{c}\},c\in\{0,1,\cdots\},b\in(0,1)\), from the largest to the smallest, until the Armijo rule is met:
\[\mathcal{L}(y^{t}+\alpha_{t}d_{t})\leq\mathcal{L}(y^{t})+\alpha_{t}\xi p_{t}^{ \top}d_{t}. \tag{15}\]
where \(\xi\in(0,1)\) is a constant. Applying the BFGS algorithm [8][29], the approximate Hessian \(B_{t+1}\) for the next iteration is:
\[B_{t+1}=B_{t}-\frac{B_{t}s_{t}s_{t}^{\top}B_{t}}{s_{t}^{\top}B_{t}s_{t}}+\frac{ o_{t}o_{t}^{\top}}{o_{t}^{\top}s_{t}}, \tag{16}\]
where \(s_{t}=y_{t+1}-y_{t}\) and \(o_{t}=p_{t+1}-p_{t}\). However, only updating \(y\) leads to insufficient diversity in the result, as \(y\) is constrained to the subspace \(\Phi\), so we heuristically update \(\theta\) and \(y\) alternately.
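A compact sketch of this quasi-Newton step (Eqs. 14-16) on a flattened variable \(y\) is shown below; the dense Hessian approximation is only feasible for small \(y\), and the hyperparameter values are illustrative assumptions.

```python
import torch

def bfgs_step(loss_fn, y, B, b=0.5, xi=1e-4, max_backtrack=20):
    """One quasi-Newton update of y with Armijo backtracking (Eqs. 14-16)."""
    y = y.detach().requires_grad_(True)
    p = torch.autograd.grad(loss_fn(y), y)[0].reshape(-1)          # gradient p_t
    d = -torch.linalg.solve(B, p)                                  # Eq. (14): d = -B^{-1} p
    f0, alpha = loss_fn(y).item(), 1.0
    for _ in range(max_backtrack):                                 # Armijo rule, Eq. (15)
        if loss_fn(y + alpha * d.reshape(y.shape)).item() <= f0 + alpha * xi * (p @ d):
            break
        alpha *= b
    y_new = (y + alpha * d.reshape(y.shape)).detach().requires_grad_(True)
    p_new = torch.autograd.grad(loss_fn(y_new), y_new)[0].reshape(-1)
    s, o = (y_new - y).reshape(-1).detach(), (p_new - p).detach()
    Bs = B @ s
    B = B - torch.outer(Bs, Bs) / (s @ Bs) + torch.outer(o, o) / (o @ s)   # Eq. (16)
    return y_new.detach(), B
```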
## 6 Experiments
**Visualization of convexification.** We first visualize the convexification effect of Eq. (7) on neural networks for smaller tasks. We give a simple example on synthetic data with the fully input-convex network (FICNN) described in [1], using a \(2\)-dimensional classification problem. The network takes both the data \(z\in\mathbb{R}^{2}\) and a label proposal \(y\in\{0,1\}\) as input and predicts the compatibility score of the proposal. The layers within the network have a special structure: in the \(l\)th hidden layer, we require \(z_{l}\) of \(f\) to have the following formulation,
\[z_{l}=\sigma(\theta_{l}^{z}z_{l-1}+\theta_{l}^{y}y+b_{l}),\quad z_{l}\in \mathbb{R}^{K_{l}}, \tag{17}\]
where \(\sigma\) is the ReLU function. The last layer projects to a scalar variable, and the loss function is the squared distance. We name the network with our proposed regularization CVXR-Net. FICNN uses the same network architecture as CVXR-Net; both have two fully connected layers with \(200\)-dimensional latent features. In addition, the partially input-convex network (PICNN) is built on top of ICNN, except that the latent features \(z_{l}\) have a branch connected to \(y\) whose weights are not constrained to be non-negative, and it therefore has relatively stronger expressive power. FICNN and PICNN use projected gradient descent for optimization to keep the relevant network weights strictly non-negative. After training the networks to convergence, we show the decision boundaries in Figure (7) for several synthetic data sets, where red and blue points are from two different classes. We see that CVXR-Net produces a better boundary for harder examples and maintains similar behavior on easier examples.
**Computational photography.** We conduct the experiments of DNCF with a simpler version of the original DIP implementation, using a smaller network.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Attack & time & Ac(orig) & Ac(attack) & Ac(defense) \\ \hline CW & 1646.2 & 75.48 & 0.00 & 64.42 \\ BIM & 32.26 & 67.79 & 0.00 & 21.63 \\ FGSM & 5.60 & 72.60 & 18.27 & 33.65 \\ PGD & 30.21 & 74.04 & 0.00 & 15.87 \\ FFGSM & 5.45 & 75.00 & 16.83 & 30.29 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of DNCF for adversarial defense: the running time (seconds) of the attacks, the accuracy (\(\%\)) of the original network, the accuracy after being attacked, and the accuracy after defense. The running time of DNCF is 126 seconds for 200 images.
Figure 6: ImageNet data attacked by CW, with predictions of attacked images on the top and defended predictions on the bottom.
The visual results are shown in Figures (1, 2, 4). We use the _skip_ network for all experiments, an encoder-decoder architecture with skip connections. The network consists of several blocks, and each block has an equal number of channels in each of its convolution layers. Each block has four convolution layers, four batch normalization layers, and four ReLU layers, where the second convolution layer has a stride of \(\times 2\) and therefore reduces the resolution; an upsampling layer follows to expand the resolution by \(\times 2\). The last block also contains an extra convolution layer to project the last feature tensor to \(3\) channels to match the images. For general applications, the output of the network has the same resolution as the input, except for super-resolution, where there is a \(\times 8\) resolution expansion effect, since we set the network to generate images at this resolution. We compare against total variation (TV) denoising [5], bicubic interpolation, and DeepRED [19] combined with TV. We restricted the algorithm to run for at most 400 iterations on an NVIDIA GTX 1060 with 6GB of graphics memory, as the original configuration of DIP runs considerably slowly, taking as long as 300 seconds for the best super-resolution result. We put the detailed results for different tasks in Table (2), where \(\gamma\) denotes the regularization coefficient. Each time we find that the convexification regularization is larger than the mean square error (MSE) loss in forward propagation, we reduce \(\gamma\) to \(1/4\) of its last value, to prevent the regularization from dominating the total loss. This adaptive regularization also saves us from extensively tuning \(\gamma\) for each task individually. Our goal is to make DIP practical on personal computers; therefore, although we inherit most hyperparameters, e.g., the number of layers, we reduce the number of channels for these applications to fit in \(4\)GB of GPU memory and reduce the number of extra iterations that bring no visible changes. We report the performance in Table (2). In the table, we show the MSE of both DIP and DNCF, except for the super-resolution (SR) task, where the metric is the peak SNR (PSNR) of the ground truth against the SR result. Here \(\uparrow\) means higher is better and \(\downarrow\) means the reverse. On the flash task, we run \(800\) iterations optimizing the whole network and then optimize only the last block and its \(32\)-channel intermediate features for about \(1,200\) iterations using LBFGS; we then compare with the DIP baseline, which optimizes the whole network using Adam, for the same period of time. We plot the convergence of the MSE and the transition of the flash/no-flash photography result in Figure (5), i.e., from the no-flash to the flash version, every \(200\) iterations. Note the \(5\) intermediate results of \(I_{t}\): \(600\) iterations of a \(32\)-channel DNCF give a good enough result compared to the original \(96\)-channel DIP with the same number of iterations. The convexification improves the visual details and decreases the MSE. In Table (2), DNCF reduces the MSE on all applications. The figures are also informative: e.g., in Figures (1, 2), DNCF generates images with weaker blur degradation and clearer quality; in Figure (4), DNCF fills in the hole with better patterns.
**Adversarial Defense.** We test the effect of removing adversarial attacks from images and use GoogleNet Inception V3 as the classifier. We randomly select 100 classes from the validation set of ImageNet ILSVRC 2012 and randomly test 200 images for each attack method (so the accuracy on clean images varies slightly), including the aforementioned methods as well as projected gradient descent (PGD) [18] and Fast FGSM (FFGSM) [28].
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline task & DIP & DNCF & Figures \\ \hline Denoise & 9.9e-4\(\downarrow\) & 9.4e-3 \(\downarrow\) & fig1 \\ Flash & 6.887e-3 \(\downarrow\) & 6.705e-3 \(\downarrow\) & fig5 \\ Inpaint1 & 7.08e-4\(\downarrow\) & 5e-4 \(\downarrow\) & fig4 \\ Inpaint2 & 9.521e-3\(\downarrow\) & 8.4e-3\(\downarrow\) & fig4 \\ SR & 19.067\(\uparrow\) & 19.447 \(\uparrow\) & fig2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison between DNCF (less than 700 iterations) and DIP (with unconstrained running time). Here \(\downarrow\) means better images have smaller values and vice versa.
Figure 7: Visualization of prediction regions. 3 columns on the top: Nonnegative regularization; 3 columns on the bottom: ICNN.
We set the \(\epsilon\) value (the norm of the maximum perturbation on the images), the \(\alpha\) value (the step size), and the \(n\) value (the number of steps) as follows: \(\epsilon=8/256,\quad\alpha=2/256,\quad n=7\). We use a simple DNCF of only 3 convolution layers. In this case, we can process a batch of images in parallel, so they actually share a network parameterization, since we have a lower standard for the photometric measurement. We report the classification results on original images, attacked images, and defended images in Table (1), along with the running time comparison between attack and defense. We notice that DNCF is extremely effective against CW and shows varying degrees of improvement against the other attacks.
## 7 Conclusion
In this paper, we propose the deep nonparametric convexified filter (DNCF) as a general framework for modelling physical interference and adversarial attacks on images, leading to solutions for computational photography and adversarial defense.
|
2309.10146 | Comparing an android head with its digital twin regarding the dynamic
expression of emotions | Emotions, which are an important component of social interaction, can be
studied with the help of android robots and their appearance, which is as
similar to humans as possible. The production and customization of android
robots is expensive and time-consuming, so it may be practical to use a digital
replica. In order to investigate whether there are any perceptual differences
in terms of emotions based on the difference in appearance, a robot head was
digitally replicated. In an experiment, the basic emotions evaluated in a
preliminary study were compared in three conditions and then statistically
analyzed. It was found that apart from fear, all emotions were recognized on
the real robot head. The digital head with "ideal" emotions performed better
than the real head apart from the anger representation, which offers
optimization potential for the real head. Contrary to expectations, significant
differences between the real and the replicated head with the same emotions
could only be found in the representation of surprise. | Amelie Kassner, Christian Becker-Asano | 2023-09-18T20:56:18Z | http://arxiv.org/abs/2309.10146v1 | # Comparing an android head with its digital twin regarding the dynamic expression of emotions
###### Abstract
Emotions, which are an important component of social interaction, can be studied with the help of android robots and their appearance, which is as similar to humans as possible. The production and customization of android robots is expensive and time-consuming, so it may be practical to use a digital replica. In order to investigate whether there are any perceptual differences in terms of emotions based on the difference in appearance, a robot head was digitally replicated. In an experiment, the basic emotions evaluated in a preliminary study were compared in three conditions and then statistically analyzed. It was found that apart from fear, all emotions were recognized on the real robot head. The digital head with "ideal" emotions performed better than the real head apart from the anger representation, which offers optimization potential for the real head. Contrary to expectations, significant differences between the real and the replicated head with the same emotions could only be found in the representation of surprise.
facial expression, emotion, empirical study, android robot, social robotics
## I Introduction and motivation
Robots are expected to support us as social partners in the future. For natural interaction, it is necessary that their appearance and behavior are adapted to their environment [1]. Emotions serve as a nonverbal tool that can enhance such interactions [2]. Android robots can be used for human-human and human-robot interaction research due to their very human-like appearance. However, their expressiveness is limited by their hardware, and their production is still quite expensive. Virtual robot heads, on the other hand, can be produced comparatively easily and at low cost. A virtual robot head can be used to mimic the human face and better understand its functionality [3]. Furthermore, virtual robot heads have more freedom of movement in their animation since they are not bound by physical constraints.
In order to investigate possible differences regarding emotion perception between a real and virtual robot head, a physical android robot head was digitally recreated in Unreal Engine 5 (UE5). This replica serves as a basis for further research and provides information about possible adaptations of the real robot head to better represent emotions on it. Six basic emotions [9] were modelled with the robot head and validated in a pre-experiment. Afterwards, the real and the virtual version of the robot head were compared against each other using a 3D visualization inside a head-mounted display for the virtual version.
The remainder of this paper is structured as follows. In the following section, related work is presented and discussed. In Section III, the experimental hypotheses are stated and the hardware setup is introduced, before a pre-study is explained in Section IV. The main study is described in Section V, with its results presented and analyzed in Section VI. A general discussion in Section VII concludes our presentation.
## II Related work
### _Emotions_
There are many different approaches trying to define emotions [4, 5], but a unified definition has not been found yet. Emotion theories and models such as basic emotion theory (BET) [6], main emotion systems [7], or prototypical approaches [8] look at emotions from different perspectives. Ekman's research defined the basic emotions of anger, disgust, fear, happiness, sadness, and surprise, and assumes that they are universal and culturally independent [9]. Even though the validity of this research has been doubted in some cases (cf. [6, 10]), these basic emotions serve as a basis for research in many cases, including in the field of human-robot interaction [11]. Ekman's results have been investigated and replicated in further studies [12].
### _Social robots and androids_
Social robots are able to interact naturally with humans via verbal and nonverbal signals. Emotions are a part of this and can help represent a robot's internal state and allow viewing individuals to interpret and respond to it [13]. Androids have an appearance as similar to humans as possible and are intended to advance research regarding human-human and human-robot interaction. Human movements and facial expressions are of great importance here for natural interaction [14]. Geminoid HI-1 and Geminoid F have already been used to study cross-cultural differences in terms of emotion perception, where fear was more difficult to detect and confusion varied depending on nationality [2]. Replicating human faces is difficult due to their high complexity [3], yet this has been attempted several times. Robot heads are often able to represent emotions, which has |
2309.14320 | MUTEX: Learning Unified Policies from Multimodal Task Specifications | Humans use different modalities, such as speech, text, images, videos, etc.,
to communicate their intent and goals with teammates. For robots to become
better assistants, we aim to endow them with the ability to follow instructions
and understand tasks specified by their human partners. Most robotic policy
learning methods have focused on one single modality of task specification
while ignoring the rich cross-modal information. We present MUTEX, a unified
approach to policy learning from multimodal task specifications. It trains a
transformer-based architecture to facilitate cross-modal reasoning, combining
masked modeling and cross-modal matching objectives in a two-stage training
procedure. After training, MUTEX can follow a task specification in any of the
six learned modalities (video demonstrations, goal images, text goal
descriptions, text instructions, speech goal descriptions, and speech
instructions) or a combination of them. We systematically evaluate the benefits
of MUTEX in a newly designed dataset with 100 tasks in simulation and 50 tasks
in the real world, annotated with multiple instances of task specifications in
different modalities, and observe improved performance over methods trained
specifically for any single modality. More information at
https://ut-austin-rpl.github.io/MUTEX/ | Rutav Shah, Roberto Martín-Martín, Yuke Zhu | 2023-09-25T17:45:31Z | http://arxiv.org/abs/2309.14320v1 | # Mutex: Learning Unified Policies from
###### Abstract
Humans use different modalities, such as speech, text, images, videos, etc., to communicate their intent and goals with teammates. For robots to become better assistants, we aim to endow them with the ability to follow instructions and understand tasks specified by their human partners. Most robotic policy learning methods have focused on one single modality of task specification while ignoring the rich cross-modal information. We present Mutex, a unified approach to policy learning from multimodal task specifications. It trains a transformer-based architecture to facilitate cross-modal reasoning, combining masked modeling and cross-modal matching objectives in a two-stage training procedure. After training, Mutex can follow a task specification in any of the six learned modalities (video demonstrations, goal images, text goal descriptions, text instructions, speech goal descriptions, and speech instructions) or a combination of them. We systematically evaluate the benefits of Mutex in a newly designed dataset with 100 tasks in simulation and 50 tasks in the real world, annotated with multiple instances of task specifications in different modalities, and observe improved performance over methods trained specifically for any single modality. More information at [https://ut-austin-rpl.github.io/MUTEX/](https://ut-austin-rpl.github.io/MUTEX/)
Multimodal Learning, Task Specification, Robot Manipulation
## 1 Introduction
When working in a team, humans regularly make use of different modalities to specify tasks and improve communications, _e.g._, sharing high-level task goals ("Let's cook a meal!"), verbal instructions ("We will go to the kitchen, get the pot from the cabinet, and then put it on the stove."), or
Figure 1: **Overview. We introduce Mutex, a unified policy that learns to perform tasks conditioned on task specifications from multiple modalities (image, video, text, and speech) in the forms of instructions and goal descriptions. Mutex takes advantage of the complementary information across modalities to become more capable of completing tasks specified by any single modality than methods trained specifically for each one.**
fine-grained visual demonstrations (showing a cooking video). Human-robot teams should aspire to a similar level of understanding. While recent research in robot learning has studied various modalities for specifying robotic tasks, including text, images, speech, and videos, most previous studies have treated these individual modalities as separate problems, such as language-conditioned policy learning [1; 2; 3; 4], instruction following [5], visual goal-reaching [6; 7; 8; 9], and imitation from video demonstrations [10; 11; 12]. Consequently, these approaches lead to siloed systems tailored to individual task specification modalities.
A burgeoning body of interdisciplinary AI research has suggested that joint learning across multiple modalities, such as image and text [13; 14; 15; 16], video and text [17; 18; 19] and vision and touch [20; 21; 22], gives rise to richer and more effective representations that improve understanding of individual modalities. These results align with findings in cognitive science and psychology, which suggest that incorporating multimodal cues (_e.g._, visual and verbal) into human learning processes enhances learning quality over using individual modalities alone [23; 24]. Drawing inspiration from the effectiveness of cross-modal learning in prior work, our goal is to develop **unified policies capable of reasoning about multimodal task specifications for diverse manipulation tasks**, where each task will be defined in a single modality that changes from task to task. We seek to harness the complementary strengths of different modalities -- some providing compact and high-level information like goal descriptions, while others providing fine-grained information like step-by-step video demonstrations -- thereby improving the model's ability to execute tasks from task specifications of different modalities.
Prior works primarily focus on enhancing language understanding using visual data [25; 26; 27; 28] or on a subset of modalities such as language and robot demonstration [29], language and image [30]. These approaches typically cover one or two modalities, failing to encompass the diverse ways in which humans express their goals and intent. The key challenge associated with learning across varied modalities is effectively leveraging cross-modal information to reinforce each other and having a highly versatile architecture to encapsulate the variability introduced by multiple modalities.
To effectively learn from different task specification modalities, we improve upon two representation learning techniques, **masked modeling**[13; 18; 31] and **cross-modal matching**[14; 32], to foster cross-modal interaction through a shared embedding space. Firstly, we exploit the complementary strengths of each modality -- text and speech specifications provide guidance for the model to extract task-relevant features from visual specifications (image goals and video demonstration), and visual specifications, in turn, help ground the text and speech to real-world observations. The model is trained on the representation learning objectives in tandem with the policy learning objective (_i.e._, behavior cloning) such that the representation of the task specification also captures action-relevant information. After we build richer, more informed representations for each modality, we bring them to a common space [13; 14; 18; 31]. Unlike prior work that maps visual specifications to language embeddings [33], we exploit the fact that human video demonstrations contain more fine-grained information about the task. We enrich the representations of other modalities by matching them with the information-dense video representations. This cross-modal matching leads to compact yet informative multimodal representations that can be used to execute tasks specified by any modality.
For the model to execute a task specified by any modality, it must handle the variable input length of task specification tokens. Meanwhile, it has to predict robot actions alongside a variable number of masked signals for masked modeling of the different modalities. To achieve this, we design a Perceiver-style encoder [34] in which a variable number of task specification tokens are combined with a fixed history of robot observations using cross-attention mechanisms. The embeddings obtained from the encoder are then passed through a Perceiver-style decoder [35] to predict robot actions and masked signals for the individual modalities. This architecture design allows the model to learn a policy that can execute tasks specified by any single modality or an arbitrary aggregation of several.
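A minimal sketch of one such fusion block is shown below; the dimensions, number of heads, and attention direction are illustrative choices and not necessarily those of Mutex.

```python
import torch
import torch.nn as nn

class SpecObsFusionBlock(nn.Module):
    """Fuses variable-length task-specification tokens with a fixed observation history."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, obs_tokens, spec_tokens, spec_pad_mask=None):
        # obs_tokens: (B, T_obs, dim) fixed-length observation history
        # spec_tokens: (B, T_spec, dim) variable-length task specification
        x = obs_tokens + self.self_attn(obs_tokens, obs_tokens, obs_tokens)[0]
        x = x + self.cross_attn(x, spec_tokens, spec_tokens,
                                key_padding_mask=spec_pad_mask)[0]
        return x + self.ff(x)
```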
In summary, we introduce Mutex (**MU**ltimodal **T**ask specification for robot **EX**ecution), a unified policy capable of executing tasks specified as goal descriptions or more detailed instructions in text, speech, or visual modalities. Mutex is versatile and performant. It can not only understand multimodal task specifications but also improve the robustness of task execution for every single
modality. We demonstrated this with a comprehensive evaluation benchmark, including 100 diverse manipulation tasks in simulation and 50 in the real world, leading to 6000 evaluated trials (per method) in simulation and 600 evaluation trials in the real world. Remarkably, real-world evaluation indicates that Mutex can effectively interpret human video demonstrations and perform tasks successfully with the robot, albeit with morphology differences. As part of this effort, we provide a large real-world dataset of 50 tasks, with 30 trajectories for each task, containing tasks like "putting bread on a microwave tray and closing it", "opening an air fryer and putting hot dogs in it", or "placing the book in the front compartment of the caddy", each specified with multiple rich multimodal task specifications: text goal description, text instruction, goal image, video specification, speech goal description, and speech instruction [Figure 3], supporting future research in multimodal task specification.
## 2 Related Work
**Task Specification in Robot Manipulation.** One way to communicate tasks to a multi-task policy is through one-hot encoding vectors [36]. However, this approach is limited to a predefined set of tasks and cannot be extended to new ones. On the other hand, methods that use language to specify tasks [1; 33; 2; 4; 37; 38] have shown improved multi-task generalization due to a richer semantic task representation. However, learning with language specification can be ineffective as it requires grounding language to the robot's observation and action spaces [39; 40]. Moreover, the compact nature of language poses a challenge for tasks that need more detailed or accurate descriptions [30]. Visual task specifications (images [7; 9; 8; 41] or videos [10; 11; 33; 12]) offer dense information which makes the policies falsely depend on task-irrelevant information (_e.g._, a visual demonstration of moving an object also shows the locations and motions of background objects in the scene [42]), causing poor generalization behaviors. The peculiarity of individual modalities limits methods that focus on unimodal specifications and fail to leverage complementary information across modalities.
**Cross-modal Representation Learning.** In the past years, there has been a large body of literature on learning rich representation from multimodal data with cross-modal learning objectives [16; 43; 44]. These works have provided convincing evidence that learning across multiple modalities can substantially boost model performances on conventional visual recognition tasks (such as image classification [15; 45; 46; 47], object detection [48; 49; 50], segmentation [51; 46], activity recognition [52]), language understanding tasks (such as sentiment analysis, paraphrasing [53; 54]), and multimodal reasoning tasks (such as visual QA [55; 19; 56] and cross-modal retrieval [13; 14]). In robotics, multi-sensory observations have been shown to improve the performance of manipulation tasks [20; 21; 22]. Inspired by these successes, Mutex learns a cross-modal representation of task specifications for multi-task imitation learning for robot manipulation.
**Multi-Task Imitation Learning in Manipulation:** Multitask imitation learning for robot manipulation has been extensively studied with language task specifications [1; 37; 33; 4] and visual demonstration [12; 9], respectively. A closer line of work to ours consists of methods that harness multi-modal task specifications. Some leverage multimodal data for model training, but the final policies are deployed to only operate on a single modality type [25; 26; 27; 33]. Others demonstrate that policies consuming multimodal specifications can generalize to novel task instances in one shot, but the new task must be specified in _all_ modalities [29]. More recent works have explored specifying a task with multimodal prompts (text and image tokens) defined in a combination of modalities [30], providing a more flexible interface and partially alleviating grounding problems. Nevertheless, none of these prior works has offered the flexibility to specify the task using _any_ individual modality, nor support as many different modalities as Mutex.
## 3 Mutex Model and Dataset
Our goal is to learn a unified policy that performs diverse tasks based on a dataset of demonstrations annotated with multimodal task specifications, including language, speech, and visual specifications. We assume that, during test time, the task to perform will be specified in a single or
a subset of modalities that can vary from task to task. For each task, \(T_{i}\in\{T_{1},T_{2},\ldots,T_{n}\}\), we assume that the agent learns from a set of human demonstrations obtained through teleoperation, \(D_{i}=\{d_{i}^{1},d_{i}^{2},\ldots,d_{i}^{m}\}\), where \(m\) is the number of demonstrations for task \(T_{i}\), forming an entire dataset of demonstrations, \(D\). Each demonstration \(d_{i}^{j}\) presents the form of a sequence of observations and expert actions, \(d_{i}^{j}=[(o_{1},a_{1})_{i}^{j},\ldots(o_{T},a_{T})_{i}^{j}]\).
The goal of each task \(T_{i}\) is specified by \(k\) alternative task descriptions (\(t\)) in six different forms within three modalities: text instructions \(t\in\{L_{i}^{1},L_{i}^{2},\ldots,L_{i}^{k}\}\), text goal description \(t\in\{l_{i}^{1},l_{i}^{2},\ldots,l_{i}^{k}\}\), video demonstration \(t\in\{V_{i}^{1},V_{i}^{2},\ldots,V_{i}^{k}\}\), goal image \(t\in\{v_{i}^{1},v_{i}^{2},\ldots,v_{i}^{k}\}\), speech instructions \(t\in\{S_{i}^{1},S_{i}^{2},\ldots,S_{i}^{k}\}\), and speech goal description \(t\in\{s_{i}^{1},s_{i}^{2},\ldots,s_{i}^{k}\}\). Note that we characterize the modalities with the letters \(L/l\), \(V/v\), and \(S/s\), and make use of capital letters to denote _detailed instructions_ and small letters to denote _goal state specifications_. Our goal is to learn a unified policy, \(\pi(a|t,o)\), that outputs continuous actions, \(a\in A\), given current observations, \(o\in O\), conditioned on a task specification in one or more of the possible modalities, \(t\in L|l|V|v|S|s\). We aim to create a policy that not only performs the \(n\) tasks in the training dataset \(D\) (seen tasks) in new initial conditions (_e.g._ positions of the objects) but also generalizes to previously unseen task descriptions.
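For exposition, the per-task data described above can be organized as follows; the field names are assumptions for illustration and do not reflect Mutex's actual data format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TaskRecord:
    # m teleoperated demonstrations, each a list of (observation, action) pairs
    demos: List[List[Dict]] = field(default_factory=list)
    # k alternative specifications per modality (L, l, V, v, S, s in the notation above)
    text_instructions: List[str] = field(default_factory=list)
    text_goals: List[str] = field(default_factory=list)
    video_demos: List[str] = field(default_factory=list)        # e.g., file paths
    goal_images: List[str] = field(default_factory=list)
    speech_instructions: List[str] = field(default_factory=list)
    speech_goals: List[str] = field(default_factory=list)
```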
### Mutex Training Procedure
Our goal is not only to obtain a unified policy that understands task specifications in different modalities but also to improve the policy performance when a task is specified on every single modality by exploiting cross-modal interactions during training. To that end, we leverage two representation learning techniques that we integrate sequentially in Mutex's training procedure: _1) Mask Modeling_ to promote cross-modal interactions of all modalities into a shared learned latent space, and _2) Cross-Modal Matching_ to enrich each modality with information of the information-denser one. Both stages of our procedure are combined with a behavior cloning objective for policy learning to ensure that the learned representation contains relevant information for the agent to perform the manipulation task. The overview of the proposed approach is shown in Fig. 2.
**Masked Modeling for Cross-Modal Learning:** In the first stage [Step 1 in PseudoCode 1], Mutex promotes cross-modal interactions between the model components interpreting the different
Figure 2: **Mutex’s Model Architecture and Training Losses**. Task specifications in each modality are encoded with pretrained modality-specific encoders, CLIP, and Whisper models [15; 57]. During the first stage of training, one or more of these modalities are randomly selected and masked before being passed to projection layers. The resultant tokens obtained from projection layers are combined with observation tokens through \(N\) blocks of self- and cross-attention layers. The encoder’s hidden state is passed to Mutex’s transformer encoder that is queried for actions (behavior learning loss) and the masked features and tokens (masked modeling loss), promoting action-specific cross-modal learning. In the second stage of training, all modalities are enriched with information from video features through a cross-modal matching loss. Mutex predicts closed-loop actions to achieve the task based on the provided observations and a task specification modality (one or more) at test time.
modalities: text instructions (\(L\)), text goal (\(l\)), video demonstration (\(V\)), image goal (\(v\)), speech instructions (\(S\)), and speech goal (\(s\)). Inspired by the success of masking in other cross-modal learning tasks [53, 19, 43, 44], we mask certain tokens or features of each modality and learn to predict them with the help of the other modalities. This forces the model to use information from the other modalities to enhance the representation of each one. Intuitively, masked text and speech prediction helps the model focus on relevant information in the visual modalities, while masked image and video prediction helps ground the other modalities. During testing, a task specification in only one or a subset of the modalities is used. Therefore, to obtain robust single-modality representations [Tables 1, 2], we recreate these conditions in our training procedure by randomly sampling modalities in each iteration.
Specifically, at each iteration of the training process, we randomly select task specifications of one or a subset of modalities. If more than one modality is selected, we mask certain parts of each modality. We require Mutex to predict the masked parts alongside the action values the expert demonstrated at each step. Depending on the modality, we mask either input tokens or intermediate features and use a different loss to measure the prediction error (cross-entropy or \(\ell_{1}\) regression). For _masked text modeling_ (masking elements of a text goal description or text instructions), we mask out words [58] that are then passed through the pretrained CLIP language model [15] to extract the features. We use the standard cross-entropy loss between the predicted (\(\hat{y}\)) and ground truth (\(y\)) tokens, _i.e._, \(\mathcal{L}_{CE}(y,\hat{y})=-\sum_{i=1}^{N}y_{i}\log(\hat{y}_{i})\), where \(N\) is the number of tokens in the vocabulary. For _masked visual modeling_ (masking elements of an image goal or a video demonstration), we mask out intermediate regions of the features obtained from a pre-trained CLIP model [15] and, following prior works in vision-language modeling [59], we employ a simple \(\ell_{1}\)-regression loss between the predicted and ground truth features, \(\mathcal{L}_{\ell_{1}}=|y-\hat{y}|\). For _masked speech modeling_ (masking elements of a speech goal description or speech instructions), we use a similar approach to visual modeling but with features from a pre-trained Whisper model [57] instead, also with an \(\ell_{1}\)-regression loss (refer to Appendix 6.2).
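For illustration, the per-modality masked-prediction losses can be written as in the following minimal PyTorch sketch (tensor shapes, the vocabulary size, and the number of masked positions are placeholders rather than the values used by the actual model):

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the masked-modeling losses; shapes and values are placeholders.
vocab, n_text_tok, feat_dim, n_vis_tok = 1000, 12, 512, 20

# Masked text modeling: cross-entropy on the masked word tokens.
pred_text_logits = torch.randn(n_text_tok, vocab)           # decoder outputs for masked positions
true_text_tokens = torch.randint(0, vocab, (n_text_tok,))
loss_text = F.cross_entropy(pred_text_logits, true_text_tokens)

# Masked visual / speech modeling: l1 regression on masked CLIP / Whisper features.
pred_feat = torch.randn(n_vis_tok, feat_dim)                 # decoder outputs for masked regions
true_feat = torch.randn(n_vis_tok, feat_dim)                 # ground-truth features of those regions
loss_feat = F.l1_loss(pred_feat, true_feat)

masked_modeling_loss = loss_text + loss_feat                 # added to the behavior-cloning loss
```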
**Cross-Modal Matching for Richer Representations:** In the second stage of Mutex's training procedure [Step 2 in PseudoCode 1], we enrich the common embedding space for each task modality by associating it with the features of the information-richer one. To that end, in contrast to prior works that use a cross-modal contrastive loss to learn a common embedding space [14, 16], we use a simple \(\ell_{2}\) loss to pull the representations of all modalities towards the one with more information and better performance. Video specifications contain the most information, leading to more elucidative and stronger features; therefore, we enrich other modalities with information from the video representation space obtained after cross-modal learning.
Concretely, let \(f_{L}\), \(f_{l}\), \(f_{v}\), \(f_{s}\), \(f_{S}\) be the feature representations of the text instructions, text goal, image goal, speech goal, and speech instructions, and \(f_{V}\) the feature representation of the video demonstration for the same task. Our cross-modal matching loss is given by:
\[\mathcal{L}_{match}=\mathcal{L}_{\ell_{2}}(f_{L},f_{V})+\mathcal{L}_{\ell_{2}}(f_{l},f_{V})+\mathcal{L}_{\ell_{2}}(f_{v},f_{V})+\mathcal{L}_{\ell_{2}}(f_{S},f_{V})+\mathcal{L}_{\ell_{2}}(f_{s},f_{V}) \tag{1}\]
where \(\mathcal{L}_{\ell_{2}}\) is the \(\ell_{2}\)-regression loss. Gradients from this loss are not backpropagated to the part of Mutex that encodes the video modality, leaving it unchanged.
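A minimal sketch of the matching loss in Eq. (1), including the stop-gradient on the video features, is shown below (the batch size and feature dimension are illustrative, and the mean-squared error is used as the \(\ell_{2}\)-regression term):

```python
import torch
import torch.nn.functional as F

# Sketch of Eq. (1): pull every modality's representation toward the video representation.
B, d = 4, 256
f_V = torch.randn(B, d)                                       # video demonstration features
others = {m: torch.randn(B, d, requires_grad=True) for m in ["L", "l", "v", "S", "s"]}

# f_V is detached, so gradients only reach the non-video branches.
match_loss = sum(F.mse_loss(f_m, f_V.detach()) for f_m in others.values())
match_loss.backward()
```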
### Mutex Architecture
The training process delineated above requires a model architecture capable of propagating and encoding the cross-modal information from multiple modalities. Mutex's model consists of three main components (see Fig. 2): 1) Modality-Specific Encoders that map input modalities to task-specific tokens, 2) a Policy Encoder that takes in the task-specific tokens and robot observations and outputs hidden states, and 3) a Policy Decoder that takes in the hidden states along with decoder queries and outputs features corresponding to the queries.
Mutex's **modality-specific encoders** extract tokens from the input task specification using fixed, pre-trained large models, which helps extract semantically meaningful representations from the input modality. To learn representations that are grounded in the observation and action space, these features are passed through a projection layer consisting of a simple MLP or a single attention block before being passed to the policy encoder. Mutex's **policy encoder** fuses information obtained from the multiple task specification modalities and robot observations, employing a transformer-based architecture with stacked cross- and self-attention layers. In the cross-attention layers, queries are derived from robot observations, whereas the keys and values come from task specification tokens. The encoder's output is then passed to the policy decoder. Although the policy encoder's output is enriched with information obtained from the different task specification modalities, Mutex requires the policy to output features for predicting action values and a variable number of masked tokens. This motivates us to adopt a Perceiver Decoder [35] architecture as Mutex's **policy decoder** to leverage learnable queries and output only the information corresponding to the input queries. The decoder features for action prediction are passed through an MLP to estimate a Gaussian Mixture Model over continuous action values. Similarly, separate MLPs are used to predict token values or features for the masked token queries. More details can be found in Appendix 6.3.
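As an illustration of the action head described above, the following is a minimal PyTorch sketch of a Gaussian Mixture Model head over continuous actions (the feature, action, and mode dimensions are placeholders, and this is a sketch rather than the released Mutex implementation):

```python
import torch
import torch.nn as nn
import torch.distributions as D

class GMMActionHead(nn.Module):
    """Maps a decoder feature (from the action query) to a GMM over continuous actions."""
    def __init__(self, feat_dim=64, action_dim=7, n_modes=5):
        super().__init__()
        self.n_modes, self.action_dim = n_modes, action_dim
        self.logits = nn.Linear(feat_dim, n_modes)                     # mixture weights
        self.means = nn.Linear(feat_dim, n_modes * action_dim)         # per-mode means
        self.log_scales = nn.Linear(feat_dim, n_modes * action_dim)    # per-mode scales

    def forward(self, feat):                                           # feat: (B, feat_dim)
        B = feat.shape[0]
        mix = D.Categorical(logits=self.logits(feat))
        means = self.means(feat).view(B, self.n_modes, self.action_dim)
        scales = self.log_scales(feat).view(B, self.n_modes, self.action_dim).exp().clamp(min=1e-4)
        comp = D.Independent(D.Normal(means, scales), 1)
        return D.MixtureSameFamily(mix, comp)

head = GMMActionHead()
feat = torch.randn(8, 64)                                              # decoder output for the action query
dist = head(feat)
bc_loss = -dist.log_prob(torch.randn(8, 7)).mean()                     # behavior-cloning NLL on expert actions
action = dist.sample()                                                 # closed-loop action at test time
```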
### Multimodal Task Specification Dataset
As part of our efforts to develop a unified multi-task imitation learning policy, we construct a new dataset of tasks with multiple task specifications per modality in both simulated and real-world settings (see Fig. 3). In the **simulation**, we extend the LIBERO-100 benchmark [60], which entails diverse object interactions and versatile tasks (\(n=100\)) like "turn on the stove and put the frying pan on it." Each task is annotated with \(m=50\) human trajectories (provided by the authors) and \(k=11\) task specifications per modality. Due to the inevitable sim2real gap, video demonstrations in the simulation are collected by teleoperating the simulated robot instead of directly by a human performing the tasks. In the **real world**, we collect a novel dataset with \(n=50\) tasks (Figure 5), ranging from pick and place tasks such as "put the bread on the white plate" and "pick up the bowl at the back of the scene and place it inside the top drawer" to more contact-rich tasks such as "open the air fryer and put the bowl with dogs in it" and "take out the tray of the oven, and put the bread on it." All tasks are collected in the same environment but involve different objects from a set of 17 objects. Each task is demonstrated with \(m=30\) human-collected trajectories [61] using a 3D spacemouse teleoperation device. Each task is annotated with \(k=11\) different task specifications
Figure 3: **Mutex Multimodal Task Specification Dataset.** We provide a dataset comprising **100** simulated tasks (example in the first column) based on LIBERO-100 [60] and **50** real-world tasks (examples in the second and third columns), annotated with **50** and **30** demonstrations per task, respectively. We annotate each task with **11** alternative task specifications in each of the **six** following modalities (rows from top to bottom): video demonstration, image goal, text instructions, text goal, speech instructions, and speech goal.
for each modality. A human performs video demonstrations specifying the tasks with their hand. To generate \(11\) diverse text goal descriptions and instructions, we make use of ChatGPT with prompts to generate alternative descriptions that we manually filter to avoid synonyms that do not match the right task-relevant objects (_e.g._, using _plate_ as a synonym for _bowl_ when there are other plates in the scene). We synthesize speech signals with several voices of the Amazon Polly service to generate diverse speech descriptions.
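As a sketch of how the speech specifications can be synthesized with Amazon Polly (assuming configured AWS credentials; the voice names and the example sentence are illustrative and not necessarily the exact ones used for the dataset):

```python
import boto3

# Illustrative sketch: convert text task specifications into speech specifications.
polly = boto3.client("polly")
voices = ["Joanna", "Matthew", "Amy", "Brian"]                 # example Polly voices

def synthesize(text: str, voice: str, out_path: str) -> None:
    """Synthesize one text task specification into an mp3 speech specification."""
    response = polly.synthesize_speech(Text=text, VoiceId=voice, OutputFormat="mp3")
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())

for i, voice in enumerate(voices):
    synthesize("put the bread on the white plate", voice, f"speech_goal_{i}.mp3")
```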
## 4 Experimental Evaluation
**Experimental Setup:** We conduct our evaluations on our newly constructed dataset of multimodal task specifications, with an 80%/20% (8/3) split of the task specifications for training and testing. For evaluation, we subject the robot to both unseen task specifications (from the test split) and new initial conditions (i.e., positions of objects). For each tested task, we evaluate 20 trials in simulation and 10 trials in the real world. Reported results for each evaluation modality are averaged over \(20\) trials \(\times\)\(100\) tasks \(\times\)\(3\) seeds in simulation and \(10\) trials \(\times\)\(5\) tasks in the real world. In our evaluation, we compare Mutex to models trained specifically for each modality using only task specifications in that modality. This resembles most existing prior works in video-based imitation learning [11; 12] and goal-conditioned imitation learning [6; 9; 33; 4]. Please refer to Appendix 6.5 for details.
**Experiments:** In our experimental evaluation, we aim to answer the following questions:
_1) Does a unified policy capable of executing tasks across multiple modalities outperform methods trained specifically from and for each individual modality?_ Table 1 and Table 2 summarize the results of our evaluation in simulation and the real world, respectively. In both cases, we observe a significant improvement (\(+\mathbf{10.3}\%\), \(+\mathbf{14.3}\%\) in simulation and real-world respectively) from using our unified policy Mutex compared to modality-specific models, indicating that the cross-modal learning procedure from Mutex is able to leverage more information from other modalities.
Additionally, we analyze the errors in our real-world evaluation and find that representations learned using Mutex generalize better to new task specifications than unimodal representations. Specifically, for unseen task specifications, we examine failed trials that cannot be attributed to BC compounding error, i.e., trials where the policy correctly completed a semantically meaningful task in the environment, albeit not the intended one (e.g., picking and placing another object). We find that in the unimodal baselines, \(\mathbf{85}\) out of \(\mathbf{240}\) trials (\(35.4\%\)) fail because the specified task is misunderstood, whereas this reduces to \(\mathbf{40}\) out of \(\mathbf{240}\) (\(16.7\%\)) with Mutex.
_2) What is the importance of the two stages of the_ Mutex _training procedure? Is it important to perform the stages consecutively?_ Table 1 includes a comparison of the results obtained with the two-staged training procedure consecutively (Mutex) compared to training with both stages simultaneously (joint training), training without the first stage (no masked modeling) or without the second stage (no cross-modal matching). We observe the largest drop in performance comes from training without masked modeling, indicating that this step is critical to learning cross-modal
| Method | Text Goal | Text Instructions | Image Goal | Video Demonstration | Speech Goal | Speech Instructions |
| --- | --- | --- | --- | --- | --- | --- |
| Modality-Specific | 41.7 ± 8.0 | 39.9 ± 2.8 | 58.7 ± 4.9 | 62.0 ± 5.2 | 22.3 ± 1.2 | 28.4 ± 5.5 |
| Mutex (joint training) | 39.2 ± 9.1 | 38.3 ± 7.4 | 48.6 ± 14.8 | 50.7 ± 16.0 | 32.1 ± 9.9 | 38.2 ± 13.6 |
| Mutex (no masked modeling) | 34.8 ± 6.0 | 38.7 ± 6.5 | 43.8 ± 6.0 | 46.0 ± 8.0 | 24.6 ± 4.2 | 29.1 ± 6.1 |
| Mutex (no cross-modal matching) | 43.5 ± 8.9 | 39.4 ± 2.2 | 60.1 ± 6.4 | **63.2 ± 6.3** | 36.7 ± 4.3 | **46.8 ± 5.5** |
| Mutex | **50.1 ± 7.8** | **53.0 ± 2.2** | **61.6 ± 6.4** | **63.2 ± 6.3** | **40.9 ± 8.1** | 46.0 ± 5.2 |

Table 1: Success Rate on the Multimodal Task Specification Dataset in Simulation.
| Method | Text Goal | Text Instructions | Image Goal | Video Demonstration | Speech Goal | Speech Instructions |
| --- | --- | --- | --- | --- | --- | --- |
| Modality-Specific | 52 | 48 | 42 | 52 | 32 | 46 |
| Mutex | **64** | **58** | **62** | **64** | **50** | **60** |

Table 2: Success Rate on the Multimodal Task Specification Dataset in Real-World.
information from different task specifications. Cross-modal matching provides a boost for all modalities except video demonstrations (the modality we match to), with a small drop for speech instructions.
_3) Does the performance increase significantly when tasks are specified with multiple modalities?_ One of the advantages of Mutex is that it can execute tasks with a single specification in any of the learned modalities or with multiple specifications in several of the modalities. We evaluate combinations of _Text Goal + Speech Goal_, _Text Goal + Image Goal_, and _Speech Instructions + Video Demonstration_ and obtain success rates of \(50.1\), \(59.2\), and \(59.6\), respectively. These values are close to the performance obtained using a single specification in the better of the two modalities, indicating that the additional modality does not provide much extra information. We hypothesize this is because all possible cross-modal information has already been learned by Mutex. Interestingly, when using specifications in _all modalities_, the success rate is \(60.1\), lower than when using _Image Goal_ or _Video Demonstration_ alone, possibly due to the increased complexity of interpreting multiple task specifications.
_4) Are the task specification representations learned by Mutex better than state-of-the-art task-specification models?_ To further evaluate the efficacy of the Mutex representations, we compare it with other task specification models in Table 3. Although the unimodal models, T5 and R3M, achieve better results than CLIP (Table 1) in Text Instructions (\(+4.1\%\)) and Image Goals (\(+2\%\)), respectively, Mutex consistently outperforms these models across all modalities. The consistent improvement across modalities demonstrates the value of leveraging multiple modalities during training. Moreover, Mutex significantly outperforms VIMA, a recent method that employs both text goals and object images for task specification, highlighting that Mutex is not only more performant than its unimodal counterparts but also can effectively use multiple modalities during inference.
## 5 Conclusion, Limitations, and Future Work
We demonstrate with comprehensive experiments in simulation and the real world that multi-task learning, when trained on task specifications across multiple modalities, produces a more robust and versatile policy in each modality. We are highly encouraged by the empirical results and the potential of Mutex for designing a more versatile multimodal human-robot communication interface.
We aim to improve Mutex in future work by addressing several limitations. Mutex assumes paired access to all the task specification modalities, which may be difficult to obtain in a scalable fashion. Mutex's use of clean speech signals synthesized by Amazon Polly may not accurately represent real-world speech, which is often noisier and harder to understand. The video and image goals Mutex uses are provided from the same workspace as the task to be executed; a policy that can execute tasks specified by "in the wild" visual goals or demonstrations will invite additional challenges. We also plan to explore how to foster stronger generalization by training across diverse environments, which could open the door to the use of larger human video datasets. Lastly, Mutex uses vanilla behavior cloning to learn policies, which suffers from problems like covariate shift and compounding errors. To mitigate this limitation, incorporating interactive imitation learning and reinforcement learning techniques is an exciting direction for future research.
#### Acknowledgments
We thank Yifeng Zhu and Huihan Liu for real robot system infrastructure development. We thank Zhenyu Jiang, Jake Grigsby, and Hanwen Jiang for providing helpful feedback for this manuscript. We acknowledge the support of the National Science Foundation (1955523, 2145283), the Office of Naval Research (N00014-22-1-2204), UT Good Systems, and the Machine Learning Laboratory.
| Method | Text Goal | Text Instruction | Image Goal | Video Demonstration | Image Goal + Text Goal |
| --- | --- | --- | --- | --- | --- |
| T5 [62] | 40.0 | 44.0 | - | - | - |
| R3M [63] | - | - | 59.5 | 44.7 | - |
| VIMA [30] | - | - | - | - | 47.0 |
| Mutex | **50.1** | **53.0** | **61.6** | **63.2** | **59.2** |

Table 3: Success Rate on the Multimodal Task Specification Dataset in Simulation. |
2310.00236 | Half precision wave simulation | In recent years, half precision floating-point arithmetic has gained wide
support in hardware and software stack thanks to the advance of artificial
intelligence and machine learning applications. Operating at half precision can
significantly reduce the memory footprint comparing to operating at single or
double precision. For memory bound applications such as time domain wave
simulations, this is an attractive feature. However, the narrower width of the
half precision data format can lead to degradation of the solution quality due
to larger roundoff errors. In this work, we illustrate with carefully designed
numerical experiments the negative impact caused by the accumulation of
roundoff errors in wave simulations. Specifically, the energy-conserving
property of the wave equations is employed as a convenient diagnosis tool. The
corresponding remedy in the form of compensated sum is then provided, with its
efficacy demonstrated using numerical examples with both acoustic and elastic
wave equations on hardware that support half precision arithmetic natively. | Longfei Gao, Kevin Harms | 2023-09-30T03:06:13Z | http://arxiv.org/abs/2310.00236v2 | # Compensated sum and delayed update for time dependent wave simulations at half precision
###### Abstract
On modern hardware, the speed of memory operation is often the limiting factor for execution time for many scientific applications, particularly for those related to PDE discretizations. This motivates us to explore the possibility of operating at half precision to reduce memory footprint and hence utilize the memory bandwidth more effectively. Specifically, we study the viability of half precision simulations for time dependent wave equations in this work. Potential pitfalls when naively switching to half precision in these simulations are illustrated. We then demonstrate that replacing the standard floating point sum with the compensated sum for solution updates can significantly improve the quality of the simulation results.
## 1 Introduction
In the early days of digital computing, there was much activity on mitigating round-off errors in floating point arithmetic operations, summation in particular (see, e.g., [1, 2, 3, 4, 5, 6]), largely motivated by the lack of universal hardware support for high precision arithmetic operations. Later interest was motivated by the insufficient accuracy of double precision floating point operations for certain applications (see [7] for a collection of scientific applications that benefit from arithmetic operations at higher precision and [8] for a particular use case in the context of mesh triangulation). These algorithms have received a newer round of interest thanks to the advancement of hardware accelerators (see, e.g., [9, 10]).
The gap between processor speed and memory speed has become wider and wider since the 80s. On modern hardware, the limiting factor for execution time is often memory operations for a large body of scientific applications, particularly for those involving simulations of partial differential equations (PDEs). In such a scenario, reducing memory footprint is critical for achieving efficient simulations. One simple approach to reducing memory footprint is lowering the operating precision, hence reducing the storage and movement requirement per datum.
For the particular application considered here, i.e., time dependent wave simulations, replacing double precision (64 bit) with single precision (32 bit) is common practice and, empirically, often gives satisfactory simulation results without the need of additional correction procedures (see [11, p.14] and [12, 13] for some evidence). On the other hand, we will demonstrate with numerical experiments that lowering to half precision (16 bit) without correction will lead to unsatisfactory simulation results.
We will further illustrate that with a simple fix using a technique often referred to as the Kahan summation, one can restore the simulation results to a satisfactory level. This idea has already been proposed in Gill's 1950 work [14], years before the widespread availability of computing machinery capable of floating point arithmetic. It has also been examined in [15] in the context of simulations of ordinary differential equations (ODEs). In this work, we examine its applicability in the context of time dependent PDE simulations at half precision.
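As a concrete illustration of the idea, the following NumPy sketch contrasts a plain half precision accumulation with a compensated (Kahan) accumulation; the field and increments are synthetic stand-ins for the wave-field updates discussed in later sections:

```python
import numpy as np

# Minimal sketch: repeated solution updates at half precision, with and without compensation.
rng = np.random.default_rng(0)
n, steps = 1000, 10000
u_naive = np.zeros(n, dtype=np.float16)          # plain fp16 accumulation
u_comp = np.zeros(n, dtype=np.float16)           # compensated fp16 accumulation
c = np.zeros(n, dtype=np.float16)                # running compensation (lost low-order bits)
u_ref = np.zeros(n, dtype=np.float64)            # double precision reference

for _ in range(steps):
    du = rng.uniform(-1e-3, 1e-3, n)             # stand-in for the per-step update
    du16 = du.astype(np.float16)

    u_ref += du
    u_naive += du16

    y = du16 - c                                  # Kahan sum: re-inject previously lost bits
    t = u_comp + y
    c = (t - u_comp) - y                          # bits lost in this addition
    u_comp = t

print("naive fp16 max error:      ", np.abs(u_naive.astype(np.float64) - u_ref).max())
print("compensated fp16 max error:", np.abs(u_comp.astype(np.float64) - u_ref).max())
```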
The remainder of this work is organized as follows. In section 2, we briefly outline the underlying physical problem and its numerical discretization used in our numerical experiments. In section 3, we describe the technique of compensated summation, from which we derive the improvements in our half precision simulations. In section 4, we discuss the benefits of half precision simulation on modern hardware, using the problem described in section 2 as a concrete example. Numerical experiments are shown in section 5 to illustrate the impact of compensated sum in half precision simulations. We make a few remarks in section 6 and finally, draw our conclusions in section 7. |
2309.06917 | Continual Learning with Dirichlet Generative-based Rehearsal | Recent advancements in data-driven task-oriented dialogue systems (ToDs)
struggle with incremental learning due to computational constraints and
time-consuming issues. Continual Learning (CL) attempts to solve this by
avoiding intensive pre-training, but it faces the problem of catastrophic
forgetting (CF). While generative-based rehearsal CL methods have made
significant strides, generating pseudo samples that accurately reflect the
underlying task-specific distribution is still a challenge. In this paper, we
present Dirichlet Continual Learning (DCL), a novel generative-based rehearsal
strategy for CL. Unlike the traditionally used Gaussian latent variable in the
Conditional Variational Autoencoder (CVAE), DCL leverages the flexibility and
versatility of the Dirichlet distribution to model the latent prior variable.
This enables it to efficiently capture sentence-level features of previous
tasks and effectively guide the generation of pseudo samples. In addition, we
introduce Jensen-Shannon Knowledge Distillation (JSKD), a robust logit-based
knowledge distillation method that enhances knowledge transfer during pseudo
sample generation. Our experiments confirm the efficacy of our approach in both
intent detection and slot-filling tasks, outperforming state-of-the-art
methods. | Min Zeng, Wei Xue, Qifeng Liu, Yike Guo | 2023-09-13T12:30:03Z | http://arxiv.org/abs/2309.06917v1 | # Continual Learning with Dirichlet Generative-based Rehearsal
###### Abstract
Recent advancements in data-driven task-oriented dialogue systems (ToDs) struggle with incremental learning due to computational constraints and time-consuming issues. Continual Learning (CL) attempts to solve this by avoiding intensive pre-training, but it faces the problem of catastrophic forgetting (CF). While generative-based rehearsal CL methods have made significant strides, generating pseudo samples that accurately reflect the underlying task-specific distribution is still a challenge. In this paper, we present Dirichlet Continual Learning (DCL), a novel generative-based rehearsal strategy for CL. Unlike the traditionally used Gaussian latent variable in the Conditional Variational Autoencoder (CVAE), DCL leverages the flexibility and versatility of the Dirichlet distribution to model the latent prior variable. This enables it to efficiently capture sentence-level features of previous tasks and effectively guide the generation of pseudo samples. In addition, we introduce Jensen-Shannon Knowledge Distillation (JSKD), a robust logit-based knowledge distillation method that enhances knowledge transfer during pseudo sample generation. Our experiments confirm the efficacy of our approach in both intent detection and slot-filling tasks, outperforming state-of-the-art methods.
## 1 Introduction
Large Language Models (LLMs) excel in many natural language processing (NLP) tasks, but they require significant resources, time, and data to train from scratch. Additionally, retraining them for each new task is often not feasible. To address these issues, continual learning (CL) is introduced. CL enables LLMs to learn new information from sequential tasks or datasets without losing their original performance. The process of CL is depicted in Figure 1. Although CL performs better than direct fine-tuning, experiments consistently reveal the unavoidable problem of catastrophic forgetting (CF) McCloskey and Cohen (1989). CF occurs when the performance on prior tasks deteriorates upon learning a new one, primarily due to shifts in data distribution between current and previous tasks.
Traditionally, CF in CL can be mitigated through three strategies: _regularization_, _architectural_, and _rehearsal_. _Regularization_ methods Kirkpatrick et al. (2017); Zenke et al. (2017); Aljundi et al. (2018) minimally update important parameters from previous tasks to retain performance, but the accumulating regularizers can over-constrain network parameters, hindering new task learning. _Architectural_ methods Madotto et al. (2021); Zhang et al. (2022) modify the network structure for better task-specific feature extraction. Nevertheless, their individual task-focused approach may neglect knowledge transfer between old and new tasks. _Rehearsal_ methods Lopez-Paz and Ranzato (2017); Sun et al. (2019); Mi et al. (2020) utilize episodic memory for task recall, with "store-based rehearsal" using stored real samples and "generative-based rehearsal" generating pseudo samples. The latter, being more memory-efficient, has received greater attention.
Generative-based rehearsal, including Prompt Conditioned VAE (CVAE) Zhao et al. (2017) for Lifelong Learning (PCLL) Zhao et al. (2022), has achieved state-of-the-art (SOTA) performance. Effective rehearsal hinges on the generative model's
Figure 1: In this example of Continual Learning, the LM first trains on the _banking_ dataset, resulting in parameter \(\theta_{1}\). The LM then trains on _hwu_, followed by _snips_, and so on. The parameters are updated sequentially.
ability to closely mimic real samples from previous tasks. PCLL uses a symmetric Gaussian distribution to model discrete latent variables, often inaccurately capturing task-specific distributions. Additionally, VAE's Kullback-Leibler (KL) vanishing problems Fu et al. (2019) lead to generating generic, similar samples.
In this paper, we target solving the CF problem of CL caused by distribution shifts in LLMs, aiming to preserve the domain-specific knowledge from streaming pre-training corpus distributions Chen et al. (2023). Our method builds on PCLL, L2KD Chuang et al. (2020), and Prompt Tuning Zhu et al. (2022), demonstrating that generative-based rehearsal mitigates CF without extra computational costs, while knowledge distillation and prompting enhance performance. We propose Dirichlet Continual Learning (DCL), a new generative-based rehearsal method that combines task distribution modeling and knowledge distillation. Inspired by Latent Dirichlet Allocation (LDA) in topic modeling Blei et al. (2003), we treat NLP tasks as topics and employ a Dirichlet distribution-based CVAE for generating pseudo samples. Additionally, we introduce a logit-based knowledge distillation Hinton et al. (2015) method called Jensen-Shannon Knowledge Distillation (JSKD) under the CL knowledge distribution framework. We evaluate our method in ToDs, and the results consistently show its superiority over the baselines. To summarize, our main contributions are:
* We propose a Dirichlet distribution-based method to model the latent variable in CVAE, which achieves better rehearsal for continual learning of ToDs and mitigates catastrophic forgetting without access to the old data.
* We develop Jensen-Shannon Knowledge Distillation (JSKD), a new logit-based knowledge distillation strategy for knowledge transfer between teacher and student models.
* Experimental results in ToDs demonstrate that DCL improves baselines by a large margin.
## 2 Related Works
### Continual Learning
Continual learning involves three categories: _regularization_, _architectural_, and _rehearsal_.
_Regularization_ methods, unlike L2 normalization, which assigns the same weight to all model parameters, reinforce earlier knowledge by constraining crucial parameters through an added regularization term. Elastic Weight Consolidation (EWC) Kirkpatrick et al. (2017) identifies important parameters and avoids updating them, preserving previous task performance. This strategy is also seen in works like ARPER Mi et al. (2020), Progress & Compress Schwarz et al. (2018), MAS Aljundi et al. (2018), and LwM Dhar et al. (2019).
_Architectural_ approaches modify the network structure to reduce CF by adding task-specific parameters to the base models to effectively model task-specific features. Typical works include Progressive Neural Network Rusu et al. (2016), Pathnet Fernando et al. (2017), AdapterCL Madotto et al. (2021), Piggyback GAN Zhai et al. (2020), and Semi-Supervised Lifelong Learning (SSLL) Zhao et al. (2022). For example, AdapterCL parameterizes each task using residual adapters.
_Rehearsal_ methods, which maintain performance by using samples from previous tasks, are divided into store-based and generative-based types. Store-based methods like ICaRL Rebuffi et al. (2017) and Gradient Episodic Memory (GEM) Lopez-Paz and Ranzato (2017) use stored samples. ICaRL applies a herding-based step to choose representative samples, while GEM uses memory to avoid forgetting and encourage positive backward transfer. Generative-based methods like ReMix Mi et al. (2020), the method by Shin et al. (2017), and Prompt Tuning Zhu et al. (2022) create pseudo samples. ReMix and Shin et al. (2017) create samples with Mixup and a GAN, respectively, while Prompt Tuning uses task-specific prompts. PCLL Zhao et al. (2022), similarly, uses a CVAE to create samples of past tasks.
### Task-Oriented Dialogue Modelling
Dialogue systems are split into Task-Oriented Dialogue Systems (ToDs Williams and Young (2007); Wen et al. (2017) and Chit-Chat Dialogue Systems (CcDs) Shang et al. (2015); Serban et al. (2016). ToDs perform specific tasks (like hotel booking), while CcDs offer non-targeted dialogue for psychological support. ToDs modules include Speech Recognition (SR), Natural Language Understanding (NLU), Dialogue Management (DM), and Natural Language Generation (NLG). NLU, a key module, interprets utterance knowledge and includes domain identification, intent detection, and slot filling. For a hotel booking task, domain identification identifies the topic (i.e., the hotel), intent detection
identifies the booking request by recognizing the user's intention in utterance and slot filling returns hotel information like name and address. The DM module updates the global dialogue state using Dialogue State Tracking (DST) and applies Dialogue Policy to determine system actions. NLG (Press et al., 2017) then generates dialogue responses.
### Latent Dirichlet Allocation
Latent Dirichlet Allocation (LDA) (Blei et al., 2003), a popular topic model, uses the Dirichlet distribution to model topic and word distributions in documents. It serves as the conjugate prior to the multinomial distribution. LDA-based document models for ad-hoc retrieval were proposed in (Wei and Croft, 2006). An online variational bayes (VB) algorithm for LDA was developed by Hoffman et al. (2010), and Foulds et al. (2013) propose a stochastic algorithm for collapsed VB inference in LDA. The Embedded Topic Model (ETM) (Dieng et al., 2020) combines LDA and word embeddings to identify interpretable topics with large vocabularies, including rare and stop words. In addition, Li et al. (2020) introduce a Dirichlet graph VAE for graph generation and clustering.
## 3 Methodology
### Task Definition
In this paper, we focus on the NLU of ToDs which learns the feature and knowledge from the utterance and generally includes domain identification, intent detection, and slot filling. Here following the common practice (Zhao et al., 2022; Madotto et al., 2021; Mi et al., 2020; Zhu et al., 2022) and to facilitate comparison, we mainly consider intent detection and slot filling tasks. For learning the tasks of ToDs in the CL manner, we assume that there is a sequence of tasks \(T=\{T_{1},\cdots,T_{N}\}\), and an LM model which is expected to solve the tasks by gradually training on the samples from the sequence of tasks. Examples of intent detection and slot filling have been explained in Section 2.2.
### Overview
In CL, the quality of exemplars is critical for preserving previous task performance, as emphasized in (Mi et al., 2020). Thus, generating representative and diverse pseudo utterances is important. Among commonly used generative models, VAE and GAN are prominent. VAE, in particular, is known for generating meaningful, diverse dialogues (Serban et al., 2017). Therefore, we propose a CVAE-based DCL that utilizes latent variables.
DCL, shown in Figure 2, has two main modules: pseudo-rehearsal and Language Model (LM) training. The pseudo-rehearsal module employs CVAE to generate pseudo samples from previous tasks. Then, the LM is updated using both these samples and those from the current task, enabling CL. The total training loss of DCL combines the CVAE loss for pseudo-rehearsal and the LM loss for updating LM parameters.
The structure of the CVAE and LM models is outlined as follows. In CVAE, both encoder and decoder employ GPT-2 (Radford et al., 2019) with distinct parameters to encode information and generate pseudo samples for tasks. In a continual learning context, the LM is expected to generate samples for task sequences, making the decoder of the CVAE also function as the LM as depicted in the LM training module of Figure 2. Consequently, the same model updating and cross-task knowledge distillation processes are applied to both the CVAE and LM models.
The key of DCL is to generate high-quality pseudo samples. In the proposed DCL model, we replace the Gaussian latent with a Dirichlet latent and employ rejection sampling (Jankowiak and Obermeyer, 2018) to reparametrize the Dirichlet latent variable.
### Dirichlet-guided Pseudo-rehearsal Module
This module aims to generate the pseudo samples for rehearsal based on the task ID. Exploiting the modeling flexibility of the Dirichlet distribution, we treat different NLP tasks as topics and propose a Dirichlet-guided pseudo-rehearsal module.
For the task \(T_{n}\), we expect to generate \(y_{i}\) given the input utterance \(x_{i}\). To achieve task-dependent generation, a specific prompt \(Pr_{n}\) for \(T_{n}\) is first defined, and then concatenated to the input utterance, yielding the augmented input \(\tilde{x}_{i,n}=Pr_{n}\oplus x_{i}\). Further, the CVAE can be utilized to generate the pseudo samples for \(T_{n}\) based on \(\tilde{x}_{i,n}\)(Zhao et al., 2017). The key idea of CVAE is to reconstruct the input \(x\) through the latent variable \(z\), which is normally modeled through the Gaussian distribution.
The CVAE is trained to maximize the conditional log-likelihood \(\log p_{\theta}(x|c)\), where \(\theta\) denotes the model parameters. However, since \(\log p_{\theta}(x|c)\) is intractable (Kingma and Welling, 2013), the evidence lower bound (ELBO) \(\mathcal{L}(\theta,\phi;x,c)\) is used for tractable optimization:
\[\log p_{\theta}(x|c)\geq\mathcal{L}(\theta,\phi;x,c)=-\lambda\,KL(q_{\phi}(z|x,c)\,||\,p_{\theta}(z|c))+\mathbb{E}_{q_{\phi}(z|x,c)}[\log p_{\theta}(x|z,c)], \tag{1}\]
where \(p_{\theta}(z|c)\) is the prior distribution of \(z\), \(q_{\phi}(z|x,c)\) approximates the intractable true posterior distribution, \(c\) defines the task ID, and \(\lambda\) is the dynamic KL weight to mitigate the KL-vanishing, as proposed by Bowman et al. (2016).
However, as illustrated in Shen et al. (2018); Zeng et al. (2019), although such a weighting scheme can be used, KL-vanishing cannot be fundamentally resolved. The main reason is that a symmetric Gaussian over a continuous space is not flexible enough to express the latent \(z\) originating from a discrete space. Here, we introduce the Dirichlet distribution, which provides a more flexible structure to approximate the prior distribution of \(z\). The versatile forms of the Dirichlet distribution, which can be concave, convex, symmetric, or asymmetric, make it an appealing choice for our model.
The CVAE loss denoted as \(\mathcal{L}_{\rm CVAE}\) is the negative of ELBO. Following (1), we have
\[\mathcal{L}_{\rm CVAE}=\mathcal{L}_{\rm KL}^{\prime}+\mathcal{L}_{\rm Rec}, \tag{2}\]
and \(\mathcal{L}_{\rm KL}^{\prime}\) can be expressed as follows after derivation Zeng et al. (2019):
\[KL(q_{\phi}(z|x,c)\,||\,p_{\theta}(z|c))=\log\Gamma\Big(\sum_{k=1}^{K}\alpha_{k}\Big)-\sum_{k=1}^{K}\log\Gamma(\alpha_{k})-\log\Gamma\Big(\sum_{k=1}^{K}\beta_{k}\Big)+\sum_{k=1}^{K}\log\Gamma(\beta_{k})+\sum_{k=1}^{K}(\alpha_{k}-\beta_{k})\Big(\psi(\alpha_{k})-\psi\Big(\sum_{k=1}^{K}\alpha_{k}\Big)\Big), \tag{3}\]
where \(\alpha\) and \(\beta\) represent the parameters of the Dirichlet distributions \(q_{\phi}(z|x,c)\) and \(p_{\theta}(z|c)\), respectively, \(K\) denotes the dimension of \(z\), and \(\psi\) is the Digamma function.
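For illustration, the closed-form KL term in Eq. (3) can be computed directly, as in the following PyTorch sketch (the parameter values are placeholders), and cross-checked against the built-in Dirichlet KL:

```python
import torch
from torch.distributions import Dirichlet, kl_divergence

# Sketch of Eq. (3): KL between Dir(alpha) (posterior) and Dir(beta) (prior).
def dirichlet_kl(alpha: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    a0, b0 = alpha.sum(-1), beta.sum(-1)
    return (torch.lgamma(a0) - torch.lgamma(alpha).sum(-1)
            - torch.lgamma(b0) + torch.lgamma(beta).sum(-1)
            + ((alpha - beta) * (torch.digamma(alpha) - torch.digamma(a0).unsqueeze(-1))).sum(-1))

alpha = torch.tensor([[2.0, 3.0, 1.5]])      # illustrative posterior parameters from the encoder
beta = torch.ones(1, 3)                      # e.g., a flat Dirichlet prior
kl = dirichlet_kl(alpha, beta)

# Cross-check against PyTorch's registered Dirichlet-Dirichlet KL.
assert torch.allclose(kl, kl_divergence(Dirichlet(alpha), Dirichlet(beta)), atol=1e-5)
```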
### LM Training Module
The LM module uses GPT-2 for training. Taking task \(T_{n}\) as an example, the pseudo-rehearsal module first generates pseudo samples of the previous tasks \(T_{1},\cdots,T_{n-1}\); we then combine the generated pseudo samples with the samples of the current task \(T_{n}\) to update the model. Hence, the training dataset for task \(T_{n}\) becomes \(\mathcal{D}_{cup}=\mathcal{D}_{curr}\cup\mathcal{D}_{pseu}\). The training loss is defined as:
\[\mathcal{L}_{\rm LM}(\theta)=-\sum_{(x_{i},y_{i})\in\mathcal{D}_{cup}}\Big[\log p_{\theta}(x_{i},y_{i})+\log p_{\theta}(y_{i}|x_{i})\Big]. \tag{4}\]
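As an illustration of this update, a minimal sketch with a Hugging Face GPT-2 model is given below; the prompt template and example utterances are placeholders rather than the exact formats used in our experiments:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Sketch of the LM update on D_cup = D_curr ∪ D_pseu (Eq. 4); texts are illustrative.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

d_curr = [("transfer 100 dollars to my savings", "transfer")]   # current-task samples
d_pseu = [("what is the mpg of this car", "mpg")]               # pseudo samples from the CVAE

model.train()
for utterance, label in d_curr + d_pseu:
    text = f"intent detection: {utterance} => {label}"
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss                          # causal LM negative log-likelihood
    loss.backward()
```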
### Jensen-Shannon Knowledge Distillation
We further propose Jensen-Shannon Knowledge Distillation (JSKD) to help the model remember previous tasks. Many CL studies Chuang et al. (2020); Mi et al. (2020); Zhao et al. (2022); Chen et al. (2023) use knowledge distillation to lessen CF. Like L2KD Chuang et al. (2020), as illustrated in Figure 3, DCL starts training on a new task with a teacher model, then transfers the knowledge to a student model. In our case, the teacher model is trained on the old task and the student model on the new task. This helps the CL model adapt to the new task while retaining the knowledge from previous tasks simultaneously. We now explain JSKD and compare it with the traditional KL-based knowledge distillation method.
Figure 2: Overview of the proposed DCL model. DCL consists of two modules: the pseudo-rehearsal module and the LM training module. The pipeline of DCL can be summarized as follows: (1) In task \(N\) training, the pseudo-rehearsal module uses a CVAE to produce pseudo samples from tasks \(1\) to \(N-1\). (2) These pseudo samples are then mixed with the dataset of task \(N\) and used in the current task training in the LM training module.
**Knowledge Distillation** Given a training sample \((x,y)\), we want to minimize the cross-entropy between the output distribution of the teacher and student models. The training objective is:
\[\mathcal{L}_{\mathrm{KD}}=\alpha\cdot\mathcal{L}_{\mathrm{KL}}(S,T)\cdot\tau^{2 }+(1-\alpha)\cdot\mathcal{L}_{\mathrm{CE}}(S,Y), \tag{5}\]
where \(T\) and \(S\) are teacher and student predictions, respectively. \(\tau\) is the temperature to soften the teacher's predictions, while \(\mathcal{L}_{\mathrm{CE}}(S,Y)\) quantifies the cross-entropy loss between student predictions and the ground truth labels \(Y\). \(\mathcal{L}_{\mathrm{KL}}\) implicitly prevents the student's model parameters from straying too far from the teacher's model parameters. Also, the first term denotes a soft target while the second term is the hard target. \(\alpha\in[0,1]\) balances the soft and hard target evaluations.
**JS Divergence vs KL Divergence** For distributions \(p\) and \(q\), the JS divergence (Lin, 1991) is defined by:
\[\mathcal{L}_{\mathrm{JS}}(p\parallel q)= \frac{1}{2}\mathcal{L}_{\mathrm{KL}}\left(p\parallel\frac{1}{2}( p+q)\right)\] \[+\frac{1}{2}\mathcal{L}_{\mathrm{KL}}\left(q\parallel\frac{1}{2 }(p+q)\right). \tag{6}\]
JS divergence symmetrically measures the similarity between two probability distributions, in contrast to the asymmetric KL divergence, with values ranging from 0 (identical distributions) to 1 (no shared support). JS divergence offers advantages over KL divergence: a) its symmetry ensures consistent values regardless of comparison order, making it well suited for measuring distribution similarity; b) JS is bounded in \([0,1]\), while KL divergence spans \([0,+\infty)\). These properties make JS divergence more suitable for knowledge distillation than KL, since the KL divergence becomes infinite when a sample has support in only one of the task distributions.
**Knowledge Distillation via JS Divergence** Motivated by the above discussions, we propose a JS divergence-based knowledge distillation (JSKD) to more accurately measure the distance between teacher and student models, enhancing model robustness. The JSKD loss is defined as:
\[\mathcal{L}_{\mathrm{KD}}=\alpha\cdot\mathcal{L}_{\mathrm{JS}}(S,T)\cdot\tau^{ 2}+(1-\alpha)\cdot\mathcal{L}_{\mathrm{CE}}(S,Y). \tag{7}\]
Specifically, we use the preceding task for the teacher model and the current task for the student model. As mentioned, DCL optimizes \(\mathcal{L}_{\mathrm{CVAE}}+\mathcal{L}_{\mathrm{LM}}\). For CVAE, the training loss \(\mathcal{L}_{\mathrm{CVAE}}=\mathcal{L}_{\mathrm{Rec}}+\mathcal{L}_{\mathrm{KL}} ^{\prime}\). Incorporating knowledge distillation, the \(\mathcal{L}_{\mathrm{Rec}}\) and \(\mathcal{L}_{\mathrm{LM}}\) for task \(T_{n}\) is defined as:
\[\mathcal{L}_{\mathrm{Rec}}=\alpha\mathcal{L}_{\mathrm{JS}}(l_{c},l_{c}^{*})\tau^{2}+(1-\alpha)\mathcal{L}_{\mathrm{CE}}(l_{c},Y),\] \[\mathcal{L}_{\mathrm{LM}}=\alpha\mathcal{L}_{\mathrm{JS}}(l_{l},l_{l}^{*})\tau^{2}+(1-\alpha)\mathcal{L}_{\mathrm{CE}}(l_{l},Y), \tag{8}\]
where \(l_{c}\) and \(l_{l}\) are the logits output of CVAE and LM of task \(T_{n}\), respectively. \(l_{c}^{*}\) and \(l_{l}^{*}\) represent the logits output of task \(T_{n-1}\), and \(Y\) signifies the ground truth. We emphasize that \(\mathcal{L}_{\mathrm{KL}}^{\prime}\) is the KL loss in (3) to evaluate the distance between the assumed Dirichlet data distribution and the real distribution. It is different from the \(\mathcal{L}_{\mathrm{KL}}\) for evaluating the distance between the student and teacher models in cross-task knowledge distillation.
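For illustration, the JS-based distillation term in Eqs. (7) and (8) can be written as in the following PyTorch sketch (the logits, \(\tau\), and \(\alpha\) values are placeholders):

```python
import torch
import torch.nn.functional as F

# Sketch of the JSKD objective; shapes and hyperparameters are illustrative.
def js_divergence(student_logits, teacher_logits, tau=2.0):
    p = F.softmax(student_logits / tau, dim=-1)
    q = F.softmax(teacher_logits / tau, dim=-1)
    m = 0.5 * (p + q)
    # F.kl_div(log_m, p) computes KL(p || m); likewise for q.
    return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                  + F.kl_div(m.log(), q, reduction="batchmean"))

def jskd_loss(student_logits, teacher_logits, targets, alpha=0.5, tau=2.0):
    soft = js_divergence(student_logits, teacher_logits, tau) * tau ** 2   # soft target
    hard = F.cross_entropy(student_logits, targets)                        # hard target
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(8, 50257)                 # current-task (student) logits over the vocabulary
teacher = torch.randn(8, 50257)                 # previous-task (teacher) logits
targets = torch.randint(0, 50257, (8,))
loss = jskd_loss(student, teacher, targets)
```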
## 4 Experiments
### Datasets
We evaluate our proposed model by using distinct datasets for two separate tasks: intent detection and slot filling. For intent detection, we employ the HWU (Liu et al., 2021), BANKING (Casanueva et al., 2020), CLINC (Larson et al., 2019), SNIPS (Coucke et al., 2018), ATIS (Hemphill et al., 1990), and TOP (Gupta et al., 2018) datasets. Consistent with previous works (Zhao et al., 2022), we divide the TOP dataset into three separate subsets: TOP-S1, TOP-S2, and TOP-S3. Each is treated as an individual task to expand the number of tasks for CL evaluation. For slot filling, the SNIPS, ATIS, DSTC (Rastogi et al., 2020), MIT-MOVIE 1, and MIT-RESTAURANT 1 datasets are used.
Footnote 1: [https://groups.csail.mit.edu/sls/downloads/](https://groups.csail.mit.edu/sls/downloads/)
For a fair comparison, these tasks are learned in six different orders, and the average performances across these orders are reported.
### Baselines
To demonstrate the effectiveness of our approach, we compare it with eleven robust baselines. We
Figure 3: Knowledge Distillation of DCL.
note that we **Fine-tune** the pre-trained language models GPT2 (Radford et al., 2019) on the stream of tasks without any strategy to prevent CF. We use multi-task (**Multi**) learning as the upper bound.
_Regularization:_**EWC**(Kirkpatrick et al., 2017) is a regularization method that mitigates catastrophic forgetting by constraining crucial parameters while enabling less significant ones adapted to new-task data. **MAS**(Aljundi et al., 2018) quantifies parameter importance in the network based on task memory contributions, aiding in mitigating CF.
_Rehearsal:_**LAMOL** is a rehearsal method that utilizes the language model as both learner and generator, facilitating the creation of pseudo samples for current training. Its variations, **LAMOL-g** and **LAMOL-t**, diverge in terms of the incorporation of global or task-specific tokens. **L2KD**(Chuang et al., 2020) is built upon LAMOL which is proposed to introduce knowledge distillation into LAMOL. **ER**(Rolnick et al., 2019) uses on-policy learning for quick adaptation to new tasks and off-policy learning with behavioral cloning to enhance the performance of past tasks. **PCLL**(Zhao et al., 2022) is a CVAE-based generative replay method that reaches the SOTA performance in this setting.
_Architectural:_**HAT**(Serra et al., 2018) proposes a task-based hard attention mechanism that preserves information from previous tasks without affecting the learning of the current task. **CTR**(Ke et al., 2021) inserts a continual learning plug-in module in two locations in BERT (Devlin et al., 2019) to achieve both CF mitigation and knowledge transfer. **AdapterCL**(Madotto et al., 2021) leverages task-specific residual adapters in a frozen GPT-2 backbone, thereby reducing parameter number and promoting efficient continual learning.
### Experimental Settings
All experiments are conducted on NVIDIA A100 GPU. Experimental settings are summarized as: (1) In intent detection, the batch size is 32 with a learning rate of 5e-5 and a pseudo sample rate of 0.2. The dimension of \(z\) is 128, and we use the Adam optimizer. We set the maximum context length as 256 and train it for 5 epochs. (2) The differences between slot filling with the above intent detection settings include a) the dimension of \(z\) is 512, b) the maximum context length is 50, and c) we train it for 10 epochs.
### Evaluation Metrics
**Average Joint Goal Accuracy (JGA):** Average JGA denotes the average accuracy on all tasks after the final task has been learned, which is defined as: \(\mathrm{Avg.JGA}=\frac{1}{T}\sum\limits_{i=1}^{T}R_{T,i}\), where \(R_{i,j}\) denotes the evaluation metric on task \(t_{j}\) after training on task \(t_{i}\). Since intent detection and slot filling can be viewed as classification and sequence labeling tasks, we usually adopt accuracy (ACC) and F1 score (F1) for intent detection and slot filling, respectively.
**Learning Curve Area (LCA):** We also use LCA (Chaudhry et al.), computed as the area under a learning curve, to indicate a model's CL performance over a sequence of tasks. LCA is defined as: \(\mathrm{LCA}=\int_{0}^{T}P(t)dt\), where \(P(t)\) is the average model performance at step \(t\) across all already-learnt tasks, and \(T\) is the total number of steps. Higher LCA values suggest efficient CL.
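Both metrics can be computed from a \(T\times T\) result matrix \(R\), where \(R_{i,j}\) is the score on task \(t_{j}\) after training on task \(t_{i}\); the following NumPy sketch uses illustrative values and approximates LCA with the trapezoidal rule over task indices rather than training steps:

```python
import numpy as np

# Illustrative T x T result matrix (values are placeholders).
R = np.array([[0.90, 0.00, 0.00],
              [0.85, 0.92, 0.00],
              [0.80, 0.88, 0.95]])
T = R.shape[0]

avg_jga = R[-1].mean()                          # average score over all tasks after the final task

# Learning curve: at step i, performance is averaged over the tasks seen so far.
curve = [R[i, :i + 1].mean() for i in range(T)]
lca = np.trapz(curve, dx=1.0) / (T - 1)         # normalized area under the curve

print(f"Avg. JGA = {avg_jga:.3f}, LCA = {lca:.3f}")
```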
## 5 Results and Analysis
### Overall Evaluation Results
Table 1 summarizes the performances of different methods, providing compelling evidence that our model outperforms all baselines. Superior results from our method suggest better pseudo sample generation and more effective knowledge transfer. In particular, compared to the SOTA model PCLL, DCL shows a significant increase of \(3.48\%\) in accuracy and \(4.22\%\) in LCA for intent detection. Moreover, we observe an improvement of \(2.89\%\) in F1 and \(6.07\%\) in LCA for slot filling. These results indicate that using Dirichlet-guided pseudo-rehearsal and JSKD can mitigate CF. Notably, the performance of our model is near the upper bound (multi-task learning), with only a small gap of \(2.52\%\) in accuracy for intent detection, and a \(3.43\%\) difference in F1 for slot filling.
To further understand these trends, we plot the learning curve of the average scores for DCL and PCLL in intent detection tasks in Figure 4. It is evident that our model alleviates the CF problem more effectively than the current SOTA in the continual learning process. The observed drop in accuracy is due to task switching. We further interpret the result from two perspectives:
* DCL significantly outperforms PCLL, suggesting that the Dirichlet latent variable approximates the data distribution more accurately. This leads to higher quality pseudo
samples and overall better performance.
* Even though DCL generates fewer pseudo samples compared to the real samples used in multi-task learning (our upper bound), it still delivers strong performance. This implies that the pseudo samples generated by DCL are diverse and representative enough to capture the information present in the real samples.
### Ablation Study
**Dirichlet or Gaussian-guided Rehearsal Module.** To understand the impact of using a Dirichlet-guided rehearsal module, we compared the performance of DCL and PCLL in intent detection and slot filling tasks, both using KL knowledge distillation. The distinction lies in the choice of either Dirichlet or Gaussian latent variables. Table 2 reveals that DCL with a Dirichlet-guided rehearsal module surpasses PCLL which uses a Gaussian-guided module, suggesting that the Dirichlet distribution is more effective at approximating the true data distribution.
**JS or KL Knowledge Distillation.** Next, we evaluate the impact of JS Knowledge Distillation. Table 3 shows the performance differences between DCL implementations that use either KL or JS knowledge distillation in the slot-filling task for various task learning orders. The results indicate that the model using JS Knowledge Distillation outperforms the one using KL knowledge distillation. This suggests that JS divergence is more effective for knowledge transfer.
**Number of Pseudo Samples.** We conducted further analysis to understand how the number of pseudo samples impacts the performance of the proposed approach, by testing various pseudo sample ratios in DCL. Table 4 presents the experimental results for pseudo sample ratios of 0.1, 0.2, 0.4, and 0.5. Even when fewer pseudo samples are added to the training, DCL with a ratio of 0.1 still outperforms PCLL with a ratio of 0.2. Moreover, we found that increasing the number of pseudo samples further improves performance. This is expected because more data samples contain more information, enhancing the model's capabilities.
**Quality of Pseudo Samples.** The results above establish that the quality of DCL's pseudo samples surpasses that of the baselines, as evidenced by the higher accuracy and LCA. Notably, our method outperforms PCLL, which differs only in its use of a Gaussian latent variable as opposed to our proposed Dirichlet variable. This suggests that Dirichlet latent variables are more effective than Gaussian ones in generating pseudo samples. We utilize **Dist-n**(Li et al., 2015) to assess the quality of pseudo samples. Dist-n measures the proportion of distinct n-grams in the generated pseudo samples. A higher Dist-n value, indicating greater pseudo-sample diversity, is generally preferred as it shows the samples are more varied. Given the limited number of pseudo samples included, the quality of our exemplars is crucial in preserving the performance of previous tasks. Our aim is to generate representative and diverse utterances, rather than settling for generic and closely similar ones.
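Dist-n itself is straightforward to compute; a small sketch with illustrative sentences is shown below:

```python
# Sketch of the Dist-n metric: distinct n-grams divided by all n-grams across samples.
def dist_n(samples, n):
    ngrams = []
    for s in samples:
        tokens = s.lower().split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

pseudo = ["what is the mpg of this car",
          "can you check my expiration month",
          "what is the mpg of my car"]          # illustrative generated pseudo utterances
print({f"Dist-{n}": round(dist_n(pseudo, n), 3) for n in (1, 2, 3, 4)})
```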
Table 5 summarizes the Dist-n results. Notably, DCL achieves higher distinct scores compared to other methods, indicating that DCL-generated pseudo samples exhibit greater diversity. This suggests that pseudo samples created using DCL more closely resemble real samples.
**Dimension of Latent Variable.** We also examine the impact of the latent variable \(z\)'s dimension, as displayed in Table 6. It shows that DCL using JSKD with a latent dimension of \(8\) outperforms DCL using KL with a dimension of \(128\). This suggests that the Dirichlet latent is superior to the Gaussian latent. Even with smaller dimensions and less information encoded, the model can generate high-quality pseudo samples, resulting in improved accuracy. However, it's worth noting that DCL using JSKD with a latent dimension of \(8\) doesn't perform as well as DCL with a dimension of \(128\). This can be attributed to the reduced information capacity of the smaller \(z\) dimension, which may lead to a decrease in performance.
### Case Study
Table 7 presents a comparison between the pseudo samples generated by both DCL and PCLL and real samples, and examples generated on CLINC from the intent detection task are shown. A pseudo sample includes the input utterance (middle column) as well as the intent (right column). It is obvious that PCLL struggles to generate the intent of specific sentences correctly. For instance, PCLL wrongly generates the intent _"mpg"_ (miles per gallon) for the utterance _"Do they have a lot of miles on this road"_, showing that PCLL fails to capture the actual meaning of the utterance. In addition, for the utterance _"Do you know how much my new credit card is worth?"_, PCLL also wrongly detects the intent as the _"expiration date"_ which is actually not relevant to the input.
## 6 Conclusions
In this paper, we propose DCL, a generative-based rehearsal method to alleviate catastrophic forgetting for continual learning in ToDs. A Dirichlet distribution-based CVAE is developed to exploit the flexibility of Dirichlet distribution to model the utterance-level characteristics, improving pseudo sample generation compared to the traditional Gaussian-based CVAE. We also proposed a more robust JS divergence-based knowledge distillation method to facilitate knowledge transfer between tasks. Comprehensive experiments show the superiority of the proposed method.
| Models | ACC \(\uparrow\) | LCA \(\uparrow\) |
| --- | --- | --- |
| DCL (with KL, z = 128) | 92.83 | 91.32 |
| DCL (with JS, z = 8) | 93.51 | 93.11 |
| DCL (with JS, z = 128) | 93.73 | 93.04 |

Table 6: Intent detection result of DCL (with JS) and DCL (with KL) for different latent variable dimensions.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Models** & **Input Utterance** & **Output y** \\ \hline Golden & 1. What's the fuel economy of my car? & 1. mpg \\ & 2. What is the expiration date on my card? & 2. expiration date \\ \hline PCLL & 1. Do they have a lot of miles on this road? & 1. mpg \\ & 2. Do you know how much my new credit card is worth? & 2. expiration date \\ \hline DCL & 1. What is the mpg of this car? & 1. mpg \\ & 2. Can you check my expiration month? & 2. expiration date \\ \hline \hline \end{tabular}
\end{table}
Table 7: Comparison of Generated Pseudo Samples by PCLL and DCL against the Ground Truth (Golden).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Models** & **Dist-1** & **Dist-2** & **Dist-3** & **Dist-4** \\ \hline LAMOL-g & 0.0602 & 0.2466 & 0.4489 & 0.6178 \\ LAMOL-t & 0.1758 & 0.4733 & 0.6837 & 0.8090 \\ PCLL & 0.2836 & 0.6566 & 0.8369 & 0.9221 \\
**DCL** & **0.3092** & **0.7019** & **0.8708** & **0.9389** \\ \hline Real Sample & 0.4000 & 0.7972 & 0.9255 & 0.9717 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Distinct scores for generated pseudo samples.
## 7 Limitations
The limitations of our work include:
* Our model can be further combined with architectural methods for better performance. For example, a Dirichlet latent variable can be introduced to capture the global characteristics of a specific task, and task-specific residual adapters in the LM training module can then be designed to capture each task's local features.
* We infer from Table 6 that many dimensions of \(z\) are inactive with respect to the final performance. The interpretable relation between these dimensions and the final performance is not investigated here; understanding it could potentially help to achieve controlled generation.
|
2307.16716 | Pole-skipping as order parameter to probe a quantum critical point | The holographic system described by Einstein-Maxwell-Chern-Simons dynamics in
the bulk of AdS exhibits a chiral magnetic effect and a quantum critical point.
Through numerical calculations, we find that the butterfly velocity can serve
as a new identifier for the quantum critical point in this system. We show that
the critical point is the point at which the butterfly velocity is equal to the
speed of light in the direction of the magnetic field, while in the opposite
direction the butterfly propagation vanishes. Furthermore, by studying the
pole-skipping points of the response function of the operator dual to the
tensor part of the metric perturbation in the bulk, we discover a set of order
parameters that distinguish the two states of the system near the quantum
critical point. Each of these order parameters is the sum of the absolute
values of the real parts of momentum at all pole-skipping points associated
with a particular frequency. This quantity vanishes in the disordered state
while taking a positive value in the ordered state. In addition, our results
confirm the idea that the chiral magnetic effect can manifest macroscopically
through quantum chaos. | Navid Abbasi, Karl Landsteiner | 2023-07-31T14:35:15Z | http://arxiv.org/abs/2307.16716v1 | # Pole-skipping as order parameter to probe a quantum critical point
###### Abstract
The holographic system described by Einstein-Maxwell-Chern-Simons dynamics in the bulk of AdS exhibits a chiral magnetic effect and a quantum critical point. Through numerical calculations, we find that the butterfly velocity can serve as a new identifier for the quantum critical point in this system. We show that the critical point is the point at which the butterfly velocity is equal to the speed of light in the direction of the magnetic field, while in the opposite direction the butterfly propagation vanishes. Furthermore, by studying the pole-skipping points of the response function of the operator dual to the tensor part of the metric perturbation in the bulk, we discover a set of order parameters that distinguish the two states of the system near the quantum critical point. Each of these order parameters is the sum of the absolute values of the real parts of momentum at all pole-skipping points associated with a particular frequency. This quantity vanishes in the disordered state while taking a positive value in the ordered state. In addition, our results confirm the idea that the chiral magnetic effect can manifest macroscopically through quantum chaos.
###### Contents
* 1 Introduction
* 2 Setup
* 2.1 Near horizon solution
* 2.2 Bulk solution
* 3 Pole-skipping in energy correlator and the butterfly velocity
* 3.1 Longitudinal
* 3.2 Transverse
* 3.3 Numerical results
* 4 Pole-skipping in the tensor channel
* 4.1 High temperature limit
* 4.2 Scanning the pole-skipping points at lower temperatures
* 4.3 \(\hat{T}\to 0\): Pole-skipping as the order parameter
* 5 Conclusion and outlook
* A How to find the upper-half plane pole-skipping point of the energy density correlator?
* B Relation to the chiral magnetic effect
* C Magnetized Chirally Charged RN
* C.1 Right-handed case: A single \(U(1)\) axial current
* C.2 A \(U(1)\) axial current together with a \(U(1)\) vector current
* D Near horizon data in the tensor channel
## 1 Introduction
In large N holographic systems, quantum chaos is quantified by the so-called _quantum chaos points_, namely \((\omega_{c},k_{c})\equiv(i\lambda,\frac{i\lambda}{v_{B}})\), where \(\lambda\) is the Lyapunov exponent and \(v_{B}\) is the velocity of butterfly propagation in the system. These two quantities are encoded in an out-of-time-order correlator (OTOC) [1, 2]. However, OTOC calculations are generally difficult; in holography one has to study the OTOC via the shock wave geometry [3]. In quantum field theory, there are only a few well-known analytical results; for example, in 2d CFT [4], in the SYK model [5], and at weak coupling [6].
Based on the fact that quantum chaos is related to energy dynamics in holographic systems, another approach has been introduced: _pole-skipping_. The effective field theory argument of [7] corroborates the interesting holographic result of [8] that the dispersion relation of the energy density correlator, i.e. the line of poles of the energy density response function \(G^{R}_{\mathcal{E}\mathcal{E}}(\omega,k)\), is not defined at the chaos point. Equivalently, at the chaos point the numerator and denominator of \(G^{R}_{\mathcal{E}\mathcal{E}}(\omega,k)\) vanish simultaneously; this is the _pole-skipping phenomenon_1. Since computing \(G^{R}_{\mathcal{E}\mathcal{E}}(\omega,k)\) is easier than computing the OTOC in holography, this gives us another way to find \(\lambda\) and \(v_{B}\)[11].
Footnote 1: Such points have first been observed for diffusive and shear channels in holographic models in [9, 10]. They appear at real momenta and are related to causality rather than chaos.
In this work we apply the above idea to a 5-dimensional Einstein-Maxwell theory with a Chern-Simons term, which is dual to a 4-dimensional gauge theory. The Chern-Simons term is the holographic dual of a 't Hooft anomaly, and the Chern-Simons coupling \(\kappa\) determines the strength of the anomaly. At the specific value \(\kappa=-2/\sqrt{3}\) this is the holographic dual of \(\mathcal{N}=4\) SYM theory. We study the system in the presence of a constant background magnetic field \(B\) and a uniform electric charge density \(\rho\)[12, 13, 14]. Butterfly velocities were first investigated for this system in [15] by analytically computing the pole-skipping point for small \(B/T^{2}\) and \(\rho/T\). The main result of [15] is that the anomaly splits the butterfly propagation velocity along the magnetic field. This is different from the anisotropy of the butterfly velocity induced by the magnetic field in a non-anomalous field theory. In the latter case, the butterfly speed depends on the angle between the magnetic field and the measurement axis 2. Therefore, the same butterfly velocity is found in both directions parallel to the axis of the magnetic field. However, ref. [15] shows that in the presence of the anomaly, the butterfly speed splits between the two directions along the magnetic field; the value in the direction of the magnetic field is larger than that in the opposite direction (see figure 1). It was noted in [15] that the latter is reminiscent of the chiral magnetic effect [16].
Footnote 2: More precisely, it depends on \(|\cos\theta|\), where \(\theta\) is the mentioned angle.
As discussed in ref. [12], the Einstein-Maxwell theory with Chern-Simons term exhibits another interesting feature; it has a quantum critical point. In order to investigate how much information about the quantum critical point is encoded in the pole-skipping point(s), in this work, we will extend the study of [15] to finite magnetic field and finite density. In the language of [15] we want to numerically study the bulk solution in terms of two parameters \(B/\mu^{2}\) and \(T/\mu\) (\(\mu\) is the chemical potential associated with \(\rho\)). The case of small \(B/\mu^{2}\) and large \(T/\mu\) corresponds to [15]. However, by performing high-precision numerical calculations, we will be able to study the pole-skipping phenomenon in the energy density correlator for arbitrary values of these two parameters. In particular, when \(T/\mu\) is small, we vary \(B/\mu^{2}\) to see how the butterfly velocities behave when the system
approaches the quantum critical point 3.
Footnote 3: In ref. [17], the butterfly effect has been proposed as the diagnostic of quantum phase transition in an anisotropic holographic model exhibiting metal-insulator transitions.
What we need to calculate are the pole-skipping points of the correlation function of the energy-momentum tensor, \(T^{\mu\nu}\). Such a correlator is related to a certain scalar perturbation in the bulk 4. However, a full analysis of such perturbations of the Einstein-Maxwell-Chern-Simons background is problematic [18]. Instead, we focus on the \(E_{vv}\) component of the Einstein equations, with \(v\) being the Eddington-Finkelstein time [11]. This gives us exclusively the upper-half plane pole-skipping points in the scalar channel 5, which is sufficient to find the butterfly velocity in the system.
Footnote 4: By scalar we mean spin zero under the \(SO(2)\) rotations which is defined as the following. Setting the momentum of the correlator in a specific direction, e.g., the third direction, \(T^{\mu\nu}\) components can then be classified according to the representation of the \(SO(2)\) group of rotations perpendicular to that Fourier momentum. In the bulk, we take the components of metric perturbation \(\delta g_{\mu\nu}\) under the same considerations.
Footnote 5: By upper-half plane we mean upper half of the “\(\mathop{\rm Im}\omega-\mathop{\rm Im}k\)” plane.
On the other hand, turning on the magnetic field \(\hat{B}\) in the bulk corresponds to perturbing the boundary theory by a relevant operator 6. When \(\hat{T}\to 0\), the boundary theory reduces in the IR to a \((1+1)\) dimensional system, where the operator dual to \(\hat{B}-\hat{B}_{c}\) has the scaling dimension \(\Delta=2\). In ref. [13], the critical magnetic field associated with this
Figure 1: Butterfly propagation after perturbing the system at point \(y\). (a) Isotropic butterfly propagation in an uncharged holographic plasma in the absence of a magnetic field. (b) Anisotropic butterfly propagation in an electrically charged system in the presence of an external magnetic field. (c) The butterfly propagation in the system studied in this paper. \(\kappa\) is proportional to the strength of the anomaly. When \(\kappa=0\), the two butterflies in opposite directions have the same speed (panel b). The anomaly, however, breaks this symmetry. The difference in the speeds of the two butterflies along each axis is a manifestation of the chiral magnetic effect (panel c).
relevant perturbation was found to be \(\hat{B}_{c}\approx 0.499\).
The first result of our calculations is that the magnitude of butterfly speeds for \(\hat{T}\to 0\) carries the same information about the phase transition in the system. In this limit, for large \(\hat{B}\), we find that the speed of the butterfly in the direction of the magnetic field is equal to the speed of light. For small \(\hat{B}\), the butterfly speed in the opposite direction turns out to be zero. We find that the critical point is determined by the magnetic field where the latter two occur simultaneously; it is actually \(\hat{B}_{c}\approx 0.499\).
To gain a deeper understanding of the relationship between pole-skipping and quantum phase transitions in the system, we then investigate the tensor part of the metric perturbation in the bulk. The phenomenon of pole-skipping in correlators for such operators has been extensively discussed in the literature (see [19] and also [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50] for an incomplete list of related works). However, none of the previous studies have investigated it near a quantum critical point.
Our results show that phase transitions can also be tracked in this channel; that is, this time pole-skipping introduces an order parameter too! We find that for \(\hat{T}\to 0\) and for large \(\hat{B}\), the spectrum of pole-skipping points consists of \(2n\) points at \(\omega=-n(i\,2\pi T)\), with pure imaginary frequencies. This result is known for many holographic systems [19]. Except for \(\hat{T}\gg 1\), the spectrum is asymmetric with respect to \(\mathop{\rm Im}k\) axis, indicating the chiral anomaly in this system [15].
However, our most surprising result arises at finite \(\hat{B}\) and low \(\hat{T}\). We find that the spectrum develops points with "complex" momentum. The "nonzero" real part of the pole-skipping points is reminiscent of an ordered state below the quantum critical point, identified by a nonzero order parameter. Therefore, we are tempted to consider an order parameter determined by the real part of the pole-skipping points. _However, as highlighted in ref. [13], quantum phase transitions in our system occur with no change in symmetry_. In other words, this is a quantum phase transition that preserves symmetry. Interestingly, it turns out that transitions between these states occur at \(\hat{B}_{c}\).
In summary, the structure of the pole-skipping points in the "\(\mathop{\rm Im}\omega-\mathop{\rm Re}k\)" plane suggests defining an order parameter that characterizes the state of the system: when all points are located at \(\mathop{\rm Re}k=0\) in this plane, the system behaves as a Fermi liquid [13]. On the other hand, a point-symmetric distribution about \(\mathop{\rm Re}k=0\) in this plane corresponds to a non-Fermi liquid.
In the remainder of this paper, we first present the gravity setup for the boundary system in § 2. In particular, we explain how to find the bulk solution numerically. In § 3 we compute the butterfly velocity in the system. The study of pole-skipping in the tensor channel is the subject of § 4. Finally, we conclude in § 5 by reviewing and discussing the results.
## 2 Setup
The bulk action is given by
\[S=\frac{1}{16\pi G_{5}}\int_{\cal M}d^{5}x\ \sqrt{-g}\left(R+\frac{12}{L^{2}}-F^{MN }F_{MN}\right)+S_{CS}+S_{bdy} \tag{2.1}\]
with the Chern-Simons action being as the following
\[S_{CS}=\frac{\kappa}{12\pi G_{5}}\int A\wedge F\wedge F=\ \frac{\kappa}{48\pi G_{5}} \int d^{5}x\sqrt{-g}\ \epsilon^{\rho\mu\nu\alpha\beta}A_{\rho}F_{\mu\nu}F_{ \alpha\beta} \tag{2.2}\]
and \(S_{bdy}\) is the boundary counter term. The equations of motion are given by:
\[\nabla_{\nu}F^{\nu\mu}+\frac{\kappa}{4}\epsilon^{\mu\nu\rho\alpha \beta}F_{\nu\rho}F_{\alpha\beta} = 0 \tag{2.3}\] \[R_{\mu\nu}+4g_{\mu\nu}+\frac{1}{3}F^{\alpha\beta}F_{\alpha\beta }\ g_{\mu\nu}+2F_{\mu\rho}F^{\rho}_{\ \nu} = 0 \tag{2.4}\]
from which one can find the following magnetized brane solution in the bulk
\[ds^{2}=\frac{dr^{2}}{f(r)}-f(r)dt^{2}+e^{2W_{T}(r)}(dx_{1}^{2}+dx_{2}^{2})+e^{ 2W_{L}(r)}(dx_{3}+C(r)dt)^{2} \tag{2.5}\]
\[F=E(r)dr\wedge dt+Bdx_{1}\wedge dx_{2}+P(r)dx_{3}\wedge dr \tag{2.6}\]
We follow [12] and numerically solve equations (2.3) and (2.4) to find the bulk fields \({\cal G}(r)=\{f(r),w_{T}(r),w_{L}(r),C(r),E(r),P(r)\}\).
### Near horizon solution
Let us start with the near horizon expansion of \({\cal G}\)
\[{\cal G}(r)=\,\sum_{n=0}{\cal G}^{(n)}\,(r-r_{h})^{n} \tag{2.7}\]
It should be noted that we can always exploit the background symmetry to set \(r_{h}=1\).
The boundary conditions for the whole bulk solution can be imposed on the leading-order coefficients \({\cal G}^{(n)}\); to this end we take the horizon data as7
Footnote 7: Note that \(f^{(1)}=1\) is equivalent to taking \(T=1/4\pi\).
\[f^{(0)}=0\,,\ f^{(1)}=1\,,\ w_{T}^{(0)}=w_{L}^{(0)}=0\,,\ E^{(0)}=q\,,\ C^{(0)}=0 \,,\ P^{(0)}=p \tag{2.8}\]
This corresponds to taking
\[\begin{split} ds_{H}^{2}=&\,dx_{1}^{2}+dx_{2}^{2}+dx_{3 }^{2}\\ F_{H}=&\,q\,dr\wedge dt+\,\epsilon_{ijk}b_{k}dx_{i} \wedge dx_{j}+p_{i}\,dx_{i}\wedge dr\end{split} \tag{2.9}\]
where \(\vec{b}=(0,0,b)\) and \(\vec{p}=(0,0,p)\). Using the horizon data, the Einstein and Maxwell equations can be solved perturbatively to find the higher order coefficients in (2.7). In particular, we find:
\[C^{(1)} = p+2q\,\kappa\,b\] \[w_{T}^{(1)} = \frac{2}{3}(6-2b^{2}-q^{2}) \tag{2.10}\] \[w_{L}^{(1)} = \frac{2}{3}\bigg{(}6+b^{2}-q^{2}-3\big{(}\kappa b+\frac{p}{2q} \big{)}^{2}\bigg{)} \tag{2.11}\]
Then for any set of values of the three parameters \(q\), \(b\), and \(p\), one can numerically integrate the bulk dynamic equations and read the corresponding boundary quantities.
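As a concrete illustration of this step, the following minimal Python sketch evaluates the first-order near-horizon expansion (2.7) with the data (2.8) and the coefficients (2.10)-(2.11) slightly off the horizon; the resulting values can serve as initial conditions for an outward radial integration. The remaining series coefficients (e.g. for \(E\) and \(P\)) must be obtained from the order-by-order solution of (2.3)-(2.4) and are not reproduced here; the function name is ours.

```python
import numpy as np

KAPPA = -2 / np.sqrt(3)  # supersymmetric value of the Chern-Simons coupling

def near_horizon_seed(q, b, p, eps=1e-6):
    """First-order near-horizon data, eqs. (2.8)-(2.11), at r = r_h + eps with r_h = 1."""
    f1 = 1.0                                   # fixes T = 1/(4*pi) in the horizon frame
    c1 = p + 2 * q * KAPPA * b                 # C^(1)
    wT1 = (2.0 / 3.0) * (6 - 2 * b**2 - q**2)  # w_T^(1)
    wL1 = (2.0 / 3.0) * (6 + b**2 - q**2 - 3 * (KAPPA * b + p / (2 * q))**2)  # w_L^(1)
    return {
        "f": f1 * eps,     # f^(0) = 0
        "w_T": wT1 * eps,  # w_T^(0) = 0
        "w_L": wL1 * eps,  # w_L^(0) = 0
        "C": c1 * eps,     # C^(0) = 0
        "E": q,            # E^(0) = q
        "P": p,            # P^(0) = p
    }

print(near_horizon_seed(q=1.0, b=0.3, p=0.1))
```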
### Bulk solution
The physical parameters on the boundary are temperature, chemical potential and the magnetic field. However, due to the scale invariance, only dimensionless combinations of quantities have physical meaning. We choose to work with normalized dimensionless magnetic field \(\hat{B}\) and temperature \(\hat{T}\) as
\[\hat{B}=\,\frac{b}{\rho^{2/3}}\,,\hskip 14.226378pt\hat{T}=\,\frac{T}{(b^{3}+ \rho^{2})^{1/6}} \tag{2.12}\]
Here \(\rho\) is the charge density defined as
\[\rho=\,\lim_{r\rightarrow+\infty}\frac{E(r)}{r^{2}} \tag{2.13}\]
In this setting [12], any specific bulk solution is characterized by the two quantities \(\hat{B}\) and \(\hat{T}\). These are the asymptotic data. What we will do is take a fixed value of \(\hat{B}\) and then find the numerical solution to the bulk equations for varying \(\hat{T}\). As mentioned earlier, any numerical solution itself is specified by the values of the three horizon parameters \(q\), \(b\), and \(p\). For generic values of these horizon parameters we find a solution with a non-vanishing value of the metric function \(C(r)\). We choose to impose the additional boundary condition \(\lim_{r\rightarrow\infty}C(r)=0\). This gives a co-dimension one slice of the horizon data. In other words, a specific bulk solution maps a set of \((q,b,p)\) on this co-dimension one slice into a unique set of \((\hat{B},\hat{T})\) for which the boundary metric is the standard Minkowski metric.
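Schematically, the map from horizon data to the asymptotic data \((\hat{B},\hat{T})\) and the tuning of \(p\) that enforces \(\lim_{r\to\infty}C(r)=0\) can be organized as in the sketch below; `solve_bulk` is a hypothetical routine standing in for the radial integration of (2.3)-(2.4), assumed to return the fields on a grid extending to large \(r\).

```python
import numpy as np
from scipy.optimize import brentq

def boundary_data(q, b, p, solve_bulk):
    """Asymptotic quantities of a numerical bulk solution with horizon data (q, b, p)."""
    sol = solve_bulk(q, b, p)          # hypothetical integrator; returns arrays on a radial grid
    r_max = sol["r"][-1]
    rho = sol["E"][-1] / r_max**2      # charge density, eq. (2.13)
    C_inf = sol["C"][-1]               # must vanish for a standard Minkowski boundary metric
    T = 1.0 / (4.0 * np.pi)            # Hawking temperature in the normalization f^(1) = 1
    B_hat = b / rho**(2.0 / 3.0)                 # eq. (2.12)
    T_hat = T / (b**3 + rho**2)**(1.0 / 6.0)     # eq. (2.12)
    return B_hat, T_hat, C_inf

def tune_p(q, b, solve_bulk, p_lo=-5.0, p_hi=5.0):
    """Select the co-dimension-one slice of horizon data on which C(r) vanishes at the boundary."""
    return brentq(lambda p: boundary_data(q, b, p, solve_bulk)[2], p_lo, p_hi)
```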
## 3 Pole-skipping in energy correlator and the butterfly velocity
In order to find the pole-skipping points of energy density Green's function, it is convenient to work in the ingoing Eddington-Finkelstein coordinates.8 It is easy to show that in these coordinates the electromagnetic field strength is given by (2.6), as before. However, metric transforms to the following form
Footnote 8: In these coordinates, the regularity of solutions in the future event horizon is automatically satisfied.
\[ds^{2}=-\tilde{f}(r)dv^{2}+2g(r)drdv+2\left(j(r)dv+\;s(r)dr\right)dx_{3}+e^{2W _{T}(r)}(dx_{1}^{2}+dx_{2}^{2})+e^{2W_{L}(r)}dx_{3}^{2} \tag{3.1}\]
where \(v\) is the Eddington-Finkelstein time coordinate and
\[\tilde{f}(r) = f(r)-e^{2W_{L}(r)}C(r)^{2}\] \[g(r) = \left(1-e^{2W_{L}(r)}\frac{C(r)^{2}}{f(r)}\right)^{1/2} \tag{3.2}\] \[j(r) = C(r)e^{2W_{L}(r)}\] \[s(r) = -\frac{C(r)e^{2W_{L}(r)}}{\sqrt{f(r)\left(f(r)-C(r)^{2}e^{2W_{L}( r)}\right)}}.\]
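For numerical work it can be convenient to evaluate these combinations directly from the static background data; a direct, pointwise transcription of (3.2) (function name ours) reads:

```python
import numpy as np

def ef_metric_functions(f, wL, C):
    """Eddington-Finkelstein metric functions of eq. (3.2) from the static background f, W_L, C."""
    e2wL = np.exp(2.0 * wL)
    f_tilde = f - e2wL * C**2
    g = np.sqrt(1.0 - e2wL * C**2 / f)
    j = C * e2wL
    s = -C * e2wL / np.sqrt(f * (f - C**2 * e2wL))
    return f_tilde, g, j, s
```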
Turning on \(\delta g_{vv}(r,v,x)=\delta g_{vv}(r)e^{-i\omega v+i\vec{k}\cdot\vec{x}}\) and the other perturbations that couple to it, we need to evaluate the \(vv\) component of (2.4) at \(r=r_{h}=1\) (see Appendix A for more details). We find
\[\begin{split}\bigg{[}k^{2}-i\vec{k}\cdot(\frac{\vec{p}}{q}+2 \kappa\vec{b})+\frac{i\omega}{2}\bigg{(}\big{(}\frac{\vec{p}}{q}+2\kappa\vec{ b}\big{)}^{2}+4\big{(}-6+q^{2}+b^{2}\big{)}\bigg{)}\bigg{]}\delta g_{vv}^{(0)} \\ +\bigg{(}\omega-\frac{i}{2}\bigg{)}\bigg{(}2k_{i}\delta g_{vx_{i} }^{(0)}+\omega\big{(}\delta g_{x_{1}x_{1}}^{(0)}+\delta g_{x_{2}x_{2}}^{(0)}+ \delta g_{x_{3}x_{3}}^{(0)}\big{)}\bigg{)}=\,0\end{split} \tag{3.3}\]
Two comments are in order
* We have presented the above equation in a fully \(SO(3)\) covariant form. Then this allows us to simply explore both longitudinal and transverse cases.
* At \(\omega_{p}=\frac{i}{2}\), the above equation becomes a decoupled equation for \(\delta g_{vv}^{(0)}\). This is in fact the same as the original calculation of [11] where the decoupling occurs at \(\omega=i2\pi T\). Here we are writing the equation in the horizon frame wherein the temperature was set to \(T=1/4\pi\).
### Longitudinal
When \(\vec{k}\parallel\vec{b}\), \(\vec{p}\,\) we find
\[\bigg{[}\bigg{(}k-i\big{(}\frac{p}{2q}+\kappa b\big{)}\bigg{)}^{2}+(6-b^{2}-q^{2}) \bigg{]}\delta g^{(0)}_{vv}=\,0 \tag{3.4}\]
Clearly, the above equation becomes ambiguous at \(k=k_{p}\), where \(k_{p}\) is the root of the expression in the square brackets. As a result, we find the butterfly velocity in the horizon frame as
\[v^{(H)}_{B}=\,\frac{\omega_{p}}{k_{p}}=\,\bigg{(}\frac{p}{q}+2\kappa b\pm 2 \sqrt{6-b^{2}-q^{2}}\bigg{)}^{-1} \tag{3.5}\]
To proceed further, let's comment on the homogeneity of (3.3) in spatial directions. This is actually a direct consequence of taking the horizon frame as specified by the metric (2.9). On the other hand, we are interested in working in the asymptotic frame given by
\[\begin{split} ds^{2}&\sim\frac{dr^{2}}{r^{2}}+r^{2 }(-d\tilde{t}^{2}+d\tilde{x}_{1}^{2}+d\tilde{x}_{2}^{2}+d\tilde{x}_{3}^{2})\\ F&=Edr\wedge d\tilde{t}+Bd\tilde{x}_{1}\wedge d \tilde{x}_{2}+Pd\tilde{x}_{3}\wedge dr\end{split} \tag{3.6}\]
This is actually found by rescaling the solution (2.5) as
\[\tilde{t}=t\,,\quad\tilde{x}_{1,2}=\sqrt{w_{T}}\;x_{1,2}\,,\quad\tilde{x}_{3}=\sqrt{w_{L}}\,x_{3} \tag{3.7}\]
where
\[w_{T,L}=\,\lim_{r\rightarrow+\infty}\frac{e^{2W_{T,L}(r)}}{r^{2}} \tag{3.8}\]
In terms of boundary frame coordinates, the horizon metric takes the following form
\[ds_{H}^{2}=\,\frac{1}{w_{T}}\big{(}d\tilde{x}_{1}^{2}+d\tilde{x}_{2}^{2}\big{)} +\frac{1}{w_{L}}d\tilde{x}_{3}^{2} \tag{3.9}\]
Therefore a perturbation of the form \(e^{-i\omega t+ikx_{3}}\) in the horizon frame, takes the form \(e^{-i\omega t+i\frac{k}{\sqrt{w_{L}}}\tilde{x}_{3}}\) in the boundary frame. As a result, the butterfly velocity in the boundary frame reads:
\[v^{L}_{B}=\,\frac{\omega}{k/\sqrt{w_{L}}}=\,\sqrt{w_{L}}\,v^{(H)}_{B}\quad \rightarrow\quad\boxed{v^{L}_{B\pm}=\,\frac{\sqrt{w_{L}}}{\frac{p}{q}+2\kappa b \pm 2\sqrt{6-b^{2}-q^{2}}}} \tag{3.10}\]
For super-symmetric theories the value of \(\kappa\) is known: \(\kappa=-2/\sqrt{3}\)[51, 52]. On the other hand, any given set of \((\hat{B},\hat{T})\) on the boundary is associated with a unique set of \((q,b,p)\)
near the horizon. The latter is sufficient to find the whole bulk solution and consequently to read off \(w_{L}\) via (3.8). Thus (3.10) can be evaluated once \((\hat{B},\hat{T})\) is determined.
The important point about (3.10) is that we find two different butterfly velocities along the magnetic field. This is exactly the finding of [15] for small magnetic field and small chemical potential. Thus, the splitting between the butterfly velocities is a general effect across the full range of parameters of the system.
### Transverse
When \(\vec{k}\perp\vec{b}\), \(\vec{p}\), (3.3) reduces to
\[\biggl{[}k^{2}-\biggl{(}\frac{p}{2q}+\kappa b\biggr{)}^{2}+(6-b^{2}-q^{2}) \biggr{]}\delta g^{(0)}_{vv}=\,0 \tag{3.11}\]
Finding the root of the square bracket, \(k_{p}\), and taking similar steps as what were done in the longitudinal case, we arrive at
\[\boxed{v_{B}^{T}=\,\pm\frac{1}{2}\left(\frac{w_{T}}{6-b^{2}-q^{2}-\left(\frac {p}{2q}+\kappa b\right)^{2}}\right)^{1/2}} \tag{3.12}\]
Again, once \((\hat{B},\hat{T})\) is given, \(v_{B}^{T}\) can be calculated from the above formula after performing the numerical calculation to find the bulk solution and, consequently, \(w_{T}\).
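For convenience, the closed-form results (3.10) and (3.12) can be evaluated directly once the horizon parameters and the asymptotic factors \(w_{L}\), \(w_{T}\) of a numerical solution are known; a minimal sketch (with \(\kappa\) set to its supersymmetric value and with our own function names) is:

```python
import numpy as np

KAPPA = -2 / np.sqrt(3)

def butterfly_velocities(q, b, p, wL, wT, kappa=KAPPA):
    """Boundary-frame butterfly velocities: longitudinal v_{B,+-}^L (eq. 3.10)
    and transverse v_B^T (eq. 3.12), from horizon data (q, b, p) and the
    asymptotic factors wL, wT of the bulk solution."""
    root = np.sqrt(6.0 - b**2 - q**2)
    vL_plus = np.sqrt(wL) / (p / q + 2.0 * kappa * b + 2.0 * root)
    vL_minus = np.sqrt(wL) / (p / q + 2.0 * kappa * b - 2.0 * root)
    vT = 0.5 * np.sqrt(wT / (6.0 - b**2 - q**2 - (p / (2.0 * q) + kappa * b)**2))
    return vL_plus, vL_minus, vT
```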
Again, in complete agreement with the result of [15], here we find non-split speeds in the direction perpendicular to the magnetic field. This is another way of saying that the anomaly is not detected in the transverse direction over the whole range of physical quantities in the system.
### Numerical results
The results have been given in figures 2 and 3. Let us elaborate on it in the following. In figure 2, we have shown how \(v_{B,+}^{L}\) (dark blue) and \(|v_{B,-}^{L}|\) (light blue) depend on \(\hat{T}\), when the external magnetic field \(\hat{B}\) is varied.
* At high \(\hat{T}\), regardless of the value of \(\hat{B}\), the two velocities become the same as the Schwarzschild result, i.e., \(v_{B}=\sqrt{2/3}\)[1], that is to say the anomaly effects totally disappear.
* At finite values of \(\hat{T}\) and \(\hat{B}\), the two longitudinal butterfly velocities are always split. This is actually a direct consequence of the anomaly in the system. The velocity in the direction of the magnetic field is always greater than the velocity in the opposite
direction. The same result was first found in ref. [15] for small magnetic field and small chemical potential. (See Appendix for details and a summary of [15] results).
* Interestingly, the splitting at \(\hat{T}=0\) encodes important information about the quantum critical point of the theory:
* At \(\hat{B}>\hat{B}_{c}\), \(v^{L}_{B,+}\) is always equal to 1 at \(\hat{T}=0\). However, by reducing \(\hat{B}\) from \(\infty\) to \(\hat{B}_{c}\), \(|v^{L}_{B,-}|\) is reduced from 1 to 0.
* It is known that \(\hat{B}_{c}\approx 0.499\) determines the quantum phase transition [12] in the system when the system is perturbed by the relevant operator of dimension \(\Delta=2\). Here we find that, under the same \(\hat{B}\), \(v^{L}_{B,+}=1\) and \(v^{L}_{B,-}=0\) (see the right panel of the middle row in the figure). This can be seen as another special feature by which the quantum critical point in this system can be identified.
Figure 2: Longitudinal butterfly speeds as functions of the dimensionless temperature in a holographic chiral system for a wide range of dimensionless magnetic field (\(0.14<\hat{B}<1.94\)). The dark blue curve shows the butterfly propagation parallel to the magnetic field, while the light blue curve is associated with the opposite direction. **The dashed plot corresponds to the critical magnetic field associated with the quantum phase transition found in [12].** At the critical point, the butterfly propagates at the speed of light in the direction of the magnetic field while it is frozen in the opposite direction. The gray line indicates the Schwarzschild result, i.e., \(v_{B}=\sqrt{2/3}\).
* At \(\hat{B}<\hat{B}_{c}\), \(v^{L}_{B,-}\) remains \(0\), and \(v^{L}_{B,+}\) goes from \(1\) to \(0\) by reducing \(\hat{B}\) from \(\hat{B}_{c}\) to \(0\).
* In only two cases does the longitudinal velocity degenerate at \(\hat{T}=0\). First, at \(\hat{B}=0\), which is expected, since in this case the anomaly is not excited. Second, at large \(\hat{B}\). We comment on this point below.
Figure 3 illustrates how the transverse butterfly speed depends on temperature. Interestingly, we find that for any value of the magnetic field it vanishes at \(\hat{T}=0\). This is reminiscent of weakly coupled field theory under strong magnetic fields; the system reduces to a \((1+1)d\) system at \(\hat{T}=0\), and the two transverse directions completely decouple. Here we see that although our system is strongly coupled, it exhibits similar behavior. Furthermore, due to the symmetry of the background, the system becomes a \(2d\) CFT. This explains why, when \(\hat{T}=0\), we find two butterflies traveling at the speed of light.
## 4 Pole-skipping in the tensor channel
So far we have focused on the scalar part of the energy-momentum perturbations. Considering only \(E_{vv}=0\), we were able to find the upper-half plane pole-skipping points of the energy density correlation function. However, in order to find the full spectrum of pole-skipping points in that case, the set of all metric perturbations in the scalar channel would have to be considered. We do not pursue this issue further here. Instead, we choose to investigate the pole-skipping points in the tensor channel. In the latter case, the only dynamical fields involved are \(\delta g_{xy}\), \(\delta g_{xx}\), \(\delta g_{yy}\), and \(\delta F_{xy}\). It is easy to show that \(H_{xy}=e^{-2W_{T}}\,\delta g_{xy}\) behaves like a decoupled scalar field in this channel. Moreover, it sources an operator of weight \(\Delta=4\), say \(\mathcal{O}\), on the boundary. Our goal in this section is to find the pole-skipping points of \(G^{R}_{\mathcal{O}\mathcal{O}}\). To this end we start by studying the dynamics of \(H_{xy}(r,t,x_{i})\) near the horizon.
Similar to our earlier discussions in the scalar channel, we take \(H_{xy}(r,t,x_{i})\equiv H_{xy}(r)e^{-i\omega t+ikx_{3}}\)
Figure 3: Transverse butterfly speed as a function of the dimensionless temperature in a holographic chiral system for a wide range of dimensionless magnetic field (\(0.04<\hat{B}<1.94\)). The splitting of butterfly speeds seen in the longitudinal direction does not occur here.
9. It turns out that \(H_{xy}(r)\) obeys the following ordinary differential equation in the bulk
Footnote 9: It should be pointed out that here we only consider the case where the momentum is parallel to the magnetic field. It is easy to show that in the transverse case there is no pole-skipping phenomenon.
\[H^{\prime\prime}_{xy}(r)+a(\omega,{\bf k})H^{\prime}_{xy}(r)+b(\omega,{\bf k})H_ {xy}(r)=\,0 \tag{4.1}\]
We omit the explicit expressions of the functions \(a(\omega,{\bf k})\) and \(b(\omega,{\bf k})\). In order to analyze this equation near the horizon, we take
\[H_{xy}(r)=\,\sum_{n=0}\varphi_{n}\,(r-r_{h})^{n} \tag{4.2}\]
and plug it into (4.1). The result is a set of coupled algebraic equations for the coefficients \(\varphi_{n}\). Defining \(\mathfrak{w}=\frac{\omega}{2\pi T}\) and \(\mathfrak{q}=\frac{{\bf k}}{2\pi T}\), the first four of these equations can be formally written as
\[0 = M_{11}\varphi_{0}+\,(i\mathfrak{w}-1)\varphi_{1}\,, \tag{4.3}\] \[0 = M_{21}\varphi_{0}+M_{22}\varphi_{1}+\,(i\mathfrak{w}-2)\varphi_ {2}\,,\] (4.4) \[0 = M_{31}\varphi_{0}+M_{32}\varphi_{1}+M_{33}\varphi_{2}+\,(i \mathfrak{w}-3)\varphi_{3}\,,\] (4.5) \[0 = M_{41}\varphi_{0}+M_{42}\varphi_{1}+M_{43}\varphi_{2}+M_{44} \varphi_{3}+\,(i\mathfrak{w}-4)\varphi_{4}\,, \tag{4.6}\]
where the coefficients \(M_{rs}\) are functions of \(\mathfrak{w}\) and \(\mathfrak{q}\). In Appendix D, we present the explicit expressions for the first three of these coefficients. Higher-order coefficients have very complicated expressions and we avoid writing them in the paper. As is obvious from the above equations, precisely at the frequency \(\mathfrak{w}_{\ell}=-i\ell\) the first \(\ell\) equations decouple from the rest of them and take the following form
\[0=\,{\cal M}_{\ell\times\ell}(\mathfrak{w}=-i\ell,\tilde{\mathfrak{q}}^{2})\begin{pmatrix}\varphi_{0}\\ \varphi_{1}\\ \vdots\\ \varphi_{\ell-1}\end{pmatrix}. \tag{4.7}\]
The roots of the equation \(\det{\cal M}_{\ell\times\ell}(\mathfrak{w}=-i\ell,\tilde{\mathfrak{q}})=0\), are then those wave-numbers at which, for a given UV normalization constant, the ingoing boundary condition at the horizon is not sufficient to uniquely fix a solution for \(H_{xy}\) in the bulk. Let us call the roots \(\tilde{\mathfrak{q}}_{1},\tilde{\mathfrak{q}}_{2},\cdots,\tilde{\mathfrak{q} }_{2\ell}\). At these \(2\ell\) points, the response function of the boundary operator dual to \(H_{xy}\), namely \(G^{R}_{\cal O\cal O}\), is multi-valued at \(\mathfrak{w}=-i\ell\). These points are the so-called _level-\(\ell\) pole-skipping points_[7, 11].
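Numerically, the level-\(\ell\) pole-skipping momenta are obtained by building the \(\ell\times\ell\) matrix of the near-horizon recursion (4.3)-(4.6) and locating the zeros of its determinant in the complex \(\mathfrak{q}\) plane. The sketch below shows the structure of this computation; the entries \(M_{rs}(\mathfrak{w},\mathfrak{q})\), whose explicit (lengthy) expressions follow from inserting (4.2) into (4.1), are left as a user-supplied placeholder.

```python
import numpy as np

def pole_skipping_momenta(level, M_entry, q_guesses, steps=50):
    """Roots of det M_{lxl}(w = -i*level, q) = 0 via a complex Newton iteration.
    `M_entry(r, s, w, q)` must return the coefficient M_rs of the near-horizon
    recursion (4.3)-(4.6); its explicit form is not reproduced here."""
    w = -1j * level

    def detM(q):
        M = np.zeros((level, level), dtype=complex)
        for r in range(1, level + 1):
            for s in range(1, r + 1):
                M[r - 1, s - 1] = M_entry(r, s, w, q)
            if r < level:
                M[r - 1, r] = 1j * w - r   # coefficient of phi_r in the r-th equation
        return np.linalg.det(M)

    roots = []
    for q0 in q_guesses:
        q, h = complex(q0), 1e-6
        for _ in range(steps):
            d = (detM(q + h) - detM(q - h)) / (2 * h)   # numerical derivative
            if d == 0 or abs(detM(q)) < 1e-12:
                break
            q = q - detM(q) / d
        roots.append(q)
    return roots
```

Separate initial guesses are needed to capture all \(2\ell\) roots of a given level, since each Newton iteration converges to a single root.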
Let us for the sake of clarity illustrate the expression of the pole-skipping points associated with \(\ell=1\). Solving the equation \(\det\mathcal{M}_{1\times 1}(\mathfrak{w}=-i,\tilde{\mathfrak{q}}^{2})=0\) gives (in the horizon frame)
\[\ell=1:\qquad\tilde{\mathfrak{q}}_{1,2}=\,\pm 2i\sqrt{6-b^{2}-q^{2}}-i\left( \frac{2p}{q}+\kappa b\right) \tag{4.8}\]
which is the same as the momentum of the upper-half plane pole-skipping in (3.5).
It is clear that in the absence of anomaly effects, i.e. when \(p=b=0\), the pole-skipping points are located symmetrically along the \(\mathrm{Im}\,\mathfrak{q}\) axis. However, as was first found in ref. [15], the anomaly breaks this symmetry. This was observed for the pole-skipping points of the lowest four levels, \(\ell=1,2,3,4\), in the limit of small charge and small magnetic field. Here, by means of numerical analysis, we will discuss the first seven levels at finite values of the magnetic field in a wide range of temperatures down to \(T=0\). Our goal is to investigate the behavior of the pole-skipping points when the system is in the vicinity of the quantum critical point. Let us emphasize that the analytical expressions for the higher-level pole-skipping points, i.e., \(\ell=2,3,4,5,6,7\), are extremely complicated. Therefore we omit their explicit expressions here and proceed by illustrating their numerical values.
We present the numerical results in three parts, in the following three subsections.
### High temperature limit
From the observations in the scalar channel, we refer to \(\hat{T}\gtrsim 0.6\) as the high-temperature regime. As we found there, at such temperatures the magnitudes of the two butterfly speeds approach the known value of an uncharged holographic system. Here we find similar qualitative behavior. For all the magnetic field values we probed numerically, the distribution of pole-skipping points in the \(\mathrm{Im}\,\mathfrak{w}-\mathrm{Im}\,\mathfrak{q}\) plane remains unchanged when varying the temperature within the range \(\hat{T}\gtrsim 0.6\). The result is illustrated in figure 4. Here are a few comments about the figure:
* For reasons to be explained in the next subsection, we have represented the pole-skipping points in a three-dimensional plot. However, as in the uncharged system [19], all of them lie in the \(\mathrm{Im}\,\mathfrak{w}-\mathrm{Im}\,\mathfrak{q}\) plane.
* There are two pole-skipping points at \(\mathfrak{w}_{1}=-i\) (red dots), four of them at \(\mathfrak{w}_{2}=-2i\) (orange dots), and \(\cdots\).
* The distribution is symmetric with respect to \(\mathfrak{q}=0\); as a result, no anomaly effect is detected.
The situation becomes interesting when the temperature is decreased.
### Scanning the pole-skipping points at lower temperatures
In this subsection, we scan the spectrum of pole-skipping points at two different values of \(\hat{B}\) on either side of \(\hat{B}_{c}\). In each case, we account for nine different values of \(\hat{T}\), illustrating pole-skipping points in both planes, starting at \(\hat{T}=0.6\) and approaching \(\hat{T}=0\).
The first case we consider is \(\hat{B}=0.54\), as shown in figure 5. As one would expect, at high temperature, i.e. \(\hat{T}\gtrsim 0.6\), the spectrum is symmetric and lies in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Im}\mathfrak{q}\) plane (see upper left panel). As the temperature decreases, the spectrum deviates from the symmetric form; the deviation actually starts at the higher \(\ell\) levels. As can be seen from the top middle panel, at \(\hat{T}=0.31\), the \(\ell=7\) level is more affected than the lower levels. In addition, two of the points belonging to \(\ell=7\) are close to each other at this \(\hat{T}\)10. By decreasing \(\hat{T}\), these two points eventually scatter off each other. After scattering, their imaginary parts of momentum become equal; however, they also acquire two non-zero and opposite real momenta. This can be seen at \(\hat{T}=0.3\) in the upper right panel.
Footnote 10: For temperatures between \(0.31\) and \(0.6\), this asymmetry occurs at higher order pole-skipping points, such as \(\ell=8,9,\cdots\).
By reducing \(\hat{T}\) further, more and more points from lower levels collide with each other. At the same time, they move away from the \(\mathfrak{q}=0\) axis in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Re}\mathfrak{q}\) plane. This can be seen in the three figures of the middle row.
However, the opposite happens when \(\hat{T}\) is relatively small; the points start to approach the \(\mathfrak{q}=0\) axis in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Re}\mathfrak{q}\) plane. This is illustrated in the bottom panel. Finally, when \(\hat{T}\) approaches zero, all real momenta vanish (see the bottom right plot). Interestingly, in the latter case, the spectrum at level \(\ell\) consists of \(\ell+1\) points. Note that at high \(\hat{T}\) (upper left panel), there were \(2\ell\) of them: \(\ell\) of them had \(\operatorname{Im}\mathfrak{q}<0\) and the other \(\ell\) had \(\operatorname{Im}\mathfrak{q}>0\). We see that by decreasing the temperature, \(\ell-1\) points (out of the \(\ell\) points with \(\operatorname{Im}\mathfrak{q}<0\)) gradually move to the right of \(\mathfrak{q}=0\) and eventually coincide
Figure 4: High temperature limit (\(\hat{T}\gtrsim 0.6\)): The pole-skipping occurs at Matsubara frequencies and at special purely imaginary values of momenta. We have illustrated the pole-skipping points associated with the first seven Matsubara frequencies. As in many other studies in the literature, in this limit the spectrum is symmetric, in the sense that the anomaly is not detected. Moreover, when varying the magnetic field within the range \(0.01<\hat{B}<1.94\), the spectrum remains unchanged, as if there were no magnetic field at all.
with \(\ell-1\) points of \(\operatorname{Im}\mathfrak{q}>0\). Finally, the only pole-skipping points with \(\operatorname{Im}\mathfrak{q}<0\) at low \(\hat{T}\) (i.e., in the lower right panel) are the points corresponding to the leftmost points in the upper left panel.
In figure 6, we scan the spectrum of pole-skipping points at \(\hat{B}=0.44\). Similar to the previous case, higher-level points start scattering off each other as the temperature is lowered. Lower-level points come into play at lower temperatures. However, in contrast to the spectrum at \(\hat{B}=0.54\), we see here that the real parts of momenta, which are the result of collisions between pole-skipping points, never vanish. The lower the temperature, the farther they are from \(\mathfrak{q}=0\) in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Re}\mathfrak{q}\) plane.
Finally, as \(\hat{T}\) approaches zero, we find a spectrum developed in both the \(\operatorname{Im}\mathfrak{w}-\operatorname{Im}\mathfrak{q}\) and \(\operatorname{Im}\mathfrak{w}-\operatorname{Re}\mathfrak{q}\) planes. In the latter plane, it is symmetric and consists of \(\ell-1\) points at level \(\ell\). In the former plane, however, it is asymmetric and consists of \(\ell+1\) points at level \(\ell\).
In summary, we have found two different behaviors of the pole-skipping points on either side of \(\hat{B}_{c}=0.499\). This simply suggests that some information about the quantum critical point may be encoded in the pole-skipping spectrum. To explore this, in the next subsection we take a closer look at the spectrum at \(\hat{T}=0\).
### \(\hat{T}\to 0\): Pole-skipping as the order parameter
Based on the results of the previous two subsections, we now scan the spectrum of pole-skipping points as \(\hat{T}\to 0\) for various values of \(\hat{B}\). For concreteness, we show the results in a 3D plot consisting of \(\operatorname{Im}\mathfrak{w}\), \(\operatorname{Im}\mathfrak{q}\) and \(\operatorname{Re}\mathfrak{q}\). We show the spectra for six different values of \(\hat{B}\) in figure 7.
In the top panel we show the spectra for the three cases \(\hat{B}>\hat{B}_{c}\). The spectrum always lies on the \(\operatorname{Im}\mathfrak{w}-\operatorname{Im}\mathfrak{q}\) plane. In other words, the momentum of a pole-skipping point is purely imaginary. At sufficiently high \(\hat{B}\) the spectrum is perfectly symmetric. However, by reducing \(\hat{B}\), the spectrum produces an asymmetric behavior. It corresponds to the lower right panel in figure 5. An important observation here is that "_the lower the magnetic field, the wider the range of values that \(\operatorname{Im}\mathfrak{q}\) occupies_".
In the bottom panel of figure 7 we show the spectra for the three cases \(\hat{B}<\hat{B}_{c}\). The spectrum no longer lies only in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Im}\mathfrak{q}\) plane 11. The momenta acquire real parts: "_The lower the magnetic field, the wider the range of values that \(\operatorname{Re}\mathfrak{q}\) occupies_".
Footnote 11: Pole-skipping points with complex momenta were previously observed in [35] in systems with Lifshitz symmetry.
Considering the above observations, we conclude that there is a direct relationship
Figure 5: Scanning the pole-skipping structure of \(G^{R}_{\mathcal{O}\mathcal{O}}\) at \(\hat{B}=0.54>\hat{B}_{c}\). From the upper left to the bottom right, the temperature decreases from \(\hat{T}=0.6\) to \(\hat{T}=0.0001\). In each panel, the upper plot shows the pole-skipping points in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Im}\mathfrak{q}\) plane, while the lower one shows the same in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Re}\mathfrak{q}\) plane.
Figure 6: Scanning the pole-skipping structure of \(G^{R}_{\mathcal{O}\mathcal{O}}\) at \(\hat{B}=0.44<\hat{B}_{c}\). From the upper left to the bottom right, the temperature decreases from \(\hat{T}=0.6\) to \(\hat{T}=0.0002\). In each panel, the upper plot shows the pole-skipping points in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Im}\mathfrak{q}\) plane, while the lower one shows the same in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Re}\mathfrak{q}\) plane.
between the quantum phase transition at \(\hat{B}_{c}\) and the pole-skipping spectrum at \(\hat{T}\to 0\). For states with \(\hat{B}>\hat{B}_{c}\), the spectrum does not contain any points with real momentum. In analogy with familiar phase transitions, we can call this state disordered. When \(\hat{B}<\hat{B}_{c}\), the momenta of the pole-skipping points become complex. This state can be viewed as an ordered state. To distinguish the ordered state from the disordered one, we introduce the following quantity:
\[\mathcal{M}_{\ell}=\,\sum_{j=1}^{2\ell}\,|\mathrm{Re}\,\tilde{\mathfrak{q}}_{ \ell,j}| \tag{4.9}\]
Here \(\ell\) is the level of the pole-skipping points and \(j\) labels the pole-skipping points of this level. By construction, this quantity is non-negative. We have found that for states well below the critical point, \(\mathcal{M}_{\ell}\) is positive for all \(\ell\geq 2\). For states close to the critical point, positivity starts to emerge at higher \(\ell\). On the other hand, \(\mathcal{M}_{\ell}\) is zero when none of the pole-skipping points have real momentum; the latter case corresponds to a disordered state of the system. In other words, \(\mathcal{M}_{\ell}\) provides us with an infinite set of _order parameters_ distinguishing the two sides of the quantum critical point at \(\hat{T}=0\).
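In terms of the momenta \(\tilde{\mathfrak{q}}_{\ell,j}\) found numerically (for instance with a routine like the one sketched in the previous subsection), the order parameter (4.9) is a one-line reduction:

```python
def order_parameter(q_roots):
    """Eq. (4.9): sum of |Re q| over all pole-skipping momenta of a given level.
    Vanishes when the level's spectrum is purely imaginary (disordered state),
    positive when complex momenta are present (ordered state)."""
    return sum(abs(q.real) for q in q_roots)

# toy illustration: a symmetric pair of complex momenta gives M_l = 1.4
print(order_parameter([0.7 - 2.0j, -0.7 - 2.0j, -1.5j]))
```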
In the figure 8 we show \(\mathcal{M}_{\ell}\) for \(\ell=2,3,\cdots,7\) as a function of the magnetic field when \(\hat{T}=0.0004\).
Figure 7: Pole-skipping points in the low temperature limit (\(\hat{T}\to 0\)): The transition between the ordered and disordered states corresponds to a change from the 2D spectrum to the 3D spectrum at \(\hat{B}_{c}\approx 0.499\).
## 5 Conclusion and outlook
We have studied pole-skipping and butterfly velocities in charged and magnetized asymptotically AdS black branes. Our work is a natural continuation of [15] which initiated this line of research in the limit of weak magnetic fields. The new ingredient here is that we allow strong magnetic fields with significant backreaction onto the geometry. These solutions have been studied some time ago in [12, 13, 14]. One of the main findings there was a quantum phase transition at a critical value of a suitable dimensionless measure of magnetic field strength of \(\hat{B}=0.499\). Below that value, the zero temperature limit has non-vanishing entropy whereas above that limit the entropy vanishes as the temperature goes to zero.
There are several important conclusions that can be drawn from our results. First, we confirmed and elaborated further on the fact that the butterfly velocities are sensitive to the anomaly, which is represented by the Chern-Simons term. Not only are the butterfly velocities anisotropic, but the butterfly velocity in the direction of the magnetic field differs from the butterfly velocity in the direction opposite to the magnetic field. Again, this can be attributed to the presence of the anomaly. Even more interesting is the fact that the quantum phase transition is visible in the behavior of the butterfly velocities. Approaching the limit \(\hat{T}\to 0\), both butterfly velocities approach the speed of light in the high magnetic field limit. The one in the direction against the magnetic field decreases as the magnetic field is lowered and vanishes at the quantum critical point. Below that point, the forward butterfly velocity also decreases with the magnetic field and eventually vanishes as well. For high temperatures, in contrast, the butterfly velocities approach the usual AdS
Figure 8: Plot of the order parameter \(\mathcal{M}_{\ell}\) for \(\ell=2,3,\cdots,7\) as a function of the magnetic field. Below the critical magnetic field \(\hat{B}_{c}\approx 0.499\), the order parameter is positive; at fixed magnetic field, the higher the pole-skipping level, the greater the value of the corresponding order parameter. Above \(\hat{B}_{c}\), all \(\mathcal{M}_{\ell}\to 0\) for \(\hat{T}\to 0\).
Schwarzschild value. Thus it is clear that the quantum phase transition can be detected in the behavior of the butterfly velocities as \(\hat{T}\to 0\).
We also studied the pole-skipping points in the tensor channel. There exists a hierarchy of pole-skipping points indexed by a level \(\mathfrak{w}=-i\ell\), \(\ell\in\mathbb{N}\), with momenta \(\tilde{\mathfrak{q}}_{\ell,j}\), \(j=1,\cdots,2\ell\). Again we find a strong dependence on the magnetic field and temperature. The most important features are that at intermediate temperatures the pole-skipping points arrange themselves asymmetrically around \(\operatorname{Im}\mathfrak{q}=0\) and that they also develop a real part. As one approaches \(\hat{T}\to 0\), the behavior depends drastically on whether the field strength is above or below the critical one. Above the critical field strength, the real parts of the complex momenta go to zero, such that the pole-skipping spectrum again lies in the \(\operatorname{Im}\mathfrak{w}-\operatorname{Im}\mathfrak{q}\) plane, which we can call a \(2D\) spectrum. In contrast, for field strengths below the critical value, \(\operatorname{Re}\tilde{\mathfrak{q}}_{\ell,j}\neq 0\) and therefore we find a \(3D\) spectrum. This motivates us to suggest the real part of the pole-skipping spectrum as an order parameter for the quantum phase transition. Since no symmetry is broken in this quantum phase transition, no conventional order parameter exists. But the real part of the pole-skipping points seems to be a good indicator of being in the phase of low field strength, so we can call it an "ordered" phase.
Our study was limited to the holographic field theory with the supersymmetric value of the Chern-Simons coupling, i.e., the holographic dual of the maximally supersymmetric gauge theory in four dimensions. It is natural to ask how our findings translate to the weak coupling limit. While the calculation of butterfly velocities in weakly coupled field theories is complicated, there are some hints we can extract from our results. It is tempting to interpret the fact that at large field strength both the forward and backward velocities approach the speed of light as a signature of lowest Landau level physics. A similar behavior has been observed for the anomaly-related chiral magnetic wave, which also approaches the speed of light in this limit [53]. In addition, there is a clear relation between the splitting of forward and backward butterfly velocities and the presence of the chiral magnetic effect. On the other hand, it is not clear at the moment what could correspond to the elaborate pattern of the pole-skipping points in the tensor channel at weak coupling.
It will be interesting to test our results in experimental setups. One possible place to test them is the compound \(Sr_{3}Ru_{2}O_{7}\). This compound exhibits a series of first-order metamagnetic phase transitions at finite temperatures, ending at a finite temperature critical point [54]. Similarities to the system studied in this paper are discussed in [13]. This suggests experimentally studying the speed of the butterfly near the critical point in this compound.
## Acknowledgment
We would like to thank Ali Davody, Matthias Kaminski and Dima Kharzeev for fruitful discussions. N.A. thanks the Instituto de Fisica Teorica, IFT-UAM/CSIC and the Institute for Theoretical Physics of Goethe University for their hospitality when this work was completed. N.A. was funded by Lanzhou University's "Double First-Class" start-up fund 561119208. The research of K.L. is supported through the grants CEX2020-001007-S and PID2021- 123017NB-100, PID2021-127726NB-I00 funded by MCIN/AEI/10.13039/501100011033, and by ERDF "A way of making Europe".
## Appendix A How to find the upper-half plane pole-skipping point of the energy density correlator?
Let us assume a static solution in the bulk in Eddington-Finkelstein coordinates: \((r,v,x_{1},x_{2},x_{3})\). In order to find the energy density two-point function, one must consider the perturbations of the \(vv\)-component of metric \(\delta g_{vv}(r,v,x)=\delta g_{vv}(r)e^{-i\omega v+i\vec{k}\cdot\vec{x}}\). Clearly, the \(vv\) component of the Einstein's equation is related to the dynamics of \(\delta g_{vv}\): \(E_{vv}=0\). There are other perturbations that may be coupled to \(\delta g_{vv}\) through the above equation: namely \(\delta g_{rr}\), \(\delta g_{rv}\), \(\delta g_{x^{i}x^{i}}\), \(\delta g_{x^{3}x^{3}}\), \(\delta g_{vx^{3}}\) and \(\delta g_{rx^{3}}\). We expand all the involved perturbations near the horizon as
\[\delta g_{MN}(r)=\delta g_{MN}^{(0)}+(r-r_{h})\delta g_{MN}^{(1)}+\cdots\] (A.1)
In the absence of perturbations, the equation \(E_{vv}=0\) is already satisfied to **leading order** of the near horizon expansion; when it comes to the perturbations, however, one finds
\[\mathcal{G}(\hat{k},\omega)\,\delta g_{vv}^{(0)}+\,\big{(}2\pi T+i\omega\big{)}\bigg{(}2\mathcal{A}\,k\,\delta g_{vv}^{(0)}+\mathcal{B}_{T}\,\omega(\delta g_{11}^{(0)}+\delta g_{22}^{(0)})+\mathcal{B}_{L}\,\omega\,\delta g_{33}^{(0)}\bigg{)}=\,0\] (A.2)
At \(\omega_{p}=i2\pi T\), the above equation reduces to \(\mathcal{G}(\hat{k},\omega)\,\delta g_{vv}^{(0)}=0\). Now there are two possibilities:
1. \(\delta g_{vv}^{(0)}=0\); we are not interested in this.
2. \(\mathcal{G}(\hat{k},\omega)=0\), which causes \(\delta g_{vv}^{(0)}\) to decouple from the rest of the \(\delta g\) perturbations. Therefore, we cannot uniquely solve for the perturbations from the system of equations, since one of the equations (\(E_{vv}=0\)) is already satisfied while the number of variables, \(\delta g_{MN}\), remains the same as before. This is equivalent to saying that the line of poles of the energy density correlator on the boundary is skipped at \((\omega_{p},k_{p})\), with \(k_{p}\) being the root of \(\mathcal{G}(\hat{k},\omega)=0\). That is the so-called pole-skipping point.
It turns out that the condition \({\cal G}(\hat{k},\omega)=0\) together with \(\omega=i2\pi T\) precisely identifies the chaos point found from shock-wave calculations [11]. The latter refers to the point \((\omega_{c},k_{c})\) at which the OTOC \(\sim 1-\frac{1}{N^{2}}e^{-i\omega t+ikx}\) grows exponentially. Then the butterfly velocity is defined as
\[v_{B}=\,\frac{\omega_{c}}{k_{c}}\,.\] (A.3)
However, the general effective field theory argument shows that, at least in holographic systems [7]12
Footnote 12: This is another way of saying that quantum chaos has a hydrodynamic origin in such systems [7].
\[(\omega_{p},k_{p})\equiv(\omega_{c},k_{c})\,.\] (A.4)
This simply tells us that \(v_{B}\) can also be obtained by computing \((\omega_{p},k_{p})\). So all that remains to be found is the function \({\cal G}(\hat{k},i2\pi T)\), which has been determined in many cases, including the system of [15] with a single chiral anomaly.
## Appendix B Relation to the chiral magnetic effect
The splitting of the longitudinal butterfly velocities found in ref. [15] (and in this work) suggests that one can perhaps relate it to the chiral magnetic effect. To make this relation precise, it is convenient to establish it through analytical calculations. In the small magnetic field limit, the perturbative solution of the bulk equations is well known [12]. Along the same lines, we parameterize the butterfly velocity in a perturbative expansion in \(B\). To second order in the magnetic field, we can write
\[\boxed{\vec{v}_{B}=\,v_{B}^{(0)}\left(1+{\mathfrak{a}}_{1}\,{\mathfrak{b}} \cdot\hat{k}+{\mathfrak{a}}_{2}\,{\mathfrak{b}}^{2}+{\mathfrak{a}}_{3}\left({ \mathfrak{b}}\cdot\hat{k}\right)^{2}\right)\,\hat{k}}},\quad{\mathfrak{b}}= \frac{\vec{B}}{T^{2}}\] (B.1)
and \(v_{B}^{(0)}=\sqrt{2/3}\)[1]. Here are a few comments about the above formula:
* \(\hat{k}\) defines the measurement axis.
* There are three different structures constructed out of \({\mathfrak{b}}\) and \(\hat{k}\), to second order in \({\mathfrak{b}}\). The coefficients \({\mathfrak{a}}_{1}\), \({\mathfrak{a}}_{2}\), and \({\mathfrak{a}}_{3}\) can be calculated analytically (see Appendix C.1).
* Due to the term \({\mathfrak{b}}\cdot\hat{k}\), the speed of the butterfly in two opposite directions relative to a certain axis of measurement is different. The difference depends only on \({\mathfrak{a}}_{1}\). As it will be shown in the Appendix C, when \(\mu/T\) and \(B/T^{2}\) are both small \[\Delta v_{B}=\,2v_{B}^{(0)}{\mathfrak{a}}_{1}\,|{\mathfrak{b}}\cdot\hat{k}| \sim\kappa\left(\frac{\mu}{T}\right)^{2}\left(\frac{B}{T^{2}}\right)|\cos\theta|\] (B.2)
where \(\theta\) is the angle between \(\hat{k}\) and \(\mathfrak{b}\). Note that \(B\) is the axial magnetic field.
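To make the angular structure of (B.1)-(B.2) explicit, a small numerical sketch is given below; the coefficient values passed in are placeholders, since the actual expressions for \(\mathfrak{a}_{1}\), \(\mathfrak{a}_{2}\), \(\mathfrak{a}_{3}\) are derived in Appendix C.1.

```python
import numpy as np

V_B0 = np.sqrt(2.0 / 3.0)  # zero-field, uncharged value of the butterfly speed

def v_butterfly(theta, b_mag, a1, a2, a3):
    """Butterfly speed along an axis at angle theta to the magnetic field,
    to second order in b = B/T^2, following eq. (B.1)."""
    bk = b_mag * np.cos(theta)  # b . k_hat
    return V_B0 * (1.0 + a1 * bk + a2 * b_mag**2 + a3 * bk**2)

def delta_v(theta, b_mag, a1):
    """Splitting between the two opposite directions along the same axis, eq. (B.2)."""
    return 2.0 * V_B0 * a1 * abs(b_mag * np.cos(theta))

# placeholder coefficients, for illustration only
print(delta_v(theta=0.0, b_mag=0.1, a1=0.05))
```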
The discussion so far has been for systems with a single \(U(1)\) axial current. We can simply extend the discussion to the case of \(U_{\rm V}(1)\times U_{\rm A}(1)\). As shown in the appendix C.2, in this case we find that
\[\Delta v_{B}\sim\kappa\left(\frac{\mu_{\rm V}}{T}\right)\left(\frac{\mu_{\rm A }}{T}\right)\left(\frac{B}{T^{2}}\right)|\cos\theta|\] (B.3)
where \(B\) here is the vector magnetic field.
From (B.3) it is clear that detecting the butterfly velocity difference requires both \(\mu_{A}\) and \(\mu_{V}\) to be non-zero. A \(\Delta v_{B}\) of the form (B.3) is reminiscent of the chiral magnetic effect [55] in the hydrodynamic energy flow:
\[T^{\mu\nu}\supset\frac{\mu_{\rm V}\mu_{\rm A}}{2\pi^{2}}\big{(}u^{\mu}B^{\nu} +u^{\nu}B^{\mu}\big{)}\,.\] (B.4)
A comparison between (B.3) and (B.4) shows that the observation of a non-zero \(\Delta v_{B}\) in experiments may be a sign of the presence of the chiral magnetic effect in the system.
## Appendix C Magnetized Chirally Charged RN
In this Appendix we want to analytically calculate the butterfly speed in the limit of small magnetic field. We start with the simple case of a \(U(1)\) axial current in C.1 and then extend it to the more realistic case of \(U(1)_{V}\times U(1)_{A}\) in C.2.
Figure 9: Illustration of the two butterfly velocities in an arbitrary direction \(\hat{k}\). The gray curve just shows that the colorful ellipse is not symmetric with respect to the vertical axis.
### Right-handed case: A single \(U(1)\) axial current
As discussed in the main text, the action in the bulk is given by
\[\begin{split} S=\frac{1}{16\pi G_{5}}\int d^{5}x\ \sqrt{-g}\left(R+\frac{12}{L^{2}}-F^{MN}F_{MN}\right)+S_{CS}+S_{ bdy}.\\ S_{CS}=&\frac{\kappa}{48\pi G_{5}}\int d^{5}x\ \sqrt{-g}\, \epsilon^{\rho\mu\nu\alpha\beta}A_{\rho}F_{\mu\nu}F_{\alpha\beta}\end{split}\] (C.1)
Let's take the magnetic field in the third direction. The metric and gauge field in the bulk then are parameterized as
\[\begin{split} ds^{2}=&-F(r)dv^{2}+2drdv+V(r)(dx_{1 }^{2}+dx_{2}^{2})+W(r)\big{(}dx_{3}+C(r)dv\big{)}^{2}\\ A(r)=& A_{v}(r)dv-\frac{1}{2}Bx_{2}dx_{1}+\frac{1} {2}Bx_{1}dx_{2}+A_{z}(r)dz\end{split}\] (C.2)
We also take the following ansatz for the metric and gauge field functions:
\[\begin{split} F(r)=& f_{1}(r-r_{h})+\big{(}f_{2}+f_{2b}B^{2}\big{)}(r-r_{h})^{2}\\ V(r)=& v_{0}+v_{0b}B^{2}+\big{(}v_{1}+v_{1b}B^{2}\big{)}(r-r_{h})+\big{(}v_{2}+v_{2b}B^{2}\big{)}(r-r_{h})^{2}\\ W(r)=& v_{0}+w_{0b}B^{2}+\big{(}v_{1}+w_{1b}B^{2}\big{)}(r-r_{h})+\big{(}v_{2}+w_{2b}B^{2}\big{)}(r-r_{h})^{2}\\ C(r)=& B\big{(}c_{1}(r-r_{h})+c_{2}(r-r_{h})^{2}\big{)}\\ A_{v}(r)=& a_{v0}+a_{v1}(r-r_{h})+\big{(}a_{v2}+a_{v2b}B^{2}\big{)}(r-r_{h})^{2}\\ A_{z}(r)=& B\big{(}a_{z0}+a_{z1}(r-r_{h})+\big{(}a_{z2}+a_{z2b}B^{2}\big{)}(r-r_{h})^{2}\big{)}\end{split}\] (C.3)
Compared to the non-chiral case, two new metric and gauge field functions are introduced: \(C(r)\) and \(A_{z}(r)\). This is due to the CS term. Note that
* Since the CS term breaks parity, \(C(r)\) and \(A_{z}(r)\) start to contribute at linear order in \(B\).
* Charge conjugation implies that \(C(r)\) and \(A_{z}(r)\) have to be **even** and **odd** functions of \(\nu\), respectively.
* Since we have assumed \(F(r_{h})=0\), \(C(r)\) must vanish at \(r_{h}\) too.
Solving the equations, we get (we omit the expression of \(v_{2b}\))
\[\begin{split} f_{2b}&=\frac{5}{3v_{0}^{2}}+\frac{3v_{0}}{4}c_{1}^{2}\\ v_{1b}&=-\frac{8}{3f_{1}v_{0}}+8v_{0}v_{0b}\big{(}1-\frac{a_{1}^{2}}{6}\big{)}\\ w_{1b}&=\frac{4}{3f_{1}v_{0}}+8v_{0}v_{0b}\big{(}1-\frac{a_{1}^{2}}{6}\big{)}-\frac{v_{0}^{2}}{f_{1}}c_{1}^{2}\\ a_{v2b}&=\frac{a_{1}}{f_{1}v_{0}^{2}}-\frac{a_{1}v_{0}}{4f_{1}}\,c_{1}^{2}+\frac{2a_{1}}{f_{1}v_{0}^{2}}\,\kappa^{2}\\ c_{2}&=\left(\frac{11}{3}a_{1}^{2}-10\right)\frac{c_{1}}{f_{1}}-\frac{4a_{1}^{2}}{f_{1}v_{0}^{3/2}}\,\kappa\\ a_{z1}&=-\frac{a_{1}}{f_{1}}\left(v_{0}\,c_{1}-\frac{2\kappa}{\sqrt{v_{0}}}\right)\end{split}\] (C.4)
Compared to the non-chiral case, the terms involving \(c_{1}\) and \(\kappa\) are new contributions, originating from the CS term.
Now let us turn on all the perturbations of the form \(\delta g_{MN}e^{-i\omega v+ikx_{3}}\) that appear in the energy dynamics. The equation \(E_{vv}=0\), evaluated at \(r=r_{h}\) up to first order in the perturbations and to second order in \(B\), gives:
\[\begin{split}\bigg{[}ik^{2}\left(1-\frac{w_{0b}}{v_{0}}B^{2} \right)+v_{0}c_{1}kB+\,\frac{12v_{0}}{f_{1}}\omega\big{(}1-\frac{a_{1}^{2}}{6 }-\frac{B^{2}}{6v_{0}^{2}}-\frac{v_{0}\,c_{1}^{2}\,B^{2}}{24}\big{)}\bigg{]} \delta g_{vv}^{(0)}+\\ \big{(}\frac{f_{1}}{2}+i\omega\big{)}\bigg{[}2k\left(1-\frac{w_{ 0b}}{v_{0}}B^{2}\right)\delta g_{vv}^{(0)}+\omega\bigg{(}(\delta g_{11}^{(0)}+ \delta g_{22}^{(0)})\left(1-\frac{v_{0b}}{v_{0}}B^{2}\right)+\delta g_{33}^{(0 )}\left(1-\frac{w_{0b}}{v_{0}}B^{2}\right)\bigg{)}\bigg{]}=\,0\,.\end{split}\]
Using \(f_{1}=4\pi T\), we find **two butterfly velocities**
\[v_{B,\pm}^{L}=\,\pm\frac{2\pi T}{\sqrt{v_{0}(6-a_{1}^{2})}}\left[1\mp c_{1}\, \frac{v_{0}^{1/2}}{2\sqrt{6-a_{1}^{2}}}B+\frac{1}{2}\left(\frac{4+v_{0}^{3}c_{ 1}^{2}}{4v_{0}^{2}\,(6-a_{1}^{2})}-\frac{w_{0b}}{v_{0}}\right)B^{2}\right]\,.\]
The difference between the two butterfly speeds is given by
\[\boxed{\Delta v_{B}^{L}=\,c_{1}\frac{4\pi T}{(6-a_{1}^{2})}B}\] (C.5)
This is similar to the result of ref. [15]. Using a symmetry argument and dimensional analysis (see also (C.15) below), we find that \(c_{1}\sim\kappa\,a_{1}^{2}/T^{3}\) in the limit of small \(a_{1}\sim\mu/T\). Then (C.5) simplifies to
\[\Delta v_{B}^{L}\sim\kappa\left(\frac{\mu}{T}\right)^{2}\left(\frac{B}{T^{2}} \right)\,.\] (C.6)
Notably, ref. [15] determines the numerical coefficient on the right-hand side of the above equation to be \(\frac{8\pi^{4}}{3^{3/2}}(\log 4-1)\). The reason is that that reference uses the complete solution for the metric and gauge fields, whereas here we only solve the bulk equations near the horizon.
It is easy to show that if we take the perturbation as \(\delta g_{MN}e^{-i\omega v+ikx_{1}}\), which is equivalent to measuring the butterfly effect on the axis perpendicular to the magnetic field, we find two butterfly velocities of the same magnitude
\[v_{B,\pm}^{T}=\,\pm\frac{2\pi T}{\sqrt{v_{0}(6-a_{1}^{2})}}\left[1+\frac{1}{2} \left(\frac{4+v_{0}^{3}c_{1}^{2}}{4v_{0}^{2}\left(6-a_{1}^{2}\right)}-\frac{v_ {0b}}{v_{0}}\right)B^{2}\right]\,.\]
### A \(U(1)\) axial current together with a \(U(1)\) vector current
In this case we consider two types of fermions on the boundary: right-handed and left-handed. For each of them we consider a distinct gauge field in the bulk, say \(A_{M}^{R}(r,x^{\mu})\) and \(A_{M}^{L}(r,x^{\mu})\). By definition, the CS term has opposite signs for left- and right-handed fermions; therefore we write:
\[S_{CS}=\frac{\kappa}{48\pi G_{5}}\int d^{5}x\ \sqrt{-g}\,\epsilon^{\rho\mu\nu \alpha\beta}A_{\rho}^{R}F_{\mu\nu}^{R}F_{\alpha\beta}^{R}-\frac{\kappa}{48\pi G _{5}}\int d^{5}x\ \sqrt{-g}\,\epsilon^{\rho\mu\nu\alpha\beta}A_{\rho}^{L}F_{\mu\nu}^{L}F_{ \alpha\beta}^{L}\] (C.7)
To make the ansatz we have to be careful with the **magnetic field**. In practice, instead of **(right-handed, left-handed)** we work with **(axial A, vector V)** currents. Then in an experimental setup, we would have
\[B\equiv B_{\rm V}\,,\ \ \ \ B_{\rm A}=0\,,\] (C.8)
or equivalently we write in terms of gauge fields
\[A_{\rm V}=-\frac{1}{2}B_{\rm V}\,x_{2}dx_{1}+\frac{1}{2}B_{\rm V}\,x_{1}dx_{2 }\,,\ \ \ \ A_{\rm A}=\,0\,.\] (C.9)
Then by using \(A^{R,L}=A_{\rm V}\pm A_{\rm A}\), we can write (C.9) in terms of \(R\) and \(L\) gauge fields:
\[A^{R}=A^{L}=-\frac{1}{2}B_{\rm V}\,x_{2}dx_{1}+\frac{1}{2}B_{\rm V}\,x_{1}dx_ {2}\,.\] (C.10)
Using this, we can simply parameterize the metric and gauge fields in the bulk
\[\begin{split} ds^{2}=&-F(r)dv^{2}+2drdv+V(r)(dx_{1 }^{2}+dx_{2}^{2})+W(r)\big{(}dx_{3}+C(r)dv\big{)}^{2}\\ A^{R}(r)=& A^{R}_{v}(r)dv-\frac{1}{2}Bx_{2}dx_{1}+ \frac{1}{2}Bx_{1}dx_{2}+A^{R}_{z}(r)dz\\ A^{L}(r)=& A^{L}_{v}(r)dv-\frac{1}{2}Bx_{2}dx_{1}+ \frac{1}{2}Bx_{1}dx_{2}+A^{L}_{z}(r)dz\end{split}\] (C.11)
We also take the following ansatz for the metric and gauge field functions:
\[F(r)= f_{1}(r-r_{h})+\big{(}f_{2}+f_{2b}B^{2}\big{)}(r-r_{h})^{2}\] (C.12) \[V(r)= v_{0}+v_{0b}B^{2}+\big{(}v_{1}+v_{1b}B^{2}\big{)}(r-r_{h})+\big{(}v_{2}+v_{2b}B^{2}\big{)}(r-r_{h})^{2}\] \[W(r)= v_{0}+w_{0b}B^{2}+\big{(}v_{1}+w_{1b}B^{2}\big{)}(r-r_{h})+\big{(}v_{2}+w_{2b}B^{2}\big{)}(r-r_{h})^{2}\] \[C(r)= B\big{(}c_{1}(r-r_{h})+c_{2}(r-r_{h})^{2}\big{)}\] \[A^{R}_{v}(r)= a^{R}_{v0}+a^{R}_{v1}(r-r_{h})+\big{(}a^{R}_{v2}+a^{R}_{v2b}B^{2}\big{)}(r-r_{h})^{2}\] \[A^{R}_{z}(r)= B\big{(}a^{R}_{z0}+a^{R}_{z1}(r-r_{h})+\big{(}a^{R}_{z2}+a^{R}_{z2b}B^{2}\big{)}(r-r_{h})^{2}\big{)}\] \[A^{L}_{v}(r)= a^{L}_{v0}+a^{L}_{v1}(r-r_{h})+\big{(}a^{L}_{v2}+a^{L}_{v2b}B^{2}\big{)}(r-r_{h})^{2}\] \[A^{L}_{z}(r)= B\big{(}a^{L}_{z0}+a^{L}_{z1}(r-r_{h})+\big{(}a^{L}_{z2}+a^{L}_{z2b}B^{2}\big{)}(r-r_{h})^{2}\big{)}\]
The equations of motion at \(r=r_{h}\) and at \(O(B^{0})\) give the following coefficients:
\[v_{1}= \frac{8v_{0}}{f_{1}}-\frac{4\bigg{(}(a^{R}_{v1})^{2}+(a^{L}_{v1} )^{2}\bigg{)}v_{0}}{f_{1}}\] (C.13) \[v_{2}= \frac{v_{1}^{2}}{4v_{0}}=\frac{16v_{0}}{f_{1}^{2}}\left(1-\frac{ (a^{R}_{v1})^{2}+(a^{L}_{v1})^{2}}{6}\right)^{2}\] \[f_{2}= -2+\frac{7}{3}\bigg{(}(a^{R}_{v1})^{2}+(a^{L}_{v1})^{2}\bigg{)}\] \[a^{R,L}_{v2}= -6\frac{a^{R,L}_{v1}}{f_{1}}\left(1-\frac{(a^{R}_{v1})^{2}+(a^{L} _{v1})^{2}}{6}\right)\]
At \(O(B^{2})\), solving the equations at \(r=r_{h}\) gives the following coefficients:
\[f_{2b}=\frac{5}{3v_{0}^{2}}+\frac{3v_{0}}{4}c_{1}^{2}\,,\qquad v _{2b}=\text{complicated}\] (C.14) \[v_{1b}=-\frac{8}{3f_{1}v_{0}}+8v_{0}v_{0b}\bigg{(}1-\frac{(a^{R} _{v1})^{2}+(a^{L}_{v1})^{2}}{6}\bigg{)}\] \[w_{1b}=\frac{4}{3f_{1}v_{0}}+8v_{0}v_{0b}\bigg{(}1-\frac{(a^{R}_ {v1})^{2}+(a^{L}_{v1})^{2}}{6}\bigg{)}-\frac{v_{0}^{2}}{f_{1}}c_{1}^{2}\] \[a^{R,L}_{2b}=\frac{a^{R,L}_{v1}}{f_{1}v_{0}^{2}}-\frac{a^{R,L}_{ v1}v_{0}}{4f_{1}}\,c_{1}^{2}+\frac{2a^{R,L}_{v1}}{f_{1}v_{0}^{2}}\,\kappa^{2}\] \[c_{2}=\left(\frac{11}{3}\big{(}(a^{R}_{v1})^{2}+(a^{L}_{v1})^{2} \big{)}-10\right)\frac{c_{1}}{f_{1}}-4\frac{(a^{R}_{v1})^{2}+(a^{L}_{v1})^{2}}{ f_{1}v_{0}^{3/2}}\,\kappa\] \[a^{R,L}_{z1}=-\frac{a^{R,L}_{v1}}{f_{1}}\left(v_{0}\,c_{1}-\frac{ 2\kappa}{\sqrt{v_{0}}}\right)\,.\]
The other coefficients either come from the input data or do not contribute to the final result.
Now let us turn on all the perturbations of the form \(\delta g_{MN}e^{-i\omega v+ikx_{3}}\) that arise in the energy dynamics. The equation \(E_{vv}=0\), evaluated at \(r=r_{h}\) up to first order in the perturbations and to second order in \(B\), gives:
\[\begin{split}\bigg{[}ik^{2}\left(1-\frac{w_{0b}}{v_{0}}B^{2} \right)+v_{0}c_{1}kB+\,\frac{12v_{0}}{f_{1}}\omega\big{(}1-\frac{(a_{v1}^{R}) ^{2}+(a_{v1}^{L})^{2}}{6}-\frac{B^{2}}{6v_{0}^{2}}-\frac{v_{0}\,c_{1}^{2}\,B^ {2}}{24}\big{)}\bigg{]}\delta g_{vv}^{(0)}+\\ \big{(}\frac{f_{1}}{2}+i\omega\bigg{)}\bigg{[}2k\left(1-\frac{w_{0 b}}{v_{0}}B^{2}\right)\delta g_{vv}^{(0)}+\omega\bigg{(}(\delta g_{11}^{(0)}+ \delta g_{22}^{(0)})\left(1-\frac{v_{0b}}{v_{0}}B^{2}\right)+\delta g_{33}^{(0 )}\left(1-\frac{w_{0b}}{v_{0}}B^{2}\right)\bigg{)}\bigg{]}=\,0.\end{split}\]
Using \(f_{1}=4\pi T\), we find **two butterfly velocities**:
\[\begin{split} v_{B,\pm}^{L}=&\,\pm\,\frac{2\pi T}{\sqrt{v_{0}\big{(}6-\big{(}(a_{v1}^{R})^{2}+(a_{v1}^{L})^{2}\big{)}\big{)}}}\\ &\times\,\left[1\mp c_{1}\,\frac{v_{0}^{1/2}}{2\sqrt{6-\big{(}(a_{v1}^{R})^{2}+(a_{v1}^{L})^{2}\big{)}}}B+\frac{1}{2}\left(\frac{4+v_{0}^{3}c_{1}^{2}}{4v_{0}^{2}\big{(}6-\big{(}(a_{v1}^{R})^{2}+(a_{v1}^{L})^{2}\big{)}\big{)}}-\frac{w_{0b}}{v_{0}}\right)B^{2}\right]\,.\end{split}\]
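As a consistency check (not spelled out in the text): formally setting \(a_{v1}^{L}=0\), the above reduces to the single-axial-current expression of Appendix C.1 with \(a_{1}=a_{v1}^{R}\),

\[v_{B,\pm}^{L}\Big|_{a_{v1}^{L}=0}=\,\pm\frac{2\pi T}{\sqrt{v_{0}\big(6-(a_{v1}^{R})^{2}\big)}}\left[1\mp c_{1}\,\frac{v_{0}^{1/2}}{2\sqrt{6-(a_{v1}^{R})^{2}}}\,B+\mathcal{O}(B^{2})\right]\,.\]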
**A comment:**
It can be seen that, in general, the two longitudinal butterfly velocities have different magnitudes. This is due to the term linear in \(B\), whose coefficient contains a factor of \(c_{1}\), which cannot be determined by a near-horizon analysis. However, as discussed below (C.3), it is proportional to \(\kappa\) and must be an even function of \(\nu^{R}\) and \(\nu^{L}\). Also, it must vanish when \(\nu^{R}=\nu^{L}\). Taking these points into account, we can parameterize the general form of this quantity:
\[\begin{split} c_{1}&=\frac{\kappa}{v_{0}^{3/2}} \left((\nu^{R})^{2}-(\nu^{L})^{2}\right)\mathcal{J}\big{(}\nu^{R},\nu^{L} \big{)}\\ &=\frac{4\,\kappa}{v_{0}^{3/2}}\,\nu_{\rm A}\,\nu_{\rm V}\, \mathcal{J}\big{(}\nu_{\rm A},\nu_{\rm V}\big{)}\end{split}\] (C.15)
where \(\mathcal{J}\) is an even function of its two arguments, and \(\mathcal{J}(0,0)=1\).
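For clarity (our unpacking, assuming the dimensionless potentials decompose as the gauge fields do in (C.9) and (C.10), i.e. \(\nu^{R,L}=\nu_{\rm V}\pm\nu_{\rm A}\)), the second line of (C.15) follows from the first via

\[(\nu^{R})^{2}-(\nu^{L})^{2}=(\nu_{\rm V}+\nu_{\rm A})^{2}-(\nu_{\rm V}-\nu_{\rm A})^{2}=4\,\nu_{\rm V}\,\nu_{\rm A}\,.\]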
It is easy to show that if we take the perturbations as \(\delta g_{MN}e^{-i\omega v+ikx_{1}}\), which is equivalent to measuring the butterfly effect on the axis perpendicular to the magnetic field, we find two butterfly velocities with the same magnitude
\[v_{B,\pm}^{T}=\,\pm\frac{2\pi T}{\sqrt{v_{0}\big{(}6-((a_{v1}^{R})^{2}+(a_{v1 }^{L})^{2})\big{)}}}\left[1+\frac{1}{2}\left(\frac{4+v_{0}^{3}c_{1}^{2}}{4v_{0 }^{2}\big{(}6-((a_{v1}^{R})^{2}+(a_{v1}^{L})^{2})\big{)}}-\frac{v_{0b}}{v_{0}} \right)B^{2}\right]\,.\]
## Appendix D Near horizon data in the tensor channel
The near horizon information about the dynamics of \(H_{xy}\) is encoded in equations (4.3)-(4.6). The first three coefficients in these equations are as follows:
\[M_{11}= \left[k-\left(\frac{p}{q}+2\kappa b\right)\omega\right]^{2}+2i \omega\big{(}6-b^{2}-q^{2}\big{)}\,,\] \[M_{21}= \,8k^{2}\left(6-5b^{2}-q^{2}\right)+4i\omega\left[10b^{4}+20b^{2} \left(q^{2}-6\right)-3\left(p^{2}-2\left(q^{2}-6\right)^{2}\right)\right]-24\,p \,q\,k\,\omega\] \[+\,8\bigg{[}\left(10b^{2}+5q^{2}-12\right)\,k\,\omega+p\,q\left(3 \omega^{2}+3i\omega-2\right)\bigg{]}\left(\frac{p}{q}+2\kappa b\right)\,,\] \[M_{22}= \,6\,k^{2}+4i\omega\left(11\left(6-q^{2}\right)-19b^{2}\right)+ \,6\left(4b^{2}-24\right)\] \[-12k\omega\,\left(\frac{p}{q}+2\kappa b\right)+\,6\left(\omega^ {2}+i\omega-1\right)\,\left(\frac{p}{q}+2\kappa b\right)^{2}\,.\]
|
2309.08084 | Generalized multicategories: change-of-base, embedding, and descent | Via the adjunction $ - \boldsymbol{\cdot} 1 \dashv \mathcal V(1,-) \colon
\mathsf{Span}(\mathcal V) \to \mathcal V \text{-} \mathsf{Mat} $ and a
cartesian monad $ T $ on an extensive category $ \mathcal V $ with finite
limits, we construct an adjunction $ - \boldsymbol{\cdot} 1 \dashv \mathcal
V(1,-) \colon \mathsf{Cat}(T,\mathcal V) \to (\overline T, \mathcal
V)\text{-}\mathsf{Cat} $ between categories of generalized enriched
multicategories and generalized internal multicategories, provided the monad $
T $ satisfies a suitable condition, which is satisfied by several examples.
We verify, moreover, that the left adjoint is fully faithful, and preserves
pullbacks, provided that the copower functor $ - \boldsymbol{\cdot} 1 \colon
\mathsf{Set} \to \mathcal V $ is fully faithful. We also apply this result to
study descent theory of generalized enriched multicategorical structures.
These results are built upon the study of base-change for generalized
multicategories, which, in turn, was carried out in the context of categories
of horizontal lax algebras arising out of a monad in a suitable 2-category of
pseudodouble categories. | Rui Prezado, Fernando Lucatelli Nunes | 2023-09-15T00:56:44Z | http://arxiv.org/abs/2309.08084v2 | # Generalized multicategories: change-of-base, embedding, and descent
###### Abstract.
Via the adjunction \(-\cdot 1\dashv\mathcal{V}(1,-)\colon\mathsf{Span}(\mathcal{V})\to\mathcal{V} \mathsf{Mat}\) and a cartesian monad \(T\) on an extensive category \(\mathcal{V}\) with finite limits, we construct an adjunction \(-\cdot 1\dashv\mathcal{V}(1,-)\colon\mathsf{Cat}(T,\mathcal{V})\to(\overline{T}, \mathcal{V})\mathsf{Cat}\) between categories of generalized enriched multicategories and generalized internal multicategories, provided the monad \(T\) satisfies a suitable condition, which is satisfied by several examples.
We verify, moreover, that the left adjoint is fully faithful, and preserves pullbacks, provided that the copower functor \(-\cdot 1\colon\mathsf{Set}\to\mathcal{V}\) is fully faithful. We also apply this result to study descent theory of generalized enriched multicategorical structures.
These results are built upon the study of base-change for generalized multicategories, which, in turn, was carried out in the context of categories of horizontal lax algebras arising out of a monad in a suitable 2-category of pseudodouble categories.
Key words and phrases: double category, equipment, lax algebra, generalized multicategory, effective descent morphisms, Beck-Chevalley condition, virtual equipment, internal category, enriched category, Grothendieck descent theory, extensive category, higher category theory
## Introduction
The systematic study of the dichotomy between enriched categories and internal categories can be traced as far back as [44, Section 2.2]. It was shown in [35, Theorem 9.10] that for a suitable base category \(\mathcal{V}\), the category \(\mathcal{V}\)-\(\mathsf{Cat}\) of enriched \(\mathcal{V}\)-categories can be fully embedded into the category \(\mathsf{Cat}(\mathcal{V})\) of categories internal to \(\mathcal{V}\), enabling us to view enriched \(\mathcal{V}\)-categories as _discrete_ categories internal to \(\mathcal{V}^{1}\). This observation is, for example, employed in the study of descent theory of enriched categories (see [35, Theorem 9.11] and [37]). The aim of this work is to construct such an embedding in the setting of generalized multicategories, which we recall below.
Multicategories, defined in [31, p. 103], are structures that generalize categories, by allowing the domains of morphisms to consist of a finite list of objects. The most quintessential example is the multicategory \(\mathsf{Vect}\), whose objects are vector spaces, and whose morphisms are multilinear maps. Their "multicomposition" and the description of the analogous notions of associativity and identity can succinctly be described via the free monoid monad on \(\mathsf{Set}\). More precisely, multicategories can be formalized by considering the equipment \(\mathsf{Span}(\mathsf{Set})\) of spans in \(\mathsf{Set}\) (see [4, p. 22]), and extending the free monoid monad to a suitable monad \((-)^{*}\) on \(\mathsf{Span}(\mathsf{Set})\) (see [22, Corollary A.4]).
_Generalized multicategories_ have since been developed in various contexts, abstracting the notion of ordinary multicategories by replacing the monad \((-)^{*}\) on \(\mathsf{Span}(\mathsf{Set})\) by a _suitable notion of monad on a pseudodouble category_.
_Enriched \(T\)-categories_ were first introduced in [12] with the terminology _\((T,\mathcal{V})\)-categories_. In this setting, the category of \((T,\mathcal{V})\)-categories is obtained out of the so-called _lax extension_ of a monad on \(\mathsf{Set}\) to a suitable monad on \(\mathcal{V}\)-\(\mathsf{Mat}\), the ubiquitous equipment of \(\mathcal{V}\)-matrices (see [12, Section 2]). For instance, when \(\mathcal{V}\) is a suitable quantale, the ultrafilter monad \(\mathfrak{A}\) on \(\mathsf{Set}\) admits a lax extension \(\overline{\mathfrak{A}}\) to \(\mathcal{V}\)-\(\mathsf{Mat}\)[12, Section 8]. In particular, when \(\mathcal{V}=2\), we have an equivalence \(\mathsf{Top}\simeq(\overline{\mathfrak{A}},2)\)-\(\mathsf{Cat}\) (first observed in [3]) and \((\overline{\mathfrak{A}},[0,\infty])\)-\(\mathsf{Cat}\) is equivalent to the category of approach spaces.
_Internal \(T\)-categories_ were introduced in [8] and [22]. For a category \(\mathcal{B}\) with pullbacks, the former defines \(T\)-categories for any monad \(T\) on \(\mathcal{B}\), while the latter considers \(T\) to be a _cartesian_ monad on \(\mathcal{B}\). A cartesian monad \(T\) on \(\mathcal{B}\) induces a strong monad on the equipment \(\mathsf{Span}(\mathcal{B})\) of spans in \(\mathcal{B}\). In this setting, we can obtain the category \(\mathsf{Cat}(T,\,\mathcal{B})\) of \(T\)-categories internal to \(\mathcal{B}\). As examples, we recover the category of ordinary multicategories by considering \(\mathsf{Cat}((-)^{*},\mathsf{Set})\), and letting \(\mathfrak{F}\) be the free category monad on \(\mathsf{Grph}\), we obtain the category \(\mathsf{VDbCat}=\mathsf{Cat}(\mathfrak{F},\mathsf{Grph})\) of virtual double categories.
The main goal of this paper is to construct an embedding \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\to\mathsf{Cat}(T,\,\mathcal{V})\) from a category \(\mathcal{V}\), and a monad \(T\) on \(\mathcal{V}\), satisfying suitable properties. To this end, it is desirable to work in a general setting where these various notions of generalized multicategories can be uniformly studied and compared with one another. This was, in part, accomplished by the work of [15], where the notion of _\(T\)-monoids_ was introduced, unifying the several approaches to the theory of generalized multicategories. To be precise, these \(T\)-monoids are the _horizontal lax algebras_ induced by a monad \(T=(\mathbb{E},T,e,m)\) in the \(2\)-category \(\mathsf{VDbCat}\) of virtual double categories, lax functors and vertical transformations. These objects have a natural structure of a virtual double category, which we denote here by \(\mathbb{H}\,\mathsf{Lax}\)-\(T\)-\(\mathsf{Alg}\).
This general setting ought to provide us with an "internalization" functor \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\to\mathsf{Cat}(T,\,\mathcal{V})\) obtained from the comparison \(\mathcal{V}\)-\(\mathsf{Mat}\to\mathsf{Span}(\mathcal{V})\), and the induced monad \(T\) on \(\mathsf{Span}(\mathcal{V})\). However, [15] does not provide a notion of _change-of-base_ induced by an appropriate notion of morphism \(S\to T\) of monads, where \(S=(\mathbb{D},S,e,m)\) is another monad in \(\mathsf{VDbCat}\). We remark that this was left as future work in [15, 4.4].
It should be noted that change-of-base functors have been studied in each specific setting of generalized multicategories. In [33, Section 6.7], the author provides such constructions for the internal case, and [12, Sections 5, 6] treats two particular families of monad morphisms for the enriched case. To establish a relationship between the enriched and internal structures, we expand on the work of these authors, with the goal of providing a convenient environment to produce and study such a functor \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\to\mathsf{Cat}(T,\,\mathcal{V})\) from simpler tools.
We must mention that our approach diverges from the techniques and tools developed in [15]. Firstly, we must restrict our scope from virtual double categories to pseudodouble categories, as we need to work with _(op)lax horizontal transformations_, which require horizontal composition to be defined. Secondly, instead of using (op)cartesian \(2\)-cells and their universal properties, we opted to use a "mate theory" of conjoints and companions to prove our results, mostly to obtain explicit formulas. Lastly, this is far
from the full scope of the project mentioned in [15, 4.4], as we merely study the underlying categories and functors of the \(2\)-dimensional structures formed by these horizontal lax algebras. Instead, we fall back on an _ad-hoc_ approach for the natural transformations between the functors induced by monad (op)lax morphisms, leaving a treatment of the complete story for future work.
### Outline of the paper:
We begin by reviewing, in Section 1, the notion of _pseudodouble category_, first introduced in [20], and the two-dimensional structures formed by these. For pseudodouble categories \(\mathbb{D},\mathbb{E}\), the structures consisting of
* _lax functors_\(\mathbb{D}\to\mathbb{E}\) as \(0\)-cells,
* _vertical transformations_ as vertical2 \(1\)-cells,
Footnote 2: In accordance with [15], we take the vertical arrows to be non-strict ones instead.
* _(op)lax horizontal transformations_ as horizontal \(1\)-cells,
* _generalized modifications_ as \(2\)-cells.
are, by Proposition 1.8, pseudodouble categories \(\mathsf{Lax}_{\mathsf{lax}}(\mathbb{D},\mathbb{E})\) (\(\mathsf{Lax}_{\mathsf{opl}}(\mathbb{D},\mathbb{E})\)). We also have a third double category \(\mathsf{PsDbCat}\) (Proposition 1.4) consisting of
* _pseudodouble categories_ as \(0\)-cells,
* _(op)lax_ functors as (vertical) horizontal \(1\)-cells,
* _generalized vertical transformations_ as \(2\)-cells.
The pseudodouble categories that concern our study are the following:
* the pseudodouble category \(\mathcal{V}\)-Mat of \(\mathcal{V}\)-matrices for suitable monoidal categories \(\mathcal{V}\),
* the pseudodouble category \(\mathsf{Span}(\mathcal{B})\) of spans in \(\mathcal{B}\), for categories \(\mathcal{B}\) with pullbacks,
* the double category of lax \(T\)-algebras, for \(T\) a pseudomonad on a \(2\)-category \(\mathbb{B}\).
We will furthermore review the double categorical structure of the last item.
Let \(\mathcal{V}\) be a distributive category with finite limits. Section 2 is devoted to studying the pseudodouble categories \(\mathcal{V}\)-Mat and \(\mathsf{Span}(\mathcal{V})\), and the (op)lax functors induced by the adjunction \(\dashdot\mathsf{1}\dashdot\mathcal{V}(1,-)\colon\mathcal{V}\to\mathsf{Set}\). We confirm these functors induce \(\mathsf{Cat}\)-graph morphisms
\[\dashdot\mathsf{1}\colon\mathcal{V}\text{-Mat}\to\mathsf{Span}(\mathcal{V}) \quad\text{and}\quad\mathcal{V}(1,-)\colon\mathsf{Span}(\mathcal{V})\to \mathcal{V}\text{-Mat},\]
which give us an adjunction \(\dashdot\mathsf{1}\dashdot\mathcal{V}(1,-)\) in the \(2\)-category \(\mathsf{Grph}(\mathsf{Cat})\) (Lemma 2.1). We also prove that \(\dashdot\mathsf{1}\colon\mathcal{V}\text{-Mat}\to\mathsf{Span}(\mathcal{V})\) defines an oplax functor of pseudodouble categories (Proposition 2.2). Using techniques from the following couple of sections, we obtain the following
which is a generalized notion of adjunction - a _conjunction_ - in the double category \(\mathsf{PsDbCat}\).
Section 3 aims to recall the notions of "adjoint" in pseudodouble categories: _conjoints_ and _companions_. These were first introduced in [19], under different terminology. We provide an explicit description of "mate theory" for these objects (also studied in [19, 41]), analogous to the mate theory for adjunctions. We also take the opportunity to work out some known results for three reasons: first, to fix technical notation for subsequent sections; second, to serve as examples on their use; and finally, to keep this work self-contained. This Section culminates in our first contribution, crucial to construct functor between categories of lax horizontal algebras, Theorem 3.6. It states that, if \(\mathbb{E}\) is _conjoint closed_, then so is \(\mathsf{Lax}_{\mathsf{lax}}(\mathbb{D},\mathbb{E})\).
In Section 4, we explicitly establish an equivalence (Proposition 4.1) between the double category \(\mathsf{PsDbCat}\) and the double category of pseudo-algebras for the free internal \(\mathsf{Cat}\)-category \(2\)-monad on the \(2\)-category \(\mathsf{Grph}\), with the goal of making the tools of two-dimensional algebra [7, 28, 34] available to the theory of pseudodouble categories. In particular, via doctrinal adjunction [36, Theorem 1.4.11], we conclude that \(\mathcal{V}(1,-)\colon\mathsf{Span}(\mathcal{V})\to\mathcal{V}\text{-Mat}\) is a lax functor, and is the conjoint of \(\dashdot\mathsf{1}\colon\mathcal{V}\text{-Mat}\to\mathsf{Span}(\mathcal{V})\) in \(\mathsf{PsDbCat}\).
After recalling the notion of horizontal lax algebra from [15], in Section 5 we prove Theorem 5.2; it states that any monad lax morphism \((G,\psi)\colon T\to S\) in \(\mathsf{PsDbCat}_{\mathsf{lax}}\) induces a _change-of-base_ functor \(G_{\uparrow}\colon\mathbb{H}\operatorname{Lax}\text{-}T\text{-Alg}\to\mathbb{H} \operatorname{Lax}\text{-}S\text{-Alg}\), and any monad oplax morphism \((F,\phi)\colon S\to T\) satisfying a suitable
condition also induces a change-of-base functor \(F_{!}\colon\mathbb{H}\operatorname{\mathsf{L}ax}\_{\text{-}}S\text{-}\operatorname{ \mathsf{A}lg}\to\mathbb{H}\operatorname{\mathsf{L}ax}\_{\text{-}}T\text{-} \operatorname{\mathsf{A}lg}\). We close this section by comparing our constructions with the change-of-base functors for generalized multicategories considered in [33] and [12].
In Section 6, we consider a conjunction
\[S\xr@{\text{$\underset{(G,\,\psi)}{\longleftarrow}$}}{\text{$\underset{ (G,\,\psi)}{\longleftarrow}$}}T\]
in the double category \(\operatorname{\mathsf{M}nd}(\operatorname{\mathsf{P}SbCat}_{\operatorname{ \mathsf{l}ax}})\), and we proceed to the existence of an adjunction
between the induced change-of-base functors; this is Theorem 6.1. We also study the conditions for invertibility of the unit and counit of such an adjunction, stated in Lemma 6.2 and Corollary 6.4. Finally, after instantiating these results in the settings considered in [33] and [12, 24], we take the opportunity to point out some of the obstacles to the double pseudofunctoriality of \(\mathbb{H}\operatorname{Lax}\text{-}(-)\text{-}\mathsf{Alg}\).
We devote Section 7 to the study of extensive categories. When \(\mathcal{C}\) is a lextensive category, we provide a description of \(\mathsf{Fam}(\mathcal{C})\) via Artin glueing (Lemma 7.1), from which we deduce that the coproduct functor \(\sum\colon\mathsf{Fam}(\mathcal{C})\to\mathcal{C}\) preserves finite limits. Studying limits of fibered categories, we obtain Theorem 7.2: it confirms that, in a lextensive category, the coproduct of a "pullback-indexed" family of pullback diagrams is itself a pullback diagram. This result is extensively employed, as illustrated in the remaining results of this Section as well as subsequent ones.
The final groundwork is laid down in Section 8. Via a "structure transfer"-type of result (Proposition 8.1), we are able to construct a monad \(\overline{T}\) on \(\mathcal{V}\)-\(\mathsf{Mat}\) from a monad \(T\) on \(\mathsf{Span}(\mathcal{V})\), which is, in turn, induced by a cartesian monad \(T\) on a lextensive category \(\mathcal{V}\) [22]. In fact, we obtain a conjunction
(0.1)
in the double category \(\operatorname{\mathsf{M}nd}(\operatorname{\mathsf{P}SbCat}_{\operatorname{ \mathsf{l}ax}})\). However, only under a suitable condition does this induce an adjunction
\[-\cdot 1\dashv\mathcal{V}(1,-)\colon\mathsf{Cat}(T,\,\mathcal{V})\to(\overline{T},\,\mathcal{V})\text{-}\mathsf{Cat}\] (0.2)
The goal of this Section is to study this extra condition. In the case that \(-\cdot 1\colon\mathsf{Set}\to\mathcal{V}\) is fully faithful, we obtain Theorem 8.6, characterizing this condition in terms of a notion of _fibrewise discreteness_ of a monad. Finally, we check that most of the commonly studied cartesian monads on lextensive categories \(\mathcal{V}\) are fibrewise discrete, provided \(-\cdot 1\colon\mathsf{Set}\to\mathcal{V}\) is fully faithful.
Section 9 contains our main results. Let \(\mathcal{V}\) be a lextensive category such that \(-\cdot 1\colon\mathsf{Set}\to\mathcal{V}\) is fully faithful, and let \(T\) be a fibrewise discrete, cartesian monad on \(\mathcal{V}\). We also denote the induced monad on \(\mathsf{Span}(\mathcal{V})\) by \(T\). Via Theorem 6.1, we obtain the (ordinary) adjunction (0.2) from the conjunction (0.1) in \(\mathsf{PsDbCat}_{\mathsf{lax}}\) (Theorem 9.2).
We then apply Theorem 9.2 to study effective descent morphisms for enriched categorical structures in Section 10. Under an additional technical condition (satisfied by most of the examples we provided), we confirm that \((\overline{T},\,\mathcal{V})\text{-}\operatorname{\mathsf{C}at}\) is precisely the full subcategory of \(\operatorname{\mathsf{C}at}(T,\,\mathcal{V})\) with a _discrete_ object-of-objects (Theorem 10.3), generalizing [35, 9.10 Theorem] and [13, Corollary 4.5]. Via this description, we confirm that \(-\cdot 1\colon(\overline{T},\,\mathcal{V})\text{-}\operatorname{\mathsf{C}at} \to\operatorname{\mathsf{C}at}(T,\,\mathcal{V})\) reflects effective descent morphisms (Lemma 10.4), and, with the results of [38] pertaining to effective descent morphisms in internal categorical structures, we provide criteria for an enriched \((\overline{T},\mathcal{V})\)-functor to be effective for descent (Theorem 10.5). We finalize the paper by studying the above examples.
## 1. Structure of double categories
Double categories were first defined in [18], and the more general pseudodouble categories were introduced in [20], allowing the vertical structure to be non-strict. Here, we will recall this notion of pseudodouble category, following the opposite convention of taking the horizontal structure to be non-strict, instead of the vertical, as in [15].
Furthermore, to fix notation, we also recall the notions of lax functor, and generalized vertical transformation [19, 2.2], and we provide definitions for (op)lax horizontal transformations, and the corresponding notion of modifications. For later reference, we also work out the (pseudo)double categorical structures formed by these objects.
### Pseudodouble categories:
A pseudodouble category \(\mathbb{D}\) consists of:
* A category \(\mathbb{D}_{0}\), denoting its objects as _0-cells_, its morphisms as _vertical 1-cells_, its composition and identities as _vertical_.
* A category \(\mathbb{D}_{1}\), denoting its objects as _horizontal 1-cells_, its morphisms as _2-cells_, its composition and identities as _vertical_.
* _Vertical_ domain and codomain functors \(\mathsf{dom}\), \(\mathsf{cod}\colon\mathbb{D}_{1}\to\mathbb{D}_{0}\),
* A _horizontal unit_ functor \(1\colon\mathbb{D}_{0}\to\mathbb{D}_{1}\),
* A _horizontal composition_ functor \(\cdot\colon\mathbb{D}_{2}\to\mathbb{D}_{1}\), where \(\mathbb{D}_{2}\), given by the pullback of \(\mathsf{dom}\) and \(\mathsf{cod}\), is the category of _composable pairs_ of horizontal 1-cells and 2-cells.
This data must satisfy \(\mathsf{dom}\circ 1=\mathsf{cod}\circ 1=\mathsf{id}\), and \(\mathsf{dom}(s\cdot r)=\mathsf{dom}(r)\), \(\mathsf{cod}(s\cdot r)=\mathsf{cod}(s)\). Furthermore, we say a natural transformation \(\phi\colon F\to G\) of functors \(\mathcal{C}\to\mathbb{D}_{1}\) is _globular_ if \(\mathsf{dom}\cdot\phi\) and \(\mathsf{cod}\cdot\phi\) are identities. We also have data
* Globular natural isomorphisms \(\lambda\colon 1_{\mathsf{cod}(-)}\cdot-\to-\), \(\rho\colon-\cdot 1_{\mathsf{dom}(-)}\to-\) of functors \(\mathbb{D}_{1}\to\mathbb{D}_{1}\), the _left_ and _right unitors_, respectively.
* A globular natural isomorphism \(\alpha\colon(-_{1}\cdot-_{2})\cdot-_{3}\to-_{1}\cdot(-_{2}\cdot-_{3})\) of functors \(\mathbb{D}_{3}\to\mathbb{D}_{1}\), the _associator_, where \(\mathbb{D}_{3}\) is the category of _composable triples_.
These must also satisfy the following _coherence conditions_:
* We have \(\gamma_{1_{x}}=\mathsf{id}_{1_{x}\cdot 1_{x}}\), where we define \(\gamma=\rho^{-1}\circ\lambda\).
* The following diagram commutes for each pair of horizontal 1-cells \(r\colon x\to y\), \(s\colon y\to z\).
* The following diagram commutes for each pair of horizontal 1-cells \(r\colon x\to y\), \(s\colon y\to z\).
* The following diagram commutes for each pair of horizontal 1-cells \(r\colon x\to y\), \(s\colon y\to z\).
* The following diagram commutes for each pair of horizontal 1-cells \(r\colon x\to y\), \(s\colon y\to z\).
* The following diagram commutes:
for each quadruple of composable horizontal \(1\)-cells \(q,r,s,t\).
We will usually suppress the subscripts, unless the need to disambiguate occurs. If \(\lambda,\rho\) and \(\alpha\) are the identity transformations, we say \(\mathbb{D}\) is a _double category_.
**Proposition 1.1**.: _The coherence conditions (a), (b) and (c) are redundant._
Proof.: First, observe that (b) is the horizontal dual of (c), so it suffices to show that (a) and (b) follow from the remaining conditions.
We may obtain (a) from the remaining conditions: we have an equality of \(2\)-cells \((1_{x}\cdot 1_{x})\cdot 1_{x}\to 1_{x}\)
\[\lambda\circ(\lambda\cdot 1)=\lambda\circ\lambda\circ\alpha=\lambda\circ(1 \cdot\lambda)\circ\alpha=\lambda\circ(\rho\cdot 1)\]
by (b), naturality of \(\lambda\), and (d). We deduce that \(\lambda\cdot 1=\rho\cdot 1\), and since \(\rho\) is a natural isomorphism, we conclude that \(\lambda=\rho\).
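To unpack the final step (our reading of the argument): naturality of \(\rho\) at the globular \(2\)-cells \(\lambda_{1_{x}},\rho_{1_{x}}\colon 1_{x}\cdot 1_{x}\to 1_{x}\) gives

\[\rho_{1_{x}}\circ(\lambda_{1_{x}}\cdot\mathsf{id}_{1_{x}})=\lambda_{1_{x}}\circ\rho_{1_{x}\cdot 1_{x}}\,,\qquad\rho_{1_{x}}\circ(\rho_{1_{x}}\cdot\mathsf{id}_{1_{x}})=\rho_{1_{x}}\circ\rho_{1_{x}\cdot 1_{x}}\,,\]

so \(\lambda\cdot 1=\rho\cdot 1\) forces \(\lambda_{1_{x}}\circ\rho_{1_{x}\cdot 1_{x}}=\rho_{1_{x}}\circ\rho_{1_{x}\cdot 1_{x}}\); cancelling the isomorphism \(\rho_{1_{x}\cdot 1_{x}}\) yields \(\lambda_{1_{x}}=\rho_{1_{x}}\), that is, \(\gamma_{1_{x}}=\mathsf{id}\), which is condition (a).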
To prove (b) given only (d) and (e), we consider the following diagram:
Except for the top left triangle, every inner polygon commutes either by (d) or by naturality of \(\alpha\). The outer pentagon is an instance of (e), so we conclude that the top left triangle commutes. Since \(\lambda\) is a natural isomorphism, the result follows.
### Lax functors:
Let \(\mathbb{D}\), \(\mathbb{E}\) be double categories. A lax functor \(F\colon\mathbb{D}\to\mathbb{E}\) consists of:
* A functor \(F_{0}\colon\mathbb{D}_{0}\to\mathbb{E}_{0}\).
* A functor \(F_{1}\colon\mathbb{D}_{1}\to\mathbb{E}_{1}\).
* A globular natural transformation \(\mathfrak{e}^{F}\colon 1\cdot F_{0}\to F_{1}\cdot 1\).
* A globular natural transformation \(\mathfrak{m}^{F}\colon F_{1}(-_{1})\cdot F_{1}(-_{2})\to F_{1}(-_{1}\cdot-_{2})\).
This data must satisfy the following properties:
* \(\mathsf{dom}\circ F_{1}=F_{0}\circ\mathsf{dom}\)
* \(\mathsf{cod}\circ F_{1}=F_{0}\circ\mathsf{cod}\),
* Comparison coherences for the unit: the following diagrams commute \[\begin{CD}1_{Fy}\cdot Fr@>{\mathfrak{e}^{F}\cdot\mathsf{id}}>>F1_{y}\cdot Fr\\ @V{\lambda}VV@VV{\mathfrak{m}^{F}}V\\ Fr@>>{F\lambda^{-1}}>F(1_{y}\cdot r)\end{CD}\qquad\begin{CD}Fr\cdot 1_{Fx}@>{\mathsf{id}\cdot\mathfrak{e}^{F}}>>Fr\cdot F1_{x}\\ @V{\rho}VV@VV{\mathfrak{m}^{F}}V\\ Fr@>>{F\rho^{-1}}>F(r\cdot 1_{x})\end{CD}\]
* Comparison coherence for composition: the following diagram commutes \[\begin{CD}(Ft\cdot Fs)\cdot Fr@>{\mathfrak{m}^{F}\cdot\mathsf{id}}>>F(t\cdot s)\cdot Fr@>{\mathfrak{m}^{F}}>>F((t\cdot s)\cdot r)\\ @V{\alpha}VV@.@VV{F\alpha}V\\ Ft\cdot(Fs\cdot Fr)@>>{\mathsf{id}\cdot\mathfrak{m}^{F}}>Ft\cdot F(s\cdot r)@>>{\mathfrak{m}^{F}}>F(t\cdot(s\cdot r))\end{CD}\]
Dually, an _oplax functor_\(F\colon\mathbb{B}\to\mathbb{C}\) is the horizontally dual notion (reverse the \(2\)-cells). If the unit comparison transformation is an isomorphism, we say \(F\) is _normal_, and if both comparisons are isomorphisms, then we say \(F\) is a _strong functor_ (which can be seen both as a lax and oplax functor).
**Proposition 1.2**.: _Composition of (op)lax functors is well-defined, associative, and has identities. That is to say, \(\mathsf{PsDbCat_{\mathsf{lax}}}\) (\(\mathsf{PsDbCat_{\mathsf{opl}}}\)) with double categories as objects and (op)lax functors as morphisms forms a category._
Proof.: For a double category \(\mathbb{D}\), the identity functor is given by the identity functors on \(\mathbb{D}_{0}\) and \(\mathbb{D}_{1}\), with coherence morphisms given by identities. The coherence conditions trivialize, thus we get a strong functor.
For lax functors \(F\colon\mathbb{C}\to\mathbb{D}\) and \(G\colon\mathbb{D}\to\mathbb{E}\), define the composite \(GF\) to be given by
* \((GF)_{0}=G_{0}F_{0}\),
* \((GF)_{1}=G_{1}F_{1}\),
* \(\mathbf{e}^{GF}=G\,\mathbf{e}^{F}\circ\mathbf{e}^{G}\),
* \(\mathsf{m}^{GF}=G\,\mathsf{m}^{F}\circ\mathsf{m}^{G}\).
To verify \(GF\) is a lax functor, first note that \(\mathbf{e}^{GF}\) and \(\mathsf{m}^{GF}\) are natural transformations, and globular, since \(G\) is a lax functor. Next, observe that the following diagrams
commute, since every inner polygon commutes: either by coherence (of both \(F\) and \(G\)) or by naturality (only in \(\mathsf{m}^{G}\)). Hence, the morphisms on the boundaries are equal, which give the coherences for \(GF\). If \(\mathbf{e}^{F}\), \(\mathsf{m}^{F}\), \(\mathbf{e}^{G}\), \(\mathsf{m}^{G}\) are isomorphisms, then so are \(\mathbf{e}^{GF}\) and \(\mathsf{m}^{GF}\).
Finally, note that the identity functors are the units for lax functor composition, and this operation is also associative. This is because all required compositions occur on categories: function composition on a category of sets, functor composition on \(\mathsf{Cat}\), and \(2\)-cell composition on the hom-categories (plus the composition preservation by the functors between them).
### Vertical transformations
We fix lax functors \(H\colon\mathbb{A}\to\mathbb{B}\), \(K\colon\mathbb{C}\to\mathbb{D}\) and oplax functors \(F\colon\mathbb{A}\to\mathbb{C}\) and \(G\colon\mathbb{B}\to\mathbb{D}\). A _generalized vertical transformation_\(\phi\), depicted as
so that the vertical domain, codomain are \(F\), \(G\) respectively, and horizontal domain, codomain given by \(H\), \(K\) respectively.
* A natural transformation \(\phi_{0}\colon G_{0}H_{0}\to K_{0}F_{0}\),
* A natural transformation \(\phi_{1}\colon G_{1}H_{1}\to K_{1}F_{1}\),
satisfying \(\mathsf{dom}\cdot\phi_{1}=\phi_{0}\cdot\mathsf{dom}\) and \(\mathsf{cod}\cdot\phi_{1}=\phi_{0}\cdot\mathsf{cod}\), subject to the following conditions:
We say \(\phi\) is a vertical transformation between lax functors if \(F=\mathsf{id}\) and \(G=\mathsf{id}\), in which case we denote it simply as \(\phi\colon H\to K\), and vertical transformations between oplax functors can be analogously defined.
**Proposition 1.3**.: _Lax functors and (generalized) vertical transformations form a category, and the vertical domain, codomain operations define functors to \(\mathsf{PsDbCat_{\mathsf{opl}}}\)._
Proof.: The identity vertical transformation on a lax functor \(F\) is given by \(\mathsf{id}_{F_{0}}\) and \(\mathsf{id}_{F_{1}}\), which trivially satisfies the conditions, and has identity functors as vertical domain and codomain.
To define composition, we consider the generalized vertical transformations \(\phi\) and \(\psi\) as given below:
We define \((\psi\circ\phi)_{i}=\psi_{i}F_{i}\circ K_{i}\phi_{i}\) for \(i=0,1\). Since we have
and
we conclude that \(\psi\circ\phi\) is a generalized vertical transformation, with vertical domain \(HF\) and codomain \(KG\). Hence, if \(\phi\) and \(\psi\) are globular, then so is \(\psi\circ\phi\), so we obtain a subcategory of lax functors and vertical transformations.
Associativity and identity are obtained via componentwise calculation on the underlying natural transformations.
**Proposition 1.4** ([19, 2.2]).: _We have a double category \(\mathsf{PsDbCat}\) with double categories, lax and oplax functors and generalized vertical transformations as 0-cells, horizontal and vertical 1-cells and 2-cells, respectively._
Proof.: The underlying \(\mathsf{Cat}\)-graph for this double category is described in Proposition 1.3.
For an oplax functor \(F\colon\mathbb{A}\to\mathbb{D}\), the identity natural transformation on \(F\) defines a generalized vertical transformation \(1_{F}\colon 1_{\mathsf{id}}\to 1_{\mathsf{id}}\), whose vertical domain and codomain are both \(F\).
Given generalized vertical transformations \(\phi\) and \(\psi\) as given by the following diagrams
we denote their horizontal composite by \(\psi\cdot\phi\), which is defined by \((\psi\cdot\phi)_{i}=S_{i}\phi_{i}\circ\psi_{i}P_{i}\). Since
and
are commutative diagrams, we conclude \(\psi\cdot\phi\) is a generalized vertical transformation, with vertical domain \(F\) and codomain \(H\). Associativity and identity conditions hold, via componentwise calculation on the underlying natural transformations, so we may take the associator and unit isomorphisms to be identites.
### Horizontal transformations:
Let \(F,\,G\colon\mathbb{D}\to\mathbb{E}\) be lax functors. A _lax horizontal transformation_\(\phi\colon F\to G\) is given by data
* a functor \(\phi\colon\mathbb{D}_{0}\to\mathbb{E}_{1}\)
* a globular natural transformation \(\mathfrak{n}^{\phi}\colon G\cdot\phi_{\mathsf{dom}}\to\phi_{\mathsf{cod}}\cdot F\) of functors \(\mathbb{D}_{1}\to\mathbb{E}_{1}\).
satisfying the following coherence conditions:
* Comparison coherence for the unit: the following diagram commutes for all \(0\)-cells \(x\).
* Comparison coherence for the composition: for each composable pair of horizontal \(1\)-cells \(r\colon x\to y\), \(s\colon y\to z\), the two composites \((Gs\cdot Gr)\cdot\phi_{x}\to\phi_{z}\cdot F(s\cdot r)\) agree, i.e. \[\mathfrak{n}^{\phi}_{s\cdot r}\circ(\mathfrak{m}^{G}\cdot\mathsf{id}_{\phi_{x}})=(\mathsf{id}_{\phi_{z}}\cdot\mathfrak{m}^{F})\circ\alpha\circ(\mathfrak{n}^{\phi}_{s}\cdot\mathsf{id}_{Fr})\circ\alpha^{-1}\circ(\mathsf{id}_{Gs}\cdot\mathfrak{n}^{\phi}_{r})\circ\alpha\,.\]
_defines a lax horizontal transformation \(\psi\cdot\phi\colon F\to H\)._
Proof.: Due to functoriality of horizontal composition and of the underlying functors of \(\phi\) and \(\psi\), it is enough to point out that \(\phi_{x}\) and \(\psi_{x}\) are a composable pair of horizontal \(1\)-cells to make sure the data (i) and (ii) define a functor \(\mathbb{D}_{0}\to\mathbb{E}_{1}\).
Furthermore, note that the datum (iii) is a composite of globular natural transformations, so it is enough to verify the coherence conditions are satisfied.
We note the following diagram, in which we have suppressed the horizontal \(1\)-cells,
commutes, by naturality of \(\alpha\) and unit comparison coherence for \(\psi\) and \(\phi\). By 1.1, the top composite is \(\gamma\), so this confirms unit comparison coherence for \(\psi\cdot\phi\).
The next diagram verifies composition coherence for \(\psi\cdot\phi\): it is a pasting of composition coherences for \(\psi\) and \(\phi\), a naturality square from the functoriality of \(\cdot\), and the remaining diagrams are coherence and naturality of \(\alpha\).
Hence, we have confirmed composition coherence for \(\psi\cdot\phi\), concluding the proof.
### Modifications
Let \(F\), \(G\), \(H\), \(K\colon\mathbb{C}\to\mathbb{D}\) be lax functors and let \(\zeta\colon F\to H\), \(\xi\colon G\to K\) be oplax horizontal transformations, and let \(\phi\colon F\to G\), \(\psi\colon H\to K\) be vertical transformations. A _modification_
\(\Gamma\colon\zeta\to\xi\), depicted as
(1.2)
is a natural transformation \(\Gamma\colon\zeta\to\xi\) on the underlying functors \(\zeta,\xi\colon\mathbb{D}_{0}\to\mathbb{E}_{1}\) such that
(1.3)
commutes for all horizontal 1-cells \(r\colon x\to y\). We say \(\phi\) and \(\psi\) are respectively the vertical domain and codomain of \(\Gamma\).
**Proposition 1.7**.: _We have a category \(\mathsf{Lax}_{\mathsf{opl}}(\mathbb{D},\mathbb{E})\) with oplax horizontal transformations of lax functors \(\mathbb{D}\to\mathbb{E}\) as objects, and modifications as morphisms. Moreover, the vertical domain and codomain operations define functors to the category of lax functors and vertical transformations._
Proof.: Let \(\zeta\colon F\to G\) be an oplax transformation of lax functors \(\mathbb{D}\to\mathbb{E}\). We take the identity modification \(\mathsf{id}_{\zeta}\) on \(\zeta\) to be given by identity natural transformation on the underlying functor of \(\zeta\), whose vertical domain and codomain are taken to be the identity vertical transformations \(\mathsf{id}_{F}\) and \(\mathsf{id}_{G}\), respectively. The instance of the diagram (1.3) for \(\mathsf{id}_{\zeta}\) is trivially commutative.
Let \(\Gamma,\Xi\) be modifications given by
We define the composite \(\Xi\circ\Gamma\) to be the vertical composition of the underlying natural transformations. Since
commutes for all horizontal 1-cells \(r\colon x\to y\), we confirm \(\Xi\circ\Gamma\colon\phi\to\chi\) is a modification with vertical domain \(\theta\circ\phi\) and codomain \(\omega\circ\psi\).
Associativity and identity properties are inherited from natural transformations, and functoriality of vertical domain and codomain is an immediate consequence.
**Proposition 1.8**.: _Let \(\mathbb{D}\), \(\mathbb{E}\) be double categories. \(\mathsf{Lax}_{\mathsf{opl}}(\mathbb{D},\mathbb{E})\) has the structure of a double category, with lax functors as 0-cells, vertical transformations as vertical 1-cells, oplax horizontal transformations as horizontal 1-cells, and modifications as 2-cells._
Proof.: The underlying categories of cells are provided in Propositions 1.3 and 1.7. Moreover, the latter has provided the vertical domain and codomain functors.
We have defined the horizontal unit functor on objects in Proposition 1.5. For a vertical transformation \(\phi\colon F\to G\), we define \(1_{\phi}\) to be the modification with underlying natural transformation \(1\cdot\phi_{0}\), with vertical domain \(1_{F}\) and codomain \(1_{G}\); note that
commutes by naturality of \(\gamma\). Since this is just whiskering with \(1\colon\mathbb{E}_{0}\to E_{1}\), this describes a functor.
We have defined the horizontal composition functor on objects in Proposition 1.6. For modifications \(\Gamma\) and \(\Xi\) as depicted below
we define \(\Xi\cdot\Gamma\) to be the horizontal composition of the underlying natural transformations. This is a modification, since the following diagram commutes
(1.4)
and has vertical domain \(\phi\) and codomain \(\chi\).
Since horizontal composition in \(\mathbb{E}\) is functorial, we obtain functoriality of horizontal composition of modifications. Moreover, both the horizontal unit and horizontal composition have the required behaviour with respect to vertical domains and codomains.
We are left with providing the unitors and associator, and the respective proofs that these satisfy the required coherence conditions. We define \(\lambda_{\zeta}\colon 1_{H}\cdot\zeta\to\zeta\) to be given by \(\lambda_{\zeta_{x}}\colon 1_{Hx}\cdot\zeta_{x}\to\zeta_{x}\), and \(\rho_{\zeta}\) is similarly defined. These are globular modifications, as the following diagrams commute
by naturality of \(\lambda,\rho\), and coherence.
Finally, we let \(\pi\colon L\to P\) be another oplax horizontal transformation. We define \(\alpha\colon(\pi\cdot\xi)\cdot\zeta\to\pi\cdot(\xi\cdot\zeta)\) to be given at \(x\) by \(\alpha\colon(\pi_{x}\cdot\xi_{x})\cdot\zeta_{x}\to\pi_{x}\cdot(\xi_{x}\cdot\zeta_{x})\). This is also a natural isomorphism, and it is a globular
modification since the following diagram commutes:
which is obtained by pasting coherence pentagons and naturality squares of \(\alpha\).
By checking componentwise, we find that \(\lambda,\rho\) and \(\alpha\) satisfy the desired coherence conditions.
### Examples:
The pseudodouble categories studied in this body of work are:
* The pseudodouble category \(\mathcal{V}\)-\(\mathsf{Mat}\) of \(\mathcal{V}\)-matrices, for distributive monoidal categories \(\mathcal{V}\) (that is, for \(\mathcal{V}\) with coproducts, which are preserved by the tensor product; see [6, 12, 15]).
* The pseudodouble category \(\mathsf{Span}(\mathcal{B})\) of spans of morphisms in \(\mathcal{B}\), for \(\mathcal{B}\) a category with pullbacks (see [4, 22, 15]).
* The pseudodouble categories \(\mathsf{Lax}_{\mathsf{lax}}(\mathbb{D},\mathbb{E})\) and \(\mathsf{Lax}_{\mathsf{opl}}(\mathbb{D},\mathbb{E})\) for double categories \(\mathbb{D}\), \(\mathbb{E}\).
* The double categories \(\mathsf{Lax}\)-\(T\)-\(\mathsf{Alg}\), \(\mathsf{Ps}\)-\(T\)-\(\mathsf{Alg}\) of lax and pseudo \(T\)-algebras, for \(T\) a pseudomonad on a \(2\)-category \(\mathbb{B}\).
* The double category \(\mathsf{Mnd}(\mathbb{B})=\mathsf{Lax}\)-id-Alg of monads in a \(2\)-category \(\mathbb{B}\).
We shall specify the double categorical structure of \(\mathsf{Lax}\) - \(T\)-Alg. First, recall that we have \(2\)-categories \(\mathsf{Lax}\) - \(T\)-Alg\({}_{\mathsf{lax}}\) and \(\mathsf{Lax}\) - \(T\)-Alg\({}_{\mathsf{opl}}\) whose \(0\)-cells are lax \(T\)-algebras, with their (op)lax morphisms and their respective \(2\)-cells [36]; however, there is a notion of generalized \(2\)-cell which subsumes both structures.
We will be taking the vertical \(1\)-cells to be the oplax morphisms and the horizontal \(1\)-cells to be the lax morphisms. Let \((h,\phi)\colon(w,a,\eta,\mu)\to(x,b,\eta,\mu)\), \((k,\psi)\colon(y,c,\eta,\mu)\to(z,d,\eta,\mu)\) be lax \(T\)-algebra lax morphisms and \((f,\zeta)\colon(w,a,\eta,\mu)\to(y,c,\eta,\mu)\), \((g,\xi)\colon(x,b,\eta,\mu)\to(z,d,\eta,\mu)\) be lax \(T\)-algebra oplax morphisms. A _generalized lax \(T\)-algebra 2-cell_
\[\begin{CD}(w,a,\eta,\mu)@>{(h,\phi)}>>(x,b,\eta,\mu)\\ @V{(f,\zeta)}VV@VV{(g,\xi)}V\\ (y,c,\eta,\mu)@>>{(k,\psi)}>(z,d,\eta,\mu)\end{CD}\]
consists of a \(2\)-cell \(\omega\colon g\cdot h\to k\cdot f\) satisfying the following coherence condition
where we write \(\omega^{T}=(\mathsf{m}^{T})^{-1}\circ T\omega\circ\mathsf{m}^{T}\). Horizontal and vertical composition is defined as expected: to be explicit, we consider generalized lax \(T\)-algebra \(2\)-cells \(\theta,\sigma\) given by
and we define \(\theta\circ\omega=(\theta\cdot f)\circ(g^{\prime}\cdot\omega)\) and \(\sigma\cdot\omega=(k^{\prime}\cdot\omega)\circ(\sigma\cdot h)\). These provide a double categorical structure to \(\mathsf{Lax}\)-\(T\)-\(\mathsf{Alg}\), provided the coherence conditions are satisfied for \(\theta\circ\omega\) and \(\sigma\cdot\omega\), which are given by the commutativity of the following diagrams:
with analogous definitions for \(\theta^{T}\) and \(\sigma^{T}\), plus a couple of omitted coherence conditions which confirm that \((\theta^{T}\cdot Tf)\circ(Tg^{\prime}\cdot\omega^{T})=(\theta\circ\omega)^{T}\) and \((Tk^{\prime}\cdot\omega^{T})\circ(\sigma^{T}\cdot Th)=(\sigma\cdot\omega)^{T}\).
## 2. Spans versus matrices
Let \(\mathcal{V}\) be a distributive, cartesian monoidal category with finite limits. Our starting point is the adjunction
\[-\cdot 1\dashv\mathcal{V}(1,-)\colon\mathcal{V}\to\mathsf{Set}\] (2.1)
whose unit and counit we denote by \(\hat{\eta},\hat{\varepsilon}\) respectively; here, \(-\cdot 1\) is the copower with the terminal object \(1\).
After fixing some notation regarding \(\mathcal{V}\)-Mat and \(\mathsf{Span}(\mathcal{V})\), we confirm that (2.1) induces an adjunction of internal \(\mathsf{Cat}\)-graph morphisms
\[-\cdot 1\colon\mathcal{V}\text{-}\mathsf{Mat}\to\mathsf{Span}(\mathcal{V}) \qquad\mathcal{V}(1,-)\colon\mathsf{Span}(\mathcal{V})\to\mathcal{V}\text{-} \mathsf{Mat}\]
and we will furthermore confirm that \(-\cdot 1\) defines an oplax functor.
Together with the tools and terminology provided in Sections 3 and 4, we will be able to deduce that \(\mathcal{V}(1,-)\) is a lax functor, and that we have a conjunction in the double category \(\mathsf{PsDbCat}\). The unit and counit may be depicted as follows
alluding to the fact that these are generalized vertical transformations in \(\mathsf{PsDbCat}\).
**Notation for \(\mathsf{Span}(\mathcal{V})\):** The \(\mathsf{Cat}\)-graph \(\mathsf{Span}(\mathcal{V})\) is succinctly defined as \([l\gets m\to r,\mathcal{V}]\rightrightarrows\mathcal{V}\), whose underlying functors are the evaluations at \(l\) and \(r\). Throughout this work, we opt to denote spans \(p\colon X\nrightarrow Y\) in \(\mathcal{V}\) as the following diagram
and a \(2\)-cell \(\theta\) will be denoted as a morphism \(M_{p}\to M_{q}\) making both of the following squares commute:
The unit span \(1_{X}\colon X\nrightarrow X\) is defined on objects by \(M_{1_{X}}=X\) and \(l_{1_{x}}=r_{1_{x}}=\mathsf{id}_{X}\), and on morphisms \(f\colon X\to Y\) by \(l_{f}=r_{f}=f\).
Let \(q\colon Y\nrightarrow Z\) be another span in \(\mathcal{V}\). We write the pullback which defines \(q\cdot p\) as
so that we have \(l_{q\cdot p}=l_{p}\circ\pi_{1}\) and \(r_{q\cdot p}=r_{q}\circ\pi_{0}\). By abuse of notation, we may refer to instances of such pullback diagrams as \(M_{q\cdot p}\).
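For instance (a sketch in the case \(\mathcal{V}=\mathsf{Set}\)), the pullback defining \(q\cdot p\) may be computed explicitly as

\[M_{q\cdot p}\cong\{(n,m)\in M_{q}\times M_{p}\mid l_{q}(n)=r_{p}(m)\}\,,\qquad l_{q\cdot p}(n,m)=l_{p}(m)\,,\quad r_{q\cdot p}(n,m)=r_{q}(n)\,,\]

with \(\pi_{0}(n,m)=n\) and \(\pi_{1}(n,m)=m\).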
The unitors \(\lambda\colon 1\cdot p\to p\) and \(\rho\colon p\cdot 1\to p\) in \(\mathsf{Span}(\mathcal{V})\) are given by the pullback projections \(\pi_{1}\colon M_{1\cdot p}\to M_{p}\) and \(\pi_{0}\colon M_{p\cdot 1}\to M_{p}\), respectively.
Given a third span \(r\colon Z\twoheadrightarrow W\), note that the universal property of the pullback \(M_{q\cdot p}\) guarantees the existence of a unique map \(\pi_{2}\colon M_{(r\cdot q)\cdot p}\to M_{q\cdot p}\) such that \(\pi_{1}\circ\pi_{2}=\pi_{1}\) and \(\pi_{1}\circ\pi_{0}=\pi_{0}\circ\pi_{2}\):
With this, the associator \(\alpha\colon(r\cdot q)\cdot p\to r\cdot(q\cdot p)\) may be defined as the unique map such that \(\pi_{1}\circ\alpha=\pi_{2}\) and \(\pi_{0}\circ\alpha=\pi_{0}\circ\pi_{0}\), via the universal property of the pullback \(M_{r\cdot(q\cdot p)}\):
**Notation for \(\mathcal{V}\)-Mat:** Let \(p\colon U\twoheadrightarrow V\) be a \(\mathcal{V}\)-matrix. We denote by \(p(u,v)\in\mathcal{V}\) the value of \(p\) at the pair \((u,v)\in U\times V\). A 2-cell of \(\mathcal{V}\)-matrices
consists of a family of morphisms \(\theta_{u,v}\colon p(u,v)\to q(fu,gv)\) in \(\mathcal{V}\), for \(u\in U\) and \(v\in V\). Given another 2-cell
the composite \(\omega\circ\theta\) is given at \(u,v\) by the composite of
\[p(u,v)\xrightarrow{\theta_{u,v}}q(fu,gv)\xrightarrow{\omega_{fu,gv}}r(hfu, kgv),\]
exhibiting the structure of \(\mathcal{V}\)-Mat as an internal Cat-graph.
Given \(u,u^{\prime}\in U\), we write \([u=u^{\prime}]\) for the set that is a singleton if \(u=u^{\prime}\) and empty otherwise. Note that if we have a function \(f\colon U\to V\), then there is a unique morphism \([u=u^{\prime}]\to[fu=fu^{\prime}]\). With this, the unit \(\mathcal{V}\)-matrix \(\operatorname{1}_{U}\colon U\twoheadrightarrow U\) is defined by \(\operatorname{1}_{U}(u,u^{\prime})=[u=u^{\prime}]\mathbin{\boldsymbol{\cdot}}1\) for a set \(U\), and \(1_{f}\) is given by \(1_{f}(u,u^{\prime})\colon[u=u^{\prime}]\mathbin{\boldsymbol{\cdot}}1\to[fu=fu ^{\prime}]\mathbin{\boldsymbol{\cdot}}1\) for a function \(f\colon U\to V\).
Recall that if \(t\colon V\twoheadrightarrow W\) is another \(\mathcal{V}\)-matrix, we have
\[(t\cdot s)(u,w)=\sum_{v\in V}t(v,w)\times s(u,v)\]
which is the composition of \(\mathcal{V}\)-matrices. This is likewise defined for 2-cells.
The unitors and associators are then given by taking coproducts over the unitors and associators for the cartesian monoidal structure of \(\mathcal{V}\).
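To illustrate these formulas in a small case, take \(U=\{u\}\), \(V=\{v_{1},v_{2}\}\), \(W=\{w\}\), and write \(s(u,v_{i})=A_{i}\), \(t(v_{i},w)=B_{i}\). Then
\[(t\cdot s)(u,w)=\big(B_{1}\times A_{1}\big)+\big(B_{2}\times A_{2}\big),\]
where \(+\) denotes the binary coproduct in \(\mathcal{V}\); for \(\mathcal{V}=\mathsf{Set}\) and finite values, taking cardinalities recovers ordinary matrix multiplication, whence the terminology. Likewise, \(1_{U}(u,u^{\prime})\) is the terminal object \(1\) when \(u=u^{\prime}\) and the initial object \(0\) otherwise, so \(1_{U}\) plays the role of an identity matrix.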
**Lifting the adjunction to \(\operatorname{\mathsf{Grph}}(\operatorname{\mathsf{Cat}})\):** For a \(\mathcal{V}\)-matrix \(p\colon X\twoheadrightarrow Y\), we define \(M_{p\cdot 1}=\sum_{x,y}p(x,y)\), and we define \(p\mathbin{\boldsymbol{\cdot}}1\colon X\mathbin{\boldsymbol{\cdot}}1 \twoheadrightarrow Y\mathbin{\boldsymbol{\cdot}}1\) to be the span given by taking the coproduct of \(p(x,y)\to 1\) indexed by \(X\times Y\); this gives a morphism \(M_{p\cdot 1}\to X\mathbin{\boldsymbol{\cdot}}1\times Y\mathbin{ \boldsymbol{\cdot}}1\) (see Diagram (2.2) below, which is commutative by the universal property of the coproduct), whose composite with the projections determine \(l_{p\cdot 1}\) and \(r_{p\cdot 1}\).
(2.2)
We write \(\hat{\pi}_{0}\colon(t\cdot s)\mathbin{\boldsymbol{\cdot}}1\to t\mathbin{\boldsymbol{\cdot}}1\) (respectively, \(\hat{\pi}_{1}\colon(t\cdot s)\mathbin{\boldsymbol{\cdot}}1\to s\mathbin{\boldsymbol{\cdot}}1\)) for the coproduct of the projections \(t(v,w)\times s(u,v)\to t(v,w)\) indexed by \(U\times V\times W\to V\times W\) (respectively, of the projections \(t(v,w)\times s(u,v)\to s(u,v)\) indexed by \(U\times V\times W\to U\times V\)).
For a span \(p\colon V\twoheadrightarrow W\) in \(\mathcal{V}\), we define the \(\mathcal{V}\)-matrix \(\mathcal{V}(1,p)\colon\mathcal{V}(1,V)\twoheadrightarrow\mathcal{V}(1,W)\) to be given at \(v,w\) by the following pullback:
(2.3)
and if we have a 2-cell of spans \(\theta\):
then \(\mathcal{V}(1,\theta)\) is the 2-cell uniquely determined by pullback as follows:
We observe that \(l_{q}\circ\theta=f\circ l_{p}\) and \(r_{q}\circ\theta=f\circ r_{p}\).
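For illustration, when \(\mathcal{V}=\mathsf{Set}\) we have \(\mathcal{V}(1,V)\cong V\) and \(\mathcal{V}(1,W)\cong W\), and the pullback (2.3) identifies \(\mathcal{V}(1,p)(v,w)\) with the fiber
\[\{m\in M_{p}\;:\;l_{p}(m)=v\text{ and }r_{p}(m)=w\},\]
with \(\mathcal{V}(1,\theta)\) given simply by applying \(\theta\) to elements; again, this description is only meant as a guide.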
We extend \(\hat{\eta}\), \(\hat{\varepsilon}\) to \(\mathcal{V}\)-\(\mathsf{Mat}\) and \(\mathsf{Span}(\mathcal{V})\): for a \(\mathcal{V}\)-matrix \(p\colon X\twoheadrightarrow Y\), we define \(\hat{\eta}_{p}\colon p\to\mathcal{V}(1,p\mathbin{\boldsymbol{\cdot}}1)\) at \(x,y\) to be given by the dashed arrow
(2.4)
For a span \(p\colon V\twoheadrightarrow W\), we let \(\hat{\varepsilon}_{p}\colon\mathcal{V}(1,p)\mathbin{\boldsymbol{\cdot}}1\to p\) be given by taking the coproduct of (2.3) indexed by
which yields a commutative square
(2.5)
By taking the coproduct of (2.4) over the diagram
we conclude that \(\hat{\varepsilon}_{p\mathbin{\boldsymbol{\cdot}}1}\circ(\hat{\eta}_{p}\mathbin{\boldsymbol{\cdot}}1)=\mathsf{id}_{p\mathbin{\boldsymbol{\cdot}}1}\). Moreover, by considering the following diagram
we conclude \(\mathcal{V}(1,\hat{\varepsilon}_{s})\circ\hat{\eta}_{\mathcal{V}(1,s)}=\mathsf{id}_{\mathcal{V}(1,s)}\). Hence, we have confirmed that
**Proposition 2.1**.: _We have an adjunction \(-\mathbin{\boldsymbol{\cdot}}1\dashv\mathcal{V}(1,-)\colon\mathsf{Span}(\mathcal{V})\to\mathcal{V}\text{-}\mathsf{Mat}\) in \(\mathsf{Grph}(\mathsf{Cat})\)._
**Coherence for \(-\mathbin{\boldsymbol{\cdot}}1\):** Now, we check that \(-\mathbin{\boldsymbol{\cdot}}1\) is a normal oplax functor: for a set \(X\), consider the pullback diagram
(2.6)
so that \(\delta_{x,x}=x\) and \(\delta_{x,y}\) is uniquely determined when \(x\neq y\). Now, we consider the image of the diagram (2.6) under \(-\cdot 1\), and take its coproduct indexed by:
(2.7)
This yields \(\mathbf{e}_{X}^{-1}\); since \([x=y]\mathbin{\boldsymbol{\cdot}}1\cong 0\) for \(x\neq y\), we conclude that this 2-cell is invertible.
Moreover, given a function \(f\colon X\to Y\), we observe that the following diagram
commutes, as it is the image via \(-\mathbin{\boldsymbol{\cdot}}1\) of a commutative diagram in \(\mathsf{Set}\). Taking the coproduct over (2.7) confirms naturality of \(\mathbf{e}^{-1}\).
For \(\mathcal{V}\)-matrices \(p\colon X\twoheadrightarrow Y\), \(q\colon Y\twoheadrightarrow Z\), \(\mathsf{m}^{-1}\) is depicted in Diagram (2.8) below by a dashed arrow, and is uniquely determined by the universal property of the pullback square:
(2.8)
We consider the following horizontally composable 2-cells of \(\mathcal{V}\)-Mat:
We have
\[\pi_{j}\circ\mathfrak{m}_{q_{0},q_{1}}^{-1}\circ((\zeta_{1}\cdot \zeta_{0})\mathbin{\boldsymbol{\cdot}}1) =(\pi_{j}\mathbin{\boldsymbol{\cdot}}1)\circ((\zeta_{1}\cdot\zeta_ {0})\mathbin{\boldsymbol{\cdot}}1)\] \[=(\zeta_{j}\mathbin{\boldsymbol{\cdot}}1)\circ(\pi_{j}\mathbin{ \boldsymbol{\cdot}}1)\] \[=(\zeta_{j}\mathbin{\boldsymbol{\cdot}}1)\circ\pi_{j}\circ \mathfrak{m}_{p_{0},p_{1}}^{-1}\] \[=\pi_{j}\circ((\zeta_{1}\mathbin{\boldsymbol{\cdot}}1)\cdot( \zeta_{0}\mathbin{\boldsymbol{\cdot}}1))\circ\mathfrak{m}_{p_{0},p_{1}}^{-1}\]
for \(j=0,1\). We obtain naturality via the universal property of the pullback \(M_{(q_{1}\mathbin{\boldsymbol{\cdot}}1)\cdot(q_{0}\mathbin{\boldsymbol{\cdot}}1)}\).
To verify the unit comparison coherence of \(-\mathbin{\boldsymbol{\cdot}}1\), we let \(p\colon X\twoheadrightarrow Y\) be a \(\mathcal{V}\)-matrix, and we consider the composite
\[\sum_{x,y,z}1_{Y}(y,z)\times p(x,y)\xrightarrow{\mathfrak{m}^{-1}}M_{(1_{Y}\mathbin{\boldsymbol{\cdot}}1)\cdot(p\mathbin{\boldsymbol{\cdot}}1)}\xrightarrow{\mathsf{e}^{-1}\cdot\,\mathsf{id}}M_{1_{Y\mathbin{\boldsymbol{\cdot}}1}\cdot(p\mathbin{\boldsymbol{\cdot}}1)}\xrightarrow{\lambda}\sum_{x,y}p(x,y).\]
By definition, \(\lambda\colon M_{1_{Y\mathbin{\boldsymbol{\cdot}}1}\cdot(p\mathbin{\boldsymbol{\cdot}}1)}\to M_{p\mathbin{\boldsymbol{\cdot}}1}\) is simply the pullback projection, thus \(\lambda\circ(\mathsf{e}^{-1}\cdot\mathsf{id})=\pi_{1}\) is the pullback projection \(M_{(1_{Y}\mathbin{\boldsymbol{\cdot}}1)\cdot(p\mathbin{\boldsymbol{\cdot}}1)}\to M_{p\mathbin{\boldsymbol{\cdot}}1}\), and therefore \(\lambda\circ(\mathsf{e}^{-1}\cdot\mathsf{id})\circ\mathfrak{m}^{-1}=\pi_{1}\circ\mathfrak{m}^{-1}=\hat{\pi}_{1}\) by (2.8). But \(\hat{\pi}_{1}\) itself is the coproduct of \(1_{Y}(y,z)\times p(x,y)\to p(x,y)\) indexed by the projection \(X\times Y\times Z\to Y\times Z\), which is just \(\lambda\mathbin{\boldsymbol{\cdot}}1\). A similar argument confirms the right unitor case.
Now, we are left with verifying the composition comparison coherence of \(-\mathbin{\boldsymbol{\cdot}}1\). For the remainder of the section, we will denote horizontal composition simply by concatenation. For an easier understanding of the calculations, we provide the following diagram:
First, we verify that \(\mathfrak{m}^{-1}\circ(\pi_{1}\mathbin{\boldsymbol{\cdot}}1)\circ(\alpha \mathbin{\boldsymbol{\cdot}}1)=\pi_{1}\circ\alpha\circ(\mathfrak{m}^{-1} \mathbin{\cdot}\mathsf{id})\circ\mathfrak{m}^{-1}\) as \(2\)-cells \(((ts)r)\mathbin{\boldsymbol{\cdot}}1\to(s\mathbin{\boldsymbol{\cdot}}1)(r \mathbin{\boldsymbol{\cdot}}1)\). We have
\[\pi_{0}\circ\mathfrak{m}^{-1}\circ\hat{\pi}_{1}\circ(\alpha \mathbin{\boldsymbol{\cdot}}1) =\hat{\pi}_{0}\circ\hat{\pi}_{1}\circ(\alpha\mathbin{\boldsymbol{ \cdot}}1)\] \[=\hat{\pi}_{1}\circ\hat{\pi}_{0}\] \[=\pi_{1}\circ\pi_{0}\circ(\mathfrak{m}^{-1}\mathbin{\cdot} \mathsf{id})\circ\mathfrak{m}^{-1}\] \[=\pi_{0}\circ\pi_{1}\circ\alpha\circ(\mathfrak{m}^{-1}\mathbin{ \cdot}\mathsf{id})\circ\mathfrak{m}^{-1}\] \[\pi_{1}\circ\mathfrak{m}^{-1}\circ\hat{\pi}_{1}\circ(\alpha \mathbin{\boldsymbol{\cdot}}1) =\hat{\pi}_{1}\circ\hat{\pi}_{1}\circ(\alpha\mathbin{\boldsymbol{ \cdot}}1)\] \[=\hat{\pi}_{1}\] \[=\pi_{1}\circ\mathfrak{m}^{-1}\] \[=\pi_{1}\circ(\mathfrak{m}^{-1}\mathbin{\cdot}\mathsf{id})\circ \mathfrak{m}^{-1}\] \[=\pi_{1}\circ\pi_{1}\circ\alpha\circ(\mathfrak{m}^{-1}\mathbin{ \cdot}\mathsf{id})\circ\mathfrak{m}^{-1},\]
and then we apply the universal property of the pullback \(M_{(s\mathbin{\boldsymbol{\cdot}}1)\cdot(r\mathbin{\boldsymbol{\cdot}}1)}\). With this, we finish our proof: note that
\[\pi_{0}\circ\alpha\circ(\mathsf{m}^{-1}\cdot\mathsf{id})\circ\mathsf{m}^{-1} =\pi_{0}\circ\pi_{0}\circ(\mathsf{m}^{-1}\cdot\mathsf{id})\circ\mathsf{m}^{-1}\] \[=\pi_{0}\circ\mathsf{m}^{-1}\circ\pi_{0}\circ\mathsf{m}^{-1}\] \[=(\pi_{0}\mathbin{\boldsymbol{\cdot}}1)\circ(\pi_{0}\mathbin{\boldsymbol{\cdot}}1)\] \[=(\pi_{0}\mathbin{\boldsymbol{\cdot}}1)\circ(\boldsymbol{\alpha}\mathbin{\boldsymbol{\cdot}}1)\] \[=\pi_{0}\circ\mathsf{m}^{-1}\circ(\boldsymbol{\alpha}\mathbin{\boldsymbol{\cdot}}1)\] \[=\pi_{0}\circ(\mathsf{id}\cdot\mathsf{m}^{-1})\circ\mathsf{m}^{-1}\circ(\boldsymbol{\alpha}\mathbin{\boldsymbol{\cdot}}1)\] \[\pi_{1}\circ(\mathsf{id}\cdot\mathsf{m}^{-1})\circ\mathsf{m}^{-1}\circ(\boldsymbol{\alpha}\mathbin{\boldsymbol{\cdot}}1) =\mathsf{m}^{-1}\circ\pi_{1}\circ\mathsf{m}^{-1}\circ(\boldsymbol{\alpha}\mathbin{\boldsymbol{\cdot}}1)\] \[=\mathsf{m}^{-1}\circ(\pi_{1}\mathbin{\boldsymbol{\cdot}}1)\circ(\boldsymbol{\alpha}\mathbin{\boldsymbol{\cdot}}1)\] \[=\pi_{1}\circ\alpha\circ(\mathsf{m}^{-1}\cdot\mathsf{id})\circ\mathsf{m}^{-1}\]
then we apply the universal property of \(M_{(t\mathbin{\boldsymbol{\cdot}}1)\cdot((s\mathbin{\boldsymbol{\cdot}}1)\cdot(r\mathbin{\boldsymbol{\cdot}}1))}\). This concludes the proof of
**Proposition 2.2**.: \(-\mathbin{\boldsymbol{\cdot}}1\colon\mathcal{V}\text{-}\mathsf{Mat}\to\mathsf{Span}(\mathcal{V})\) _is a normal oplax functor._
## 3. Conjoints and companions
As introduced in [19], and studied in [42, 15, 16, 41], there exist two notions of "adjunction" between vertical and horizontal \(1\)-cells in a pseudodouble category; these were introduced as _orthogonal adjunctions_.
To be precise, let \(\mathbb{D}\) be a pseudodouble category, and let \(f\colon a\to b\) be a vertical \(1\)-cell, \(r\colon b\nrightarrow a\) be a horizontal \(1\)-cell. Following the terminology from [42, 41], we say that \(r\) is the _conjoint_ of \(f\) if there exist \(2\)-cells
such that \(\varepsilon\circ\eta=1_{f}\) and \(\eta\cdot\varepsilon=\rho^{-1}\circ\lambda\). We say \(\eta\), \(\varepsilon\) are the unit and counit of the conjoint, respectively. We call _companion_ the horizontally dual notion of conjoint; we denote the unit and counit \(2\)-cells of a companion as \(\nu\colon 1\to r\) and \(\delta\colon r\to 1\), respectively.
In any pseudodouble category \(\mathbb{D}\), the identity vertical \(1\)-cell on any \(0\)-cell \(x\) always has both a companion and a conjoint; in both cases, it is given by the horizontal unit \(1_{x}\), with unit and counit given by \(\mathsf{id}_{1_{x}}=\mathsf{id}_{\mathsf{id}_{x}}\), which trivially satisfies all four conditions. Unless otherwise specified, \(1_{x}\) will be our fixed choice of companion/conjoint for \(\mathsf{id}_{x}\).
We say that \(\mathbb{D}\) is _conjoint (companion) closed_ if every vertical \(1\)-cell of \(\mathbb{D}\) has a conjoint (companion). For instance, _equipments_ may be defined as the pseudodouble categories which are both conjoint and companion closed (see [42, Theorem A.2]), of which \(\mathsf{Span}(\mathcal{V})\) and \(\mathcal{V}\text{-}\mathsf{Mat}\) are examples.
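To keep concrete examples in mind, let us record the standard descriptions. In \(\mathsf{Span}(\mathcal{V})\), a vertical \(1\)-cell \(f\colon a\to b\) has conjoint \(f^{*}\colon b\nrightarrow a\) and companion \(f_{!}\colon a\nrightarrow b\) given by the spans
\[b\xleftarrow{\;f\;}a\xrightarrow{\;\mathsf{id}\;}a\qquad\text{and}\qquad a\xleftarrow{\;\mathsf{id}\;}a\xrightarrow{\;f\;}b,\]
respectively; in \(\mathcal{V}\text{-}\mathsf{Mat}\), a function \(f\colon U\to V\) has conjoint and companion given by \(f^{*}(v,u)=[fu=v]\mathbin{\boldsymbol{\cdot}}1=f_{!}(u,v)\).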
Let \(T\) be a pseudomonad on a \(2\)-category \(\mathbb{B}\), and consider the double category of lax \(T\)-algebras as described in Section 1. Our next result, Proposition 3.1, given in [36, Theorems 1.4.11 and 1.4.14], originally stated for strict \(T\)-algebras in [28], may be used to characterize conjoints and companions in \(\mathsf{Lax}\text{-}\text{T}\text{-}\mathsf{Alg}\). Since this is just a restatement of the results of [36, Chapter 1], we omit the argument.
**Proposition 3.1** (Doctrinal adjunction).: _Let \((f,g,\eta,\varepsilon)\) be an adjunction in a 2-category \(\mathbb{B}\). There is a bijection between 2-cells \(\zeta\) making \((f,\zeta)\) into a lax \(T\)-algebra oplax morphism and 2-cells \(\xi\) making \((g,\xi)\) into a lax \(T\)-algebra lax morphism._
_Moreover, \((f,\zeta)\) is the conjoint of \((g,\xi)\) in \(\mathsf{Lax}\text{-}\text{T}\text{-}\mathsf{Alg}\) if and only if \(\zeta\) and \(\xi\) correspond to each other via the aforementioned bijection, and \(f\) has a companion if and only if \(\zeta\) is invertible; in which case, its companion is \((f,\zeta^{-1})\)._
As is the case with ordinary adjunctions in a \(2\)-category, there is also a notion of _mate theory_ for conjoints (and dually, companions), which we present in Lemma 3.2. Results along these lines were already present in [19, 1.6], as well as [15, Corollary 7.21] and [41, Propositions 5.13 and 5.19]. We have decided to provide a slightly different statement and proof: our goal is to provide explicit formulas as an aid for calculations involving conjoints and companions, abundant in this work.
**Lemma 3.2**.: _Let \((f,f^{*},\eta,\varepsilon)\) and \((g,g^{*},\eta,\varepsilon)\) be conjoints, and consider 2-cells_
(3.1)
_Then the following are equivalent:_
(a) \(\xi=\rho\circ\left(\left((\eta\circ 1_{h})\cdot\zeta\right)\cdot(1_{k}\circ\varepsilon)\right)\circ\alpha^{-1}\circ\lambda^{-1}\)_,_
(b) \(\zeta=\lambda\circ(\varepsilon\cdot\mathsf{id}_{s})\circ\xi\circ(\mathsf{id}_{r}\cdot\eta)\circ\rho^{-1}\)_,_
(c) \(\lambda\circ(\varepsilon\cdot\mathsf{id}_{s})\circ\xi=\rho\circ\left(\zeta\cdot(1_{k}\circ\varepsilon)\right)\)_,_
(d) \(\xi\circ(\mathsf{id}_{r}\cdot\eta)\circ\rho^{-1}=\left((\eta\circ 1_{h})\cdot\zeta\right)\circ\lambda^{-1}\)__
_In particular, the sets of 2-cells as given in (3.1) are in pairwise correspondence, explicitly given by the formulas (a) and (b). Pairs of such 2-cells are said to be mates or under mate correspondence._
Proof.: We will prove the cycle of implications (c) \(\Rightarrow\) (b) \(\Rightarrow\) (d) \(\Rightarrow\) (a) \(\Rightarrow\) (c); the four implications are established, in this order, in items (1)–(4) below.
1. Since \(\varepsilon\circ\eta=1_{f}\), we have \[\lambda\circ(\varepsilon\cdot\mathsf{id}_{\mathsf{q}})\circ\xi \circ(\mathsf{id}_{p}\cdot\eta)\circ\rho^{-1} =\rho\circ\left(\zeta\cdot(1\circ\varepsilon)\right)\circ( \mathsf{id}_{p}\cdot\eta)\circ\rho^{-1}\] \[=\rho\circ\left((\zeta\circ\mathsf{id}_{p})\cdot(1\circ \varepsilon\circ\eta)\right)\circ\rho^{-1}=\zeta\]
2. Since \(\eta\cdot\varepsilon=\rho^{-1}\circ\lambda\), we have \[\left((\eta\circ 1)\cdot\zeta\right)\circ\lambda^{-1} =\left((\eta\circ 1)\cdot\left(\lambda\circ(\varepsilon\cdot \mathsf{id}_{q})\circ\xi\circ(\mathsf{id}_{q}\cdot\eta)\circ\rho^{-1})\right) \right)\circ\lambda^{-1}\] \[=(\mathsf{id}_{s}\cdot\lambda)\circ\alpha\circ(\rho^{-1}\cdot \mathsf{id}_{q})\circ(\lambda\cdot\mathsf{id})\circ\alpha^{-1}\circ\lambda^{-1 }\circ\xi\circ(\mathsf{id}_{p}\cdot\eta)\circ\rho^{-1}\] and coherence guarantees \[(\mathsf{id}\cdot\lambda)\circ\alpha\circ(\rho^{-1}\cdot\mathsf{id})\circ( \lambda\cdot\mathsf{id})\circ\alpha^{-1}\circ\lambda^{-1}=\mathsf{id},\] as desired.
3. Since \(\eta\cdot\varepsilon=\rho^{-1}\circ\lambda\), we have \[\left((\eta\circ 1)\cdot\zeta\right)\cdot(1\circ\varepsilon) =\left(\xi\circ(\mathsf{id}_{p}\cdot\eta)\circ\rho^{-1}\circ\lambda\right)\cdot\varepsilon\] \[=(\xi\cdot 1)\circ((\mathsf{id}_{p}\cdot\eta)\cdot\varepsilon)\circ((\rho^{-1}\circ\lambda)\cdot\mathsf{id}_{g})\] \[=(\xi\cdot 1)\circ\alpha^{-1}\circ(\mathsf{id}_{p}\cdot(\rho^{-1}\circ\lambda))\circ\alpha\circ((\rho^{-1}\circ\lambda)\cdot\mathsf{id}_{g^{*}})\] and coherence guarantees \[\rho\circ\alpha^{-1}\circ(\mathsf{id}_{p}\cdot(\rho^{-1}\circ\lambda))\circ\alpha\circ((\rho^{-1}\circ\lambda)\cdot\mathsf{id}_{g^{*}})\circ\alpha^{-1}\circ\lambda^{-1}=\mathsf{id},\] as desired.
4. Since \(\varepsilon\circ\eta=1_{g}\), we have \[(\varepsilon\cdot\mathsf{id}_{q})\circ\xi =(\varepsilon\cdot\mathsf{id}_{q})\circ\rho\circ\left(((\eta \circ 1)\cdot\zeta)\cdot(1\circ\varepsilon)\right)\circ\alpha^{-1}\circ\lambda ^{-1}\] \[=\rho\circ((\varepsilon\cdot\mathsf{id}_{q})\cdot\mathsf{id}_{1}) \circ(((\eta\circ 1)\cdot\zeta)\cdot(1\circ\varepsilon))\circ\alpha^{-1}\circ \lambda^{-1}\] \[=\rho\circ((1\cdot\zeta)\cdot\varepsilon)\circ\alpha^{-1}\circ \lambda^{-1}\] \[=\rho\circ\alpha^{-1}\circ\lambda^{-1}\circ(\zeta\cdot\varepsilon)\] and coherence guarantees \[\rho\circ\alpha^{-1}\circ\lambda^{-1}=\lambda^{-1}\circ\rho,\] as desired.
**Remark 3.3**.: Once more, we consider the pair of 2-cells \(\zeta,\xi\) given in (3.1). We will consider the following specialized instances of the mate correspondence:
(i) For \(k=\mathsf{id}\), (c) becomes \((\varepsilon\cdot\mathsf{id})\circ\xi=\gamma^{-1}\circ(\zeta\cdot\varepsilon)\).
(ii) For \(h=\mathsf{id}\), (d) becomes \(\xi\circ(\mathsf{id}\cdot\eta)=(\eta\cdot\zeta)\circ\gamma^{-1}\).
(iii) For \(s=1\), (c) becomes \(\varepsilon\circ\theta=\rho\circ(\zeta\cdot(1_{k}\circ\varepsilon))\), where \(\theta=\rho\circ\xi\).
(iv) For \(r=1\), (d) becomes \(\theta\circ\eta=((\eta\circ 1_{h})\cdot\zeta)\circ\lambda^{-1}\), where \(\theta=\xi\circ\lambda^{-1}\).
(v) For \(f=\mathsf{id}\), both (b) and (c) become \(\zeta=\lambda\circ(\varepsilon\cdot\mathsf{id})\circ\theta\), where \(\theta=\xi\circ\rho^{-1}\).
(vi) For \(g=\mathsf{id}\), both (b) and (d) become \(\zeta=\theta\circ(\mathsf{id}\cdot\eta)\circ\rho^{-1}\), where \(\theta=\lambda\circ\xi\).
And by combining these, we may obtain simpler forms. For example, (v) and (iii) (respectively, (vi) and (iv)) provide the result that the counit (unit) of a conjunction is a cartesian (opcartesian) 2-cell in the sense of [42, 15].
The combination of (iii) and (iv) is mainly used under the hypothesis that we have a commutative square \(k\circ f=g\circ h\) of vertical 1-cells, that is, \(\zeta=\mathsf{id}\). In this case, the unit \(1_{g\circ h}=1_{k\circ f}\) has two mates; they are said to be the _mates of the commutative square_ \(k\circ f=g\circ h\), and are the unique 2-cells \(\theta\), \(\omega\), respectively satisfying
\[\varepsilon\circ\theta=1_{k}\circ\varepsilon\quad\text{and}\quad\theta\circ \eta=\eta\circ 1_{h}\]
\[\varepsilon\circ\omega=1_{g}\circ\varepsilon\quad\text{and}\quad\omega\circ \eta=\eta\circ 1_{f}\]
In practice, we will consider "the" mate of a commutative square \(k\circ f=g\circ h\), and we let context determine which mate is being considered.
We proceed to review well-known yet fundamental results [15, 19, 16, 42, 41] about companions and conjoints. Our aim is to demonstrate the applications of their mate theory, while fixing notation for later reference in Sections 5 and 6.
Let \(F\colon\mathbb{D}\to\mathbb{E}\) be a lax functor of conjoint closed pseudodouble categories, and let \(f\) be a vertical 1-cell in \(\mathbb{D}\). We denote the mate of \(F\eta\circ\mathsf{e}^{F}\) obtained via (vi) by \(\sigma^{F}_{f}\colon(Ff)^{*}\to F(f^{*})\):
We say that \(F\)_preserves the conjoint_ of \(f\) if \(\sigma^{F}_{f}\) is an invertible 2-cell; we say \(F\)_preserves conjoints_ if \(\sigma^{F}_{f}\) is invertible for all vertical 1-cells \(f\). We can show that:
**Lemma 3.4**.: _Let \(F\colon\mathbb{D}\to\mathbb{E}\) be a lax functor of conjoint closed pseudodouble categories. The following are equivalent:_
(a) \(F\) _preserves conjoints of identities,_
(b) \(F\) _preserves all conjoints,_
(c) \(F\) _is normal._
Proof.: We begin by showing that any lax functor \(F\) satisfies the identity \(F\varepsilon\circ\sigma^{F}=\mathsf{e}^{F}\circ\varepsilon\), for we have
\[F\varepsilon\circ\sigma^{F}\circ\eta=F\varepsilon\circ F\eta\circ\mathsf{e}^{F }=F1_{f}\circ\mathsf{e}^{F}=\mathsf{e}^{F}\circ 1_{Ff}=\mathsf{e}^{F}\circ \varepsilon\circ\eta,\]
so the desired equation follows by (vi).
Moreover, whenever \(\sigma^{F}_{f}\) is invertible, the following relations hold:
\[\varepsilon\circ(\sigma^{F})^{-1}\circ F\eta\circ\mathsf{e}^{F} =\varepsilon\circ\eta=1_{f},\] \[\mathsf{e}^{F}\circ\varepsilon\circ(\sigma^{F})^{-1}\circ F\eta =F\varepsilon\circ F\eta=F1_{f}.\]
Hence, if \(\sigma^{F}_{\mathsf{id}}\) is invertible for all 0-cells, we conclude that \(\mathsf{e}^{F}\) is invertible; this confirms (a) \(\to\) (c).
Now, if we assume \(F\) is normal, we let \(\chi^{F}\) be the unique 2-cell such that \(\varepsilon\circ\chi^{F}=(\mathsf{e}^{F})^{-1}\circ F\varepsilon\), obtained via (v). From this, it is clear that \(\chi^{F}\circ\sigma^{F}=\mathsf{id}\), since
\[\varepsilon\circ\chi^{F}\circ\sigma^{F}=(\mathsf{e}^{F})^{-1}\circ F \varepsilon\circ\sigma^{F}=\varepsilon,\]
\[\rho^{-1}\circ\sigma^{F}\circ\chi^{F}\circ\lambda =(\sigma^{F}\cdot\mathsf{id})\circ\rho^{-1}\circ\lambda\circ( \mathsf{id}\cdot\chi^{F})\] \[=(\sigma^{F}\cdot\mathsf{id})\circ(\eta\cdot\varepsilon)\circ( \mathsf{id}\cdot\chi^{F})\] \[=(\sigma^{F}\circ\eta)\cdot(\varepsilon\circ\chi^{F})\] \[=(F\eta\circ\mathsf{e}^{F})\cdot((\mathsf{e}^{F})^{-1}\circ F\varepsilon)\] \[=(\mathsf{id}\cdot(\mathsf{e}^{F})^{-1})\circ(F\eta\cdot F \varepsilon)\circ(\mathsf{e}^{F}\cdot\mathsf{id})\] \[=\rho^{-1}\circ F\rho\circ\mathsf{m}^{F}\circ(F\eta\cdot F \varepsilon)\circ(\mathsf{e}^{F}\cdot\mathsf{id})\] \[=\rho^{-1}\circ F\rho\circ F(\eta\cdot\varepsilon)\circ\mathsf{ m}^{F}\circ(\mathsf{e}^{F}\cdot\mathsf{id})\] \[=\rho^{-1}\circ\lambda\]
confirms that \(\chi^{F}\) is the inverse of \(\sigma^{F}\). We have shown that (c) \(\to\) (b), and of course, (a) is a particular case of (b).
For the case of companions, we write \(\tau^{F}_{f}\colon(Ff)_{!}\to F(f_{!})\) for the mate of \(F\nu\circ\mathsf{e}^{F}\), and we say that \(F\)_preserves the companion_ of \(f\) if \(\tau^{F}\) is invertible. The horizontally dual result states that \(F\) preserves companions iff \(F\) is normal. Thus, we obtain the result that these three notions are equivalent for lax functors between pseudodouble categories ([16, Proposition 3.8]).
**Lemma 3.5**.: _Let \(F\colon\mathbb{D}\to\mathbb{E}\) be a lax functor between conjoint closed pseudodouble categories, and let \(r\colon x\nrightarrow y\), \(f\colon z\to y\) be horizontal, vertical 1-cells respectively. Then the 2-cell_
\[(Ff)^{*}\cdot Fr\xrightarrow{\sigma^{F}\cdot\mathsf{id}}F(f^{*})\cdot Fr \xrightarrow{m^{F}}F(f^{*}\cdot r)\]
_is invertible. In particular, \(\mathsf{m}^{F}\colon F(f^{*})\cdot Fr\to F(f^{*}\cdot r)\) is invertible for all such \(r,f\) if and only if \(F\) is normal._
Proof.: We claim the inverse \(l^{F}\) is given by the mate of \(F\theta\) via
Note that \(l^{F}\) is the mate of \(F\theta\), and \(\theta\) is the mate of \(\mathsf{id}_{f^{*}\cdot r}\), via (iv) and (v), respectively. Now, note that
\[(\varepsilon\cdot\mathsf{id})\circ l^{F}\circ\mathsf{m}^{F} \circ(\sigma^{F}\cdot\mathsf{id}) =\lambda^{-1}\circ F\theta\circ\mathsf{m}^{F}\circ(\sigma^{F} \cdot\mathsf{id})\] \[=\lambda^{-1}\circ F\lambda\circ\mathsf{m}^{F}\circ(F\varepsilon \cdot\mathsf{id})\circ(\sigma^{F}\cdot\mathsf{id})\] \[=\lambda^{-1}\circ F\lambda\circ\mathsf{m}^{F}\circ(\mathsf{e}^{F }\cdot\mathsf{id})\circ(\varepsilon\cdot\mathsf{id})=\varepsilon\cdot\mathsf{id}\] \[\mathsf{m}^{F}\circ(\sigma^{F}\cdot\mathsf{id})\circ l^{F} =\mathsf{m}^{F}\circ(F\eta\cdot F\theta)\circ(\mathsf{e}^{F}\cdot \mathsf{id})\circ\lambda^{-1}\] \[=F(\eta\cdot\theta)\circ\mathsf{m}^{F}\circ(\mathsf{e}^{F}\cdot \mathsf{id})\circ\lambda^{-1}=\mathsf{id}\]
So, the result follows by the mate correspondence.
In a conjoint closed pseudodouble category \(\mathbb{D}\), let \(f,g\) be composable vertical 1-cells with conjoints \(f^{*}\) and \(g^{*}\), and let \(\pi\colon f^{*}\cdot g^{*}\to(g\circ f)^{*}\) be the mate of \(1_{g}\circ\varepsilon\colon f^{*}\to 1\). Via (i) and (iii), we obtain:
(3.2)
We can also define a \(2\)-cell \((g\circ f)^{*}\to f^{*}\cdot g^{*}\) as the mate of \(\eta\circ 1_{f}\), which can be shown to be the inverse of \(\pi\), using a method similar to the proof of Lemma 3.5; as this is not needed, we omit the details.
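For instance, in \(\mathsf{Span}(\mathcal{V})\), with \(f^{*}\) given by the span \(b\xleftarrow{f}a\xrightarrow{\mathsf{id}}a\) and \(g^{*}\) by \(c\xleftarrow{g}b\xrightarrow{\mathsf{id}}b\), the pullback computing \(M_{f^{*}\cdot g^{*}}\) is canonically isomorphic to \(a\), under which identification \(f^{*}\cdot g^{*}\) becomes the span \(c\xleftarrow{g\circ f}a\xrightarrow{\mathsf{id}}a=(g\circ f)^{*}\); this makes the invertibility of \(\pi\) transparent in this example.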
Now that we have fixed the notation we will need for the rest of the paper, we end the section with Theorem 3.6, to justify the utility of conjoints and companions.
**Theorem 3.6**.: _Let \(\mathbb{D},\mathbb{E}\) be pseudodouble categories. If \(\mathbb{E}\) is conjoint closed, then so is \(\mathsf{Lax}_{\mathsf{lax}}(\mathbb{D},\mathbb{E})\). Dually, if \(\mathbb{E}\) is companion closed, then so is \(\mathsf{Lax}_{\mathsf{opl}}(\mathbb{D},\mathbb{E})\)._
Proof.: Fix a vertical transformation \(\phi\colon F\to G\) where \(F,G\colon\mathbb{D}\to\mathbb{E}\) are lax functors. For each \(0\)-cell \(x\), we write
for the \(2\)-cells satisfying \(\varepsilon_{x}\circ\eta_{x}=1_{\phi_{x}}\) and \(\eta_{x}\cdot\varepsilon_{x}=\rho^{-1}\circ\lambda\), so that \(\phi_{x}^{*}\) is the conjoint of \(\phi_{x}\) for all \(x\).
Define \(\phi_{f}^{*}\colon\phi_{x}^{*}\to\phi_{y}^{*}\) to be the mate of \(1_{Ff}\) via (ii), so that \(\phi_{f}^{*}\circ\eta_{x}=\eta_{y}\circ 1_{Ff}\) and \(\varepsilon_{y}\circ\phi_{f}^{*}=1_{Gf}\circ\varepsilon_{x}\). Moreover, note that
\[\phi_{g}^{*}\circ\phi_{f}^{*}\circ\eta_{x}=\phi_{g}^{*}\circ\eta_{y}\circ 1_{Ff}=\eta_{z}\circ 1_{Fg}\circ 1_{Ff}=\eta_{z}\circ 1_{F(g\circ f)},\]
so we conclude that \(\phi_{g\circ f}^{*}=\phi_{g}^{*}\circ\phi_{f}^{*}\) by mate correspondence. Similarly, we have \(\phi_{\mathsf{id}_{x}}^{*}=\mathsf{id}_{\phi_{x}^{*}}\).
Next, we consider the map \(r\mapsto Fr\cdot\phi_{x}^{*}\), where \(r\colon x\nrightarrow y\) is a horizontal \(1\)-cell. It is functorial: for \(2\)-cells
(3.3)
we have
\[F(\theta\circ\chi)\cdot\phi_{h\circ f}^{*}=(F\theta\circ F\chi)\cdot(\phi_{h}^{*}\circ\phi_{f}^{*})=(F\theta\cdot\phi_{h}^{*})\circ(F\chi\cdot\phi_{f}^{*}),\]
and \(F(\mathsf{id})\cdot\phi_{\mathsf{id}}^{*}=\mathsf{id}\), as desired. Analogously, \(r\mapsto\phi_{y}^{*}\cdot Gr\) is also functorial.
We define \(\mathsf{n}_{r}^{\phi^{*}}\colon Fr\cdot\phi_{x}^{*}\to\phi_{y}^{*}\cdot Gr\) to be the mate of
via (i). We claim this data makes \(\phi^{*}\) into a lax horizontal transformation \(G\nrightarrow F\).
Given a \(2\)-cell \(\theta\) as in the left diagram of (3.3), we have \(\phi_{s}\circ F\theta=G\theta\circ\phi_{r}\), since \(\phi\) is a vertical transformation. The following pairs
\[G\theta\circ\phi_{r}\quad\text{and}\quad\mathsf{n}_{s}^{\phi^{*}} \circ(F\theta\cdot\phi_{f}^{*}),\] \[\phi_{s}\circ F\theta\quad\text{and}\quad(\phi_{g}^{*}\cdot G \theta)\circ\mathsf{n}_{r}^{\phi^{*}}\]
are mates, so that we have
\[\mathsf{n}_{s}^{\phi^{*}}\circ(F\theta\cdot\phi_{f}^{*})=(\phi_{g}^{*}\cdot G \theta)\circ\mathsf{n}_{r}^{\phi^{*}},\]
giving naturality. To confirm this,
\[\lambda\circ(\varepsilon\cdot\mathsf{id})\circ\mathsf{n}_{s}^{\phi^{*}}\circ(F\theta\cdot\phi_{f}^{*}) =\rho\circ(\phi_{s}\cdot\varepsilon)\circ(F\theta\cdot\phi_{f}^{*})\] \[=\rho\circ\big((\phi_{s}\circ F\theta)\cdot(\varepsilon\circ\phi_{f}^{*})\big)\] \[=\rho\circ\big((\phi_{s}\circ F\theta)\cdot(1\circ\varepsilon)\big),\] \[(\phi_{g}^{*}\cdot G\theta)\circ\mathsf{n}_{r}^{\phi^{*}}\circ(\mathsf{id}\cdot\eta)\circ\rho^{-1} =(\phi_{g}^{*}\cdot G\theta)\circ(\eta\cdot\phi_{r})\circ\lambda^{-1}\] \[=\big((\phi_{g}^{*}\circ\eta)\cdot(G\theta\circ\phi_{r})\big)\circ\lambda^{-1}\] \[=\big((\eta\circ 1)\cdot(G\theta\circ\phi_{r})\big)\circ\lambda^{-1}\]
Now, we note that \(\phi_{1_{x}}\circ\mathbf{e}_{x}^{F}=\mathbf{e}_{x}^{G}\circ 1_{\phi_{x}}\) and \(\phi_{s\cdot r}\circ\mathfrak{m}^{F}=\mathfrak{m}^{G}\circ(\phi_{s}\cdot\phi_{r})\). We shall deduce that the coherence diagrams for \(\phi^{*}\) commute by taking the mates of these commutative squares, thereby confirming that \(\phi^{*}\) is a lax horizontal transformation. Via (i), we will prove that the following pairs
\[\mathfrak{m}^{G}\circ(\phi_{s}\cdot\phi_{r}) \quad\text{and}\quad (\mathsf{id}\cdot\mathfrak{m}^{G})\circ\alpha\circ(\mathsf{n}_{s}^{\phi^{*}}\cdot\mathsf{id})\circ\alpha^{-1}\circ(\mathsf{id}\cdot\mathsf{n}_{r}^{\phi^{*}})\circ\alpha\] \[\phi_{s\cdot r}\circ\mathfrak{m}^{F} \quad\text{and}\quad \mathsf{n}_{s\cdot r}^{\phi^{*}}\circ(\mathfrak{m}^{F}\cdot\mathsf{id})\] \[\phi_{1_{x}}\circ\mathbf{e}^{F} \quad\text{and}\quad \mathsf{n}_{1_{x}}^{\phi^{*}}\circ(\mathbf{e}^{F}\cdot\mathsf{id})\] \[\mathbf{e}^{G}\circ 1_{\phi_{x}} \quad\text{and}\quad (\mathsf{id}\cdot\mathbf{e}^{G})\circ\rho^{-1}\circ\lambda\]
are under mate correspondence. The last three are one-liners, respectively:
\[\lambda\circ(\varepsilon\cdot\mathsf{id})\circ\mathsf{n}_{s\cdot r }^{\phi^{*}}\circ(\mathfrak{m}^{F}\cdot\mathsf{id}) =\rho\circ(\phi_{s\cdot r}\cdot\varepsilon)\circ(\mathfrak{m}^{F }\cdot\mathsf{id})=\rho\circ((\phi_{s\cdot r}\circ\mathfrak{m}^{F})\cdot \varepsilon),\] \[\lambda\circ(\varepsilon\cdot\mathsf{id})\circ\mathsf{n}_{1_{x} }^{\phi^{*}}\circ(\mathbf{e}^{F}\cdot\mathsf{id}) =\rho\circ(\phi_{1_{x}}\cdot\varepsilon)\circ(\mathbf{e}^{F}\cdot \mathsf{id})=\rho\circ((\phi_{1_{x}}\circ\mathbf{e}^{F})\cdot\mathsf{id})\] \[\lambda\circ(\varepsilon\cdot\mathsf{id})\circ(\mathsf{id}\cdot \mathbf{e}^{G})\circ\rho^{-1}\circ\lambda\circ(\mathsf{id}\cdot\eta)\circ\rho^ {-1} =\mathbf{e}^{G}\circ\lambda\circ(\varepsilon\cdot\mathsf{id})\circ \rho^{-1}\circ\eta=\mathbf{e}^{G}\circ 1_{\phi_{x}}.\]
For the first pair, observe that
\[\lambda\circ(\varepsilon\cdot\mathsf{id})\circ(\mathsf{id}\cdot \mathfrak{m}^{G})\circ\alpha =\lambda\circ(1\cdot\mathfrak{m}^{G})\circ(\varepsilon\cdot \mathsf{id})\circ\alpha\] \[=\mathfrak{m}^{G}\circ\lambda\circ\alpha\circ((\varepsilon\cdot \mathsf{id})\cdot\mathsf{id})\] \[=\mathfrak{m}^{G}\circ(\lambda\cdot\mathsf{id})\circ((\varepsilon \cdot\mathsf{id})\cdot\mathsf{id})\] \[((\varepsilon\cdot\mathsf{id})\cdot\mathsf{id})\circ(\mathsf{n} _{s}^{\phi^{*}}\cdot\mathsf{id})\circ\alpha^{-1} =(((\varepsilon\cdot\mathsf{id})\circ\mathsf{n}_{s}^{\phi^{*}}) \cdot\mathsf{id})\circ\alpha^{-1}\] \[=((\gamma^{-1}\circ(\phi_{s}\cdot\varepsilon))\cdot\mathsf{id}) \circ\alpha^{-1}\] \[=(\gamma^{-1}\cdot\mathsf{id})\circ((\phi_{s}\cdot\varepsilon) \cdot\mathsf{id})\circ\alpha^{-1}\] \[=(\lambda^{-1}\cdot\lambda)\circ(\phi_{s}\cdot(\varepsilon\cdot \mathsf{id}))\] \[(\phi_{s}\cdot(\varepsilon\cdot\mathsf{id}))\circ(\mathsf{id} \cdot\mathsf{n}_{r}^{\phi^{*}})\circ\alpha =(\phi_{s}\cdot((\varepsilon\cdot\mathsf{id})\circ\mathsf{n}_{r}^{ \phi^{*}}))\circ\alpha\] \[=(\phi_{s}\cdot(\gamma^{-1}\circ(\phi_{r}\cdot\varepsilon)))\circ\alpha\] \[=(\mathsf{id}\cdot\gamma^{-1})\circ(\phi_{s}\cdot(\phi_{r}\cdot \varepsilon))\circ\alpha\] \[=(\mathsf{id}\cdot\gamma^{-1})\circ\alpha\circ((\phi_{s}\cdot \phi_{r})\cdot\varepsilon)\]
and pasting the expressions above together verifies the claim.
Finally, note that
\[\mathsf{n}_{r}^{\phi^{*}}\circ(\mathsf{id}_{Fr}\cdot\eta_{x}) =(\eta_{y}\cdot\phi_{r})\circ\gamma^{-1}\] \[(\varepsilon_{y}\cdot\mathsf{id}_{Gr})\circ\mathsf{n}_{r}^{\phi^ {*}} =\gamma\circ(\phi_{r}\cdot\varepsilon_{x})\]
are immediate consequences of mate correspondence. Thus, \(\eta\) and \(\varepsilon\) define modifications, and by calculating pointwise, we conclude that \(\phi^{*}\) is the conjoint of \(\phi\).
We say that a vertical transformation \(\phi\) _has a strong conjoint (companion)_ if its conjoint (companion) in the appropriate pseudodouble category is a strong horizontal transformation; that is, if \(\mathsf{n}^{\phi^{*}}\) (respectively, \(\mathsf{n}^{\phi_{!}}\)) is an invertible natural transformation. The notion of a vertical transformation \(\phi\) having a strong companion (conjoint) is present in [15, A.4]; therein, the terminology is _(co)horizontally strong_.
To provide a class of examples, recall from [15, A.6] that for a natural transformation \(\phi\colon F\to G\) between pullback-preserving functors \(F,G\colon\mathcal{B}\to\mathcal{C}\) on categories with pullbacks, the induced vertical transformation \(\hat{\phi}\colon\hat{F}\to\hat{G}\) between the induced strong functors \(\hat{F},\hat{G}\colon\mathsf{Span}(\mathcal{B})\to\mathsf{Span}(\mathcal{C})\) has a strong conjoint if and only if it has a strong companion, if and only if \(\phi\) is a cartesian natural transformation.
We also have the following result:
**Lemma 3.7**.: _Let \(\phi\colon F\to G\) be a vertical transformation of lax functors \(F,G\colon\mathbb{D}\to\mathbb{E}\), let \(H\colon\mathbb{E}\to\mathbb{F}\) be another lax functor. We assume \(\mathbb{E}\) is conjoint closed, and that \(\phi\) has a strong conjoint._
\(H\phi\) _has a strong conjoint if and only if \(\mathfrak{m}^{H}\circ(\mathsf{id}\cdot\sigma^{H})\colon HFr\cdot(H\phi_{x})^{*} \to H(Fr\cdot\phi_{x}^{*})\) is invertible for all \(x\) and all \(r\)._
Proof.: We shall verify that
(3.4)
for all \(r\) and \(x\), from which our result follows as a consequence of Lemma 3.5. Note that
\[\mathfrak{m}^{H}\circ(\sigma^{H}\cdot\mathsf{id})\circ\mathfrak{n}_{ r}^{(H\phi)^{*}}\circ(\mathsf{id}\cdot\eta) =\mathfrak{m}^{H}\circ(\sigma^{H}\cdot\mathsf{id})\circ(\eta\cdot H \phi_{r})\circ\gamma^{-1}\] \[=\mathfrak{m}^{H}\circ(H\eta\cdot H\phi_{r})\circ(\mathsf{e}^{H} \cdot\mathsf{id})\circ\gamma^{-1}\] \[=H(\eta\cdot\phi_{r})\circ\mathfrak{m}^{H}\circ(\mathsf{e}^{H} \cdot\mathsf{id})\circ\gamma^{-1}\] \[=H(\eta\cdot\phi_{r})\circ H\lambda^{-1}\circ\rho\] \[H\,\mathfrak{n}_{r}^{\phi^{*}}\circ\mathfrak{m}^{H}\circ( \mathsf{id}\cdot\sigma^{H})\circ(\mathsf{id}\cdot\eta) =H\,\mathfrak{n}_{r}^{\phi^{*}}\circ\mathfrak{m}^{H}\circ( \mathsf{id}\cdot H\eta)\circ(\mathsf{id}\cdot\mathsf{e}^{H})\] \[=H\,\mathfrak{n}_{r}^{\phi_{r}}\circ H(\mathsf{id}\cdot\eta) \circ\mathfrak{m}^{H}\circ(\mathsf{id}\cdot\mathsf{e}^{H})\] \[=H(\eta\cdot\phi_{r})\circ H\gamma^{-1}\circ\mathfrak{m}^{H} \circ(\mathsf{id}\cdot\mathsf{e}^{H})\] \[=H(\eta\cdot\phi_{r})\circ H\lambda^{-1}\circ\rho\]
so (3.4) holds by mate correspondence.
This invertibility condition is satisfied, for instance, by Barr extensions of monads on \(\mathsf{Set}\) (see [24, 1.10.2(2)]) and by strong functors.
## 4. Double categories as pseudo-algebras
This section is devoted to proving the following result:
**Proposition 4.1**.: _We have an equivalence of double categories \(\mathsf{PsDbCat}\simeq\mathsf{Ps}\text{-}\mathfrak{F}\text{-}\mathsf{Alg}\), where \(\mathfrak{F}=(\mathfrak{F},m,e)\) is the free internal category 2-monad on \(\mathsf{Grph}(\mathsf{Cat})\), and \(\mathsf{Ps}\text{-}\mathfrak{F}\text{-}\mathsf{Alg}\) is the sub-double category of \(\mathsf{Lax}\text{-}\mathfrak{F}\text{-}\mathsf{Alg}\) consisting of the pseudo-\(\mathfrak{F}\)-algebras._
The proof is laid out as follows:
* We recall the definition of \(\mathfrak{F}\), verifying it is a 2-monad.
* We provide a construction of a pseudo-\(\mathfrak{F}\)-algebra from a given pseudodouble category.
* We provide a construction of (op)lax morphisms of pseudo-\(\mathfrak{F}\)-algebras from given (op)lax functors of pseudodouble categories. Moreover, we verify this construction defines a functor \(\mathsf{PsDbCat}_{\mathsf{lax}}\to\mathsf{Ps}\text{-}\mathfrak{F}\text{-}\mathsf{Alg}_{\mathsf{lax}}\) (and dually, \(\mathsf{PsDbCat}_{\mathsf{opl}}\to\mathsf{Ps}\text{-}\mathfrak{F}\text{-}\mathsf{Alg}_{\mathsf{opl}}\)).
* We prove the aforementioned functor is fully faithful and essentially surjective.
* Let \(H\colon\mathbb{A}\to\mathbb{B}\) and \(K\colon\mathbb{C}\to\mathbb{D}\) be lax functors, and let \(F\colon\mathbb{A}\to\mathbb{C}\) and \(G\colon\mathbb{B}\to\mathbb{D}\) be oplax functors, and consider the induced lax and oplax \(\mathfrak{F}\)-algebra morphisms (as in (4.3)). Given a 2-cell \(\omega\colon GH\to KF\) of internal \(\mathsf{Cat}\)-graphs, we prove that \(\omega\) is a generalized vertical transformation if and only if \(\omega\) is a generalized 2-cell of pseudo \(\mathfrak{F}\)-algebras.
We begin by recalling that \(\mathsf{Grph}(\mathsf{Cat})\) is the functor 2-category \([\cdot_{1}\rightrightarrows\cdot_{0},\mathsf{Cat}]\), whose 2-cells \(\theta\colon F\to G\) are pairs of natural transformations \(\theta_{i}\colon F_{i}\to G_{i}\) for \(i=0\), 1 such that \(d_{j}\cdot\theta_{1}=\theta_{0}\cdot d_{j}\) for \(j=0\), 1.
Since \(\mathsf{Cat}\) is an extensive category with pullbacks, we can define the free internal category monad \(\mathfrak{F}=(\mathfrak{F},m,e)\) on the underlying category of \(\mathsf{Grph}(\mathsf{Cat})\).
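Explicitly, \(\mathfrak{F}\) is the familiar free-category construction applied to the underlying graph: for a \(\mathsf{Cat}\)-graph \(\mathbb{X}\) we have \((\mathfrak{F}\mathbb{X})_{0}=\mathbb{X}_{0}\), an object of \((\mathfrak{F}\mathbb{X})_{1}\) is a (possibly empty) composable string \(r_{1},\ldots,r_{n}\) of horizontal arrows, and a morphism of \((\mathfrak{F}\mathbb{X})_{1}\) is a horizontally composable string of \(2\)-cells; the unit \(e\) sends an arrow to the corresponding string of length one, and the multiplication \(m\) concatenates strings of strings.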
To extend \(\mathfrak{F}\) to a 2-monad, let \(\theta\colon F\to G\) be a 2-cell in \(\mathsf{Grph}(\mathsf{Cat})\). We define \(\mathfrak{F}\theta\) by letting \((\mathfrak{F}\theta)_{0}=\theta_{0}\), and by letting \((\mathfrak{F}\theta)_{1}\colon(\mathfrak{F}F)_{1}\to(\mathfrak{F}G)_{1}\) be given at a composable string of horizontal arrows \(r_{1},\ldots,r_{n}\) by
\[(\mathfrak{F}\theta)_{r_{1},\ldots,r_{n}}=(\theta_{r_{1}},\ldots,\theta_{r_{n}}),\]
which is a horizontally composable string of 2-cells, and \((\mathfrak{F}\theta)_{()}=\theta_{0}\).
We must check \((\mathfrak{F}\theta)_{1}\) is natural; indeed, if \(\phi_{i}\colon r_{i}\to s_{i}\) is a horizontally composable string of \(2\)-cells, then \(\theta_{s_{i}}\circ F\phi_{i}=G\phi_{i}\circ\theta_{r_{i}}\) for all \(i\), so
\[(\mathfrak{F}\theta)_{s_{1},\dots,s_{n}}\circ(\mathfrak{F}F)(\phi _{1},\dots,\phi_{n}) =(\theta_{s_{1}},\dots,\theta_{s_{n}})\circ(F\phi_{1},\dots,F\phi _{n})\] \[=(G\phi_{1},\dots,G\phi_{n})\circ(\theta_{r_{1}},\dots,\theta_{r_ {n}})\] \[=(\mathfrak{F}G)(\phi_{1},\dots,\phi_{n})\circ(\mathfrak{F} \theta)_{r_{1},\dots,r_{n}},\]
and since \(\theta_{0}\) is already natural, there is nothing to check for \(n=0\).
Finally, note that \(d_{1}\cdot(\mathfrak{F}\theta)_{r_{1},\dots,r_{n}}=d_{1}(\theta_{r_{1}},\dots, \theta_{r_{n}})=d_{1}(\theta_{r_{1}})=\theta_{d_{1}r_{1}}=\theta_{d_{1}(r_{1},\dots,r_{n})}\), and likewise \(d_{0}\cdot(\mathfrak{F}\theta)_{1}=\theta_{0}\cdot d_{0}\).
To verify \(\mathfrak{F}\) is a \(2\)-functor, we must prove we have strict preservation of vertical and horizontal composition of \(2\)-cells. Therefore, let \(\omega\colon G\to H\) and \(\xi\colon H\to K\) be \(2\)-cells, with \(H,\,K\) composable with \(F,\,G\) respectively. We have \(\mathfrak{F}(\omega\circ\theta)_{0}=\mathfrak{F}(\omega)_{0}\circ\mathfrak{F} (\theta)_{0}\) and \(\mathfrak{F}(\xi\cdot\theta)_{0}=\mathfrak{F}(\xi)_{0}\cdot\mathfrak{F}( \theta)_{0}\). Moreover, given a composable string of horizontal arrows \(r_{1},\dots,r_{n}\), we have
\[\mathfrak{F}(\omega\circ\theta)_{r_{1},\dots,r_{n}} =((\omega\circ\theta)_{r_{1}},\dots,(\omega\circ\theta)_{r_{n}})\] \[=(\omega_{r_{1}},\dots,\omega_{r_{n}})\circ(\theta_{r_{1}}, \dots,\theta_{r_{n}})\] \[=\mathfrak{F}(\omega)_{r_{1},\dots,r_{n}}\circ\mathfrak{F}( \theta)_{r_{1},\dots,r_{n}},\] \[\mathfrak{F}(\xi\cdot\theta)_{r_{1},\dots,r_{n}} =((\xi\cdot\theta)_{r_{1}},\dots,(\xi\cdot\theta)_{r_{n}})\] \[=(\xi_{r_{1}},\dots,\xi_{r_{n}})\cdot(\theta_{r_{1}},\dots, \theta_{r_{n}})\] \[=\mathfrak{F}(\xi)_{r_{1},\dots,r_{n}}\cdot\mathfrak{F}(\theta) _{r_{1},\dots,r_{n}},\]
as desired. Nothing needs to be done to verify that \(m,e\) are \(2\)-natural transformations.
A pseudodouble category consists of a graph of categories \(\mathbb{D}=(\mathbb{D}_{1}\rightrightarrows\mathbb{D}_{0})\), with vertical domain and codomain functors. The algebra structure \(a\colon\mathfrak{F}\mathbb{D}\to\mathbb{D}\) is the identity on \(0\)-cells and vertical \(1\)-cells. We define \(a()=1\) (at \(0\)-cells), and if \(a\) is defined for \(\mathbb{D}^{(n)}\), we define
\[a(r_{1},\dots,r_{n+1})=r_{n+1}\cdot a(r_{1},\dots,r_{n}).\]
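For instance, unwinding this recursion on short strings gives
\[a()=1,\qquad a(r)=r\cdot 1,\qquad a(r,s)=s\cdot(r\cdot 1),\qquad a(r,s,t)=t\cdot(s\cdot(r\cdot 1)),\]
so each string is evaluated by iterated horizontal composition, starting from a unit.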
We let \(\eta\colon\mathsf{id}\to a\cdot e\) be the identity on \(\mathsf{id}_{\mathbb{D}_{0}}\), and \(\eta_{r}\colon r\to a(r)\) is given by
\[\rho^{-1}\colon r\to r\cdot 1=r\cdot a()=a(r).\]
We define \(\mu\colon a\cdot\mathfrak{F}a\to a\cdot m\) to be the identity on \(\mathsf{id}_{\mathbb{D}_{0}}\), and on \((\mathfrak{F}\mathfrak{F}\mathbb{D})_{1}\) by double induction:
\[\mu_{()} =\mathsf{id},\] \[\mu_{k_{1},\dots,k_{n},0} =\mu_{k_{1},\dots,k_{n}}\circ\lambda,\] \[\mu_{k_{1},\dots,k_{n+1}+1} =(\mathsf{id}\cdot\mu_{k_{1},\dots,k_{n+1}})\circ\alpha\]
where
\[\mu_{k_{1},\dots,k_{n}}\colon a(a(r_{1,1},\dots,r_{1,k_{1}}),\dots,a(r_{n,1}, \dots,r_{n,k_{n}}))\to a(r_{1,1},\dots,r_{n,k_{n}}).\]
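For instance, in the two smallest non-trivial cases the recursion yields
\[\mu_{0}=\lambda\colon a(a())\to a()\qquad\text{and}\qquad\mu_{1}=(\mathsf{id}\cdot\lambda)\circ\alpha\colon a(a(r))\to a(r),\]
that is, the canonical comparisons \(1\cdot 1\to 1\) and \((r\cdot 1)\cdot 1\to r\cdot 1\).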
To prove that \((\mathbb{D},a,\eta,\mu)\) is a pseudo-\(\mathfrak{F}\)-algebra, we must verify that
\[\mu_{m}\circ\eta_{a(r_{1},\dots,r_{m})} =\mathsf{id},\] \[\mu_{1,\dots,1}\circ a(\eta_{r_{1}},\dots,\eta_{r_{m}}) =\mathsf{id},\] \[\mu_{j_{1,1},\dots,j_{n,k_{n}}}\circ\mu_{k_{1},\dots,k_{n}} =\mu_{\hat{j}_{1,k_{1}},\dots,\hat{j}_{n,k_{n}}}\circ a(\sigma_{1},\dots,\sigma_{n})\]
where we use the following abbreviations:
\[\hat{j}_{i,k_{i}}=\sum_{p=1}^{k_{i}}j_{i,p}\] \[\sigma_{p}=\mu_{j_{p,1},\dots,j_{p,k_{p}}}\]
For the first and second, we argue by induction. When \(m=0\), the first becomes \(\lambda_{1}\circ\rho_{1}^{-1}=\mathsf{id}\), and the second trivializes. If we assume the equations hold for some \(m\), then
\[\mu_{m+1}\circ\rho^{-1}=(\mathsf{id}\cdot\mu_{m})\circ\alpha\circ\rho^{-1}=( \mathsf{id}\cdot\mu_{m})\circ(\mathsf{id}\cdot\rho^{-1})=\mathsf{id},\]
\[\mu_{1,\dots,1,1}\circ a(\rho^{-1},\dots,\rho^{-1},\rho^{-1}) =(\mathsf{id}\cdot\mu_{1,\dots,1})\circ(\mathsf{id}\cdot\lambda) \circ\alpha\circ(\rho^{-1}\cdot\mathsf{id})\circ(\mathsf{id}\cdot a(\rho^{-1}, \dots,\rho^{-1}))\] \[=(\mathsf{id}\cdot\mu_{1,\dots,1})\circ(\mathsf{id}\cdot a(\rho^{ -1},\dots,\rho^{-1}))\] \[=\mathsf{id}\]
For the third, we use triple induction. If \(n=0\), it trivializes, so we assume it holds for some \(n\). If \(k_{n+1}=0\), we have
\[\mu_{j_{1,1},\dots,j_{n,k_{n}}}\circ\mu_{k_{1},\dots,k_{n},k_{n+1}} =\mu_{j_{1,1},\dots,j_{n,k_{n}}}\circ\mu_{k_{1},\dots,k_{n}}\circ\lambda\] \[=\mu_{j_{1,k_{1}},\dots,j_{n,k_{n}}}\circ a(\sigma_{1},\dots, \sigma_{n})\circ\lambda\] \[=\mu_{j_{1,k_{1}},\dots,j_{n,k_{n}}}\circ a(\sigma_{1},\dots, \sigma_{n},\mathsf{id})\] \[=\mu_{j_{1,k_{1}},\dots,j_{n,k_{n}}}\circ a(\sigma_{1},\dots, \sigma_{n},\sigma_{n+1}).\]
Now, we assume it holds for some \(k_{n+1}\). If \(j_{n+1,k_{n+1}+1}=0\), we have
\[\mu_{j_{1,1},\dots,j_{n+1,k_{n+1}}}\circ\mu_{k_{1},\dots,k_{n},k_ {n+1}+1} =\mu_{j_{1,1},\dots,j_{n+1,k_{n+1}}}\circ\lambda\circ(\mathsf{id} \cdot\mu_{k_{1},\dots,k_{n+1}})\circ\alpha\] \[=\mu_{j_{1,1},\dots,j_{n+1,k_{n+1}}}\circ\mu_{k_{1},\dots,k_{n+1} }\circ\lambda\circ\alpha\] \[=\mu_{j_{1,k_{1}},\dots,j_{n+1,k_{n+1}}}\circ a(\sigma_{1},\dots, \sigma_{n+1})\circ(\lambda\cdot\mathsf{id})\] \[=\mu_{j_{1,k_{1}},\dots,j_{n+1,k_{n+1}}}\circ a(\sigma_{1},\dots, (\sigma_{n+1}\circ\lambda)),\]
and finally, if we assume it holds for some \(j_{n+1,k_{n+1}+1}\), then we have
\[\mu_{j_{1,1},\dots,j_{n+1,k_{n+1}+1}}\circ\mu_{k_{1},\dots,k_{n+ 1}+1} =(\mathsf{id}\cdot\mu_{j_{1,1},\dots,j_{n+1,k_{n+1}+1}})\circ \alpha\circ(\mathsf{id}\cdot\mu_{k_{1},\dots,k_{n+1}})\circ\alpha\] \[=(\mathsf{id}\cdot\mu_{j_{1,1},\dots,j_{n+1,k_{n+1}+1}})\circ( \mathsf{id}\cdot(\mathsf{id}\cdot\mu_{k_{1},\dots,k_{n+1}}))\circ\alpha\circ\alpha\] \[=(\mathsf{id}\cdot\mu_{j_{1,1},\dots,j_{n+1,k_{n+1}+1}})\circ( \mathsf{id}\cdot(\mathsf{id}\cdot\mu_{k_{1},\dots,k_{n+1}}))\circ(\mathsf{id} \cdot\alpha)\circ\alpha\circ(\alpha\cdot\mathsf{id})\] \[=(\mathsf{id}\cdot\mu_{j_{1,1},\dots,j_{n+1,k_{n+1}+1}})\circ( \mathsf{id}\cdot\mu_{k_{1},\dots,k_{n+1}+1})\circ\alpha\circ(\alpha\cdot \mathsf{id})\] \[=(\mathsf{id}\cdot\mu_{j_{1,k_{1}},\dots,j_{n+1,k_{n+1}}})\circ( \mathsf{id}\cdot a(\sigma_{1},\dots,\sigma_{n+1}))\circ\alpha\circ(\alpha \cdot\mathsf{id})\] \[=(\mathsf{id}\cdot\mu_{j_{1,k_{1}},\dots,j_{n+1,k_{n+1}}})\circ \alpha\circ((\mathsf{id}\cdot\sigma_{n+1})\cdot a(\sigma_{1},\dots,\sigma_{n} ))\circ(\alpha\cdot\mathsf{id})\] \[=\mu_{j_{1,k_{1}},\dots,j_{n+1,k_{n+1}}}\circ a(\sigma_{1},\dots, \sigma_{n},(\mathsf{id}\cdot\sigma_{n+1})\circ\alpha),\]
so the result holds by induction. It should be noted that the proof (so far) remains unchanged if we consider left-biased double categories, in which case \((\mathbb{D},a,\eta,\mu)\) is a lax \(\mathfrak{F}\)-algebra instead; likewise, right-biased double categories yield oplax \(\mathfrak{F}\)-algebras.
If we have a lax functor \(F\colon\mathbb{D}\rightarrow\mathbb{E}\) between ordinary double categories, we define a pseudo \(\mathfrak{F}\)-algebra lax morphism \((F,\gamma^{F})\colon(\mathbb{D},a,\eta,\mu)\rightarrow(\mathbb{E},b,\eta,\mu)\), taking \(F\) to be the same underlying graph morphism, and we define \(\gamma^{F}\colon b\circ\mathfrak{F}F\to F\circ a\) inductively as follows:
\[\gamma^{F}_{x} =\mathsf{e}^{F}_{x},\] \[\gamma^{F}_{r_{1},\dots,r_{n+1}} =\mathsf{m}^{F}\circ(\mathsf{id}\cdot\gamma^{F}_{r_{1},\dots,r_{ n}}).\]
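For instance, on strings of length one and two this gives
\[\gamma^{F}_{r}=\mathsf{m}^{F}\circ(\mathsf{id}\cdot\mathsf{e}^{F})\colon Fr\cdot 1\to F(r\cdot 1)\qquad\text{and}\qquad\gamma^{F}_{r,s}=\mathsf{m}^{F}\circ(\mathsf{id}\cdot\gamma^{F}_{r})\colon Fs\cdot(Fr\cdot 1)\to F(s\cdot(r\cdot 1)),\]
the expected comparison cells assembled from \(\mathsf{e}^{F}\) and \(\mathsf{m}^{F}\).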
To confirm that \((F,\gamma^{F})\) is indeed a lax morphism, we will prove that
\[F\eta=\gamma_{r}\circ\eta\]
and
\[\gamma_{r_{1,1},\dots,r_{n,k_{n}}}\circ\mu_{k_{1},\dots,k_{n}} =F\mu_{k_{1},\dots,k_{n}}\circ\gamma_{s_{1},\dots,s_{n}}\circ b( \sigma_{1},\dots,\sigma_{n}),\]
where
\[s_{i} =a(r_{i,1},\dots,r_{i,k_{i}}),\] \[\sigma_{i} =\gamma_{r_{i,1},\dots,r_{i,k_{i}}}.\]
The first is just a restatement of the coherence diagram for the right unitor. For the second, when \(n=0\), the equation is trivial; \(\mathsf{e}^{F}=\mathsf{e}^{F}\). Now, we assume the equation holds for some \(n\). If \(k_{n+1}=0\)
then
\[\gamma_{r_{1,1},\dots,r_{n,k_{n}}}\circ\mu_{k_{1},\dots,k_{n},0} =\gamma_{r_{1,1},\dots,r_{n,k_{n}}}\circ\mu_{k_{1},\dots,k_{n}}\circ\lambda\] \[=F\mu_{k_{1},\dots,k_{n}}\circ\gamma_{s_{1},\dots,s_{n}}\circ b( \sigma_{1},\dots,\sigma_{n})\circ\lambda\] \[=\lambda\circ(\mathsf{id}\cdot F\mu_{k_{1},\dots,k_{n}})\circ( \mathsf{id}\cdot\gamma_{s_{1},\dots,s_{n}})\circ(\mathsf{id}\cdot b(\sigma_{1}, \dots,\sigma_{n}))\] \[=F\lambda\circ\mathsf{m}^{F}\circ(\mathsf{e}^{F}\cdot\mathsf{id}) \circ(\mathsf{id}\cdot F\mu_{k_{1},\dots,k_{n}})\circ(\mathsf{id}\cdot\gamma_ {s_{1},\dots,s_{n}})\circ(\mathsf{id}\cdot b(\sigma_{1},\dots,\sigma_{n}))\] \[=F\lambda\circ F(\mathsf{id}\cdot\mu_{k_{1},\dots,k_{n}})\circ \mathsf{m}^{F}\circ(\mathsf{id}\cdot\gamma_{s_{1},\dots,s_{n}})\circ(\mathsf{ e}^{F}\cdot b(\sigma_{1},\dots,\sigma_{n}))\] \[=F\mu_{k_{1},\dots,k_{n},0}\circ\gamma_{s_{1},\dots,s_{n},s_{n+1} }\circ b(\sigma_{1},\dots,\sigma_{n},\sigma_{n+1}),\]
so, we assume the identity holds for some \(k_{n+1}\). We have
\[\gamma_{r_{1,1},\dots,r_{n+1,k_{n+1}},r_{n+1,k_{n+1}+1}}\circ\mu_ {k_{1},\dots,k_{n},k_{n+1}+1}\] \[=\mathsf{m}^{F}\circ(\mathsf{id}\cdot\gamma_{r_{1,1},\dots,r_{n+ 1}})\circ(\mathsf{id}\cdot\mu_{k_{1},\dots,k_{n+1}})\circ\alpha\] \[=\mathsf{m}^{F}\circ(\mathsf{id}\cdot F\mu_{k_{1},\dots,k_{n+1}}) \circ(\mathsf{id}\cdot\gamma_{s_{1},\dots,s_{n},s_{n+1}})\circ(\mathsf{id} \cdot b(\sigma_{1},\dots,\sigma_{n},\sigma_{n+1}))\circ\alpha\] \[=F(\mathsf{id}\cdot\mu_{k_{1},\dots,k_{n+1}})\circ\mathsf{m}^{F} \circ(\mathsf{id}\cdot\mathsf{m}^{F})\circ(\mathsf{id}\cdot(\mathsf{id}\cdot \gamma_{s_{1},\dots,s_{n}}))\circ\alpha\circ((\mathsf{id}\cdot\sigma_{n+1}) \cdot b(\sigma_{1},\dots,\sigma_{n}))\] \[=F(\mathsf{id}\cdot\mu_{k_{1},\dots,k_{n+1}})\circ F\alpha\circ \mathsf{m}^{F}\circ(\mathsf{m}^{F}\cdot\mathsf{id})\circ(\mathsf{id}\cdot \gamma_{s_{1},\dots,s_{n}})\circ((\mathsf{id}\cdot\sigma_{n+1})\cdot b(\sigma _{1},\dots,\sigma_{n}))\] \[=F\mu_{k_{1},\dots,k_{n+1}+1}\circ\gamma_{s_{1},\dots,s_{n+1}} \circ b(\sigma_{1},\dots,\sigma_{n},\mathsf{m}^{F}\circ(\mathsf{id}\cdot \sigma_{n+1})),\]
so, the result follows by induction.
This assignment maps identities to identities (trivially), and preserves composition; that is, it defines a functor \(\mathsf{PsDbCat}_{\mathsf{lax}}\to\mathsf{Ps}\text{-}\mathfrak{F}\text{-}\mathsf{Alg}_{\mathsf{lax}}\). To see this, let \(G\colon\mathbb{E}\to\mathbb{C}\) be another lax functor. We have
\[(G\gamma^{F}\circ\gamma^{G})_{()} =G\,\mathsf{e}^{F}\circ\mathsf{e}^{G},\] \[(G\gamma^{F}\circ\gamma^{G})_{r_{1},\dots,r_{n+1}} =\mathsf{m}^{GF}\circ(\mathsf{id}\cdot G\gamma^{F}_{r_{1},\dots,r_{n}})\circ(\mathsf{id}\cdot\gamma^{G}_{Fr_{1},\dots,Fr_{n}})\] \[=G\,\mathsf{m}^{F}\circ G(\mathsf{id}\cdot\gamma^{F}_{r_{1},\dots,r_{n}})\circ\mathsf{m}^{G}\circ(\mathsf{id}\cdot\gamma^{G}_{Fr_{1},\dots,Fr_{n}})\] \[=G(\gamma^{F}_{r_{1},\dots,r_{n+1}})\circ\gamma^{G}_{Fr_{1},\dots,Fr_{n+1}}.\]
We claim the functor \(\mathsf{PsDbCat}_{\mathsf{lax}}\to\mathsf{Ps}\text{-}\mathfrak{F}\text{-} \mathsf{Alg}_{\mathsf{lax}}\) is fully faithful; if \((F,\gamma^{F})\colon(\mathbb{D},a,\eta,\mu)\to(\mathbb{E},b,\eta,\mu)\) is a lax morphism between (the image of) double categories, we define
\[\mathsf{e}^{F}_{x}=\gamma^{F}_{x}\] \[\mathsf{m}^{F}_{r,s}=F(\mathsf{id}\cdot\rho)\circ\gamma^{F}_{r,s}\circ(\mathsf{id}\cdot\rho^{-1})\]
We must confirm these satisfy the coherence conditions. First, we observe that
\[\mathsf{m}^{F}_{1,s}\circ(\mathsf{id}\cdot\mathsf{e}^{F}) =F(\mathsf{id}\cdot\rho)\circ\gamma^{F}_{1,s}\circ(\mathsf{id} \cdot\rho^{-1})\circ(\mathsf{id}\cdot\mathsf{e}^{F})\] \[=F(\mathsf{id}\cdot\lambda)\circ F(\rho\cdot\mathsf{id})\circ \gamma^{F}_{1,a(s)}\circ(F\rho^{-1}\cdot\mathsf{id})\circ(\mathsf{id}\cdot( \mathsf{e}^{F}\cdot\mathsf{id}))\circ(\mathsf{id}\cdot\lambda^{-1})\] \[=F\mu_{0,1}\circ\gamma^{F}_{1,a(s)}\circ b(\gamma^{F}_{0}, \gamma^{F}_{s})\circ\mu^{-1}_{0,1}=\gamma^{F}_{s},\] \[\mathsf{m}^{F}_{r,1}\circ(\mathsf{e}^{F}\cdot\mathsf{id}) =F(\mathsf{id}\cdot\rho)\circ\gamma^{F}_{r,1}\circ(\mathsf{id}\cdot \rho^{-1})\circ(\mathsf{e}^{F}\cdot\mathsf{id})\] \[=F(\mathsf{id}\cdot\rho)\circ F(\mathsf{id}\cdot(\rho\cdot\mathsf{ id}))\circ\gamma^{F}_{a(r),1}\circ(\mathsf{id}\cdot(F\rho^{-1}\cdot\mathsf{id})) \circ(\mathsf{e}^{F}\cdot\mathsf{id})\circ(\mathsf{id}\cdot\rho^{-1})\] \[=F(\mathsf{id}\cdot\rho)\circ F(\mathsf{id}\cdot\rho)\circ F\mu^{- 1}_{a(r),1}\circ b(\gamma^{F}_{r},\gamma^{F}_{()})\circ(\mathsf{id}\cdot\rho^{-1}) \circ(\mathsf{id}\cdot\rho^{-1})\] \[=F(\mathsf{id}\cdot\rho)\circ F(\mathsf{id}\cdot\rho)\circ F\mu^{- 1}_{1,0}\circ\gamma^{F}_{r}\circ\mu_{1,0}\circ(\mathsf{id}\cdot\rho^{-1}) \circ(\mathsf{id}\cdot\rho^{-1})\] \[=F\lambda^{-1}\circ F\rho\circ\gamma^{F}_{r}\circ\rho^{-1}\circ\lambda\] \[=F\lambda^{-1}\circ\lambda\]
which gives the unit comparison coherences for \(F\), and after calculating
\[\mu_{1,2} =\alpha\circ((\mathsf{id}\cdot\rho)\cdot(\rho\cdot\mathsf{id}))\] \[\mu_{2,1} =(\mathsf{id}\cdot\alpha)\circ(\mathsf{id}\cdot((\mathsf{id}\cdot \rho)\cdot\mathsf{id}))\circ(\rho\cdot\mathsf{id})\]
we verify that
\[F\alpha \circ\mathsf{m}^{F}_{r,t,s}\circ(\mathsf{m}^{F}_{s,t}\cdot\mathsf{ id})\] \[=F\alpha\circ F(\mathsf{id}\cdot\rho)\circ\gamma^{F}_{r,t,s}\circ( \mathsf{id}\cdot\rho^{-1})\circ(F(\mathsf{id}\cdot\rho)\cdot\mathsf{id})\circ( \gamma^{F}_{s,t}\cdot\mathsf{id})\circ((\mathsf{id}\cdot\rho^{-1})\cdot \mathsf{id})\] \[=F\alpha\circ F(\mathsf{id}\cdot\rho)\circ F((\mathsf{id}\cdot \rho)\cdot(\rho\cdot\mathsf{id}))\circ\gamma^{F}_{a(r),a(s,t)}\circ(\gamma^{F} _{s,t}\cdot(F\rho^{-1}\cdot\mathsf{id}))\circ((\mathsf{id}\cdot\rho^{-1})\cdot \rho^{-1})\] \[=F(\mathsf{id}\cdot(\mathsf{id}\cdot\rho))\circ F\alpha\circ F(( \mathsf{id}\cdot\rho)\cdot(\rho\cdot\mathsf{id}))\circ\gamma^{F}_{a(r),a(s,t)} \circ b(\gamma^{F}_{r},\gamma^{F}_{s,t})\circ((\mathsf{id}\cdot\rho^{-1}) \cdot(\rho^{-1}\cdot\mathsf{id}))\circ(\mathsf{id}\cdot\rho^{-1})\] \[=F(\mathsf{id}\cdot(\mathsf{id}\cdot\rho))\circ\gamma^{F}_{r,s,t} \circ\alpha\circ(\mathsf{id}\cdot\rho^{-1})\]
\[\mathsf{m}^{F}_{s\cdot r,t} \circ(\mathsf{id}\cdot\mathsf{m}^{F}_{r,s})\circ\alpha\] \[=F(\mathsf{id}\cdot\rho)\circ\gamma^{F}_{s\cdot r,t}\circ( \mathsf{id}\cdot\rho^{-1})\circ(\mathsf{id}\cdot F(\mathsf{id}\cdot\rho)) \circ(\mathsf{id}\cdot\gamma^{F}_{r,s})\circ(\mathsf{id}\cdot(\mathsf{id} \cdot\rho^{-1}))\circ\alpha\] \[=F(\mathsf{id}\cdot\rho)\circ F(\rho\cdot((\mathsf{id}\cdot\rho) \cdot\mathsf{id}))\circ\gamma^{F}_{a(r,s),a(t)}\circ(F\rho^{-1}\cdot(F( \mathsf{id}\cdot\rho^{-1})\cdot\mathsf{id}))\circ(\mathsf{id}\cdot\rho^{-1}) \circ(\mathsf{id}\cdot F(\mathsf{id}\cdot\rho))\] \[\qquad\circ(\mathsf{id}\cdot\gamma^{F}_{r,s})\circ(\mathsf{id} \cdot(\mathsf{id}\cdot\rho^{-1}))\circ\alpha\] \[=F(\mathsf{id}\cdot(\mathsf{id}\cdot\rho))\circ F(\rho\cdot\rho) \circ\gamma^{F}_{a(r,s),a(t)}\circ b(\gamma^{F}_{r,s},\gamma^{F}_{t})\circ( \rho^{-1}\cdot\rho^{-1})\circ(\mathsf{id}\cdot(\mathsf{id}\cdot\rho^{-1}))\circ\alpha\] \[=F(\mathsf{id}\cdot(\mathsf{id}\cdot\rho))\circ\gamma^{F}_{r,s,t }\circ\alpha\circ(\mathsf{id}\cdot\rho^{-1})\]
which confirms coherence for the associator comparison. We further verify that, by induction, \(\mu_{n,1}\circ(\rho^{-1}\cdot\rho^{-1})=\mathsf{id}\) (pattern matching), so that
\[\mathsf{m}^{F}_{a(r_{1},\ldots,r_{n}),r_{n+1}}\circ(\mathsf{id} \cdot\gamma^{F}_{r_{1},\ldots,r_{n}}) =F(\mathsf{id}\cdot\rho)\circ\gamma^{F}_{a(r_{1},\ldots,r_{n}),r _{n+1}}\circ(\mathsf{id}\cdot\rho^{-1})\circ(\mathsf{id}\cdot\gamma^{F}_{r_{1 },\ldots,r_{n}})\] \[=F(\rho\cdot\rho)\circ\gamma^{F}_{a(r_{1},\ldots,r_{n}),a(r_{n+1 })}\circ b(\gamma^{F}_{r_{1},\ldots,r_{n}},\gamma^{F}_{r_{n+1}})\circ(\rho^{-1 }\cdot\rho^{-1})\] \[=\gamma^{F}_{r_{1},\ldots,r_{n+1}},\]
confirming that the functor \(\mathsf{PsDbCat}_{\mathsf{lax}}\to\mathsf{Ps}\)-\(\mathfrak{F}\)-\(\mathfrak{Alg}_{\mathsf{lax}}\) is fully faithful.
We claim the functor \(\mathsf{PsDbCat}_{\mathsf{lax}}\to\mathsf{Ps}\)-\(\mathfrak{F}\)-\(\mathfrak{Alg}_{\mathsf{lax}}\) is essentially surjective; let \((\mathbb{D},a,\eta,\mu)\) be a pseudo-\(\mathfrak{F}\)-algebra. We define
\[1 =a(),\] \[s\cdot r =a(r,s),\] \[\lambda_{r} =\eta_{r}^{-1}\circ\mu_{r,-}\circ a(\eta_{r},\mathsf{id}),\] \[\rho_{r} =\eta_{r}^{-1}\circ\mu_{-,r}\circ a(\mathsf{id},\eta_{r}),\] \[\alpha_{r,s,t} =a(\mathsf{id},\eta_{t}^{-1})\circ\mu_{rs,t}^{-1}\circ\mu_{r,st} \circ a(\eta_{r},\mathsf{id}).\]
These endow \(\mathbb{D}\) with the structure of a pseudodouble category; to see this, we must verify that the coherence conditions hold. First, we have
\[(\mathsf{id}\cdot\lambda_{r})\circ\alpha_{r,1,s} =a(\eta_{r}^{-1},\mathsf{id})\circ a(\mu_{r,-},\mathsf{id})\circ a (a(\eta_{r},\mathsf{id}),\mathsf{id})\circ a(\mathsf{id},\eta_{s}^{-1})\circ \mu_{r,1s}\circ a(\eta_{r},\mathsf{id})\] \[=a(\eta_{r}^{-1},\eta_{s}^{-1})\circ a(\mu_{r,-},\mathsf{id}) \circ a(a(\eta_{r},\mathsf{id}),\mathsf{id})\circ\mu_{r1,s}^{-1}\circ\mu_{r,1s }\circ a(\eta_{r},\mathsf{id})\] \[=\mu_{r,s}\circ a(\mu_{r,-},\mathsf{id})\circ\mu_{a(r)1,s}^{-1} \circ a(\eta_{r},\mathsf{id},\eta_{s})\circ\mu_{r,1s}\circ a(\eta_{r},\mathsf{id})\] \[=\mu_{r,-s}\circ a(\mu_{r,-},\mathsf{id}_{s})\circ a^{-1}_{a(r)1,a( s)}\circ a(\eta_{r},\mathsf{id},\eta_{s})\circ\mu_{r,1s}\circ a(\eta_{r},\mathsf{id})\] \[=\mu_{r,-s}\circ a_{(r)1,a(s)}\circ a(a(\eta_{r}),a(\mathsf{id}, \eta_{s}))\circ a(\eta_{r},\mathsf{id})\] \[=\mu_{r,s}\circ a(\mu_{r},\mu_{-,s})\circ a(a(\eta_{r}),a(\mathsf{ id},\eta_{s}))\circ a(\eta_{r},\mathsf{id})\] \[=a(\eta_{r}^{-1},\eta_{s}^{-1})\circ a(\mathsf{id},\mu_{-,s}) \circ a(\mathsf{id},a(\mathsf{id},\eta_{s}))\circ a(\eta_{r},\mathsf{id})\] \[=a(\mathsf{id},\eta_{s}^{-1})\circ a(\mathsf{id},\mu_{-,s})\circ a (\mathsf{id},a(\mathsf{id},\eta_{s}))=\rho_{s}\cdot\mathsf{id}.\]
and for the associator pentagon, we have, on one hand
\[(\mathsf{id}\cdot\alpha_{r,s,t})\circ\alpha_{q,a(r,s),t}\circ(\alpha_ {q,r,s}\cdot\mathsf{id})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\circ a(\mathsf{id},\eta_{s}^{-1}),\mathsf{id})\circ a(\mu_{q,rs},\mathsf{id})\circ a(a(\eta_{q}, \mathsf{id}),\mathsf{id})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\circ a(\mathsf{id},\eta_{t}^{-1})\circ\mu_{qa(r,s),t}\circ\mu_{qa(r,s) }\circ a(\eta_{q},\mathsf{id})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\circ a(\mathsf{id},a(\mathsf{id},\eta_{t}^{-1}))\circ a( \mathsf{id},\mu_{r,s}^{-1})\circ a(\mathsf{id},\mu_{r,st})\circ a(\mathsf{id},a (\eta_{r},\mathsf{id}))\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad
so, our goal is to prove that
\[\begin{split} a(\mu_{q,r,s}^{-1},\mu_{t}^{-1})\circ\mu_{qrs,t}^{-1} \circ\mu_{q,rst}\circ a(\mu_{q},\mu_{r,st})&=\mu_{a(q,r)a(s),a(t)} ^{-1}\circ a(\mu_{q,r},\mu_{s},\mu_{t})\\ &\circ\mu_{a(q)(a)(r),a(s),a(t)}^{-1}\circ\mu_{a(q),a(r),a(s)a(t)} \\ &\circ a(\mu_{q}^{-1},\mu_{r}^{-1},\mu_{s,t}^{-1})\circ\mu_{a(q),a (r)a(s,t)}.\end{split} \tag{4.1}\]
And to do so, we observe that the following diagrams
are pastings of associativity squares for \(\mu\), and are therefore commutative. Pasting these diagrams along \(\mu_{q,r,s,t}\) will confirm (4.1), and we conclude that \(\mathbb{D}\) has the structure of a pseudodouble category.
Now, write \((\mathbb{D},\overline{a},\overline{\eta},\overline{\mu})\) for the pseudo-\(\mathfrak{F}\)-algebra induced by the above double category. We define \(\gamma\colon\overline{a}\to a\) to be the natural transformation inductively given by
\[\gamma_{()}=\mathsf{id},\qquad\gamma_{r_{1},\ldots,r_{n+1}}=\mu_{r_{1}\cdots r_{n},r_{n+1}}\circ a(\gamma_{r_{1},\ldots,r_{n}},\eta_{r_{n+1}}).\]
We claim that \((\mathsf{id},\gamma)\colon(\mathbb{D},a,\eta,\mu)\to(\mathbb{D},\overline{a}, \overline{\eta},\overline{\mu})\) is an invertible lax morphism of pseudo-\(\mathfrak{F}\)-algebras. First, note that
\[\gamma_{r}\circ\overline{\eta}_{r}=\mu_{-,r}\circ a(\mathsf{id},\eta_{r}) \circ a(\mathsf{id},\eta_{r}^{-1})\circ\mu_{-,r}^{-1}\circ\eta_{r}=\eta_{r},\]
and we shall prove that
\[\gamma_{r_{1,1},\ldots,r_{n,k_{n}}}\circ\overline{\mu}_{k_{1},\ldots,k_{n}}=\mu_{(r_{1,i})_{i=1}^{k_{1}},\ldots,(r_{n,i})_{i=1}^{k_{n}}}\circ\gamma_{a(r_{1,i})_{i=1}^{k_{1}},\ldots,a(r_{n,i})_{i=1}^{k_{n}}}\circ\overline{a}(\gamma_{r_{1,1},\ldots,r_{1,k_{1}}},\ldots,\gamma_{r_{n,1},\ldots,r_{n,k_{n}}}) \tag{4.2}\]
by double induction. When \(n=0\), the above reduces to
\[\mathsf{id}=\mu_{(}\circ a(\mathsf{id}),\]
which holds, so assume the above is true for some \(n\). If \(k_{n+1}=0\), the left-hand side of (4.2) becomes
\[\gamma_{r_{1,1},\ldots,r_{n,k_{n}}}\circ\overline{\mu}_{k_{1}, \ldots,k_{n},0} =\gamma_{r_{1,1},\ldots,r_{n,k_{n}}}\circ\overline{\mu}_{k_{1}, \ldots,k_{n}}\circ\lambda\] \[=\lambda\circ a(\gamma_{r_{1,1},\ldots,r_{n,k_{n}}},\mathsf{id}) \circ a(\overline{\mu}_{k_{1},\ldots,k_{n}},\mathsf{id})\]
while the right-hand side of (4.2) becomes
\[\mu_{(r_{1,i}),\ldots,(r_{n,i}),()} \circ\gamma_{a(r_{1,i}),\ldots,a(r_{n,i})}\circ\overline{a}( \gamma_{r_{1,i}},\ldots,\gamma_{r_{n,i}},\mathsf{id})\] \[=\mu_{(r_{1,i}),\ldots,(r_{n,i}),()}\circ\mu_{a(r_{1,i})\cdot a(r_ {n,i}),a()}\circ a(\gamma_{a(r_{1,i}),\ldots,a(r_{n,i})},\eta_{a()})\circ a( \overline{a}(\gamma_{r_{1,i}},\ldots,\gamma_{r_{n,i}}),\mathsf{id})\] \[=\mu_{r_{1,1}\cdots r_{n,i},\cdot\circ a(\mu_{(r_{1,i}),\ldots,(r _{n,i})},\mu_{(}))}\circ a(\gamma_{a(r_{1,i}),\ldots,a(r_{n,i})},\eta_{a()}) \circ a(\overline{a}(\gamma_{r_{1,i}},\ldots,\gamma_{r_{n,i}}),\mathsf{id})\] \[=\mu_{r_{1,1}\cdots r_{n,i},\cdot\circ a(\gamma_{r_{1,1},\ldots,r _{n,k_{n}}},\mathsf{id})\circ a(\overline{\mu}_{k_{1},\ldots,k_{n}},\mathsf{ id})\]
so the equality holds by verifying that
\[\mu_{r_{1}\cdots r_{n,i},()}=\lambda,\]
which is equivalent to proving that the following diagram commutes
which is given by the coherence condition \(\mu\circ\mathfrak{F}\mu=\mu\circ\mu_{\mathfrak{F}}\); recall that \(\mu_{()}=\mathsf{id}\).
Hence, for the final induction step, we suppose (4.2) holds for some \(k_{n+1}\). For \(k_{n+1}+1\), the left-hand side of (4.2) is given by
\[\gamma_{r_{1},1,\ldots,r_{n+1,k_{n+1}+1}}\circ\overline{\mu}_{k_ {1},\ldots,k_{n+1}+1}\] \[\qquad=\mu_{r_{1},1\cdots r_{n+1,k_{n+1}},r_{n+1,k_{n+1}}}\circ a (\gamma_{r_{1},1,\ldots,r_{n+1,k_{n+1}}},\eta_{r_{n+1,k_{n+1}+1}})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
and therefore, proving (4.2) reduces to verifying that
\[\mu_{r_{1},\cdots r_{n+1,k_{n+1},r_{n+1},k_{n+1}+1}}\circ a(\mu_{(r_{ 1},i\cdots r_{n,k_{n}}),(r_{n+1,i})},\mathsf{id})\circ\alpha\] \[\quad=\mu_{(r_{1},\ldots,r_{n,k_{n}}),(r_{n+1,i},r_{n+1,k_{n+1}+1 })}\circ a(\mathsf{id},\mu_{(r_{n+1,i}),r_{n+1,k_{n+1}+1}}).\]
Here, we have
\[\alpha =a(\mathsf{id},\mu_{r_{n+1,k_{n+1}+1}})\circ\mu_{a(r_{1},1,\ldots,r_{n,k_{n}})a(r_{n+1,i}),a(r_{n+1,k_{n+1}+1})}^{-1}\] \[\quad\circ\mu_{a(r_{1},\ldots,r_{n,k_{n}}),a(r_{n+1,i})a(r_{n+1,k _{n+1}+1})}\] \[\quad\circ a(\mu_{r_{1},1\cdots r_{n,k_{n}}}^{-1},\mathsf{id}),\]
so we just need to verify that
\[\mu_{r_{1},1\cdots r_{n+1,k_{n+1},r_{n+1},k_{n+1}+1}}\circ a(\mu_ {(r_{1},i\cdots r_{n,k_{n}}),(r_{n+1,i})},\mu_{r_{n+1,k_{n+1}+1}})\circ\mu_{a(r _{1},1\ldots,r_{n,k_{n}})a(r_{n+1,i}),a(r_{n+1,k_{n+1}+1})}^{-1}\] \[\quad=\mu_{(r_{1},1,\ldots,r_{n,k_{n}}),(r_{n+1,i},r_{n+1,k_{n+1} +1})}\circ a(\mu_{r_{1},1\cdots r_{n,k_{n}}}\mu_{(r_{n+1,i}),r_{n+1,k_{n+1}+1}} )\circ\mu_{a(r_{1},1,\ldots,r_{n,k_{n}}),a(r_{n+1,i})a(r_{n+1,k_{n+1}+1})}^{-1},\]
which holds, since both sides of the above expression equal
\[\mu_{r_{1},1\cdots r_{n,k_{n}},(r_{n+1,i}),r_{n+1,k_{n+1}+1}},\]
confirming that \((\mathsf{id},\gamma)\) is an invertible pseudo-morphism of pseudo-\(\mathfrak{F}\)-algebras.
We consider double categories, lax (horizontal) and oplax (vertical) functors as in the following diagram, and the respective diagram in the double category \(\mathsf{Ps}\)\(\cdot\)\(\mathfrak{F}\)-Alg.
(4.3)
Let \(\omega\) be a \(2\)-cell \(GH\to KF\) of internal \(\mathsf{Cat}\)-graphs. The claim is that \(\omega\) is a generalized vertical transformation if and only if \(\omega\) is a generalized \(2\)-cell of pseudo-\(\mathfrak{F}\)-algebras.
If \(\omega\) is a generalized vertical transformation, we wish to prove that the following diagram commutes
for all \(n\) and all horizontal \(1\)-cells \(r_{1},\ldots,r_{n}\). We proceed by induction: when \(n=0\), the above is just coherence of \(\omega\) for the unit comparison. If it holds for some \(n\), then
\[K\delta_{r_{1},\ldots,r_{n+1}}^{F} \circ\omega_{a(r_{1},\ldots,r_{n+1})}\circ G\gamma_{r_{1},\ldots,r _{n+1}}^{H}\] \[=K(\mathsf{id}\cdot\delta_{r_{1},\ldots,r_{n}}^{F})\circ K\, \mathsf{m}^{F}\circ\omega_{r_{n+1}\cdot a(r_{1},\ldots,r_{n})}\circ G\, \mathsf{m}^{H}\circ G(\mathsf{id}\cdot\gamma_{r_{1},\ldots,r_{n}}^{H})\] \[=K(\mathsf{id}\cdot\delta_{r_{1},\ldots r_{n}}^{F})\circ\mathsf{ m}^{K}\circ(\omega_{r_{n+1}}\cdot\omega_{a(r_{1},\ldots,r_{n})})\circ\mathsf{m}^{G} \circ G(\mathsf{id}\cdot\gamma_{r_{1},\ldots,r_{n}}^{H})\] \[=\mathsf{m}^{K}\circ(\mathsf{id}\cdot K\delta_{r_{1},\ldots,r_{n} }^{F})\circ(\omega_{r_{n+1}}\cdot\omega_{a(r_{1},\ldots,r_{n})})\circ(\mathsf{ id}\cdot G\gamma_{r_{1},\ldots,r_{n}}^{H})\circ\mathsf{m}^{G}\] \[=\mathsf{m}^{K}\circ(\mathsf{id}\cdot\gamma_{Fr_{1},\ldots,Fr_{n} }^{K})\circ(\omega_{r_{n+1}}\cdot d(\omega_{r_{1}},\ldots,\omega_{r_{n}})) \circ(\mathsf{id}\cdot\delta_{Hr_{1},\ldots,Hr_{n}}^{G})\circ\mathsf{m}^{G}\] \[=\gamma_{Fr_{1},\ldots,Fr_{n+1}}^{K}\circ d(\omega_{r_{1}},\ldots, \omega_{r_{n+1}})\circ\delta_{Hr_{1},\ldots,Hr_{n+1}}^{G},\]
so \(\omega\) is a pseudo-\(\mathfrak{F}\)-algebra \(2\)-cell as well.
Now, if \(\omega\) is a pseudo-\(\mathfrak{F}\)-algebra \(2\)-cell, the coherence for the unit comparison holds by definition, and
\[\mathfrak{m}^{K}_{Fr,Fs} \circ(\omega_{s}\cdot\omega_{r})\circ\mathfrak{m}^{G}_{Hr,Hs}\] \[=K(\mathsf{id}\cdot\rho)\circ\gamma^{K}_{Fr,Fs}\circ(\mathsf{id} \cdot\rho^{-1})\circ(\omega_{s}\cdot\omega_{r})\circ(\mathsf{id}\cdot\rho) \circ\delta^{G}_{Hr,Hs}\circ G(\mathsf{id}\cdot\rho^{-1})\] \[=K(\mathsf{id}\cdot\rho)\circ\gamma^{K}_{Fr,Fs}\circ(\omega_{s} \cdot(\omega_{r}\cdot\mathsf{id}))\circ\delta^{G}_{Hr,Hs}\circ G(\mathsf{id} \cdot\rho^{-1})\] \[=K(\mathsf{id}\cdot\rho)\circ\gamma^{K}_{Fr,Fs}\circ d(\omega_{s },\omega_{s})\circ\delta^{G}_{Hr,Hs}\circ G(\mathsf{id}\cdot\rho^{-1})\] \[=K(\mathsf{id}\cdot\rho)\circ K\delta^{F}_{r,s}\circ\omega_{a(r, s)}\circ G\gamma^{H}_{r,s}\circ G(\mathsf{id}\cdot\rho^{-1})\] \[=K(\mathsf{id}\cdot\rho)\circ K\delta^{F}_{r,s}\circ KF(\mathsf{ id}\cdot\rho^{-1})\circ\omega_{s\cdot r}\circ GH(\mathsf{id}\cdot\rho) \circ G(\mathsf{id}\cdot\rho^{-1})\] \[=K(\mathsf{m}^{F})\circ\omega_{s\cdot r}\circ G(\mathsf{m}^{H}),\]
verifies coherence for composition comparison, completing our proof.
Now, as promised at the start of Section 2, we obtain:
**Proposition 4.2**.: _We have a conjunction_
_in the double category \(\mathsf{PsDbCat}\)._
Proof.: Via the equivalence \(\mathsf{PsDbCat}\simeq\mathsf{Ps}\mbox{-}\mathfrak{F}\mbox{-}\mathsf{Alg}\), we simply apply Proposition 3.1 to the adjunction \(-\cdot 1\dashv\mathcal{V}(1,-)\) in \(\mathsf{Grph}(\mathsf{Cat})\), with the oplax functor structure of \(-\cdot 1\colon\mathsf{Span}(\mathcal{V})\to\mathcal{V}\mbox{-}\mathsf{Mat}\), all of which were described in Section 2.
## 5. Horizontal Lax algebras and change of base
We will review the notion of categories of _horizontal lax algebras_ introduced in [15], and we define the _change-of-base_ functors between such categories, induced by an appropriate notion of monad morphism. We begin by fixing monads \(S=(\mathbb{D},S,m,e)\) and \(T=(\mathbb{E},T,m,e)\) in the \(2\)-category \(\mathsf{PsDbCat}_{\mathsf{lax}}\).
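Here a monad \(S=(\mathbb{D},S,m,e)\) is understood in the usual \(2\)-categorical sense (recalled only for orientation): \(S\colon\mathbb{D}\to\mathbb{D}\) is a \(1\)-cell of \(\mathsf{PsDbCat}_{\mathsf{lax}}\), and \(m\colon SS\to S\), \(e\colon\mathsf{id}\to S\) are \(2\)-cells satisfying the associativity and unit laws
\[m\circ mS=m\circ Sm,\qquad m\circ eS=\mathsf{id}=m\circ Se.\]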
We define the category \(\mathbb{H}\operatorname{\mathsf{Lax}}\mbox{-}T\mbox{-}\mathsf{Alg}\) of _horizontal lax \(T\)-algebras_, as follows:
* Objects are given by \(4\)-tuples \((x,a,\upsilon,\mu)\) where \(x\) is a \(0\)-cell, \(a\colon Tx\nrightarrow x\) is a horizontal \(1\)-cell, and \(\upsilon,\mu\) are \(2\)-cells \[\begin{CD}x@>{1}>{}>x\\ @V{e}V{}V@V{}V{}V\\ Tx@>{}>{a}>x\end{CD}\] satisfying \[\mu\circ(\upsilon\cdot e_{a}) =\lambda\] \[\mu\circ(\mathsf{id}\cdot(T\upsilon\circ\mathsf{e}^{T})) =\rho\] \[\mu\circ(\mathsf{id}\cdot(T\mu\circ\mathsf{m}^{T})) =\mu\circ(\mu\cdot m_{a})\circ\alpha^{-1}\]
* A morphism \((x,a,\upsilon,\mu)\to(y,b,\upsilon,\mu)\) is a pair \((f,\zeta)\) where \(f\colon x\to y\) is a vertical \(1\)-cell and \(\zeta\) is a \(2\)-cell \[\begin{CD}Tx@>{a}>{}>x\\ @V{Tf}V{}V@V{}V{}V\\ Ty@>{}>{b}>y\end{CD}\] satisfying \(\zeta\circ\upsilon=\upsilon\circ 1_{f}\) and \(\zeta\circ\mu=\mu\circ(\zeta\cdot T\zeta)\).
It should be noted that \(\mathsf{id}=(\mathsf{id},\mathsf{id})\colon(x,a,\upsilon,\mu)\to(x,a,\upsilon,\mu)\) is a horizontal lax \(T\)-algebra morphism, and if \((f,\zeta)\), \((g,\xi)\) are composable horizontal lax \(T\)-algebra morphisms, then so is \((g,\xi)\circ(f,\zeta)=(g\circ f,\xi\circ\zeta)\). Associativity and identity properties are inherited from \(\mathbb{E}_{0}\) and \(\mathbb{E}_{1}\), making \(\mathbb{H}\operatorname{\mathsf{Lax}}\mbox{-}T\mbox{-}\mathsf{Alg}\) into a category.
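For orientation, note that when \(T\) is the identity monad (so that \(e\) and \(m\) are identities), the data above reduce to a horizontal \(1\)-cell \(a\colon x\nrightarrow x\) together with globular \(2\)-cells \(\upsilon\colon 1_{x}\Rightarrow a\) and \(\mu\colon a\cdot a\Rightarrow a\), and the three axioms become
\[\mu\circ(\upsilon\cdot\mathsf{id})=\lambda,\qquad\mu\circ(\mathsf{id}\cdot\upsilon)=\rho,\qquad\mu\circ(\mathsf{id}\cdot\mu)=\mu\circ(\mu\cdot\mathsf{id})\circ\alpha^{-1},\]
that is, a horizontal lax \(\mathsf{id}\)-algebra is precisely a monad in the horizontal bicategory of \(\mathbb{E}\), a point we will use again in Section 6.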
Our work focuses on the cases \(\mathbb{E}=\mathsf{Span}(\mathcal{V})\), with \(T\) induced by a cartesian monad (also denoted by \(T\)) on \(\mathcal{V}\), and \(\mathbb{D}=\mathcal{V}\)-Mat with \(S\) a lax monad. Then, \(\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}T\text{-}\mathsf{Alg}=\mathsf{Cat}(T,\,\mathcal{V})\) is the category of _internal \(T\)-categories_ of [22], while \(\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}S\text{-}\mathsf{Alg}=(S,\,\mathcal{V})\)-Cat is a generalization of the category of _enriched \(S\)-categories_ introduced by [12], by not requiring \(S\) to be normal.3
Footnote 3: When \(\mathcal{V}\) is a quantale, this generalization is already present in [40].
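In particular, taking \(S\) and \(T\) to be the respective identity monads recovers \((\mathsf{id},\,\mathcal{V})\text{-}\mathsf{Cat}=\mathcal{V}\text{-}\mathsf{Cat}\) and \(\mathsf{Cat}(\mathsf{id},\,\mathcal{V})=\mathsf{Cat}(\mathcal{V})\), the enriched and internal categories whose comparison functor \(\mathcal{V}\text{-}\mathsf{Cat}\to\mathsf{Cat}(\mathcal{V})\) is recalled at the start of Section 7.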
Let \((F,\phi)\colon S\to T\) be a monad oplax morphism, and we assume \(\mathbb{E}\) is conjoint closed. By Theorem 3.6, \(\phi\) has a conjoint, given by a lax horizontal transformation \(\phi^{*}\colon TF\to FS\). We define a \(2\)-cell \(\mathsf{e}^{\phi^{*}_{x}}\) for each \(0\)-cell \(x\) given by
as the mate of the commutative square \(\phi_{x}\circ Fe_{x}=e_{Fx}\circ\mathsf{id}\), and a \(2\)-cell \(\mathsf{m}^{\phi^{*}_{x}}\) given by
where \(\pi\) is given as in (3.2), and \(1^{\vee}\) is the mate of the commutative square \(\phi_{x}\circ Fm_{x}=m_{Fx}\circ(T\phi_{x}\circ\phi_{Sx})\). To be explicit, via mate correspondence we have
\[\varepsilon\circ\mathsf{e}^{\phi^{*}_{x}}=1_{e_{Fx}},\quad\text{and}\quad \varepsilon\circ\mathsf{m}^{\phi^{*}_{x}}=1_{m_{Fx}}\circ\rho\circ\big{(}(1_{T \phi_{x}}\circ\varepsilon)\cdot\varepsilon\big{)}. \tag{5.1}\]
Analogously, when \(\mathbb{D}\) is companion closed, we define \(2\)-cells \(\mathsf{e}^{\psi_{!}}_{y}\) and \(\mathsf{m}^{\psi_{!}}_{y}\) for a monad lax morphism \((G,\psi)\colon T\to S\).
**Lemma 5.1**.: _If \((F,\phi)\colon S\to T\) is a monad oplax morphism and \(\mathbb{E}\) is conjoint closed, then \(\mathsf{e}^{\phi^{*}}\) and \(\mathsf{m}^{\phi^{*}}\) are modifications, and the following relations hold:_
1. \(\mathsf{m}^{\phi^{*}}_{x}\circ(\mathsf{e}^{\phi^{*}}_{Sx}\cdot e^{\vee}_{\phi_{x}})=\lambda\)_,_
2. \(\mathsf{m}^{\phi^{*}}_{x}\circ(\phi^{*}_{e}\cdot\mathsf{e}^{(T\phi)^{*}}_{x})=\rho\)_,_
3. \(\mathsf{m}^{\phi^{*}}_{x}\circ(\mathsf{m}^{\phi^{*}}_{Sx}\cdot m^{\vee}_{\phi_{x}})=\mathsf{m}^{\phi^{*}}_{x}\circ(\phi^{*}_{m}\cdot\mathsf{m}^{(T\phi)^{*}}_{x})\circ\alpha\)_,_
_where \(e^{\vee}_{\phi_{x}}\) and \(m^{\vee}_{\phi_{x}}\) are the mates of the naturality squares of \(e\) and \(m\) at \(\phi_{x}\), and \(\mathsf{e}^{(T\phi)^{*}}_{x}\), \(\mathsf{m}^{(T\phi)^{*}}_{x}\) are respectively given by the mate of the commutative square \(T\phi_{x}\circ TFe_{x}=Te_{Fx}\circ\mathsf{id}\), and the mate of the commutative square \(T\phi_{x}\circ TFm_{x}=Tm_{Fx}\circ(TT\phi_{x}\circ T\phi_{Sx})\) composed with \(\pi\), satisfying properties similar to (5.1)._
Proof.: Note that \(\pi\) is given as a \(2\)-cell (modification) in \(\operatorname{\mathsf{Lax}}(\mathbb{D},\mathbb{E})_{\mathsf{lax}}\), and \(\mathsf{e}^{\phi^{*}}\) and \(1^{\vee}\) are mates of equations of vertical \(1\)-cells. It follows that \(\mathsf{e}^{\phi^{*}}\) and \(\mathsf{m}^{\phi^{*}}\) are modifications.
We have
\[\varepsilon\circ\mathsf{m}^{\phi^{*}}_{x}\circ(\mathsf{e}^{\phi^{* }_{Sx}}_{Sx}\cdot\mathsf{e}^{\vee}_{\phi_{x}}) =1_{m_{Fx}}\circ\lambda\circ\big{(}(1_{T\phi_{x}}\circ\varepsilon )\cdot\varepsilon\big{)}\circ(\mathsf{e}^{\phi^{*}_{Sx}}_{Sx}\cdot\mathsf{e}^ {\vee}_{\phi_{x}})\] \[=1_{m_{Fx}}\circ\lambda\circ(1_{T\phi_{x}}\circ e_{Fx}\cdot(1_{e_{Fx }}\circ\varepsilon))\] \[=1_{m_{Fx}}\circ 1_{e_{Fx}}\circ\varepsilon\circ\lambda=\varepsilon\circ\lambda,\] \[\varepsilon\circ\mathsf{m}^{\phi^{*}}_{x}\circ(\phi^{*}_{e}\cdot \mathsf{e}^{(T\phi)^{*}}_{x}) =1_{m_{Fx}}\circ\rho\circ\big{(}(1_{T\phi_{x}}\circ\varepsilon) \cdot\varepsilon\big{)}\circ(\phi^{*}_{e}\cdot\mathsf{e}^{(T\phi)^{*}}_{x})\] \[=1_{m_{Fx}}\circ\rho\circ\big{(}(1_{T\phi_{x}}\circ TFE_{e_{x}} \circ\varepsilon)\cdot 1_{Te_{Fx}}\big{)}\] \[=1_{m_{Fx}}\circ 1_{Te_{Fx}}\circ\varepsilon\circ\rho=\varepsilon\circ\rho,\]
Now, we note that
\[\varepsilon\circ\mathsf{m}^{\phi^{*}}_{x}\circ(\mathsf{m}^{\phi^{*}}_{Sx}\cdot m^ {\vee}_{\phi_{x}})=\rho\circ\big{(}(1_{m_{Fx}\circ T\phi_{x}}\circ\varepsilon \circ\mathsf{m}^{\phi^{*}_{Sx}}_{Sx})\cdot(1_{m_{Fx}}\circ\varepsilon\circ m^ {\vee}_{\phi_{x}})\big{)}, \tag{5.2}\]
and we note that
\[\varepsilon\circ\mathsf{m}^{\phi^{*}}_{Sx}=\rho\circ\big{(}(1_{m_{FSx}\circ T\phi _{Sx}}\circ\varepsilon)\cdot(1_{m_{FSx}}\circ\varepsilon)\big{)},\]
and
\[\varepsilon\circ m_{\phi_{x}}^{\vee}=1_{m_{FSx}}\circ\varepsilon,\]
so that (5.2) becomes
\[\varepsilon\circ m_{x}^{\phi^{*}}\circ(m_{Sx}^{\phi^{*}}\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
where \(\theta^{T}_{\phi}\) is the inverse of \(\mathsf{m}^{T}\circ(\mathsf{id}\cdot\sigma^{T})\), and \(G_{!}v\), \(G_{!}\mu\) are respectively given by the following 2-cells:
where \(\theta^{S}_{\psi}\) is the inverse of \(\mathsf{m}^{S}\circ(\mathsf{id}\cdot\tau^{S})\), given by Lemma 3.5.
If \((f,\zeta)\colon(w,a,v,\mu)\to(x,b,v,\mu)\) is a horizontal lax \(S\)-algebra morphism, and if \((g,\xi)\colon(y,c,v,\mu)\to(z,d,v,\mu)\) is a horizontal lax \(T\)-algebra morphism, then we have
\[F_{!}(f,\zeta)=(Ff,F\zeta\cdot\phi^{*}_{f})\quad\text{and}\quad G_{!}(g,\xi)=(Gg,G\xi\cdot\psi_{g!}).\]
We observe that \(\phi\) and \(T\phi\) are required to have strong conjoints only to guarantee the existence of \(F_{!}\mu\). Since the two constructions are otherwise identical up to a substitution of letters, it is enough to verify that one of \(F_{!}(x,a,\upsilon,\mu)\), \(G_{!}(y,b,\upsilon,\mu)\) is a horizontal lax algebra, and likewise for the morphisms.
Throughout the calculations, we will use the following abbreviations:
* \(v^{F}=Fv\circ\mathsf{e}^{F}\),
* \(\mu^{F}=F\mu\circ\mathsf{m}^{F}\),
* \(\hat{\alpha}=(\mathsf{id}\cdot\alpha^{-1})\circ\alpha\),
* \(\hat{\theta}=\mathsf{id}\cdot(\theta\cdot\mathsf{id})\) for a 2-cell \(\theta\).
* \(N^{\omega}_{r}=\hat{\alpha}^{-1}\circ(\widehat{\mathsf{n}}^{\omega}_{r})^{-1}\circ\hat{\alpha}\) for a strong (lax) horizontal transformation \(\omega\) and a horizontal \(1\)-cell \(r\).
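Unwinding the last abbreviation, for instance, substituting the definitions of \(\hat{\alpha}\) and \(\widehat{(-)}\) gives
\[N^{\omega}_{r}=\alpha^{-1}\circ(\mathsf{id}\cdot\alpha)\circ\big(\mathsf{id}\cdot((\mathsf{n}^{\omega}_{r})^{-1}\cdot\mathsf{id})\big)\circ(\mathsf{id}\cdot\alpha^{-1})\circ\alpha.\]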
We begin by verifying that the following equalities hold:
\[\theta^{T}\circ e_{Fa\cdot\phi^{*}_{\pi}} =e_{Fa}\cdot\mathsf{e}^{\vee}_{\phi_{\pi}}, \tag{5.5}\] \[\theta^{T}\circ T(\upsilon^{F}\cdot\mathsf{e}^{\phi^{*}}_{x}) \circ T\rho^{-1}\circ\mathsf{e}^{T} =(\upsilon^{TF}\cdot\mathsf{e}^{(T\phi)^{*}}_{x})\circ\rho^{-1}, \tag{5.4}\]
We obtain (5.4), via mate correspondence, by noting that
\[\mathsf{m}^{T}\circ(\mathsf{id}\cdot\sigma^{T})\circ(e_{Fa}\cdot e ^{\vee}_{\phi_{\pi}})\circ(\mathsf{id}\cdot\eta)\circ\rho^{-1} =\mathsf{m}^{T}\circ(\mathsf{id}\cdot\sigma^{T})\circ(\mathsf{ id}\cdot\eta)\circ(e_{Fa}\cdot 1_{e_{FSx}})\circ\rho^{-1}\] \[=\mathsf{m}^{T}\circ(\mathsf{id}\cdot T\eta)\circ(\mathsf{id} \cdot\mathsf{e}^{T})\circ\rho^{-1}\circ e_{Fa}\] \[=T(\mathsf{id}\cdot\eta)\circ\mathsf{m}^{T}\circ(\mathsf{id} \cdot\mathsf{e}^{T})\circ\rho^{-1}\circ e_{Fa}\] \[=T(\mathsf{id}\cdot\eta)\circ T\rho^{-1}\circ e_{Fa}\] \[=e_{Fa\cdot\phi^{*}_{\pi}}\circ(\mathsf{id}\cdot\eta)\circ\rho^{ -1},\]
and (5.5), directly, since
\[\mathsf{m}^{T}\circ(\mathsf{id}\cdot\sigma^{T})\circ(\upsilon^{TF} \cdot\mathbf{e}_{x}^{(T\phi)^{*}})\circ\rho^{-1} =\mathsf{m}^{T}\circ(\mathsf{id}\cdot T\eta)\circ(\mathsf{id} \cdot\mathbf{e}^{T})\circ(\upsilon^{TF}\cdot 1_{TFe_{x}})\circ\rho^{-1}\] \[=T(\mathsf{id}\cdot\eta)\circ\mathsf{m}^{T}\circ(\mathsf{id} \cdot\mathbf{e}^{T})\circ\rho^{-1}\circ\upsilon^{TF}\] \[=T(\mathsf{id}\cdot\eta)\circ T\rho^{-1}\circ\upsilon^{TF}\] \[=T(\mathsf{id}\cdot\eta)\circ T(\upsilon^{F}\cdot 1_{Fe_{x}})\circ T \rho^{-1}\circ\mathbf{e}^{T}\] \[=T(\upsilon^{F}\cdot\mathbf{e}_{x}^{\phi^{*}})\circ T\rho^{-1} \circ\mathbf{e}^{T}\,.\]
Furthermore, since \(\mathbf{e}^{\phi^{*}}\) is a modification and \(\mathsf{n}^{\phi^{*}}\) is natural, we respectively obtain
\[(\mathsf{n}_{a}^{\phi^{*}})^{-1}\circ(\mathbf{e}_{x}^{\phi^{*}} \circ e_{Fa}) =(Fe_{a}\cdot\mathbf{e}_{Sx}^{\phi^{*}})\circ\gamma, \tag{5.7}\] \[(\mathsf{n}_{a}^{\phi^{*}})^{-1}\circ(\mathsf{id}\cdot\upsilon^{ TF}) =(\upsilon^{FS}\cdot\phi^{*}_{e})\circ\gamma. \tag{5.6}\]
And lastly, we note that the following diagrams commute
which respectively confirm that
\[\mu^{F}\circ(\upsilon^{F}\cdot Fe_{a}) =\lambda, \tag{5.9}\] \[\mu^{F}\circ(\mathsf{id}\cdot\upsilon^{FS}) =\rho. \tag{5.8}\]
By applying (5.4), (5.6), (5.8), and (a) from Lemma 5.1, we obtain
\[(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ N_{a}^{\phi^{*}} \circ(\mathsf{id}\cdot\theta^{T})\circ\big{(}(\upsilon^{F}\cdot \mathbf{e}_{x}^{\phi^{*}})\cdot e_{Fa\phi^{*}_{x}}\big{)}\circ(\lambda^{-1} \cdot\mathsf{id})\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ N_{a}^{\phi^{*}} \circ\big{(}(\upsilon^{F}\cdot\mathbf{e}_{x}^{\phi^{*}})\cdot(e_{Fa}\cdot e _{\phi_{x}}^{\vee})\big{)}\circ(\lambda^{-1}\cdot\mathsf{id})\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ\big{(}(\upsilon^{F }\cdot Fe_{a})\cdot(\mathbf{e}_{Sx}^{\phi^{*}}\cdot\mathbf{e}_{x}^{\vee}) \big{)}\circ N_{a}^{1}\circ(\lambda^{-1}\cdot\mathsf{id})\] \[=(\lambda\cdot\lambda)\circ\hat{\alpha}^{-1}\circ\hat{\gamma} \circ\hat{\alpha}\circ(\lambda^{-1}\cdot\mathsf{id})=\lambda,\]
verifying the left identity law, and by applying (5.5), (5.7), (5.9) and (b) from Lemma 5.1, we obtain
\[(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ N_{a}^{\phi^{*}} \circ(\mathsf{id}\cdot\theta^{T})\circ(\mathsf{id}\cdot(T(\upsilon^{F} \cdot\mathbf{e}^{\phi^{*}})\circ T\lambda^{-1}\circ\mathbf{e}^{T}))\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ N_{a}^{\phi^{*}} \circ\big{(}\mathsf{id}\cdot(\upsilon^{TF}\cdot\mathbf{e}_{x}^{(T\phi^{*})} )\big{)}\circ(\mathsf{id}\cdot\rho^{-1})\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ\big{(}(\mathsf{id} \cdot\upsilon^{FS})\cdot(\phi^{*}_{e}\cdot\mathbf{e}_{x}^{(T\phi^{*})})\big{)} \circ N_{a}^{1}\circ(\mathsf{id}\cdot\rho^{-1})\] \[=(\rho\cdot\rho)\circ\hat{\alpha}^{-1}\circ\hat{\gamma}\circ\hat {\alpha}\circ(\mathsf{id}\cdot\rho^{-1})=\rho,\]
verifying the right identity law.
Now, note that
\[\theta^{T}\circ m_{Fa\phi^{*}_{x}}=(m_{Fa}\cdot m_{\phi_{x}}^{\vee})\circ \theta^{TT} \tag{5.10}\]
holds via mate correspondence, since we have
\[m_{Fa\cdot\phi_{x}^{\ast}}\circ\mathfrak{m}^{TT}\circ(\mathsf{id} \cdot\sigma^{TT})\circ(\mathsf{id}\cdot\eta)\circ\rho^{-1} =m_{Fa\cdot\phi_{x}^{\ast}}\circ\mathfrak{m}^{TT}\circ(\mathsf{ id}\cdot TT\eta)\circ(\mathsf{id}\cdot\mathsf{e}^{TT})\circ\rho^{-1}\] \[=m_{Fa\cdot\phi_{x}^{\ast}}\circ TT(\mathsf{id}\cdot\eta)\circ \mathfrak{m}^{TT}\circ(\mathsf{id}\cdot\mathsf{e}^{TT})\circ\rho^{-1}\] \[=T(\mathsf{id}\cdot\eta)\circ m_{Fa\cdot 1}\circ TT\rho^{-1}\] \[=T(\mathsf{id}\cdot\eta)\circ T\rho^{-1}\circ m_{Fa},\] \[\mathfrak{m}^{T}\circ(\mathsf{id}\cdot\sigma^{T})\circ(m_{Fa} \cdot m_{\phi_{x}}^{\vee})\circ(\mathsf{id}\cdot\eta)\circ\rho^{-1} =\mathfrak{m}^{T}\circ(\mathsf{id}\cdot\sigma^{T})\circ( \mathsf{id}\cdot\eta)\circ(m_{Fa}\cdot 1)\circ\rho^{-1}\] \[=\mathfrak{m}^{T}\circ(\mathsf{id}\cdot T\eta)\circ(\mathsf{id} \cdot\mathsf{e}^{T})\circ\rho^{-1}\circ m_{Fa}\] \[=T(\mathsf{id}\cdot\eta)\circ\mathfrak{m}^{T}\circ(\mathsf{id} \cdot\mathsf{e}^{T})\circ\rho^{-1}\circ m_{Fa}\] \[=T(\mathsf{id}\cdot\eta)\circ T\rho^{-1}\circ m_{Fa},\]
and since \(\mathfrak{m}^{\phi^{\ast}}\) is a modification, we get
\[(\mathsf{n}_{a}^{\phi^{\ast}})^{-1}\circ(\mathfrak{m}_{x}^{\phi^{\ast}}\cdot m _{Fa})=(Fm_{a}\cdot\mathfrak{m}_{Sx}^{\phi^{\ast}})\circ(\mathsf{n}_{a}^{\phi^ {\ast}_{S}(T\phi^{\ast})})^{-1}. \tag{5.11}\]
Now, note that the following diagram commutes
which confirms
\[\mu^{F}\circ(\mu^{F}\cdot Fm_{a})=\mu^{F}\circ(\mathsf{id}\cdot\mu^{FS})\circ\alpha. \tag{5.12}\]
Our next step is to confirm that
\[((\mathsf{id}\cdot\mathfrak{m}^{FS})\cdot\mathsf{id})\circ(\alpha\cdot \alpha)\circ N_{a}^{\phi^{\ast}_{S}(T\phi)^{\ast}}\circ(N_{a}^{\phi^{\ast}} \cdot\mathsf{id})=N_{a\cdot Sa}^{\phi^{\ast}_{*}}\circ(\mathsf{id}\cdot( \mathfrak{m}^{TF}\cdot\mathsf{id}))\circ(\mathsf{id}\cdot N_{a}^{(T\phi)^{ \ast}})\circ\alpha. \tag{5.13}\]
First, we recall that
\[\mathsf{n}_{a}^{\phi^{\ast}_{S}(T\phi)^{\ast}}=\alpha^{-1}\circ(\mathsf{id} \cdot\mathsf{n}_{a}^{(T\phi)^{\ast}})\circ\alpha\circ(\mathsf{n}_{Sa}^{\phi^ {\ast}}\cdot\mathsf{id})\circ\alpha^{-1},\]
and
\[\mathsf{n}_{a\cdot Sa}^{\phi^{\ast}}\circ(\mathfrak{m}^{FS}\cdot\mathsf{id}) =(\mathsf{id}\cdot\mathfrak{m}^{TF})\circ\alpha\circ(\mathsf{n}_{a}^{\phi^ {\ast}}\cdot\mathsf{id})\circ\alpha^{-1}\circ(\mathsf{id}\cdot\mathsf{n}_{Sa} ^{\phi^{\ast}})\circ\alpha,\]
and note that by coherence, we have
\[\tilde{\alpha}\circ\hat{\alpha}\circ(\hat{\alpha}^{-1}\cdot \mathsf{id}) =\alpha^{-1}\circ(\mathsf{id}\cdot\hat{\alpha})\circ(\mathsf{id} \cdot\hat{\alpha})\circ\alpha,\] \[(\alpha\cdot\alpha)\circ\hat{\alpha}^{-1}\circ\tilde{\alpha} =\hat{\alpha}^{-1}\circ\tilde{\alpha}^{-1}\circ\hat{\alpha} \circ(\mathsf{id}\cdot\alpha),\] \[\hat{\alpha}\circ(\mathsf{id}\cdot\alpha)\circ\tilde{\alpha}^{-1 }\circ\alpha^{-1}\circ(\mathsf{id}\cdot\hat{\alpha}) =\tilde{\alpha}\circ(\mathsf{id}\cdot\hat{\alpha}^{-1}),\] \[\hat{\alpha}^{-1}\circ\tilde{\alpha}\circ(\mathsf{id}\cdot\hat{ \alpha})^{-1} =(\mathsf{id}\cdot\hat{\alpha}^{-1})\circ\alpha^{-1}\circ(\mathsf{id}\cdot\alpha)\] \[\alpha^{-1}\circ(\mathsf{id}\cdot\alpha)\circ(\mathsf{id}\cdot\hat{ \alpha})\circ\alpha\circ(\hat{\alpha}\cdot\mathsf{id}) =(\mathsf{id}\cdot\hat{\alpha})\circ\alpha,\]
so that
\[((\mathsf{id}\cdot\mathsf{m}^{FS})\cdot\mathsf{id})\circ(\alpha \cdot\alpha)\circ N_{a}^{\phi_{S}^{*}(T\phi)^{*}}\circ(N_{a}^{\phi_{a}^{*}}\cdot \mathsf{id})\] \[=((\mathsf{id}\cdot\mathsf{m}^{FS})\cdot\mathsf{id})\circ(\alpha \cdot\alpha)\circ\tilde{\alpha}^{-1}\circ\tilde{\alpha}\circ(\mathsf{id}\cdot(( \mathsf{n}_{Sa}^{\phi_{a}^{*}}\cdot\mathsf{id})\cdot\mathsf{id}))^{-1}\circ \tilde{\alpha}^{-1}\] \[\quad\circ(\mathsf{id}\cdot((\mathsf{id}\cdot\mathsf{n}_{a}^{(T \phi)^{*}})\cdot\mathsf{id}))^{-1}\circ\tilde{\alpha}\circ\hat{\alpha}\circ( \tilde{\alpha}^{-1}\cdot\mathsf{id})\circ((\mathsf{id}\cdot(\mathsf{n}_{a}^{ \phi^{*}}\cdot\mathsf{id}))\cdot\mathsf{id})^{-1}\circ(\hat{\alpha}\cdot \mathsf{id})\] \[=\hat{\alpha}^{-1}\circ(\mathsf{id}\cdot((\mathsf{m}^{FS}\cdot \mathsf{id})\cdot\mathsf{id}))\circ\tilde{\alpha}^{-1}\circ(\mathsf{id}\cdot(( \mathsf{id}\cdot\mathsf{n}_{Sa}^{\phi_{a}^{*}})\cdot\mathsf{id}))^{-1}\] \[\quad\circ\hat{\alpha}\circ(\mathsf{id}\cdot\alpha)\circ\tilde{ \alpha}^{-1}\circ\alpha^{-1}\circ(\mathsf{id}\cdot\hat{\alpha})\circ(\mathsf{ id}\cdot(\mathsf{n}_{a}^{\phi_{a}^{*}}\cdot\mathsf{id}))^{-1}\] \[\quad\circ(\mathsf{id}\cdot(\mathsf{id}\cdot(\mathsf{n}_{a}^{(T \phi)^{*}}\cdot\mathsf{id})))^{-1}\circ(\mathsf{id}\cdot\hat{\alpha})\circ \alpha\circ(\hat{\alpha}\cdot\mathsf{id})\] \[=\hat{\alpha}^{-1}\circ(\mathsf{id}\cdot((\mathsf{m}^{FS}\cdot \mathsf{id})\cdot\mathsf{id}))\circ\tilde{\alpha}^{-1}\circ(\mathsf{id}\cdot(( \mathsf{id}\cdot\mathsf{n}_{Sa}^{\phi_{a}^{*}})\cdot\mathsf{id}))^{-1}\circ \tilde{\alpha}\circ(\mathsf{id}\cdot((\mathsf{n}_{a}^{\phi^{*}}\cdot\mathsf{ id})\cdot\mathsf{id}))^{-1}\] \[\quad\circ(\mathsf{id}\cdot\hat{\alpha})^{-1}\circ(\mathsf{id} \cdot(\mathsf{id}\cdot(\mathsf{n}_{a}^{(T\phi)^{*}}\cdot\mathsf{id})))^{-1} \circ(\mathsf{id}\cdot\hat{\alpha})\circ\alpha\circ(\hat{\alpha}\cdot\mathsf{ id})\] \[=\hat{\alpha}^{-1}\circ(\mathsf{id}\cdot(\mathsf{n}_{a}^{\phi^{*} }\cdot\mathsf{id}))^{-1}\circ(\mathsf{id}\cdot((\mathsf{id}\cdot\mathsf{m}^{ FF})\cdot\mathsf{id}))\circ\tilde{\alpha}\circ(\mathsf{id}\cdot N_{a}^{(T\phi)^{*}})\circ \alpha\circ(\hat{\alpha}\cdot\mathsf{id})\] \[=N_{a\cdot Sa}^{\phi_{a}^{*}}\circ(\mathsf{id}\cdot(\mathsf{m}^{ FF}\cdot\mathsf{id}))\circ\hat{\alpha}^{-1}\circ\tilde{\alpha}\circ(\mathsf{id} \cdot N_{a}^{(T\phi)^{*}})\circ\alpha\circ(\hat{\alpha}\cdot\mathsf{id})\] \[=N_{a\cdot Sa}^{\phi_{a}^{*}}\circ(\mathsf{id}\cdot(\mathsf{m}^{ FF}\cdot\mathsf{id}))\circ(\mathsf{id}\cdot\hat{\alpha}^{-1})\circ(\mathsf{id} \cdot\hat{\pi}_{a}^{(T\phi)^{*}})^{-1}\circ\alpha^{-1}\circ(\mathsf{id}\cdot \alpha)\circ(\mathsf{id}\cdot\hat{\alpha})\circ\alpha\circ(\hat{\alpha}\cdot \mathsf{id})\] \[=N_{a\cdot Sa}^{\phi_{a}^{*}}\circ(\mathsf{id}\cdot(\mathsf{m}^{ FF}\cdot\mathsf{id}))\circ(\mathsf{id}\cdot N_{a}^{(T\phi)^{*}})\circ\alpha.\]
Next, we observe that
\[(FS\mu\cdot\phi_{m}^{*})\circ(\mathsf{n}_{a\cdot Sa}^{\phi^{*}})^{-1}=( \mathsf{n}_{a}^{\phi^{*}})^{-1}\circ(\mathsf{id}\cdot TF\mu) \tag{5.14}\]
holds by naturality of \(\mathsf{n}^{\phi^{*}}\), and lastly we must confirm that
\[(\mu^{TF}\cdot\mathsf{m}_{a}^{(T\phi)^{*}})\circ N_{a}^{(T\phi)^{*}}\circ(\theta ^{T}\cdot\theta^{TT})=\theta^{T}\circ T(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}}) \circ TN_{a}^{\phi^{*}}\circ T(\mathsf{id}\cdot\theta^{T})\circ\mathsf{m}^{T}, \tag{5.15}\]
which we reduce to proving the following relations:
\[TN_{a}^{\phi^{*}}\circ T(\mathsf{id}\cdot\theta^{T})\circ\mathsf{m}^{T}\circ( \mathsf{m}^{T}\cdot\mathsf{m}^{TT})\circ((\mathsf{id}\cdot\sigma^{T})\cdot( \mathsf{id}\cdot\sigma^{TT}))=\mathsf{m}^{T}\circ(\mathsf{m}^{T}\cdot\mathsf{m}^ {T})\circ(\mathsf{id}\cdot(\sigma^{T}\cdot\sigma^{T}))\circ N_{a}^{(T\phi)^{*}}\]
\[\sigma^{T}\circ\mathsf{m}_{x}^{(T\phi)^{*}}=T\,\mathsf{m}_{x}^{\phi^{*}}\circ \mathsf{m}^{T}\circ(\sigma^{T}\cdot\sigma^{T})\]
For the first, we have the commutativity of the following diagram, omitting horizontal 1-cells
then we observe that
\[T(\mathsf{id}\cdot\theta^{T})\circ\mathsf{m}^{T}\circ(\mathsf{m}^ {T}\cdot\mathsf{m}^{TT})\circ((\mathsf{id}\cdot\sigma^{T})\cdot(\mathsf{id} \cdot\sigma^{TT}))\] \[=\mathsf{m}^{T}\circ(\mathsf{id}\cdot T\theta^{T})\circ(\mathsf{ m}^{T}\cdot\mathsf{m}^{TT})\circ((\mathsf{id}\cdot\sigma^{T})\cdot(\mathsf{id} \cdot\sigma^{TT}))\] \[=\mathsf{m}^{T}\circ(\mathsf{m}^{T}\cdot\mathsf{m}^{T})\circ(( \mathsf{id}\cdot\sigma^{T})\cdot(\mathsf{id}\cdot\sigma^{T})),\]
and finally recall from (3.4) that
\[\mathsf{m}^{T}\circ(\sigma^{T}\cdot\mathsf{id})\circ\mathsf{n}_{a}^{(T\phi)^{*}}=T \,\mathsf{n}_{a}^{\phi^{*}}\circ\mathsf{m}^{T}\circ(\mathsf{id}\cdot\sigma^{T}),\]
so that we may calculate
\[T(\mathsf{id}\cdot\theta^{T}) \circ\mathsf{m}^{T}\circ(\mathsf{m}^{T}\cdot\mathsf{m}^{TT})\circ(( \mathsf{id}\cdot\sigma^{T})\cdot(\mathsf{id}\cdot\sigma^{TT}))\circ\hat{\alpha} ^{-1}\circ\hat{\mathsf{n}}^{(T\phi)^{*}}\circ\hat{\alpha}\] \[=\mathsf{m}^{T}\circ(\mathsf{m}^{T}\cdot\mathsf{m}^{T})\circ(( \mathsf{id}\cdot\sigma^{T})\cdot(\mathsf{id}\cdot\sigma^{T}))\circ\hat{\alpha} ^{-1}\circ\hat{\mathsf{n}}^{(T\phi)^{*}}\circ\hat{\alpha}\] \[=\mathsf{m}^{T}\circ(\mathsf{m}^{T}\cdot\mathsf{m}^{T})\circ\hat{ \alpha}^{-1}\circ(\mathsf{id}\cdot((\sigma^{T}\cdot\mathsf{id})\cdot\sigma^{T} ))\circ\hat{\mathsf{n}}^{(T\phi)^{*}}\circ\hat{\alpha}\] \[=T\hat{\alpha}^{-1}\circ\mathsf{m}^{T}\circ(\mathsf{id}\cdot \mathsf{m}^{T})\circ\hat{\mathsf{m}}^{T}\circ(\mathsf{id}\cdot((\sigma^{T} \cdot\mathsf{id})\cdot\sigma^{T}))\circ\hat{\mathsf{n}}^{(T\phi)^{*}}\circ\hat {\alpha}\] \[=T\hat{\alpha}^{-1}\circ\mathsf{m}^{T}\circ(\mathsf{id}\cdot \mathsf{m}^{T})\circ(\mathsf{id}\cdot(T\,\mathsf{n}_{a}^{\phi^{*}}\cdot \mathsf{id}))\circ\hat{\mathsf{m}}^{T}\circ(\mathsf{id}\cdot((\mathsf{id}\cdot \sigma^{T})\cdot\sigma^{T}))\circ\hat{\alpha}\] \[=T\hat{\alpha}^{-1}\circ T\hat{\mathsf{n}}_{a}^{\phi^{*}}\circ \mathsf{m}^{T}\circ(\mathsf{id}\cdot\mathsf{m}^{T})\circ\hat{\mathsf{m}}^{T} \circ\hat{\alpha}\circ(\mathsf{id}\cdot(\sigma^{T}\cdot\sigma^{T}))\] \[=T\hat{\alpha}^{-1}\circ T\hat{\mathsf{n}}_{a}^{\phi^{*}}\circ T \hat{\alpha}\circ\mathsf{m}^{T}\circ(\mathsf{m}^{T}\cdot\mathsf{m}^{T})\circ( \mathsf{id}\cdot(\sigma^{T}\cdot\sigma^{T}))\]
The second follows by applying the mate correspondence twice: we have
\[\sigma^{T}\circ\mathsf{m}_{x}^{(T\phi)^{*}}\circ(\eta\cdot(\eta \circ 1))\circ\rho^{-1} =\sigma^{T}\circ 1^{\vee}\circ\eta\] \[=\sigma^{T}\circ\eta\circ 1\] \[=T\eta\circ\mathsf{e}^{T}\circ 1\] \[=T(\eta\circ 1)\circ\mathsf{e}^{T},\]
and
\[T\,\mathsf{m}_{x}^{\phi^{*}}\circ\mathsf{m}^{T}\circ(\sigma^{T} \cdot\sigma^{T})\circ(\eta\cdot(\eta\circ 1))\circ\rho^{-1} =T\,\mathsf{m}_{x}^{\phi^{*}}\circ\mathsf{m}^{T}\circ(T\eta\cdot T (\eta\circ 1))\circ(\mathsf{e}^{T}\cdot\mathsf{e}^{T})\circ\rho^{-1}\] \[=T\,\mathsf{m}_{x}^{\phi^{*}}\circ T(\eta\cdot(\eta\circ 1))\circ \mathsf{m}^{T}\circ(\mathsf{id}\cdot\mathsf{e}^{T})\circ\rho^{-1}\circ\mathsf{ e}^{T}\] \[=T(1^{\vee}\circ\eta)\circ\mathsf{e}^{T}\] \[=T(\eta\circ 1)\circ\mathsf{e}^{T}.\]
We obtain (5.15) via the following calculation:
\[T(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}}) \circ TN\phi_{a}^{*}\circ T(\mathsf{id}\cdot\theta^{T})\circ\mathsf{ m}^{T}\circ(\mathsf{m}^{T}\cdot\mathsf{m}^{TT})\circ((\mathsf{id}\cdot \sigma^{T})\cdot(\mathsf{id}\cdot\sigma^{TT}))\circ(N_{a}^{(T\phi)^{*}})^{-1}\] \[=T(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ\mathsf{m}^{T} \circ(\mathsf{m}^{T}\cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\sigma^{T} \cdot\sigma^{T}))\] \[=\mathsf{m}^{T}\circ(\mu^{TF}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ (\mathsf{id}\cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\sigma^{T}\cdot\sigma^ {T}))\] \[=\mathsf{m}^{T}\circ(\mathsf{id}\cdot\sigma^{T})\circ(\mu^{TF} \cdot\mathsf{m}_{x}^{(T\phi)^{*}}).\]
Now, we apply (5.10), (5.11), (5.12), (c) from Lemma 5.1, (5.13), (5.14), (5.15) in succession, to obtain
\[(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ N_{a}^{\phi^{*}}\circ (\mathsf{id}\cdot\theta^{T})\circ((\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\cdot m _{Fa\cdot\phi^{*}})\circ(N_{a}^{\phi^{*}}\cdot\mathsf{id})\circ((\mathsf{id} \cdot\theta^{T})\cdot\mathsf{id})\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ N_{a}^{\phi^{*}} \circ((\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\cdot(m_{Fa}\cdot m_{\phi^{*}}^{ \vee}))\circ(N_{a}^{\phi^{*}}\cdot\mathsf{id})\circ((\mathsf{id}\cdot\theta^{T })\cdot\theta^{TT})\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ((\mu^{F}\cdot Fm_{a} )\cdot(\mathsf{m}_{Sx}^{\phi^{*}}\cdot m_{\phi^{*}}^{\vee}))\circ N_{a}^{\phi ^{*}\cdot(T\phi)^{*}}\circ(N_{a}^{\phi^{*}}\cdot\mathsf{id})\circ((\mathsf{id} \cdot\theta^{T})\cdot\theta^{TT})\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ((\mathsf{id}\cdot \mu^{FS})\cdot(\phi_{m}^{*}\cdot m_{\phi^{*}}^{\vee}))\circ(\alpha\cdot \alpha)\circ N_{a}^{\phi^{*}\cdot(T\phi)^{*}}\circ(N_{a}^{\phi^{*}}\cdot\mathsf{ id})\circ((\mathsf{id}\cdot\theta^{T})\cdot\theta^{TT})\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ((\mathsf{id}\cdot FS \mu)\cdot(\phi_{m}^{*}\cdot m_{\phi^{*}}^{\vee}))\circ N_{a}^{\phi^{*}}\circ( \mathsf{id}\cdot(\mathsf{m}^{TF}\cdot\mathsf{id}))\circ(\mathsf{id}\cdot N_{a}^{( T\phi)^{*}})\circ(\mathsf{id}\cdot(\theta^{T}\cdot\theta^{TT}))\circ\alpha\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ N_{a}^{\phi^{*}} \circ(\mathsf{id}\cdot(\mu^{TF}\cdot\mathsf{m}_{x}^{(T\phi^{*})}))\circ( \mathsf{id}\cdot N_{a}^{(T\phi)^{*}})\circ(\mathsf{id}\cdot(\theta^{T}\cdot\theta^ {TT}))\circ\alpha\] \[=(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})\circ N_{a}^{\phi^{*}} \circ(\mathsf{id}\cdot\theta^{T})\circ(\mathsf{id}\cdot T(\mu^{F}\cdot\mathsf{ m}_{x}^{\phi^{*}}))\circ(\mathsf{id}\cdot TN_{a}^{\phi^{*}})\circ(\mathsf{id}\cdot T( \mathsf{id}\cdot\theta^{T}\cdot\theta^{TT}))\circ\alpha\]
which confirms the associativity law.
We conclude the proof by confirming that \((Ff,F\zeta\cdot\phi_{f}^{*})\) is a horizontal lax \(T\)-algebra morphism, for any given horizontal lax \(S\)-algebra morphism \((f,\zeta)\): indeed, note that the following diagrams commute
via pasting of naturality and modification squares.
Functoriality is confirmed componentwise.
We close this section with a comparative analysis of Theorem 5.2 with the notions of change-of-base for internal \(T\)-categories described in [33], and with the notions of change-of-base for enriched \(T\)-categories described in [12, Sections 5 and 6]; we confirm all of these generalize to our setting. The description of our main object of study, the functor \((\overline{T},\,\mathcal{V})\text{-}\mathsf{Cat}\to\mathsf{Cat}(T,\,\mathcal{V})\) induced by \(\boldsymbol{-\cdot}1\colon\mathcal{V}\text{-}\mathsf{Mat}\to\mathsf{Span}( \mathcal{V})\), must be postponed to Section 8.
### Internal \(T\)-categories:
Let \(\mathcal{D},\mathcal{E}\) be categories with pullbacks, with respective cartesian monads \(S\), \(T\) on \(\mathcal{D}\), \(\mathcal{E}\). We consider the equipments \(\mathbb{D}=\mathsf{Span}(\mathcal{D})\) and \(\mathbb{E}=\mathsf{Span}(\mathcal{E})\), and, abusing notation, we denote by \(S\) and \(T\) the induced strong monads on \(\mathbb{D}\) and \(\mathbb{E}\).
Using the terminology of [33], we consider a cartesian monad oplax morphism \((P,\phi)\colon S\to T\) and a cartesian monad lax morphism \((Q,\psi)\colon T\to S\). Their underlying data is given by
* pullback-preserving functors \(P\colon\mathcal{D}\to\mathcal{E}\) and \(Q\colon\mathcal{E}\to\mathcal{D}\),
* a cartesian natural transformation \(\phi\colon PS\to TP\),
* a natural transformation \(\psi\colon SQ\to QT\).
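For orientation, and using the standard conventions for (op)lax morphisms of monads, compatibility of these data with the monad structures amounts to the equalities
\[\phi\circ Pe^{S}=e^{T}P,\qquad\phi\circ Pm^{S}=m^{T}P\circ T\phi\circ\phi S,\]
for the oplax morphism \((P,\phi)\), and
\[\psi\circ e^{S}Q=Qe^{T},\qquad\psi\circ m^{S}Q=Qm^{T}\circ\psi T\circ S\psi,\]
for the lax morphism \((Q,\psi)\).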
We note \(P\) and \(Q\) induce strong functors \(\hat{P}\colon\mathbb{D}\to\mathbb{E}\), \(\hat{Q}\colon\mathbb{E}\to\mathbb{D}\), and \(\phi\), \(\psi\) induce vertical transformations \(\hat{\phi}\), \(\hat{\psi}\), which, in turn, define a monad oplax morphism \((\hat{P},\hat{\phi})\) and a monad lax morphism \((\hat{Q},\hat{\psi})\).
We conclude, by Theorem 5.2 that \((\hat{Q},\hat{\psi})\) defines a functor \(\hat{Q}_{\dagger}\colon\mathsf{Cat}(T,\,\mathcal{V})\to\mathsf{Cat}(S,\, \mathcal{V})\). Moreover, since \(\phi\) is cartesian, \(\hat{\phi}\) has a strong conjoint, and since \(\hat{P}\) is strong, \(\hat{P}\hat{\phi}\) also has a strong conjoint; therefore we may also conclude that \((\hat{P},\hat{\phi})\) induces a functor \(\hat{P}_{\dagger}\colon\mathsf{Cat}(S,\,\mathcal{V})\to\mathsf{Cat}(T,\, \mathcal{V})\).
In fact, this notion of change-of-base can easily be extended to include Burroni's \(T\)-categories [8]. This would require a notion of horizontal lax \(T\)-algebra for \(T\) an _oplax_ monad (possible with merely a couple of adjustments), and a replacement of lax functors with oplax functors in Theorem 5.2. We leave a pursuit of these results and possible applications for future work.
### Enriched \(T\)-categories:
Two instances of change-of-base are constructed in [12]; we begin by fixing a distributive monoidal category \(\mathcal{V}\), and let \(\mathbb{D}=\mathcal{V}\text{-}\mathsf{Mat}\). Therein, a _lax extension_ of a \(\mathsf{Set}\)-monad \(T\) to \(\mathcal{V}\text{-}\mathsf{Mat}\) is a normal lax monad on \(\mathbb{D}\) with underlying \(\mathsf{Set}\)-monad \(T\).
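The classical examples to keep in mind, standard in the \((T,\,\mathcal{V})\)-category literature and recorded here only for context, arise from the quantale \(\mathcal{V}=\mathsf{2}\), for which \(\mathcal{V}\text{-}\mathsf{Mat}=\mathsf{Rel}\): one has \((\mathsf{id},\,\mathsf{2})\text{-}\mathsf{Cat}=\mathsf{Ord}\), the category of preordered sets, and, for the Barr extension of the ultrafilter monad \(\beta\), \((\beta,\,\mathsf{2})\text{-}\mathsf{Cat}=\mathsf{Top}\).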
First, we suppose we have two normal lax monads \(S\) and \(T\) on \(\mathbb{D}\), and let \(\phi\colon T\to S\) be a vertical transformation, so that \((\mathsf{id},\phi)\colon S\to T\) defines a monad lax morphism. This is precisely the data described in [12, Section 5], restated in a double categorical language. Theorem 5.2 produces a functor \((S,\,\mathcal{V})\text{-}\mathsf{Cat}\to(T,\,\mathcal{V})\text{-}\mathsf{Cat}\), which coincides with the _algebraic functor_ construction in the aforementioned work.
Now, let \(\mathcal{W}\) be another distributive monoidal category, let \(\mathbb{E}=\mathcal{W}\text{-}\mathsf{Mat}\), and let \(F\colon\mathcal{V}\to\mathcal{W}\) be a normal lax monoidal functor, preserving the initial object. \(F\) induces a normal lax functor \(\hat{F}\colon\mathbb{D}\to\mathbb{E}\) with \(\hat{F}_{0}=\mathsf{id}_{\mathsf{Set}}\).
We let \(T\) and \(S\) be a lax monads on \(\mathbb{D}\) and on \(\mathbb{E}\), respectively, with the same underlying monad on \(\mathsf{Set}\); in other words, \(S\) and \(T\) are _lax extensions_ of the same \(\mathsf{Set}\)-monad.
Given a vertical transformation \(\phi\colon T\hat{F}\to\hat{F}S\), such that \(\phi_{0}\) is the identity and such that \((\hat{F},\phi)\colon S\to T\) is a monad lax morphism, we may apply Theorem 5.2 to produce a functor \(\hat{F}_{\dagger}\colon(S,\,\mathcal{V})\text{-}\mathsf{Cat}\to(T,\,\mathcal{W}) \text{-}\mathsf{Cat}\); this is precisely the functor constructed in [12, Section 6].
We should highlight that all normality conditions can be omitted, as well as the preservation of the initial object by \(F\), and still obtain change-of-base functors. This normality-free setting for both instances of change-of-base was already studied in [24, Sections 3.4, 3.5], for thin categories \(\mathcal{V}\).
## 6. Induced adjunction
As observed in [33, Section 6.7], an adequate notion of adjunction between cartesian monads will induce an adjunction on the categories of internal \(T\)-categories, which has proven fruitful in their study. Moreover, in [24, Section 3], several change-of-base adjunctions between categories of (monad, quantale)-categories are studied as well. Our aim is to extend these ideas to arbitrary horizontal lax algebras, aiming to compare the enriched and the internal notions of generalized multicategory.
Throughout this section, our setting is a conjunction
in the double category \(\mathsf{Mnd}(\mathsf{PsDbCat}_{\mathsf{lax}})\), such that \(\mathbb{D}\) and \(\mathbb{E}\) are equipments, and \(\phi\), \(T\phi\) have strong conjoints. We denote the unit and counit by \(\hat{\eta},\hat{\varepsilon}\), respectively.
We recall that
* \((F,\phi)\colon(S,\mathbb{D})\to(T,\mathbb{E})\) is a monad oplax morphism,
* \((G,\psi)\colon(T,\mathbb{E})\to(S,\mathbb{D})\) is a monad lax morphism,
* we have an adjunction \(F\dashv G\) in \(\mathsf{PsDbCat}_{\mathsf{lax}}\) with unit \(\hat{\eta}\) and counit \(\hat{\varepsilon}\),
* and by doctrinal adjunction, \(F\) is strong, and \(\phi\), \(\psi\) are mates,
so that by Theorem 5.2, we may construct functors
\[F_{!}\colon\,\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}S\text{-}\mathsf{Alg}\to\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}T\text{-}\mathsf{Alg},\qquad G_{!}\colon\,\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}T\text{-}\mathsf{Alg}\to\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}S\text{-}\mathsf{Alg}.\]

**Theorem 6.1**.: _We have an adjunction \(F_{!}\dashv G_{!}\)._
Since \((f,\zeta)\) is a lax \(S\)-algebra morphism, we have
\[\zeta\circ\upsilon=G_{!}\upsilon\circ 1_{f},\]
so that
\[\zeta^{\vee\sharp\wedge}\circ F_{!}\upsilon =\zeta^{\vee\sharp}\circ F\upsilon\circ\mathbf{e}^{F}\] \[=(\zeta\circ\upsilon)^{\vee\sharp}\circ\mathbf{e}^{F}\] \[=\hat{\varepsilon}_{b}\circ FG\upsilon\circ F\,\mathbf{e}^{G} \circ F1_{f}\circ\mathbf{e}^{F}\] \[=\upsilon\circ\hat{\varepsilon}_{1}\circ\mathbf{e}^{FG}\circ 1_{Ff}\] \[=\upsilon\circ 1_{f^{\sharp}},\]
which gives the unit law for \((f^{\sharp},\zeta^{\vee\sharp\wedge})\).
For the multiplication law, we will confirm that
\[\zeta^{\vee\sharp\wedge}\circ(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{*}})=\mu\circ( \zeta^{\vee\sharp\wedge}\cdot T\zeta^{\vee\sharp\wedge})\circ(\mathsf{id} \cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\mathsf{id}\cdot\sigma^{T}))\circ \hat{\alpha}^{-1}\circ\hat{\pi}_{a}^{\phi^{*}}\circ\hat{\alpha}\]
via mate correspondence.
On one hand, we have
\[\zeta^{\vee\sharp\wedge}\circ(\mu^{F}\cdot\mathsf{m}_{x}^{\phi^{* }})\circ(\mathsf{id}\cdot(\eta\cdot(\eta\circ 1)))\circ(\mathsf{id}\cdot\rho^{-1}) \circ\rho^{-1} =\zeta^{\vee\sharp\wedge}\circ(\mu^{F}\cdot(\eta\circ 1)) \circ\rho^{-1}\] \[=\zeta^{\vee\sharp}\circ\mu^{F}\] \[=(\zeta\circ\mu)^{\vee\sharp}\circ\mathsf{m}^{F},\]
while on the other, we begin by noting that
\[(\mathsf{id}\cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\mathsf{ id}\cdot\sigma^{T}))\circ\hat{\alpha}^{-1}\circ\hat{\pi}_{a}^{\phi^{*}}\circ\hat{ \alpha}\circ(\mathsf{id}\cdot(\eta\cdot(\eta\circ 1)))\circ(\mathsf{id}\cdot\rho^{-1}) \circ\rho^{-1}\] \[=(\mathsf{id}\cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\mathsf{ id}\cdot\sigma^{T}))\circ((\mathsf{id}\cdot\eta)\cdot(\phi_{a}\cdot(\eta \circ 1)))\circ\hat{\alpha}^{-1}\circ(\mathsf{id}\cdot(\gamma^{-1}\cdot \mathsf{id}))\circ\hat{\alpha}\circ(\mathsf{id}\cdot\rho^{-1})\circ\rho^{-1}\] \[=(\mathsf{id}\cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\mathsf{ id}\cdot\sigma^{T}))\circ((\mathsf{id}\cdot\eta)\cdot(\phi_{a}\cdot(\eta \circ 1)))\circ(\rho^{-1}\cdot\rho^{-1})\] \[=(\mathsf{id}\cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\mathsf{ id}\cdot\sigma^{T}))\circ(\mathsf{id}\cdot(\mathsf{id}\cdot\eta))\circ(\mathsf{id} \cdot\rho^{-1})\circ((\mathsf{id}\cdot\eta)\cdot\phi_{a})\circ(\rho^{-1}\cdot \mathsf{id})\] \[=(\mathsf{id}\cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\mathsf{ id}\cdot T\eta))\circ(\mathsf{id}\cdot(\mathsf{id}\cdot\mathsf{e}^{T}))\circ( \mathsf{id}\cdot\rho^{-1})\circ((\mathsf{id}\cdot\eta)\cdot\phi_{a})\circ( \rho^{-1}\cdot\mathsf{id})\] \[=(\mathsf{id}\cdot T(\mathsf{id}\cdot\eta))\circ(\mathsf{id} \cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\mathsf{id}\cdot\mathsf{e}^{T})) \circ(\mathsf{id}\cdot\rho^{-1})\circ((\mathsf{id}\cdot\eta)\cdot\phi_{a}) \circ(\rho^{-1}\cdot\mathsf{id})\] \[=(\mathsf{id}\cdot T(\mathsf{id}\cdot\eta))\circ(\mathsf{id} \cdot T\rho^{-1})\circ((\mathsf{id}\cdot\eta)\cdot\phi_{a})\circ(\rho^{-1} \cdot\mathsf{id}),\]
hence, if we write
\[Y=(\mathsf{id}\cdot\mathsf{m}^{T})\circ(\mathsf{id}\cdot(\mathsf{id}\cdot \sigma^{T}))\circ\hat{\alpha}^{-1}\circ\hat{\pi}_{a}^{\phi^{*}}\circ\hat{ \alpha},\quad Z=(\mathsf{id}\cdot(\eta\cdot(\eta\circ 1)))\circ(\mathsf{id}\cdot\rho^{-1}) \circ\rho^{-1}\]
we deduce that
\[\mu\circ(\zeta^{\vee\sharp\wedge}\cdot T\zeta^{\vee\sharp\wedge })\circ Y\circ Z\] \[=\mu\circ(\zeta^{\vee\sharp}\cdot T\zeta^{\vee\sharp\wedge}) \circ(\mathsf{id}\cdot T(\mathsf{id}\cdot\eta))\circ(\mathsf{id}\cdot T \rho^{-1})\circ((\mathsf{id}\cdot\eta)\cdot\phi_{a}))\circ(\rho^{-1}\cdot \mathsf{id})\] \[=\mu\circ(\zeta^{\vee\sharp}\cdot(T\zeta^{\vee\sharp}\circ\phi_ {a}))\] \[=\mu\circ((\varepsilon_{b}\circ F\zeta^{\vee})\cdot(T\varepsilon_{b }\circ TF\zeta^{\vee}\circ\phi_{a}))\] \[=\mu\circ(\big{(}\varepsilon_{b}\circ F\zeta^{\vee})\cdot(T \varepsilon_{b}\circ\phi_{\mathsf{G}b}\circ FS\zeta^{\vee}))\] \[=\mu\circ((\varepsilon_{b}\circ F\zeta^{\vee})\cdot(\varepsilon_{ Tb}\circ F\psi_{b}\circ FS\zeta^{\vee}))\] \[=\mu\circ(\varepsilon_{b}\cdot\varepsilon_{Tb})\circ\big{(}F\zeta^ {\vee}\cdot(F\psi_{b}\circ FS\zeta^{\vee})\big{)}\] \[=\varepsilon_{b}\circ F\mu^{G}\circ\mathsf{m}^{F}\circ\big{(}F \zeta^{\vee}\cdot(F\psi_{b}\circ FS\zeta^{\vee})\big{)}\] \[=\Big{(}\mu^{G}\circ\big{(}\zeta^{\vee}\cdot(\psi_{b}\circ S\zeta^ {\vee})\big{)}\Big{)}^{\sharp}\circ\mathsf{m}^{F},\]
so we conclude it is sufficient to show that
\[(\zeta\circ\mu)^{\vee}=\mu^{G}\circ\big{(}\zeta^{\vee}\cdot(\psi_{b}\circ S \zeta^{\vee})),\]
and indeed, we have
\[(\zeta\circ\mu)^{\vee} =\rho\circ(\mathsf{id}\cdot\delta)\circ G_{!}\mu\circ(\zeta\cdot S\zeta)\] \[=\rho\circ(\mathsf{id}\cdot\delta)\circ(\mu^{G}\cdot\mathsf{m}_{b}^ {\psi})\circ\hat{\alpha}^{-1}\circ\bar{\mathsf{n}}_{b}^{\psi_{b}}\circ\hat{ \alpha}\circ(\mathsf{id}\cdot\chi^{S})\circ(\zeta\cdot S\zeta)\] \[=\mu^{G}\circ\rho\circ(\mathsf{id}\cdot\rho)\circ\hat{\alpha}^{-1 }\circ(\mathsf{id}\cdot(\gamma\cdot\mathsf{id}))\circ\hat{\alpha}\circ(( \mathsf{id}\cdot\delta)\cdot(\psi_{b}\cdot(1\circ\delta)))\circ(\mathsf{id} \cdot\chi^{S})\circ(\zeta\cdot S\zeta)\] \[=\mu^{G}\circ(\rho\cdot\rho)\circ((\mathsf{id}\cdot\delta)\cdot( \psi_{b}\cdot(1\circ\delta)))\circ(\mathsf{id}\cdot\chi^{S})\circ(\zeta\cdot S\zeta)\] \[=\mu^{G}\circ(\rho\cdot\mathsf{id})\circ((\mathsf{id}\cdot\delta) \cdot\psi_{b})\circ(\mathsf{id}\cdot\rho)\circ(\mathsf{id}\cdot(\mathsf{id} \cdot\delta))\circ(\mathsf{id}\cdot\chi^{S})\circ(\zeta\cdot S\zeta)\] \[=\mu^{G}\circ(\rho\cdot\mathsf{id})\circ((\mathsf{id}\cdot\delta) \cdot\psi_{b})\circ(\mathsf{id}\cdot S\rho)\circ(\mathsf{id}\cdot S(\mathsf{id }\cdot\delta))\circ(\zeta\cdot S\zeta)\] \[=\mu^{G}\circ\big{(}\zeta^{\vee}\cdot(\psi_{b}\circ S\zeta^{ \vee})\big{)},\]
which concludes the proof.
For the purposes of applying this adjunction to the study of full faithfulness of \(F_{!}\) and subsequent applications to descent theory, it is useful to establish criteria for the invertibility of the unit and counit of the adjunction \(F_{!}\dashv G_{!}\), which are provided by the following results:
**Lemma 6.2**.: _Let \((y,b,\upsilon,\mu)\) be a horizontal lax \(T\)-algebra. If \(\hat{\varepsilon}_{y}\) and \(F\psi_{y}\) are invertible, then \(\hat{\varepsilon}_{(y,b,\upsilon,\mu)}\) is invertible if and only if \(\mathsf{n}_{b}^{\sharp^{*}}\) is invertible._
Proof.: We have \(\hat{\varepsilon}_{(y,b,\upsilon,\mu)}=(\hat{\varepsilon}_{y},\mathsf{id}^{ \vee\sharp\wedge})\). We first observe that, up to coherence isomorphisms, we have \(\mathsf{id}^{\vee\sharp\wedge}=\Omega\), where
and \(\omega=\lambda\circ(\delta\cdot\iota)\) is the mate of \(\iota\), which is in turn the mate of \(\hat{\varepsilon}_{Ty}\circ F\psi_{y}=T\hat{\varepsilon}_{y}\circ\phi_{Gy}\).
Indeed, we note that
\[\Omega^{\vee} =\lambda\circ(\varepsilon\cdot\mathsf{id})\circ\mathsf{n}_{b}^ {\sharp^{*}}\circ(\mathsf{id}\cdot\omega)\circ\alpha\circ(\chi^{F}\cdot \mathsf{id})\circ(\mathsf{id}\cdot\eta)\circ\rho^{-1}\] \[=\rho\circ(\hat{\varepsilon}_{b}\cdot\varepsilon)\circ(\mathsf{ id}\cdot\omega)\circ(\mathsf{id}\cdot(\mathsf{id}\cdot\eta))\circ\alpha\circ( \chi^{F}\cdot\mathsf{id})\circ\rho^{-1}\] \[=\rho\circ(\hat{\varepsilon}_{b}\cdot\varepsilon)\circ(\mathsf{ id}\cdot\omega)\circ(\mathsf{id}\cdot(\mathsf{id}\cdot\eta))\circ(\mathsf{id} \cdot\rho^{-1})\circ\chi^{F},\]
and since
\[\varepsilon\circ\omega\circ(\mathsf{id}\cdot\eta)\circ\rho^{-1}=1\circ\delta,\]
we obtain
\[\Omega^{\vee}=\rho\circ(\hat{\varepsilon}_{b}\cdot 1)\circ(\mathsf{id}\cdot \delta)\circ\chi^{F}=\hat{\varepsilon}_{b}\circ\rho\circ(\mathsf{id}\cdot \delta)\circ\chi^{F},\]
and of course, \(\rho\circ(\mathsf{id}\cdot\delta)\circ\chi^{F}\) is precisely \(F(\rho\circ(\mathsf{id}\cdot\delta))\), so we obtain \(\Omega^{\vee}=\mathsf{id}^{\vee\sharp}\), as desired.
Our claim follows by noting that if \(\hat{\varepsilon}_{y}\) and \(F\psi_{y}\) are invertible, then so is \(\iota\), and since \(\delta\colon(F\psi_{y})_{!}\to 1\) is invertible, so is \(\omega\).
The inverse of \(\iota\) is given by the mate of \(\phi_{Gy}\circ(F\psi_{y})^{-1}=T\hat{\varepsilon}_{y}^{-1}\circ\varepsilon_{ Ty}\), which we denote by \(\theta\). We have
\[\varepsilon\circ\iota\circ\theta=1_{T\varepsilon_{y}}\circ\varepsilon\circ \theta=1_{\mathsf{id}}\circ\varepsilon=\varepsilon\quad\text{and}\quad \varepsilon\circ\theta\circ\iota=1_{T\hat{\varepsilon}_{y}^{-1}}\circ \varepsilon\circ\iota=1_{\mathsf{id}}\circ\varepsilon=\varepsilon,\]
finishing the proof.
The analogous characterization for the unit is not quite the dual of Lemma 6.2; it requires one more verification.
**Lemma 6.3**.: _Let \((x,a,\upsilon,\mu)\) be a horizontal lax \(S\)-algebra. If \(\eta_{x}\), \(G\phi_{x}\) and \(\mathfrak{n}_{a}^{(G\phi)^{*}}\) are invertible, then \(\hat{\eta}_{(x,a,\upsilon,\mu)}\) is invertible if and only if \(\mathfrak{n}_{a}^{\eta_{a}}\) is invertible._
Proof.: The only missing detail is that, if \(\mathfrak{n}_{a}^{(G\phi)^{*}}\) is invertible, then so is
\[\mathfrak{m}^{G}\circ(\mathsf{id}\cdot\sigma^{G})\colon GFa\cdot(G\phi)_{x}^{ *}\to G(Fa\cdot\phi_{x}^{*}).\]
To see this, we take (3.4), with \(H=G\) and \(r=a\), and we recall that \(\phi\) has a strong conjoint, by hypothesis.
As an immediate corollary, we obtain:
**Corollary 6.4**.: _\(F_{!}\colon\,\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}S\text{-}\mathsf{Alg}\to\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}T\text{-}\mathsf{Alg}\) is fully faithful whenever \(F\colon\mathbb{D}\to\mathbb{E}\) is fully faithful and \(G\phi\) is invertible._
Proof.: If \(F\colon\mathbb{D}\to\mathbb{E}\) is fully faithful, then \(\hat{\eta}\) is invertible, and therefore has a strong companion. Likewise, \(G\phi\) has a strong conjoint. Thus, \(\eta_{x}\), \(\mathfrak{n}_{a}^{\eta}\) and \(\mathfrak{n}_{a}^{(G\phi)^{*}}\) are invertible for all \(x\) and all \(a\), so we apply Lemma 6.3.
For the remainder of this section, we will compare Theorem 6.1 with [33, Section 6.7] and [24, Section 3], confirming we have a common generalization of these results. Furthermore, we provide some comments comparing our approach with the pseudofunctoriality ideas stated in [15, 4.4].
### Internal \(T\)-categories:
We recall the setting described in Subsection 5.1. If \(P\vdash Q\) and \(\phi\) and \(\psi\) are mates, we can immediately apply Theorem 6.1, to obtain an adjunction \(P_{!}\dashv Q_{!}\) as claimed in [33, Section 6.7].
Likewise, with an adequate restatement of Theorem 6.1 for oplax monads and functors, we can also obtain adjunctions between categories of Burroni's \(T\)-categories.
### Enriched \(T\)-categories:
We note that Theorem 6.1 is a generalization of [24, Proposition 3.5.1]; however, we cannot obtain the adjunction studied in [24, Subsection 3.4] using our result in its current form.
We will work out the same argument in our more general setting, to emphasize what goes wrong. Given a monad \(T=(T,m,e)\) in \(\mathbb{E}\), note that \(e\colon\mathsf{id}\to T\) defines a monad lax morphism \((\mathsf{id},e)\colon T\to\mathsf{id}\), which, by Theorem 5.2, gives a functor
\[e_{!}\colon\,\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}T\text{-}\mathsf{Alg}\to\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}\mathsf{id}\text{-}\mathsf{Alg},\]
meaning every horizontal lax \(T\)-algebra has an underlying horizontal lax \(\mathsf{id}\)-algebra (monad!). Moreover, \(e\) also defines a monad oplax morphism \((\mathsf{id},e)\colon\mathsf{id}\to T\), but unless \(e\) and \(Te\) have strong conjoints, Theorem 5.2 cannot be applied to construct a functor \(\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}\mathsf{id}\text{-}\mathsf{Alg}\to\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}T\text{-}\mathsf{Alg}\), which would guarantee \(e_{!}\) has a left adjoint.
However, it is possible to expand our notion of change-of-base to rectify the above problem: an analogous version of Theorem 5.2 can be obtained for a monad oplax morphism \((F,\phi)\colon\mathsf{id}\to T\), without requiring strong conjoints for either \(\phi\) or \(T\phi\), by defining \(F_{!}(x,a,\upsilon,\mu)\) so that \(F_{!}a=\phi_{x}^{*}\cdot TFa\); note that this is precisely \(a_{\sharp}\) of [24, Subsection 3.4] when \(F=\mathsf{id}\), and is isomorphic to the construction of Theorem 5.2 when \(\phi\) and \(T\phi\) do have strong conjoints.
This would also require an analogous version of Theorem 6.1 for this specialized change-of-base construction, but since such results are outside of our scope, we leave them for future work.
### Pseudofunctoriality:
Theorems 5.2 and 6.1 prompt one to view \(\mathbb{H}\operatorname{\mathsf{Lax}}\text{-}(-)\text{-}\mathsf{Alg}\) as a double pseudofunctor \(\mathbb{M}\to\mathsf{CAT}\) (see [41, Section 6]), for a suitable sub-double category \(\mathbb{M}\) of \(\mathsf{Mnd}(\mathsf{PsDbCat}_{\mathsf{lax}})\). Since double pseudofunctors preserve conjoints, we would obtain the conclusion of Theorem 6.1 as an immediate corollary, for those conjunctions which are in \(\mathbb{M}\).
We haven't pursued this line of reasoning, as obtaining a suitable choice of \(\mathbb{M}\) which includes our main examples has proved to be elusive, as we briefly explain below.
We observe that the hypotheses required for Theorem 5.2 restrict us to a setting where the vertical \(1\)-cells \((F,\phi)\colon S\to T\) (monad oplax morphisms) of \(\mathbb{M}\) are those such that \(\phi\) and \(T\phi\) have strong conjoints. Unfortunately, this property on its own does not determine a sub-double category, as it is not closed under vertical composition: if \((G,\psi)\colon T\to U\) is another vertical \(1\)-cell, there is no reason for \(\omega=\psi_{F}\circ G\phi\) nor \(U\omega\) to have strong conjoints.
This obstacle could be overcome, provided we can guarantee that \(G\phi\) and \(UG\phi\) have strong conjoints. The first condition can be guaranteed if we require that the underlying functor of every monad oplax morphism \((H,\chi)\) is such that
\[Hr\cdot(Hf)^{*}\xrightarrow{\operatorname{\mathsf{id}}\sigma^{H}}Hr\cdot H(f^{*}) \xrightarrow{\mathfrak{m}^{H}}H(r\cdot f^{*}) \tag{6.2}\]
is invertible for all horizontal \(1\)-cells \(r\) and vertical \(1\)-cells \(f\); note that this implies that \(H\phi\) has a strong conjoint whenever \(\phi\) has a strong conjoint. This property is satisfied, for instance, when \(H\) is strong. Therefore, this extra requirement is still within the setting of Theorem 6.1, as the underlying functors of the left adjoints are necessarily strong.
The problem lies in guaranteeing that \(UG\phi\) has a strong conjoint; we would need to guarantee that the underlying lax functors of the monads make (6.2) invertible. However, it can be shown that this does not hold for our applications.
Lacking an alternative method to overcome this obstacle, we opted for the current _ad-hoc_, yet more general, approach for obtaining an adjunction of change-of-base functors, instead of going for the more attractive pseudofunctoriality argument.
## 7. Extensive categories
Extensivity of \(\mathcal{V}\) is a crucial hypothesis to construct and study the comparison functor \(\mathcal{V}\text{-}\mathsf{Cat}\to\mathsf{Cat}(\mathcal{V})\) (see [35] and [13]), and therefore we shall devote this section to the study of extensive categories.
Let \(\mathcal{C}\) be a category with coproducts. We say \(\mathcal{C}\) is _extensive_ if the coproduct functor
\[\prod_{i\in I}\mathcal{C}\downarrow X_{i}\to\mathcal{C}\downarrow\sum_{i\in I }X_{i} \tag{7.1}\]
is an equivalence for all families \((X_{i})_{i\in I}\) of objects in \(\mathcal{C}\). We refer to [9] for a comprehensive introduction to these categories. Extensive categories to keep in mind are \(\mathsf{Set}\), \(\mathsf{Top}\), \(\mathsf{Cat}\), any Grothendieck topos such as \(\mathsf{Grph}\), and any free coproduct completion \(\mathsf{Fam}(\mathcal{B})\) of a category \(\mathcal{B}\).
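For intuition, it may help to spell out the binary case in \(\mathsf{Set}\) (a standard illustration, not needed for the sequel): a map \(f\colon A\to X+Y\) decomposes its domain as \(A\cong f^{-1}(X)+f^{-1}(Y)\), and this assignment gives the equivalence
\[\mathsf{Set}\downarrow X\times\mathsf{Set}\downarrow Y\simeq\mathsf{Set}\downarrow(X+Y),\qquad(g,h)\mapsto g+h.\]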
The following characterization of extensivity in terms of Artin glueing [21, p. 465] is quite important: an immediate corollary is that \(\sum\colon\mathsf{Fam}(\mathcal{C})\to\mathcal{C}\) preserves limits when \(\mathcal{C}\) has all finite limits (that is, when \(\mathcal{C}\) is _lextensive_). The converse was also shown to hold in [10, Section 4.3].
**Lemma 7.1**.: _Let \(\mathcal{C}\) be a category with coproducts and a terminal object. Then Diagram (7.2)_
(7.2)
_is a comma diagram if and only if \(\mathcal{C}\) is extensive, where \(\sigma_{(c_{x})_{x\in X}}\colon\sum_{x\in X}c_{x}\to X\mathbin{\boldsymbol{ \cdot}}1\) is the coproduct over \(X\) of the morphisms \(c_{x}\to 1\)._
Proof.: If \(\mathcal{C}\) is extensive, for a morphism \(f\colon c\to X\mathbin{\boldsymbol{\cdot}}1\), we consider the family \((c_{x})_{x\in X}\) given by the following family of pullbacks:
The family is, by definition, indexed over \(X\), and by extensivity, we have an isomorphism \(\sum_{x\in X}c_{x}\cong c\), whose composition with \(f\) equals \(\sigma_{(c_{x})_{x\in X}}\).
Let \((c_{x})_{x\in X}\), \((d_{y})_{y\in Y}\) be families of objects, and let \(\hat{f}\colon\sum_{x\in X}c_{x}\to\sum_{y\in Y}d_{y}\) be a morphism and let \(f\colon X\to Y\) be a function such that \(\sigma\circ\hat{f}=(f\mathbin{\boldsymbol{\cdot}}1)\circ\sigma\). For each \(y\in Y\), we consider the following
diagram:
The left, right and inside squares are pullbacks, hence the outer square must be a pullback; let \(\hat{f}|_{x}\colon c_{x}\to d_{fx}\) be the top morphism composed with the inclusion \(c_{x}\to\sum_{x\in f^{*}fx}c_{x}\), and consider the morphism \((f,\hat{f}|_{x})\colon(c_{x})_{x\in X}\to(d_{y})_{y\in Y}\) in \(\mathsf{Fam}(\mathcal{C})\). It is the unique morphism \(\psi\colon(c_{x})_{x\in X}\to(d_{y})_{y\in Y}\) indexed by \(f\) such that \(\sum\psi=\hat{f}\), by extensivity. With this, we conclude that (7.2) is a comma diagram.
Now, given that (7.2) is a comma diagram, we aim to confirm (7.1) is an equivalence. First, full faithfulness: given a commutative triangle in \(\mathcal{C}\)
we have
\[\sigma_{(X_{i})}\circ\sum_{i}f_{i}=\sigma_{(Y_{i})}\quad\text{and}\quad\sigma_ {(X_{i})}\circ\sum_{i}g_{i}=\sigma_{(Z_{i})},\]
from which we obtain \(\sigma_{(Y_{i})}=\sigma_{(Z_{i})}\circ\phi\). This implies unique existence of a morphism \((\mathsf{id},\phi_{i})\colon(Y_{i})_{i\in I}\to(Z_{i})_{i\in I}\) in \(\mathsf{Fam}(\mathcal{C})\) such that \(\sum_{i}\phi_{i}=\phi\), by the \(2\)-dimensional universal property of comma squares.
Now, if we have a morphism \(\omega\colon S\to\sum_{i\in I}X_{i}\), we consider its composite with \(\sigma_{(X_{i})}\). This yields, by the \(2\)-dimensional universal property, a family \((S_{i})_{i\in I}\) and an isomorphism \(\nu\colon\,\sum_{i\in I}S_{i}\cong S\). From full faithfulness above we obtain a family \(\omega_{i}\colon S_{i}\to X_{i}\) such that \(\sum_{i}\omega_{i}\circ\nu=\omega\).
The following instance of limit preservation is extensively used:
**Theorem 7.2**.: _Let \(\mathcal{C}\) be an extensive category. If we have a commutative square in \(\mathsf{Fam}(\mathcal{C})\)_
(7.3)
_such that_
(7.4)
_is a pullback diagram, as well as_
(7.5)
_for each \(w\in W\), then_
(7.6)
_is a pullback diagram._
Proof.: The hypotheses (7.4) and (7.5) guarantee that (7.3) is a pullback in \(\mathsf{Fam}(\mathcal{C})\), by [23, Definition 4.7 and Corollary 4.9]. Since \(\sum\) preserves limits, (7.6) must be a pullback diagram, as desired.
As corollaries, we obtain succinct proofs of a couple of results from [35] and [13], which clarify the role of the extensivity condition.
**Lemma 7.3**.: _If \(\mathcal{V}\) is extensive with finite limits, \(-\cdot\,1\colon\mathcal{V}\text{-}\mathsf{Mat}\to\mathsf{Span}(\mathcal{V})\) is strong._
Proof.: Since \(-\cdot\,1\) is normal, it is enough to prove that \(\mathfrak{m}^{-\cdot 1}\) is invertible. Indeed, we have the following pullback diagrams:
thus, applying Theorem 7.2, we conclude that the outer square of diagram (2.8) is a pullback, verifying our claim.
**Remark 7.4**.: We observe that the above lemma can be restated in terms of a Beck-Chevalley condition; see [36, Definition 1.4.13]. To wit, the lax functor \(\mathcal{V}(1,-)\colon\mathsf{Span}(\mathcal{V})\to\mathcal{V}\text{-} \mathsf{Mat}\) satisfies the Beck-Chevalley condition if \(\mathcal{V}\) is extensive. Then, by [36, Theorem 1.4.14], we conclude that \(-\cdot\,1\dashv\mathcal{V}(1,-)\) is an _adjunction_ in the \(2\)-category \(\mathsf{PsDbCat}_{\mathsf{lax}}\).
We can also give a short proof that a considerable class of monads are cartesian:
**Lemma 7.5**.: _Let \(\mathcal{V}\) be a lextensive, monoidal category, whose tensor product \(\otimes\) preserves coproducts and pullbacks. Then the free \(\otimes\)-monoid monad on \(\mathcal{V}\) is cartesian._
Proof.: We let \(X^{0}=I\) be the unit object, and \(X^{n+1}=X^{n}\otimes X\). Recall that the underlying functor of the free \(\otimes\)-monoid monad may be given by \(X\mapsto X^{*}=\sum_{n\in\mathbb{N}}X^{n}\) (see, for instance, [26, Theorem 23.4]), and note that since pullbacks are preserved by \(\otimes\) (by hypothesis) and by coproducts (as a corollary of Lemma 7.1), we conclude the free \(\otimes\)-monoid monad preserves pullbacks. Moreover, note that
is a pullback diagram for all \(n\in\mathbb{N}\), due to extensivity. Taking \(n=1\) confirms \(\eta\) is a cartesian natural transformation.
Now, we consider the following pullback diagrams
to which we may apply Theorem 7.2, allowing us to conclude that \(\mu\) is a cartesian natural transformation as well.
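As a concrete illustration (not needed for the sequel): for \(\mathcal{V}=\mathsf{Set}\) with the cartesian product, \(X^{*}=\sum_{n\in\mathbb{N}}X^{n}\) is the set of finite words in \(X\), the unit \(e_{X}\colon X\to X^{*}\) is the one-letter word inclusion, and the multiplication
\[m_{X}\colon X^{**}\to X^{*},\qquad(w_{1},\ldots,w_{k})\mapsto w_{1}\cdots w_{k}\]
is concatenation; the lemma thus recovers the classical fact that the free monoid monad on \(\mathsf{Set}\) is cartesian.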
**Connected categories.** An object \(x\) in a category \(\mathcal{V}\) is said to be _connected_ if the hom-functor \(\mathcal{V}(x,-)\) preserves coproducts, and, borrowing terminology from topos theory, \(\mathcal{V}\) is said to be _connected_ if the terminal object is connected.
Under the hypothesis that \(\mathcal{V}\) is lextensive, understanding this condition turns out to be helpful in our work on the enriched \(\to\) internal embedding, particularly regarding the study of certain monads on \(\mathcal{V}\); see Lemma 8.7.
**Lemma 7.6**.: _If \(\mathcal{V}\) is lextensive, then it is connected if and only if \(-\,\cdot\,1\colon\mathsf{Set}\to\mathcal{V}\) is fully faithful._
Proof.: Given a morphism \(p\colon 1\to\sum_{i\in I}X_{i}\), we consider the following diagram
which is a pasting of pullback squares.
It is clear that if \(-\,\cdot\,1\) is fully faithful, then \(u\cong[\sigma\circ p=i]\,\cdot\,1\). Thus, by universality of coproducts, \(p\) is uniquely determined by a morphism \(1\to X_{i}\).
Conversely, since \(1\) is terminal we have \(\mathcal{V}(1,X\,\cdot\,1)\cong X\,\cdot\,\mathcal{V}(1,1)\cong X\), which implies the unit of \(-\,\cdot\,1\dashv\mathcal{V}(1,-)\) must be an isomorphism.
**Lemma 7.7**.: _If \(\mathcal{V}\) is a connected, lextensive category, then \(-\,\cdot\,1\colon\mathcal{V}\text{-}\mathsf{Mat}\to\mathsf{Span}(\mathcal{V})\) is fully faithful on 2-cells._
Proof.: The outer square of (2.4) is a pullback, due to extensivity. Then, since \(\hat{\eta}\colon X\to\mathcal{V}(1,X\,\cdot\,1)\) is an isomorphism for all \(X\), the result follows.
## 8. Fibrewise discrete morphisms
Let \(T\) be a cartesian monad on a lextensive category \(\mathcal{V}\), also denoting by \(T\) the induced strong monad on \(\mathsf{Span}(\mathcal{V})\) [22, 15]. The \(\mathsf{Set}\)-monad \(\overline{T}\) under study (as well as its lax extension to \(\mathcal{V}\text{-}\mathsf{Mat}\), also denoted by \(\overline{T}\)), is constructed via the following consequence of Proposition 3.1:
**Proposition 8.1**.: _Let \(\mathbb{B}\) be a 2-category, let \((l,r,\eta,\varepsilon)\colon b\to c\) be an adjunction in \(\mathbb{B}\), and let \((t,m,e)\) be a monad on \(c\). Then \((rtl,r(m\circ t\varepsilon t)l,rel\circ\eta)\) is a monad on \(b\), and we have a conjunction_
\[(b,rtl)\overbrace{\underbrace{\stackrel{{(l,\varepsilon tl)}}{{ \bot}}}_{(r,rt\varepsilon)}}^{\stackrel{{\bot}}{{\bot}}}(c,t)\]
_in \(\mathsf{Mnd}(\mathbb{B})\)._
Indeed, by Remark 7.4, we have an adjunction
(8.1)
in the 2-category \(\mathsf{PsDbCat}_{\mathsf{lax}}\), thus, we may apply Proposition 8.1 to (8.1) with the monad \(T\) on \(\mathsf{Span}(\mathcal{V})\) to obtain a monad \(\overline{T}=\mathcal{V}(1,T(-\,\cdot\,1))\) on \(\mathcal{V}\text{-}\mathsf{Mat}\), and a conjunction
(8.2)
where \((-\,\cdot\,1,\,\hat{\varepsilon}_{T(-\cdot\,1)})\) is a monad oplax morphism and \((\mathcal{V}(1,-),\,\mathcal{V}(1,\,T\hat{\varepsilon}))\) is a monad lax morphism.
The only remaining ingredient to place ourselves under the setting of Section 6 and therefore apply Theorem 6.1 to (8.2), is the hypothesis that \(\hat{\varepsilon}_{T(-\cdot 1)}\) has a strong conjoint4. The study and characterization of this hypothesis (for \(\mathcal{V}\) connected) is the central purpose of this Section, culminating in Theorem 8.6.
Footnote 4: Note that, since \(T\) is strong, it follows that \(T\hat{\varepsilon}_{T(-\cdot 1)}\) has a strong conjoint as well.
Once this characterization is obtained, we obtain an adjunction (see Lemma 9.1)
(8.3)
from (8.2), for pairs \((T,\mathcal{V})\) where \(T\) is a cartesian monad on a lextensive, connected category \(\mathcal{V}\) such that \(\hat{\varepsilon}_{T(-\cdot 1)}\) has a strong conjoint.
We begin by establishing the following:
**Lemma 8.2**.: _Let \(\omega\colon F\to G\) be a vertical transformation of lax functors \(F,\,G\colon\mathbb{D}\to\mathsf{Span}(\mathcal{V})\). For a horizontal 1-cell \(a\colon s\nrightarrow t\) in \(\mathbb{D}\), the 2-cell \(\mathfrak{n}_{a}^{\omega^{*}}\) is invertible if and only if_
(8.4)
_is a pullback diagram._
Proof.: We observe that \(\mathfrak{n}_{a}^{\omega^{*}}\) is uniquely determined by the dashed morphism below making the triangles commute
which is invertible if and only if (8.4) is a pullback diagram.
**Lemma 8.3**.: _The following are equivalent:_
1. \(\hat{\varepsilon}_{T(-\cdot 1)}\) _has a strong conjoint,_
2. \(\hat{\varepsilon}_{T(-\cdot 1)}\) _is a cartesian natural transformation._
Proof.: Instantiating (8.4) with \(\omega=\hat{\varepsilon}_{T(-\cdot 1)}\), we find that (a) holds if and only if
(8.5)
is a pullback diagram for all \(\mathcal{V}\)-matrices \(a\).
If we take \(a=f_{!}\) for a function \(f\colon s\to t\), (8.5) becomes
thereby verifying (a) \(\to\) (b).
Now, we assume that the outer square of Diagram (8.6) below commutes:
(8.6)
An immediate calculation shows the entire diagram commutes. Since the square in the middle is a pullback, there exists a unique morphism \(l_{v}\colon v\to\overline{T}s\mathbin{\boldsymbol{\cdot}}1\) such that the left and top squares commute.
Thus, we conclude that the following diagram
commutes. We observe that Diagram (2.5) is a pullback square when \(\mathcal{V}\) is extensive, by Theorem 7.2, so there exists a unique morphism
\[\omega^{\sharp}\colon v\to\sum_{\begin{subarray}{c}\underline{r}\in\overline{T }s\\ \mathfrak{p}\in\overline{T}t\end{subarray}}(\overline{T}a)(\mathfrak{r}, \mathfrak{y})\]
such that \(\hat{\varepsilon}_{T(\mathfrak{a}\mathbin{\boldsymbol{\cdot}}1)}\circ \omega^{\sharp}=\omega\), \(r_{\overline{T}a\mathbin{\boldsymbol{\cdot}}1}\circ\omega^{\sharp}=r_{v}\) and \(l_{\overline{T}a\mathbin{\boldsymbol{\cdot}}1}\circ\omega^{\sharp}=l_{v}\), which, in particular, confirms that Diagram (8.5) is a pullback square.
Fibrewise discrete monads.The search for a more concrete notion of what it means for \(\hat{\varepsilon}_{T(\mathbin{\boldsymbol{\cdot}}1)}\) to have a strong conjoint led us to the notion of fibrewise discreteness.
Let \(\mathcal{V}\) be a lextensive category, and let \(f\colon x\to y\) be a morphism in \(\mathcal{V}\). We say \(f\) is _fibrewise discrete_ if for every pullback diagram
the object \(f^{*}p\) is _discrete_; that is, \(\hat{\varepsilon}_{f^{*}p}\) is an isomorphism. For instance, in \(\mathcal{V}=\mathsf{Top}\), local homeomorphisms are fibrewise discrete.
We say an endofunctor \(F\) on \(\mathcal{V}\) is _fibrewise discrete_ if for all sets \(X\), the morphism \(F^{!}\colon F(X\mathbin{\boldsymbol{\cdot}}1)\to F1\) is fibrewise discrete.
**Lemma 8.4**.: _Let \(F\) be an endofunctor on a lextensive category \(\mathcal{V}\). If \(\hat{\varepsilon}_{F(-\cdot 1)}\) is cartesian, then \(F\) is fibrewise discrete. The converse holds when \(\mathcal{V}\) is connected._
Proof.: Let \(\overline{F}=\mathcal{V}(1,F(-\cdot\,1))\). We consider Diagram (8.7)
(8.7)
where the square in the lower right corner is a pullback. The outer square commutes by naturality, so there exists a unique \(\theta_{X}\), depicted by a dashed arrow, making both incident triangles commute. Note that \(\hat{\varepsilon}_{F(-\cdot 1)}\) is cartesian if and only if \(\theta_{X}\) is an isomorphism for all \(X\).
Now, consider the image of (8.7) via \(\mathcal{V}(1,-)\), which preserves pullbacks. Note that since \(\mathcal{V}\) is connected, \(\mathcal{V}(1,\hat{\varepsilon}_{F1})=\eta_{\mathcal{V}(1,F1)}^{-1}\) is an isomorphism, so we conclude that \(\mathcal{V}(1,\omega_{x})\) is an isomorphism as well.
Hence, we consider the following naturality square
and we observe that \(\theta_{X}\circ\mathcal{V}(1,\omega_{X})\cdot 1=\hat{\varepsilon}_{\tau_{X}}\) holds, by the universal property. Thus, \(\theta_{X}\) is invertible if and only if \(\hat{\varepsilon}_{\tau_{x}}\) is invertible; that is, if and only if \(\tau_{x}\) is discrete.
**Remark 8.5**.: In Diagram (8.7), we have a morphism \(\tau_{x}\to\overline{F}1\boldsymbol{\cdot}1\), which corresponds to a family \((\tau_{x,p})_{p\in\overline{F}1}\), by extensivity (see Lemma 7.1); these are given via pullback
(8.8)
and we also have \(\sum_{p\in\overline{F}1}\tau_{x,p}\cong\tau_{x}\). Thus, \(\tau_{x}\) is discrete if and only if \(\tau_{x,p}\) is discrete for all \(p\in\overline{F}1\).
With this, we obtain the following characterization:
**Theorem 8.6**.: _If \(\mathcal{V}\) is connected, the following are equivalent for an endofunctor \(F\colon\mathcal{V}\to\mathcal{V}\):_
1. \(\hat{\varepsilon}_{F(-\cdot 1)}\) _has a strong conjoint._
2. \(\hat{\varepsilon}_{F(-\cdot 1)}\) _is a cartesian natural transformation._
3. \(F\) _is fibrewise discrete._
4. \(\tau_{x,p}\)_, as given in (_8.8_), is discrete for all_ \(x\) _and all_ \(p\in\overline{F}1\)_._
Proof.: The equivalence (i) \(\iff\) (ii) is given by Lemma 8.3, we have (ii) \(\iff\) (iii) by Lemma 8.4, and Remark 8.5 confirms (iii) \(\iff\) (iv).
Naturally, we are most concerned with cartesian monads \(T\) such that \(T\) is fibrewise discrete, and, armed with Theorem 8.6, we can promptly verify that many familiar examples of cartesian monads are fibrewise discrete. We begin with the following:
**Lemma 8.7**.: _Let \(\mathcal{V}\) be a connected, distributive monoidal category. The free \(\otimes\)-monoid monad on \(\mathcal{V}\) is fibrewise discrete._
Proof.: Let \(X\) be a set, and let \(p\colon 1\to(X\boldsymbol{\cdot}1)^{*}\) be a morphism. Since \(\mathcal{V}\) is connected, we may apply Lemma 7.6, to confirm \(p\) factors uniquely through \(q\colon 1\to(X\boldsymbol{\cdot}1)^{n}\) for some \(n\in\mathbb{N}\). Now, note that \((X\boldsymbol{\cdot}1)^{n}\cong X^{n}\boldsymbol{\cdot}1\) if \(n>0\), \((X\boldsymbol{\cdot}1)^{0}=I\), and that we have pullback diagrams
\[\begin{CD}1@>{}>{}>I\\ @V{}V{}V@V{}V{}V\\ 1@>{}>{}>I\end{CD}\qquad\text{and, for $n>0$},\qquad\begin{CD}X^{n} \boldsymbol{\cdot}1@>{}>{}>X^{n}\boldsymbol{\cdot}1\\ @V{}V{}V@V{}V{}V\\ 1@>{}>{}>I\end{CD}\]
whence \(\tau_{X,p}\cong X^{n}\boldsymbol{\cdot}1\) for some \(n\in\mathbb{N}\); this concludes the proof, by Theorem 8.6.
**Lemma 8.8**.: _Let \(S,T\) be endofunctors on \(\mathcal{V}\), and let \(\alpha\colon S\to T\) be a cartesian natural transformation. If \(T\) is fibrewise discrete, then so is \(S\)._
Proof.: Consider the following composite of pullbacks:
We have \(\tau_{x,\alpha_{1}\circ p}\cong\sigma_{x,p}\), which is discrete for all \(x,p\).
### Free monoid monad on \(\mathsf{Set}\times\mathsf{Set}\):
We will confirm this monad is not fibrewise discrete. Indeed, we have the following pullback diagram
for each \(m,n\in\mathbb{N}\) and each set \(X\). However, \((X^{m},X^{n})\) is not discrete in general, so we cannot obtain a functor \(-\,\cdot\,1\colon(\overline{T},\,\mathcal{V})\mbox{-}\mathsf{Cat}\to\mathsf{Cat }(T,\,\mathcal{V})\) via Theorem 5.2.
### Cartesian monads on slice categories:
If we have a pair \((T,\mathcal{V})\) where \(T\) is a cartesian monad on a category \(\mathcal{V}\) with finite limits, and \(\mathscr{C}\) is an internal \(T\)-category, we may construct [33, Proposition 6.2.1] a cartesian monad \(T_{\mathscr{C}}\) on \(\mathcal{V}\downarrow\mathscr{C}_{0}\), and we obtain an equivalence [33, Corollary 6.2.5] of categories
\[\mathsf{Cat}(T_{\mathscr{C}},\mathcal{V}\downarrow\mathscr{C}_{0})\simeq \mathsf{Cat}(T,\mathcal{V})\downarrow\mathscr{C},\]
which raises the question: can we obtain (8.3) for the pair \((T_{\mathscr{C}},\mathcal{V}\downarrow\mathscr{C}_{0})\)?
Already when \(T=\mathsf{id}\), \(\mathcal{V}=\mathsf{Set}\), we cannot generally guarantee an affirmative answer. Indeed, let \(\mathcal{C}\) be an ordinary small category. In this case, \(T_{\mathcal{C}}\) is the cartesian monad induced by the monadic adjunction
However, \(\mathsf{Set}\downarrow\mathsf{ob}\,\mathcal{C}\cong[(\mathsf{ob}\,\mathcal{C} )\,\cdot\,1,\mathsf{Set}]\) is connected precisely when \(\mathsf{ob}\,\mathcal{C}\cong 1\) or \(\mathsf{ob}\,\mathcal{C}\cong 0\); in fact, we shall confirm that while it is true that \(T_{\mathcal{C}}\) is fibrewise discrete, \(\hat{\varepsilon}_{T_{\mathcal{C}}(-\cdot 1)}\) is not cartesian, and thus we cannot obtain (8.3) for general \(\mathcal{C}\).
Let \(X=\mathsf{ob}\,\mathcal{C}\). \(T_{\mathcal{C}}\) is defined on objects by
\[(A_{x})_{x\in X}\mapsto\big{(}\sum_{y\in X}A_{y}\times\mathcal{C}(x,y)\big{)} _{x\in X},\]
and the terminal object of \(\mathsf{Set}\downarrow X\) is precisely the constant family \(1=(1)_{x\in X}\). In this case, \(T_{\mathcal{C}}1=(\mathsf{ob}(x\downarrow\mathcal{C}))_{x\in X}\), while \(\overline{T_{\mathcal{C}}}1=\prod_{x\in X}\mathsf{ob}(x\downarrow\mathcal{C})\).
More generally, for a constant family \(A\,\cdot\,1\cong(A)_{x\in X}\), we have
\[T_{\mathcal{C}}(A\,\cdot\,1)\cong(A\times\mathsf{ob}(x\downarrow\mathcal{C}) )_{x\in X},\]
and \(\overline{T}_{\mathcal{C}}(A\,\cdot\,1)=A^{X}\times\prod_{x\in X}\mathsf{ob}(x \downarrow\mathcal{C})\). We have, for each \(x\in X\), a pair of pullback diagrams
which confirms that \(T_{\mathcal{C}}\) is fibrewise discrete, but, since we cannot guarantee \(A\cong A^{X}\), we cannot guarantee \(\hat{\varepsilon}_{T_{\mathcal{C}}(-\cdot 1)}\) to be cartesian either.
In spite of this, we can obtain the adjunction (8.3) when \(\mathscr{C}_{0}\) is terminal; that is, when \(\mathscr{C}\) is a _\((T,\mathcal{V})\)-monoid_. We will now treat the case \(T=(-)^{*}\), for which such monoids are denoted \(\mathcal{V}\)-operads.
### \(\mathcal{V}\)-operadic monads:
An important corollary of Lemmas 8.8 and 8.7 is that for a cartesian monoidal category \(\mathcal{V}\), \(\mathcal{V}\)-operadic monads are fibrewise discrete; note that these are precisely the cartesian monads on \(\mathcal{V}\) with a cartesian natural transformation to the free \(\times\)-monoid monad.
To be explicit, for a \(\mathcal{V}\)-operad \(\mathcal{O}\)[33, p. 44] the monad associated to \(\mathcal{O}\) is given on objects by
\[V\mapsto\sum_{n\in\mathbb{N}}\mathcal{O}_{n}\times V^{n},\]
and the projections \(\mathcal{O}_{n}\times V^{n}\to V^{n}\) induce a cartesian natural transformation to the free \(\times\)-monoid monad.
Thus, any pair \((T,\mathcal{V})\) where \(T\) is an operadic monad over a lextensive, connected category \(\mathcal{V}\) induces an adjunction (8.3). Of special interest is the case where \(T=(-)^{*}\) is the free \(\times\)-monoid monad on \(\mathcal{V}\). In this case, the induced \(\mathsf{Set}\)-monad \(\overline{T}\) is precisely the ordinary free monoid monad.
### Free category monad:
The free category monad \(\mathfrak{F}\) on \(\mathsf{Grph}\) is fibrewise discrete, since we have the following pullback of graphs:
so, for the pair \((\mathfrak{F},\mathsf{Grph})\), we also obtain an adjunction (8.3). We note that \(\overline{\mathfrak{F}}\) is a lax extension of the \(\mathbb{N}\times-\) monad on \(\mathsf{Set}\), for the _multiplicative_ structure of \(\mathbb{N}\).
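To spell out the computation behind this last claim (an illustrative aside): \(X\cdot 1\) is the graph with vertex set \(X\) and a single loop at each vertex, so the free category \(\mathfrak{F}(X\cdot 1)\) has hom-set \(\mathbb{N}\) at each vertex and no morphisms between distinct vertices, whence
\[\overline{\mathfrak{F}}X=\mathsf{Grph}(1,\mathfrak{F}(X\cdot 1))\cong\sum_{x\in X}\mathbb{N}\cong\mathbb{N}\times X.\]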
### Free finite coproduct completion monad:
For a category \(\mathcal{C}\), \(\mathsf{Fam}_{\mathsf{fin}}(\mathcal{C})\) is the category of _finite families_ of objects of \(\mathcal{C}\); it is given on objects by \((\mathsf{ob}\,\mathcal{C})^{*}\), and a morphism \(\mathfrak{x}\to\mathfrak{y}\) is a pair \((f,\phi)\), consisting of a function \(f\colon[m]\to[n]\), where \(m\) and \(n\) are the lengths of \(\mathfrak{x}\) and \(\mathfrak{y}\), respectively, and for each \(i=1,\ldots,m\), a morphism \(\phi_{i}\colon\mathfrak{x}_{i}\to\mathfrak{y}_{fi}\).
From [45, 5.16], we learn that \(\mathsf{Fam}_{\mathsf{fin}}\) is a cartesian (2-)monad on \(\mathsf{Cat}\). We proceed to verify it is fibrewise discrete; first, observe that \(\mathsf{ob}\,\mathsf{Fam}_{\mathsf{fin}}(X\mathbin{\boldsymbol{\cdot}}1)=X^{*}\), and the hom-sets are given by
\[\mathsf{Fam}_{\mathsf{fin}}(X\mathbin{\boldsymbol{\cdot}}1)(\mathfrak{x},\mathfrak{y})=\sum_{f\colon[m]\to[n]}\prod_{i=1}^{m}[\mathfrak{x}_{i}=\mathfrak{y}_{fi}],\]
where \(m\), \(n\) are the lengths of \(\mathfrak{x}\), \(\mathfrak{y}\) respectively. Moreover, note that \(\mathsf{Fam}_{\mathsf{fin}}(1)\simeq\mathsf{FinSet}\). The fiber of \(\mathsf{Fam}_{\mathsf{fin}}(X\mathbin{\boldsymbol{\cdot}}1)\to\mathsf{FinSet}\) at (the identity on) \(n\) is given on objects by the set of families of size \(n\), and on morphisms by \([\mathfrak{x}=\mathfrak{y}]\cong\prod_{i=1}^{n}[\mathfrak{x}_{i}=\mathfrak{y}_{i}]\), which yields a discrete category; diagrammatically, we have
(8.9)
as we desired. Thus, the pair \((\mathsf{Fam}_{\mathsf{fin}},\mathsf{Cat})\) gives an adjunction (8.3) as well.
We note that \(\overline{\mathsf{Fam}_{\mathsf{fin}}}\) is a lax extension of the free monoid monad on \(\mathsf{Set}\).
### Free finite product completion monad:
The functor \((-)^{\mathsf{op}}\colon\mathsf{Cat}\to\mathsf{Cat}\) taking each category to its dual is its own adjoint, since we have \(\mathsf{Cat}(\mathcal{C}^{\mathsf{op}},\mathcal{D})\cong\mathsf{Cat}(\mathcal{C},\mathcal{D}^{\mathsf{op}})\), so, via Proposition 8.1, we can promptly verify that the functor
\[\mathcal{C}\mapsto\mathsf{Fam}_{\mathsf{fin}}^{*}(\mathcal{C})=\mathsf{Fam}_{ \mathsf{fin}}(\mathcal{C}^{\mathsf{op}})^{\mathsf{op}}\]
is a cartesian monad. For a category \(\mathcal{C}\), \(\mathsf{Fam}_{\mathsf{fin}}^{*}(\mathcal{C})\) has the same set of objects as \(\mathsf{Fam}_{\mathsf{fin}}(\mathcal{C})\), but a morphism \(\mathfrak{x}\to\mathfrak{y}\) is a pair \((f,\phi)\) consisting of a function \(f\colon[n]\to[m]\), where \(m\), \(n\) are the lengths of \(\mathfrak{x}\), \(\mathfrak{y}\) respectively, and \(\phi_{i}\colon\mathfrak{x}_{fi}\to\mathfrak{y}_{i}\) is a morphism for each \(i=1,\ldots,n\).
This monad is also fibrewise discrete; the only adjustment we need to make to the pullback diagram (8.9) is to replace \([\mathfrak{x}_{i}=\mathfrak{y}_{fi}]\) with \([\mathfrak{x}_{fi}=\mathfrak{y}_{i}]\), so the pair \((\mathsf{Fam}_{\mathsf{fin}}^{*},\mathsf{Cat})\) induces an adjunction (8.3).
### Free symmetric strict monoidal category monad:
For a category \(\mathcal{C}\), we let \(\mathfrak{S}\mathcal{C}\) be a subcategory of \(\mathsf{Fam}_{\mathsf{fin}}(\mathcal{C})\) with the same set of objects, and precisely those morphisms \((f,\phi)\colon\mathfrak{x}\to\mathfrak{y}\) such that \(f\) is a bijection.
This was shown to be a cartesian monad, for instance, in [33], or in [45, Example 7.5], where it was shown that we have a cartesian (2-)natural transformation \(\mathfrak{S}\to\mathsf{Fam}_{\mathsf{fin}}\). For this same reason, it is fibrewise discrete, by Lemma 8.8, giving us another example of an adjunction (8.3), with the pair \((\mathfrak{S},\mathsf{Cat})\).
Furthermore, note that \(\overline{\mathfrak{S}}\) is also a lax extension of the free monoid monad on \(\mathsf{Set}\).
## 9. Embedding
Throughout this section, we fix a lextensive category \(\mathcal{V}\), and a cartesian monad \(T=(T,m,e)\) on \(\mathcal{V}\). Following the notation from Section 8, we denote by \(\overline{T}\) the monad on \(\mathcal{V}\)-\(\mathsf{Mat}\) induced by \(T\) on \(\mathsf{Span}(\mathcal{V})\).
Via the tools developed throughout the paper, we shall verify that if \(\mathcal{V}\) is connected, and \(T\) is fibrewise discrete, then \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\to\mathsf{Cat}(T,\,\mathcal{V})\) is a fully faithful, pullback-preserving functor. Moreover, among the pairs \((T,\mathcal{V})\) satisfying these hypotheses at the end of Section 8, we shall provide a description of \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\) and \(\mathsf{Cat}(T,\,\mathcal{V})\).
**Lemma 9.1**.: _If \(\hat{\varepsilon}_{T(-\cdot 1)}\) has a strong conjoint, then we have an adjunction_
(9.1)
_whose unit and counit are also denoted by \(\hat{\eta}\) and \(\hat{\varepsilon}\), respectively._
Proof.: By hypothesis, \(-\cdot\,1\) is a strong functor, and \(\hat{\varepsilon}_{T(-\cdot 1)}\) has a strong conjoint. Since \(T\) is a strong functor, we deduce that \(T\hat{\varepsilon}_{T(-\cdot 1)}\) has a strong conjoint as well. This places us in the setting of Section 6, hence, we obtain (9.1) by applying Theorem 6.1 to the conjunction (8.2).
Henceforth, we shall assume that \(\mathcal{V}\) is a connected category, and that \(T\) is fibrewise discrete.
**Theorem 9.2**.: \(-\cdot\,1\colon(\overline{T},\,\mathcal{V})\mbox{-}\mathsf{Cat}\to\mathsf{Cat }(T,\,\mathcal{V})\) _is fully faithful._
Proof.: By Lemma 7.7, we know \(-\cdot\,1\colon\mathcal{V}\mbox{-}\mathsf{Mat}\to\mathsf{Span}(\mathcal{V})\) is fully faithful, and since \(\mathcal{V}(1,\hat{\varepsilon}_{T(-\cdot 1)})\) is a natural isomorphism, the result follows by Corollary 6.4.
These results can be immediately applied to the last four examples in Section 8; we shall describe both \((\overline{T},\,\mathcal{V})\mbox{-}\mathsf{Cat}\) and \(\mathsf{Cat}(T,\,\mathcal{V})\) for each such pair \((T,\mathcal{V})\).
### \(\mathcal{V}\)-operadic multicategories:
Let \(T=T_{\mathfrak{O}}\) be a monad induced by a \(\mathcal{V}\)-operad \(\mathfrak{O}\). When \(\mathcal{V}\) is connected, we have shown that \(T\) is fibrewise discrete, and therefore \((T,\mathcal{V})\) induces an adjunction (9.1). So, we conclude that \(-\,\cdot\,1\colon(\overline{T},\,\mathcal{V})\mbox{-}\mathsf{Cat}\to\mathsf{Cat }(T,\,\mathcal{V})\) is fully faithful, by Theorem 9.2.
The induced monad \(\overline{T}\) on \(\mathsf{Set}\) is given on objects by
\[X\mapsto\sum_{n\in\mathbb{N}}\mathcal{V}(1,\mathfrak{O}_{n})\times X^{n},\]
and note that since \(\mathcal{V}(1,-)\colon\mathcal{V}\to\mathsf{Set}\) is a strong monoidal functor (preserves products), it follows that \(\mathcal{V}(1,\mathfrak{O})\) is a \(\mathsf{Set}\)-operad, so \(\overline{T}\) is an operadic monad as well.
Let \(r\colon X\twoheadrightarrow Y\) be a \(\mathcal{V}\)-matrix, and let \(\sigma\in\mathcal{V}(1,\mathfrak{O}_{m})\), \(\mathfrak{x}\in X^{m}\), \(\tau\in\mathcal{V}(1,\mathfrak{O}_{n})\), \(\mathfrak{y}\in Y^{n}\). The \(\mathcal{V}\)-matrix \(\overline{T}r\) is given at \((\sigma,\mathfrak{x},\tau,\mathfrak{y})\) by
\[(\overline{T}r)(\sigma,\mathfrak{x},\tau,\mathfrak{y})=\begin{cases}0&\text{if }\sigma\neq\tau,\\ \prod_{i=1}^{n}r(x_{i},y_{i})&\text{otherwise}\end{cases}\]
thus, in practice, we just write \((\overline{T}r)(\sigma,\mathfrak{x},\mathfrak{y})\) for the possibly non-initial values of \(\overline{T}r\).
The objects of \(\mathsf{Cat}(T,\,\mathcal{V})\) are (internal) operadic \(\mathcal{V}\)-categories, and for this reason, we will consider the objects of \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\) to be the _enriched_ operadic \(\mathcal{V}\)-categories. Such an object consists of
* a set \(X\) of objects,
* a \(\mathcal{V}\)-matrix \(a\colon\overline{T}X\times X\to\mathcal{V}\),
* a \(\mathcal{V}\)-morphism \(1\to a(ex,x)\) for each \(x\in X\),
* a \(\mathcal{V}\)-morphism \(a(\sigma,\mathfrak{x},x)\times\overline{T}a(\sigma,(\tau_{1},\mathfrak{y}_{1}),\ldots,(\tau_{k},\mathfrak{y}_{k}),\mathfrak{x})\to a(\sigma(\tau_{1},\ldots,\tau_{k}),\mathfrak{y}_{1}\cdots\mathfrak{y}_{k},x)\) for \(\tau_{i}\in\mathfrak{O}_{n_{i}}\), \(\mathfrak{y}_{i}\in X^{n_{i}}\), \(\sigma\in\mathfrak{O}_{k}\) and \(\mathfrak{x}\in X^{k}\), where \(m=n_{1}+\ldots+n_{k}\) is the arity of \(\sigma(\tau_{1},\ldots,\tau_{k})\),
satisfying suitable identity and associativity conditions.
Of particular interest may be the \(T=(-)^{*}\) free \(\times\)-monoid monad on \(\mathcal{V}\); more generally, monads induced by a _discrete_ operad \(\mathfrak{O}\) and the \(M\times-\) monad for \(M\) a \(\mathcal{V}\)-monoid.
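To illustrate the shape of this data, assume \(\mathfrak{O}\) is the terminal operad (\(\mathfrak{O}_{n}=1\) for all \(n\)), so that \(T=(-)^{*}\) and \(\overline{T}\) is the ordinary free monoid monad. Then the structure above amounts to objects \(a(\mathfrak{x},x)\) of \(\mathcal{V}\) for \(\mathfrak{x}\in X^{*}\) and \(x\in X\), together with units and compositions
\[1\to a(\langle x\rangle,x),\qquad a(\mathfrak{x},x)\times\prod_{i=1}^{k}a(\mathfrak{y}_{i},x_{i})\to a(\mathfrak{y}_{1}\cdots\mathfrak{y}_{k},x)\quad\text{for }\mathfrak{x}=\langle x_{1},\ldots,x_{k}\rangle,\]
which is the familiar notion of a \(\mathcal{V}\)-enriched multicategory (compare the discussion of \(\mathcal{V}\)-multicategories in Section 10).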
### \((\overline{\mathfrak{F}},\mathsf{Grph})\)-categories:
As we have verified in Section 8, the pair \((\mathfrak{F},\mathsf{Grph})\) consists of a fibrewise discrete monad on a connected, lextensive category, so \(-\,\cdot\,1\colon(\overline{\mathfrak{F}},\mathsf{Grph})\mbox{-}\mathsf{Cat}\to\mathsf{Cat}(\mathfrak{F},\mathsf{Grph})\) is fully faithful.
The objects of the category \(\mathsf{Cat}(\mathfrak{F},\mathsf{Grph})\) are precisely the virtual double categories [33, 15]. An enriched \((\overline{\mathfrak{F}},\mathsf{Grph})\)-category \(X\) consists of
* A set \(X_{0}\) of objects,
* A graph \(X_{1}(n,x,y)=(X_{11}(n,x,y)\rightrightarrows X_{10}(n,x,y))\) for each \(n\in\mathbb{N}\) and \(x,y\in X_{0}\),
* A loop \(1\to X_{1}(1,x,x)\) for each \(x\in X_{0}\),
* A graph morphism \(X_{1}(m,y,z)\times\mathfrak{F}_{m}X_{1}(n,x,y)\to X_{1}(m\cdot n,x,z)\) for each \(x,y,z\in X\) and \(m,n\in\mathbb{N}\), where \(\mathfrak{F}_{m}G\) is the graph of \(m\)-chains of \(G\).
satisfying suitable identity and associativity conditions.
Via the induced functor \(\mathsf{Grph}(1,-)\colon\mathsf{VDbCat}\to(\overline{\mathfrak{F}},\mathsf{ Grph})\mbox{-}\mathsf{Cat}\), we can come across examples of \((\overline{\mathfrak{F}},\mathsf{Grph})\)-categories. If \(\mathbb{V}\) is a virtual double category, \(\mathsf{Grph}(1,\mathbb{V})\) consists of
* a set of objects \(\mathsf{Grph}(1,\mathbb{V}_{0})\), that is, the set of loops in \(\mathbb{V}_{0}\),
* for each \(n\in\mathbb{N}\) and loops \(r,s\) of \(\mathbb{V}_{0}\), a graph \(\mathbb{V}_{1}(n,r,s)\) has edges \(\theta\colon f\to g\) consisting of the \(2\)-cells of the form \(x
### Clubs:
We begin by considering the pair \((\mathfrak{S},\mathsf{Cat})\). The category \(\mathsf{Cat}(\mathfrak{S},\mathsf{Cat})\) is the so-called category of _enhanced symmetric multicategories_ in [33, p. 212], first defined by [1] (therein, these are called opetopes).
By analogy, we let
* \(\mathsf{Cat}(\mathsf{Fam}_{\mathsf{fin}},\mathsf{Cat})\) be the category of _enhanced cocartesian multicategories_, and
* \(\mathsf{Cat}(\mathsf{Fam}_{\mathsf{fin}}^{*},\mathsf{Cat})\) be the category of _enhanced cartesian multicategories_.
As we shall verify in Section 10, in each case \(T=\mathsf{Fam}_{\mathsf{fin}},\mathsf{Fam}_{\mathsf{fin}}^{*},\mathfrak{S}\), the category \((\overline{T},\mathsf{Cat})\)-\(\mathsf{Cat}\) is the full subcategory of \(\mathsf{Cat}(T,\mathsf{Cat})\) with discrete categories of objects. Therefore,
* \((\overline{\mathsf{Fam}_{\mathsf{fin}}},\mathsf{Cat})\)-\(\mathsf{Cat}\) is the category of _cocartesian multicategories_,
* \((\overline{\mathsf{Fam}_{\mathsf{fin}}^{*}},\mathsf{Cat})\)-\(\mathsf{Cat}\) is the category of _cartesian multicategories_,
* \((\overline{\mathfrak{S}},\mathsf{Cat})\)-\(\mathsf{Cat}\) is the category of _symmetric multicategories_.
## 10. Application to descent theory
We shall fix a lextensive, connected category \(\mathcal{V}\) with its cartesian monoidal structure, and a fibrewise discrete, cartesian monad \(T\) on \(\mathcal{V}\). Theorem 9.2 provides us with an embedding \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\to\mathsf{Cat}(T,\,\mathcal{V})\). Now, we desire to apply this result to study _effective descent morphisms_ in \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\). We promptly review the fundamental aspects of descent theory necessary to draw our desired conclusions. Afterwards, under a suitable hypothesis, we confirm that \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\) is the full subcategory of \(\mathsf{Cat}(T,\,\mathcal{V})\) with a discrete object-of-objects (Theorem 10.3), from which we deduce that \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\to\mathsf{Cat}(T,\,\mathcal{V})\) reflects effective descent morphisms, generalizing [35, 9.10 Lemma and 9.11 Theorem] to the generalized multicategory setting.
Let \(\mathcal{C}\) be a category with finite limits, and let \(p\colon x\to y\) be a morphism. We have a change-of-base adjunction
By the Benabou-Roubaud theorem [5], we obtain \(\mathsf{Desc}(p)\cong T^{p}\)-\(\mathsf{Alg}\), where \(T^{p}\) is the monad induced by the change-of-base adjunction.
Hence, we may consider the Eilenberg-Moore factorization of \(p^{*}\) in the following form:
We say that
* \(p\) is an _effective descent morphism_ if \(\mathcal{K}^{p}\) is an equivalence,
* \(p\) is a _descent morphism_ if \(\mathcal{K}^{p}\) is fully faithful,
* \(p\) is an _almost descent morphism_ if \(\mathcal{K}^{p}\) is faithful.
For categories \(\mathcal{C}\) with finite limits, descent morphisms are precisely the pullback-stable regular epimorphisms, and almost descent morphisms are precisely the pullback-stable epimorphisms. If \(\mathcal{C}\) is Barr-exact [2] or locally cartesian closed, then effective descent morphisms are precisely the descent morphisms. If \(\mathcal{C}\) is a topos, then effective descent morphisms are precisely the epimorphisms [25].
Effective descent morphisms were studied and characterized for \(\mathcal{C}=\mathsf{Top}\) in [39, 11], and for \(\mathcal{C}=\mathsf{Cat}\) in [32, 25], exhibiting their non-trivial nature.
Having fixed the terminology, we finish our preliminaries by recalling the following result of effective descent morphisms for pseudopullbacks of categories:
**Proposition 10.1** ([35, Theorem 1.6]).: _If we have a pseudopullback diagram of categories with pullbacks and pullback-preserving functors_
\[\begin{CD}\mathcal{A}@>{F}>{}>\mathcal{B}\\ @V{}V{G}V@V{}V{H}V\\ \mathcal{C}@>{}>{K}>\mathcal{D}\end{CD}\]
_and a morphism \(f\) of \(\mathcal{A}\) such that_
* \(Ff\) _and_ \(Gf\) _are effective descent morphisms, and_
* \(KFf\cong HGf\) _is a descent morphism,_
_then \(f\) is an effective descent morphism._
**Lemma 10.2**.: _If \((X\!\cdot\!1,a,\eta,\mu)\) is an internal \((T,\mathcal{V})\)-category, then \(\hat{\varepsilon}_{a}\) is a split epimorphism. Moreover, if \(\hat{\varepsilon}_{T1}\) is a monomorphism, then \(\hat{\varepsilon}_{a}\) is an isomorphism._
Proof.: We consider the unique morphism \((X\!\cdot\!1,a,\eta,\mu)\to(1,e_{1}^{*},\eta,\mu)\) to the terminal \((T,\mathcal{V})\)-category
and we note that \(e_{1}=\hat{\varepsilon}_{T1}\circ(\overline{e}_{1}\!\cdot\!1)\), so that there exists a unique \(\hat{l}_{a}\!:M_{a}\to\overline{T}X\!\cdot\!1\) such that \(\hat{\varepsilon}_{T(X\!\cdot\!1)}\circ\hat{l}_{a}=l_{a}\) and \(\overline{T}\!\cdot\!1\circ\hat{l}_{a}=(\overline{e}_{1}\!\cdot\!1)\circ\)!:
(10.1)
It follows that there is a unique \(\omega\!:M_{a}\to M_{V(1,a)\!\cdot\!1}\) such that \(\hat{\varepsilon}_{a}\circ\omega=\mathsf{id}\) and \((\hat{l}_{a},r_{a})=(l_{a},r_{a})\circ\omega\),
(10.2)
thereby confirming \(\hat{\varepsilon}_{a}\) is a split epimorphism.
Moreover, observe that when \(\hat{\varepsilon}_{T1}\) is a monomorphism, it follows by the pullback square in (10.1) that \(\hat{\varepsilon}_{T(X\!\cdot\!1)}\) is a monomorphism, and by the pullback square in (10.2), we may conclude that \(\hat{\varepsilon}_{a}\) is a monomorphism. Thus, \(\hat{\varepsilon}_{a}\) is an isomorphism.
As a corollary, we obtain
**Theorem 10.3**.: _If \(\hat{\varepsilon}_{T1}\) is a monomorphism, then we have a pseudopullback diagram_
(10.3)
_of categories with pullbacks and pullback-preserving functors._
Proof.: We begin by observing that the objects of the pseudopullback are pairs \((S,(X,a,\eta,\mu),\omega)\) where \(S\) is a set, \((X,a,\eta,\mu)\) is an internal \((T,\mathcal{V})\)-category, and \(\omega\!:S\!\cdot\!1\to X\) is an isomorphism. Naturally,
this implies that \(\hat{\varepsilon}_{X}\) is an isomorphism, since \(\hat{\varepsilon}_{S\cdot 1}\) is invertible:
and conversely, for any internal \((T,\mathcal{V})\)-category \((Y,b,\eta,\mu)\) such that \(\hat{\varepsilon}_{Y}\) is invertible, the triple
\[(\mathcal{V}(1,Y),(Y,b,\eta,\mu),\hat{\varepsilon}_{Y})\]
is an object of the pseudopullback.
Hence, given a \((T,\mathcal{V})\)-category \((X,a,\eta,\mu)\) such that \(\hat{\varepsilon}_{X}\) is invertible, we have by Lemma 10.2 that \(\hat{\varepsilon}_{a}\) is invertible, since \(\hat{\varepsilon}_{T1}\) is a monomorphism by hypothesis. By Lemma 8.2, it follows that \(\mathfrak{n}_{a}^{\hat{\varepsilon}^{*}}\) is invertible, so that we can apply Lemma 6.2 to conclude that \((X,a,\eta,\mu)\) is isomorphic to an enriched \((\overline{T},\mathcal{V})\)-category, concluding the proof.
From this, we can now apply Proposition 10.1 to conclude that
**Lemma 10.4**.: _If \(\hat{\varepsilon}_{T1}\) is a monomorphism, then \(-\,\cdot\,1\colon(\overline{T},\,\mathcal{V})\text{-}\mathsf{Cat}\to\mathsf{Cat }(T,\,\mathcal{V})\) reflects effective descent morphisms._
Proof.: Let \(F\colon\mathcal{C}\to\mathcal{D}\) be a functor of enriched \((\overline{T},\mathcal{V})\)-categories such that \(F\mathbin{\boldsymbol{\cdot}}1\) is an effective descent morphism. Since \(\mathsf{Cat}(T,\,\mathcal{V})\to\mathcal{V}\) has fully faithful left and right adjoints, we may apply [37, Lemma 2.3] to conclude that it preserves descent morphisms, so that \((F\mathbin{\boldsymbol{\cdot}}1)_{0}=F_{0}\mathbin{\boldsymbol{\cdot}}1\) is a descent morphism.
Since \(-\,\cdot\,1\colon\mathsf{Set}\to\mathcal{V}\) reflects epimorphisms, we conclude that \(F_{0}\) is an epimorphism; hence an effective descent morphism. Now, we apply Proposition 10.1 with the pseudopullback (10.3) to conclude \(F\) is effective for descent.
Via [38, Theorem 5.3], which provides sufficient conditions for effective descent morphisms in \(\mathsf{Cat}(T,\,\mathcal{V})\) in terms of effective descent in \(\mathcal{V}\), we can now do the same for \((\overline{T},\,\mathcal{V})\text{-}\mathsf{Cat}\):
**Theorem 10.5**.: _Let \(p\colon\mathcal{C}\to\mathcal{D}\) be a functor of \((\overline{T},\mathcal{V})\)-categories. If \(\hat{\varepsilon}_{T1}\) is a monomorphism, and_
* \((p\mathbin{\boldsymbol{\cdot}}1)_{1}\) _is an effective descent morphism,_
* \((p\mathbin{\boldsymbol{\cdot}}1)_{2}\) _is a descent morphism,_
* \((p\mathbin{\boldsymbol{\cdot}}1)_{3}\) _is an almost descent morphism,_
_then \(p\) is an effective descent morphism._
Proof.: The three above conditions guarantee that \(p\mathbin{\boldsymbol{\cdot}}1\) is an effective descent functor of (internal) \((T,\mathcal{V})\)-categories. Since \(\hat{\varepsilon}_{T1}\) is a monomorphism, we can apply Lemma 10.4 to obtain the promised conclusion.
Now, the above work raises (at least) the following two questions:
* For which pairs \((T,\mathcal{V})\) can we guarantee that \(\hat{\varepsilon}_{T1}\) is a monomorphism?
* Is the requirement that \(\hat{\varepsilon}_{T1}\) be a monomorphism "reasonable"?
To answer the first, we note that this holds when
* the terminal object is a _separator_; that is, when \(\mathcal{V}(1,-)\) is faithful, which implies \(\hat{\varepsilon}\) is a componentwise monomorphism. This is the case when \(\mathcal{V}=\mathsf{Set},\mathsf{Top},\mathsf{Cat}\), any hyperconnected Grothendieck topos, but not \(\mathcal{V}=\mathsf{Grph}\).
* \(T\) is discrete; that is, when \(\hat{\varepsilon}_{T1}\) is an isomorphism. This is the case when \(T\) is the free \(\times\)-monoid monad on \(\mathcal{V}\), but not when \(T=\mathfrak{F}\).
And this, in a sense, answers the second question as well: from a practical perspective, the above conditions are sufficient for nearly all of our examples. And while we haven't confirmed whether the condition "\(\hat{\varepsilon}_{T1}\) is a monomorphism" is necessary or not, we can provide a heuristic argument to convey the intuition that this condition correctly captures that \(\overline{T}1\mathbin{\boldsymbol{\cdot}}1\) is a "good" discretization of \(T1\): a pair which satisfies neither of the above hypotheses is the pair \((\mathfrak{F},\mathsf{Grph})\), as \(\mathsf{ob}\,\hat{\varepsilon}_{\mathfrak{F}1}\colon\mathbb{N}\to 1\); here, \(\overline{\mathfrak{F}}1=\mathsf{Grph}(1,\mathfrak{F}1)\) has too many points to be a "reasonable" discretization.
We now discuss the examples we have worked with so far.
### \(\mathcal{V}\)-operadic \(\mathcal{V}\)-categories:
Let \(\mathfrak{O}\) be a \(\mathcal{V}\)-operad, so that the \(\mathcal{V}\)-operadic monad \(T=T_{\mathfrak{O}}\) induced by \(\mathfrak{O}\) is given by
\[X\mapsto\sum_{n\in\mathbb{N}}\mathfrak{O}_{n}\times X^{n}.\]
Since \(\mathcal{V}(1,-)\) preserves coproducts, we have
\[\mathcal{V}(1,\sum_{n\in\mathbb{N}}\mathfrak{O}_{n})\cong\sum_{n\in\mathbb{N }}\mathcal{V}(1,\mathfrak{O}_{n}),\]
and therefore \(\hat{\varepsilon}_{T1}\cong\sum_{n\in\mathbb{N}}\hat{\varepsilon}_{\mathfrak{ O}_{n}}\). It is easy to verify that in an extensive category, a coproduct of morphisms is a monomorphism if and only if every summand is a monomorphism, so we may apply Theorem 10.5 when \(\hat{\varepsilon}_{\mathfrak{O}_{n}}\) is a monomorphism for all \(n\in\mathbb{N}\).
Naturally, the result holds if we consider \(\mathcal{V}\)-operads for categories \(\mathcal{V}\) whose terminal is a separator, such as \(\mathsf{Cat}\) or \(\mathsf{Top}\), or if we consider _discrete_\(\mathcal{V}\)-operads; that is, \(\mathcal{V}\)-operads such that \(\hat{\varepsilon}_{\mathfrak{O}_{n}}\) is an isomorphism for all \(n\in\mathbb{N}\).
However, the above conditions are not necessary, if, for instance, one considers \(\mathsf{Grph}\)-operads \(\mathfrak{O}\) such that \(\mathfrak{O}_{n}\) has at most one loop at each vertex for all \(n\in\mathbb{N}\); this is precisely the case when \(\hat{\varepsilon}_{\mathfrak{O}_{n}}\) is a monomorphism for all \(n\in\mathbb{N}\).
If \(\mathfrak{O}\) is a discrete \(\mathcal{V}\)-operad, we define \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\) to be the category of enriched \(\mathfrak{O}\)-categories.
### \(\mathcal{V}\)-multicategories:
An important instance of the previous case is the case \(\mathfrak{O}\cong\mathbb{N}\boldsymbol{\cdot}1\); that is, when \(T\) is the free \(\times\)-monoid monad on \(\mathcal{V}\). In this case, \(\mathsf{Cat}(T,\,\mathcal{V})\) is the category of _multicategories internal to \(\mathcal{V}\)_. We note that \(T\) is a discrete monad, and the induced \(\mathsf{Set}\)-monad \(\overline{T}\) is the ordinary free monoid monad.
Thus, we may define the objects of \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\) to be the _enriched_\(\mathcal{V}\)-multicategories, and the morphisms are the respective enriched \(\mathcal{V}\)-functors. An immediate application of Theorem 10.5 provides criteria for such an enriched \(\mathcal{V}\)-functor to be effective for descent.
### Clubs:
We consider the pair \((\mathfrak{S},\mathsf{Cat})\); the free symmetric strict monoidal category monad \(\mathfrak{S}\) on \(\mathsf{Cat}\). By Theorem 10.3, we recover the categories of (many-object) clubs considered in [29, 27] by taking the fibers of the fibration \((\overline{\mathfrak{S}},\mathsf{Cat})\)-\(\mathsf{Cat}\to\mathsf{Set}\) (see [15, 4.19]).
In fact, the above can be carried out for any fibrewise discrete monad \(T\) on \(\mathsf{Cat}\), as the terminal object of \(\mathsf{Cat}\) is a separator.
## 11. Epilogue
We gave a general description of change-of-base functors between horizontal lax algebras induced by monad (op)lax morphisms on the \(2\)-category \(\mathsf{PsDbCat}_{\mathsf{lax}}\), and with this description, we made the dichotomy between enriched and internal generalized multicategories explicit. As our main result, we have shown that enriched generalized multicategories are discrete, internal generalized multicategories, under suitable conditions. Moreover, we applied this result to study the effective descent morphisms of \((\overline{T},\,\mathcal{V})\)-\(\mathsf{Cat}\).
There is still a vast amount of open problems left to settle. For the remainder of this section, we will state a couple of these problems, sketch a possible approach to their solution, and highlight possible connections to other work.
### Object-discreteness
In [15, Section 8], the authors define and study the full subcategories of _normalized_ and _object-discrete_ horizontal lax \(T\)-algebras. Inspired by our Theorem 9.2, we sketch an argument, for an instance of [15, Theorem 8.7] for the equipment of _modules_ of a suitable equipment, via change-of-base.
If \(\mathbb{D}\) is an equipment whose hom-categories of the underlying bicategory have all coequalizers, which are preserved by horizontal composition, then we have an equipment \(\mathsf{Mod}(\mathbb{D})\) whose underlying category of objects is \(\mathbb{H}\operatorname{\mathsf{Lax}}\)-id-\(\mathsf{Alg}\), and horizontal \(1\)-cells are _modules_; see [33, Section 5.3], [42, Theorem 11.5]. In fact, \(\mathsf{Mod}\) defines a \(2\)-functor on a suitable full sub-\(2\)-category of equipments, hence if \(T\) is a monad on \(\mathbb{D}\), then \(\mathsf{Mod}(T)\) is a monad on \(\mathsf{Mod}(\mathbb{D})\). We have an inclusion
\[\mathsf{J}\colon\mathbb{D}\to\mathsf{Mod}(\mathbb{D}),\]
and this induces a monad oplax morphism \(T\to\mathsf{Mod}(T)\) with the unit comparison \(\mathsf{e}^{T}\). When \(T\) is normal, we may apply Theorem 5.2 to obtain a change-of-base functor
\[\mathfrak{I}\colon\,\mathbb{H}\operatorname{\mathsf{Lax}\text{-}}T\text{-} \operatorname{\mathsf{Alg}}\to\mathbb{H}\operatorname{\mathsf{Lax}\text{-} \mathsf{Mod}(T)\text{-}\operatorname{\mathsf{Alg}},\]
which identifies the full subcategory of "object-discrete" horizontal lax \(\mathsf{Mod}(T)\)-algebras as the category of horizontal lax \(T\)-algebras, so we partially obtain [15, Theorem 8.7], when \(T\) is normal.
### Monadicity of horizontal lax algebras
Let \(T=(T,m,e)\) be a monad on an equipment \(\mathbb{D}\) in \(\mathsf{PsDbCat}_{\mathsf{lax}}\), and let \(x\) be an object of \(\mathbb{D}\). We define \(\mathbb{H}\operatorname{\mathsf{Kl}}(T,x)\) to be the category whose objects are horizontal \(1\)-cells \(a\colon Tx\to x\), and morphisms are the globular \(2\)-cells between them.
If \(m\) has a strong conjoint, \(\mathbb{H}\operatorname{\mathsf{Kl}}(T,x)\) has a tensor product defined by
\[b\otimes a=b\cdot(Ta\cdot m_{x}^{*}),\]
which makes it into a _skew monoidal category_[43]. If we let \(\mathbb{H}\operatorname{\mathsf{Lax}\text{-}}T\text{-}\operatorname{ \mathsf{Alg}}(x)\) be the category of horizontal lax \(T\)-algebras with underlying object \(x\), it can be shown that \(\mathbb{H}\operatorname{\mathsf{Lax}\text{-}}T\text{-}\operatorname{ \mathsf{Alg}}(x)\) is the category of monoids of the skew monoidal category \(\mathbb{H}\operatorname{\mathsf{Kl}}(T,x)\)5. Therefore, we have a forgetful functor
Footnote 5: This construction is analogous to the definition of \(\mathbb{H}\operatorname{\mathsf{Lax}\text{-}}T\text{-}\operatorname{ \mathsf{Alg}}\) in [15] as monoids in \(\mathbb{H}\operatorname{\mathsf{Kl}}(T)\), adapted to the fixed-object case, and with \(\mathbb{D}\) an equipment.
\[\mathbb{H}\operatorname{\mathsf{Lax}\text{-}}T\text{-}\operatorname{ \mathsf{Alg}}(x)\to\mathbb{H}\operatorname{\mathsf{Kl}}(T,x),\]
and we can study its monadicity, by studying free monoids in skew monoidal categories, adapting the work of [17, 26, 30].
### \(\boldsymbol{2}\)-dimensional structure
As stated in the Introduction, we have not considered the \(2\)-dimensional structure of \(\mathbb{H}\operatorname{\mathsf{Lax}\text{-}}T\text{-}\operatorname{ \mathsf{Alg}}\). However, inspired by the fact that \((T,\,\mathcal{V})\text{-}\mathsf{Cat}\to\mathsf{Set}\) and \(\mathsf{Cat}(T,\,\mathcal{V})\to\mathcal{V}\) for suitable \(\mathcal{V}\) are fibrations, it might be interesting to explore whether \(\mathbb{H}\operatorname{\mathsf{Lax}\text{-}}T\text{-}\operatorname{ \mathsf{Alg}}\to\mathbb{D}\) is a double fibration (see [14]), as well as possible applications, even in the more general context where \(\mathbb{D}\) is a virtual double category.
### Other notions of change-of-base
We have already mentioned two notions of change-of-base that are not covered by Theorem 5.2 in Subsections 5.1 and 6.2. In fact, with an adequate notion of "monad morphism" \((F,\phi)\colon S\to T\) for \(S\) a lax monad on \(\mathbb{D}\), and \(T\) an oplax monad on \(\mathbb{E}\), we question if it is possible to expand the scope of the dichotomy between enriched and internal multicategories.
|
2306.17721 | HashMem: PIM-based Hashmap Accelerator | Hashmaps are widely utilized data structures in many applications to perform
a probe on key-value pairs. However, their performance tends to degrade with
the increase in the dataset size, which leads to expensive off-chip memory
accesses to perform bucket traversals associated with hash collision. In this
work, we propose HashMem, a processing-in-memory (PIM) architecture designed to
perform bucket traversals along the row buffers at the subarray level. Due to
the inherent parallelism achieved with many concurrent subarray accesses and
the massive bandwidth available within DRAM, the execution time related to
bucket traversals is significantly reduced. We have evaluated two versions of
HashMem, performance-optimized and area-optimized, which have a speedup of
49.1x/17.1x and 9.2x/3.2x over standard C++ map and hyper-optimized hopscotch
map implementations, respectively. | Akhil Shekar, Morteza Baradaran, Sabiha Tajdari, Kevin Skadron | 2023-06-30T15:07:35Z | http://arxiv.org/abs/2306.17721v1 | # HashMem : PIM-based Hashmap Accelerator
###### Abstract
Hashmaps are widely utilized data structures in many applications to perform a probe on key-value pairs. However, their performance tends to degrade with the increase in the dataset size, which leads to expensive off-chip memory accesses to perform bucket traversals associated with hash collision. In this work, we propose HashMem, a processing-in-memory (PIM) architecture designed to perform bucket traversals along the row buffers at the subarray level. Due to the inherent parallelism achieved with many concurrent subarray accesses and the massive bandwidth available within DRAM, the execution time related to bucket traversals is significantly reduced. We have evaluated two versions of HashMem, performance-optimized and area-optimized, which have a speedup of \(49.1x/17.1x\) and \(9.2x/3.2x\) over standard C\(++\) map and hyper-optimized hopscotch map implementations respectively.
Processing In Memory, HashMaps, In-Situ Computing, DRAM, Memory Systems
## 1 Introduction
As we move further into the digital age, the amount of data being generated and consumed daily is increasing at an unprecedented rate. One of the popular data structures for searching large datasets is a hashmap, due to its near-constant-time lookup performance. For large datasets, the hashmaps usually are not cache-resident, and hence, any lookup would be an expensive off-chip DRAM access to read the entire hash chain to perform a single probe.
By supporting hashmap lookup directly in memory, i.e. using a processing-in-memory (PIM) architecture, applications can avoid costly memory transfers between the processor and memory, leading to faster and more efficient lookups. Hashmaps are particularly well-suited to PIM architectures, as they are able to leverage the parallel processing capabilities of the memory to perform lookups and hash-chain traversals in constant time.
For a brief background, a hashmap is a data structure that allows efficient storage and retrieval of key-value pairs. It uses a hash function to compute an index into an array of buckets or slots from which the desired value can be found. In other words, a hashmap is an associative array that maps keys to values.
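For concreteness, the following is a minimal, generic sketch of a chained hashmap probe in C++ (our own illustration, not the implementation evaluated in this paper; the names `ChainedMap` and `probe` are ours). Each probe hashes the key to a bucket index and then walks that bucket's chain of key-value entries:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

struct Entry { uint64_t key; uint64_t value; };
using Bucket = std::vector<Entry>;              // one hash chain (bucket)

struct ChainedMap {
    std::vector<Bucket> buckets;
    explicit ChainedMap(std::size_t n) : buckets(n) {}

    std::size_t bucketIndex(uint64_t key) const { return key % buckets.size(); }

    // One probe: a hash computation followed by a traversal of the bucket.
    // For large maps, every step of this loop is typically an off-chip DRAM access.
    std::optional<uint64_t> probe(uint64_t key) const {
        const Bucket& b = buckets[bucketIndex(key)];
        for (const Entry& e : b)
            if (e.key == key) return e.value;
        return std::nullopt;                    // key not present
    }
};
```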
The computation of hashmaps is memory-bound because it heavily relies on accessing memory when the dataset cannot fit within the cache. When searching for a key in a hashmap, the hash function is used to compute the index of the bucket where the key is stored. The bucket is then accessed in memory to retrieve the value associated with the key. As the size of the hashmap increases, the number of memory accesses required to perform a lookup also increases, making it a memory-bound operation.
Processing-in-memory is a new architecture paradigm that breaks down the memory wall by integrating processing elements within the memory chips themselves. This eliminates the need for data to be transferred between memory and processor, leading to significant improvements in performance and energy efficiency [9].
In this paper, we propose HashMem, a PIM architecture designed to accelerate key-value probes on hashmaps. Our architecture comprises two versions: area-optimized and performance-optimized. Both place processing elements adjacent to each subarray in the DRAM, but the area-optimized provides one processing unit per subarray and operates on one value at a time, i.e., element-serial, bit-parallel; while the performance-optimized provides multiple processing units per subarray, operating on the entire row at once in an element-parallel but bit-serial fashion. Thus, in the latter case, the values are laid out in a column-oriented fashion, so that each row contains a single-bit slice from thousands of values, achieving high parallelism at the expense of requiring \(b\) steps in order to find a b-bit key.
These organizations are evaluated against standard C\(++\) map and hyper-optimized hopscotch map implementations and found to yield significant speedups compared to a server-grade CPU: \(17.1x/3.2x\) (area-optimized) and \(49.1x/9.2x\) (performance optimized) over standard C\(++\) map and hyper-optimized hopscotch map implementations respectively. The in-situ processing with PIM also minimizes energy spent on moving data, thus achieving energy savings in addition to the performance benefit. Quantifying the energy savings is left for future work.
## 2 Hashmem Architecture
We have designed and implemented a PIM architecture that leverages the lower access latency and inherent parallelism available within the DRAM structure at the subarray level to perform hashmap lookups at the subarray interface. Our key idea behind the design involves mapping an entire hash bucket to a subarray row within PIM memory. Each subarray row contains between 512-2048 _columns_, where a column is defined in terms of a multi-bit access length, with each such column being 4, 8 or 16 bits in length. Internally each of these subarrays could be broken down into mats for implementation purposes. However, we consider a subarray to be a set of mats that are activated in parallel for our understanding of the rest of the paper.
Since an entire hash bucket happens to be mapped to a subarray row, the entire bucket is activated into the row buffer when the subarray row is accessed. We designed processing elements (PEs) to sit at the edge of each subarray closer to the row buffer to perform lookups for necessary keys within the activated hash bucket. Each PE consists of (i) **comparison unit** to perform the comparison operation, (ii) **control unit or logic** to orchestrate and control the operations and (iii) **output register** to hold the resultant value associated with a matched key during the lookup operation. In our architecture, we propose two implementations of these comparison units, an _area-optimized_ version and a _performance-optimized_ version.
### Area-optimized version
The area-optimized version of HashMem performs the hash bucket traversal in an element-serial bit-parallel manner. Fig. 1 demonstrates the architecture for the area-optimized version. The PE accesses each of the activated hash bucket's key-value pairs sequentially from the row buffer and looks for a key match. Upon a match, the corresponding value would be stored within its **output register** to be read out later by the RLU.
### Performance-optimized version
As shown in Fig. 3, the performance-optimized version relies on placing many small comparison units below the row buffer to scan all the keys in parallel within a single or small number of clock ticks. Compared to the Area-optimized version, this offers higher performance with lower execution time but suffers from increased area overhead since many comparison units must be placed and pitch-mapped along the subarray row buffer. This version resembles the operation of Content Addressable Memory (CAM) except that this is operating on a DRAM row buffer at the subarray-level.
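The element-parallel, bit-serial scan can be sketched in software as follows. This is only a functional model of the comparison step, with 64-bit words standing in for segments of the row buffer; it is not the actual subarray logic.

```
#include <cstdint>
#include <vector>

// bitSlices[i] holds bit i of every key stored in the activated row
// (column-oriented layout), packed 64 keys per uint64_t word.
// Returns a mask whose set bits mark the columns whose key equals searchKey.
std::vector<uint64_t> matchKeys(const std::vector<std::vector<uint64_t>>& bitSlices,
                                uint32_t searchKey, int keyBits) {
    std::size_t words = bitSlices.empty() ? 0 : bitSlices[0].size();
    std::vector<uint64_t> match(words, ~0ULL);       // initially every column matches
    for (int bit = 0; bit < keyBits; ++bit) {        // one step per key bit
        uint64_t broadcast = ((searchKey >> bit) & 1u) ? ~0ULL : 0ULL;
        for (std::size_t w = 0; w < words; ++w)
            match[w] &= ~(bitSlices[bit][w] ^ broadcast);  // keep lanes whose bit agrees
    }
    return match;   // b steps for a b-bit key, all columns compared in parallel
}
```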
### Rank-Level Unit (RLU)
The HashMem architecture also involves the usage of a _Rank-level Unit (RLU)_, which acts as an intermediary orchestrating agent or command processor between the in-situ PIM processing elements and the host processor (CPU). The job of the RLU is to : (i) Propagate the key to be searched to the necessary subarray (ii) Orchestrate probing operations compliant with the DRAM timing parameters and architecture constraints (iii) Retrieve the output values after the probing operation is completed from the subarray units and buffer them before transferring them to the memory controller.
The RLU helps in the overall integration with the rest of the system by abstracting the PIM operations and interfacing with the Memory Controller (MC) with special PIM-capable extended DRAM commands. This ensures the host can support the PIM capabilities with minimal changes to its integrated memory controllers. The RLU is responsible for communicating with the in-situ subarray level PIM elements and orchestrating operations amongst them. It is analogous to the command processor (CP) that exists within GPUs to interface with its Streaming Multiprocessors (SMs) or Compute Units (CUs) and the PCIe bus. As shown in figure 2, since the RLU is mounted as a separate chip, the logic area overhead of the RLU does not affect the memory capacity of the DIMM, as observed in similar rank-level modifications done with AxDIMM [4].
### Virtualization
The CPU operates on physical addresses in the typical fashion. Hence, in order to make the PIM memory systems compatible with the current virtualization scheme, we rely on storing hash buckets at page granularity. With this, irrespective of the location of page within physical memory, when a hash bucket is accessed, the corresponding page containing the bucket is activated and its related subarray processing elements are enabled to perform the lookup operations. In scenarios where a page is co-located with other pages in the
Figure 1: Area-optimized HashMem Architecture
Figure 3: Performance-optimized HashMem Architecture
Figure 2: RLU mounted at rank-level
same row buffer, the page start and end addresses are communicated to the subarray PEs to access only the necessary address range. Using this scheme, traditional virtualization is still supported without the need for the CPU to concern itself with physical data placement to orchestrate PIM operations.
### Probing
Once the initial dataset is populated within the PIM memory, the CPU communicates the key to be probed in the respective hash bucket. The page table translation helps in locating the necessary rank (RLU) and subarray row that holds the hash bucket. The respective RLU receives a compute-capable DRAM command that informs it of the input key to be probed and the address of the page to be probed. The RLU, in turn, orchestrates the probing operation by activating the necessary subarray row and communicating to the in-situ subarray PE to perform the actual probing operation. The RLU later retrieves the value from the output register of the same subarray-level PE and passes it to the Memory Controller (MC) in a cache line format. The cache line can be padded with additional zeroes if the data being transferred is less than the size of a cache line. The MC, after receiving the data from the RLU, places it into the requested CPU's Last Level Cache (LLC) address. Once the CPU reads and extracts the value from the cache, the probing operation is complete.
Currently, deletion operations involve putting tombstone values at the place where a key-value pair is deleted, at the cost of wasted space. This is similar to software implementation of hashmaps. We aim to further investigate how to perform efficient deletion operations to reclaim the space back for further usage while using the hashmap on PIM.
Oftentimes, the load distribution of key-value pairs amongst the hash buckets is not equal, which might lead to some buckets having too many key-value pairs and some having too few. We ran a test-case scenario where we mapped the first 350,000 words of a dictionary into a hashmap and measured the length of each bucket. We observed a significant variance in the lengths, as shown in Fig. 4, demonstrating the under- and over-utilization of buckets.
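The same measurement can be reproduced on the host with the standard bucket interface of std::unordered_map; the word-list file name below is a placeholder.

```
#include <algorithm>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    std::ifstream in("words.txt");               // placeholder: one dictionary word per line
    std::unordered_map<std::string, int> map;
    std::string word;
    int id = 0;
    while (in >> word) map.emplace(word, id++);

    // Measure how the entries spread across buckets.
    std::size_t maxLen = 0, emptyBuckets = 0;
    for (std::size_t b = 0; b < map.bucket_count(); ++b) {
        std::size_t len = map.bucket_size(b);
        maxLen = std::max(maxLen, len);
        if (len == 0) ++emptyBuckets;
    }
    std::cout << "buckets=" << map.bucket_count()
              << " empty=" << emptyBuckets
              << " longest_chain=" << maxLen << '\n';
}
```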
**Under-utilized buckets** - If the page size is N bytes of memory, and the bucket occupies only P bytes of data (where P < N), then the remaining (N-P) bytes of the page are wasted and lead to inefficient memory usage. The page size N is dictated at the boot time without any prior knowledge of the dataset. The value of P (bucket size) is decided during the runtime and depends on the input dataset. Hence, there are bound to be certain buckets which are under-utilized and efforts could be made to map and fit two or more of such buckets into the same page. This helps in reorganising the memory and improving its utilization. However, care has to be taken to ensure proper bookkeeping of the relocated buckets. Also, a strict criterion that the bucket to be relocated is not split and fragmented across multiple pages is to be followed.
**Over-utilized Buckets** - Some hash buckets may exceed the allocated page size of N bytes and occupy P bytes (where P > N), resulting in an overflow situation that needs to accommodate an extra (P-N) bytes. In these scenarios, an extra page is allocated to accommodate the overflow data, and a bookkeeping structure is updated to record the presence of hash bucket across two or more pages which helps while performing a lookup. Essentially, having these extra pages spread across different channels and ranks helps in probing them in parallel, thereby improving the performance. This optimization could be introduced into the Memory Management Unit (MMU) to instruct it to spread pages containing overly-utilized buckets across different channels evenly to enable the parallel probing of pages. We have marked this as an avenue for future work with micro-architectural changes to be investigated to introduce to support for this optimization strategy.
Alternately, several works such as [8, 12], propose ideal hashing functions to counter this unequal distribution phenomenon when certain prior knowledge of the dataset is available to us.
### Memory Controller (MC) Changes
The Memory Controller needs to have the capability to differentiate between a conventional READ/WRITE operation and a PIM operation. Specifically, it needs to understand that a particular page being accessed is a hash bucket that needs traversal/lookup to occur on the PIM side and retrieve only the value associated with a specific key. This interaction with the MC should be invoked by the CPU, abstracted from the programmer, and exposed as a simple library call. Towards this end, it could be suggested as an extension to the ISA with a corresponding change to the micro-architectural implementation. Furthermore, we also suggest that the MC have capabilities to communicate with the RLU on the other side of the memory bus using special physical layer (PHY) commands that are pin-compatible with the existing DDR standards. The RLU present on the DIMM provides the necessary abstraction to
Figure 4: Length of Hashbuckets after mapping first 350,000 words in a dictionary
the memory controllers to understand and parse these special PIM commands and orchestrate the operations with its related in-situ subarray-level elements.
## 3 Programming Interfaces
We have provided snippets of code for insertion and lookup in the HashMem PIM-capable memory below. A pseudo-code is listed that provides the necessary abstraction for the programmer to utilize this PIM-capable memory using a high-level programming language. We make use of a bookkeeping structure that keeps track of the hash buckets and the pages that store them.
### Insertion Operation
The overall idea behind the insertion operation is to obtain the hash bucket (or page/pages the bucket is mapped to) that needs to store the key-value pair by performing a hash function on the key. However, there are several constraints, as mentioned in Section 2, that need to be looked after, especially with regard to bucket utilization and page overflow scenarios.
Initially, we obtain the page size from the system information using a library call. This information is usually stored in the operating system after the boot procedure has initialized the page tables and other virtualization structures. Next, we hash the key and decide the bucket and page in which to store the key-value pair. We then check whether the bucket is going to overflow while inserting the input key-value pair. If the bucket is not going to overflow, the key-value pair is inserted successfully. However, if the bucket is about to overflow, we call pim_malloc() to initialize a fresh page. We update a bookkeeping structure to reflect that a particular hash bucket extends to a new page; this ensures that any subsequent lookup operations probe the new page in addition to the prior existing page(s). Once the bookkeeping structure is updated, we store the key-value pair on the new page and provide the return code reflecting the successful insertion of the key-value pair.
```
// Returns PR_SUCCESS or PR_ERROR.
int MapInputKeyValuePairToHashMemPage(keyDataType inputKey, valueDataType inputValue) {
    pageSizeInBytes = getPageSizeFromSystemInfo();
    /* This page size is decided during boot time based on subarray structures.
       This information is retrieved using the runtime system information call. */

    numOfKVPairsPerPage = pageSizeInBytes / sizeOfEachKVPair;
    // Calculating number of KV pairs per page

    destinationPage = getHashValueFromHashingAlgorithm(inputKey);
    // Perform hashing operation to decide which page the key-value pair needs to go to

    currentNumOfKVPairsInPage = getCurrentPageSize(destinationPage);
    // Get the current page size in terms of how many key-value pairs it is currently holding

    /* Perform a check if the page can accommodate the key-value pair or not */
    if (currentNumOfKVPairsInPage < numOfKVPairsPerPage) {
        storeKVPairIntoPage(destinationPage, inputKey, inputValue);
        // Store the KV pair into the destinationPage
    }
    else { // if page is already full
        pim_page newPage;
        ret_code = pim_malloc(newPage);
        // Allocates a new page and assigns it to the newPage structure
        if (ret_code == PR_ERROR)
            return PR_ERROR;  // Page allocation failed, function exits

        updateBookkeepingStructure(newPage, destinationPage);
        /* Attaches and links the new page to the old page (destinationPage) in a linked-list
           fashion. This bookkeeping structure also informs that these two pages (new page and
           old page) need to be probed together when looking for keys, because the key can
           reside either in the new page or the old page. */

        storeKVPairIntoPage(newPage, inputKey, inputValue);
        // Store the key-value pair into the newPage
    }
    return PR_SUCCESS;
}
```
Listing 1: PIM Insertion Operation
### Lookup operation
The lookup operation is relatively simple and straightforward: the user provides an input key and expects the value associated with it to be returned. First, the hashing function computes the bucket and page to be probed based on the input key provided. Then, we make use of a special library call that performs a hash bucket lookup operation. In the background, the library call consults the bookkeeping structure to check the number of pages to probe and instructs the MMU to perform page-level probing operations that traverse the entire hash bucket using the subarray PIM processing elements. These PIM probing operations retrieve the value associated with the key and provide the return code reflecting a successful lookup operation.
```
valueDataType probeKey(keyDataType inputKey) {
    bucketToProbe = getHashValueFromHashingAlgorithm(inputKey);
    // Perform hashing to get the hash bucket to probe

    /* Variable to hold the value associated with the key-value pair */
    valueDataType outputValue = NULL;

    outputValue = pimProbeBucket(bucketToProbe);
    /* This first checks the bookkeeping structure as to how many pages to probe and then
       issues a special PIM probe command to the memory controller to perform probing on
       PIM-capable memory */

    /* If the value was not found */
    if (outputValue == NULL)
        return PR_ERROR;

    return outputValue;
}
```
## 4 Evaluation
In this section, we will discuss the performance and area overhead of the HashMem architecture and compare it with traditional CPU-based implementation baselines. The configuration of the hardware setup utilized in our analysis is given in Table 1.
### Benchmarking
Currently, there exists no standard benchmark to test exclusively for hashmap performance. Hence, we proposed our own microbenchmark to test hashmap performance on both the CPU and HashMem. The details of the microbenchmark and the baselines we compare against are provided in the following section. In order to estimate and model the HashMem performance, we analyzed the timing data gathered from prior works [14, 14, 7, 6].
#### 4.1.1 Microbenchmark
One of the goals of the microbenchmark is for the input dataset to be sufficiently large enough such that it flows out of the cache and into the DRAM. This ensures that the cache effects of the CPU are sufficiently eliminated and the expensive off-chip DRAM accesses are captured while performing lookups during microbenchmark run. Another goal is to generate random accesses, i.e., accessing random keys to eliminate any prefetching based on spatial locality.
Towards this end, we propose a microbenchmark consisting of 100 million key-value (KV) pairs with the key and the value occupying 4 bytes each. The key and the value are coded using the default uint32_t data type in C++. Hence, each KV pair occupies 8 bytes of data, and the overall memory footprint of the input dataset is 800 MB, large enough to overflow the L3 cache of most processors used in our testing scenario. Furthermore, 10% of the keys in the input dataset are probed, which equates to 10 million keys being searched for in the hashmap. The keys to be probed are selected at random and fed into both the CPU and PIM architectures simultaneously to assess the hashmap probing performance.
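A host-side version of this microbenchmark can be sketched as below. The sizes mirror the description above, but the sequential key layout and the random-number seed are illustrative assumptions rather than the exact setup used in our runs.

```
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <unordered_map>

int main() {
    const std::size_t numPairs  = 100'000'000;      // 100M 8-byte KV pairs (~800 MB payload)
    const std::size_t numProbes = numPairs / 10;    // probe 10% of the keys

    std::unordered_map<uint32_t, uint32_t> map;
    map.reserve(numPairs);
    for (uint32_t k = 0; k < numPairs; ++k) map.emplace(k, k + 1);

    // Random probe keys defeat prefetching based on spatial locality.
    std::mt19937 rng(42);
    std::uniform_int_distribution<uint32_t> pick(0, numPairs - 1);

    auto start = std::chrono::steady_clock::now();
    uint64_t checksum = 0;
    for (std::size_t i = 0; i < numProbes; ++i) checksum += map.at(pick(rng));
    auto stop = std::chrono::steady_clock::now();

    std::cout << "checksum=" << checksum << " probe_time_s="
              << std::chrono::duration<double>(stop - start).count() << '\n';
}
```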
Although this benchmark does not account for string values and other types of data, we envision that they could be pre-processed and dictionary-encoded into numerical values to be used in HashMem. Performing string value comparison or any regex operations at the subarray-level units would incur a very high area overhead and is avoided. Hence we consciously supported probing just numerical values with HashMem PIM architecture.
### Performance of different data structures on CPUs
Our initial research indicated that the C++ Standard Template Library (STL) "unordered_map" is the closest implementation of a hashmap. However, there are several other default C++ data structures that perform operations similar to a hashmap but are implemented using alternative techniques. One of the most popular examples is the "map" data structure, which is implemented as a red-black tree, a specialized binary search tree. We intended to compare against both unordered_map and map to assess the performance.
Apart from the default C++ libraries, there are various implementations that are more performance-optimized to yield better throughput in performing key-value pair lookups. Hop-scotch is one example that implements a hopscotch hashing algorithm [3] to resolve hashing collisions [13]. We found a popularly used repository online [2] that implemented this
| Property | Value |
| --- | --- |
| Processor Name | Intel(R) Xeon(R) Silver 4208 CPU (2.1 GHz) |
| Total Cores / Main Memory | 8 (16 threads) / 512 GB |
| L1D/L2/L3 Cache Size | 32 KB per core / 1 MB per core / 11.2 MB shared |
| HashMem | DDR4_8Gb_x16_3200 (Single Channel); 8 banks per rank, 128 subarrays per bank, 512 rows per subarray |

Table 1: Hardware Configuration
| Property | Value |
| --- | --- |
| Dataset | 100 million key-value pairs (800 MB); 10%, i.e., 10 million randomly selected keys probed |
| Points of Comparison | Standard C++ map (binary tree); C++ unordered_map (hashmap); hopscotch map (optimized hashmap) |

Table 2: Workload Overview
Figure 5: Probing Times of Different Data Structures
hashing mechanism that we could consider as a state-of-the-art baseline. Interestingly, we discovered that hopscotch_map is faster than Google's sparse-hash [10] implementation during our testing. We intended our evaluations to compare with a mix of candidates, hence our choice of these three software implementations.
The performance results of the three data structures chosen on the CPU are shown in Figure 5. Hopscotch is significantly faster than the other implementations by a wide margin, around \(5.3x\) and \(3.1x\) compared to C++ map and unordered_map, respectively. The map structure performs the worst, considering that it is implemented as a balanced binary search tree with many indirect accesses to traverse the tree both during insertions and probing; the lookup complexity of a map is \(O(\log_{2}n)\). Interestingly, we found that unordered_map rehashes itself when the number of elements to be inserted exceeds the load-factor threshold (by default, the number of buckets available), rather than attaching more nodes to extend each bucket.
### Area overhead
To estimate the area overhead of subarray-level PIM processing elements, we first obtain the area breakdown of the DDR4 chip (DDR4_8Gb_x8_3200) that is used to build the HashMem. Each subarray contains 512 rows, and there are 128 subarrays per bank. The comparator units of HashMem are implemented in RTL and the delay, area overheads are evaluated using Synopsys DC Compiler in 14nm. We use scaling factors from [11] to scale the results to 22nm.
#### 4.3.1 Performance optimized
For the performance-optimized version, there is a requirement to pitch-map the comparator units to fit within the section boundaries of the row buffer containing the key-value pair segments. This presents significant challenges related to the evaluation of this version and this is part of the future work to investigate HashMem further.
#### 4.3.2 Area optimized
The area-optimized version does not require significant efforts to pitch-map as was required with the performance-oriented version. Our estimate revealed that incorporating 64 additional ALUs, shared by 128 subarrays per bank, incurred only 5.26% area overhead.
## 5 Results
The performance of hashmap workloads depends on both the size and the distribution of the input dataset. The numbers reported are for the evaluation of the dataset as detailed in section 4.1.1 describing the microbenchmark.
Figure 6 highlights that our area-optimized HashMem version outperforms the standard map, unordered map, and hopscotch map implementations, achieving speedup values of approximately \(17.1x\), \(5.5x\), and \(3.2x\) respectively.
As shown in figure 6, our performance-optimized HashMem surpasses the standard map, unordered map, and hopscotch implementations by even greater factors of \(49.1x\), \(15.8x\), and \(9.2x\) respectively.
In both scenarios, HashMem outperforms the CPU baseline by a wide margin, even against the state-of-the-art hopscotch implementation. An interesting aspect of the experiment is that these results represent a single DRAM channel competing against a server-class CPU. We could parallelize the lookups across the independent memory channels and obtain further improvement in the HashMem performance. These results demonstrate the potential of PIM architectures, which have tremendous intrinsic parallelism and bandwidth that are not being harnessed by current Von Neumann-bottlenecked architectures.
## 6 Future Work
Our future work broadly aims at improving the architecture with a richer set of evaluations. We aim to look at a wider variety of datasets with different distribution patterns of key and value strings to assess the hashmap performance in each setting.
**Tiered-Latency DRAM.** The work in [5] splits the subarray into low-latency and higher-latency regions by placing isolation transistors to alter the DRAM READ access timings. We aim to leverage this work to map our hashmap buckets into the low-latency region for faster lookups. In a conventional architecture, mapping several elements to a smaller subset of buckets could degrade the performance. However, due to the parallelism provided by the PIM processing elements and the lower latency offered by Tiered-Latency DRAM, it could improve our performance significantly. Moreover, the lower the number of buckets available within a hashmap, the higher the probability of a row hit, which further leads to improved performance and decreased latency.
**Channel-level Parallelism.** Memory channels are independent and parallel READ / WRITE operations could be performed on each of the channels separately. Hence, if there are multiple keys to be probed, these probing operations could be parallelised amongst different channels to increase the throughput. However, this type of parallelism could be exploited only if the keys being probed belong to different channels.
**Energy Savings Analysis.** Due to the vastly reduced number of expensive off-chip data access over the memory bus, there are significant energy savings to be realised. This also
Figure 6: HashMem Speedup Against CPU
leads to reduced instruction overhead on the CPU related to performing a scan operation on the hash bucket traversals.
**Data Types**. Currently our evaluations only support 32-bit int operations on both key and value. The limitation is with regards to the configuration of the subarray-level PIM elements. We aim to investigate re-configurable sizes for future evaluations to support other data types too.
**Hash Function.** We aim to investigate an optimum hashing mechanism that can evenly distribute the input dataset over several buckets of equal or near equal length. This is to reduce certain buckets from getting over-utilized and some from getting under-utilized. There are prior works in this area such as [8].
**Real-world kernels.** Many genomics, database and other applications extensively make use of hashmap structures in their kernels while implementing them. We aim to test our PIM architecture in these settings and observe the performance improvements at the application level.
## 7 Conclusions
We proposed HashMem, a subarray-level PIM architecture that accelerates hashmap lookups by leveraging the existing parallelism available within the DRAM structure. We have demonstrated that the two variants of HashMem, the area-optimized and performance-optimized ones, were able to outperform the state-of-the-art hopscotch hashmap and default C++ map implementations on the CPU by at least \(3.2x/9.2x\) and \(17.1x/49.1x\) respectively. The area overhead of the area-optimized version was found to be \(5.26\%\). We have laid out several directions as part of our future work to further investigate the potential of the HashMem architecture.
## 8 Acknowledgements
This work was supported in part by PRISM, one of seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA. We also thank the reviewers for their helpful feedback and suggestions.
|
2307.00160 | Representations of Color Lie Superalgebras by Hilbert Series | The representations of various color Lie superalgebras by Hilbert series are
the main topic of this work. Color Lie superalgebras appear in various branches of mathematics (e.g., topology, algebraic groups, etc.); they are generalized Lie superalgebras. The Hilbert series of a color Lie superalgebra is a generating function which encodes crucial information about the superalgebra's representation. In particular, it provides a way to count the number of states in a given degree. We present a dimension formula that resembles Witt's formula for free color Lie superalgebras and for a specific class
of color Lie p-superalgebras. | Shadi Shaqaqha | 2023-06-30T22:31:18Z | http://arxiv.org/abs/2307.00160v1 | **Representations of Color Lie Superalgebras by Hilbert Series**
## Abstract
The representations of various color Lie superalgebras by Hilbert series are the main topic of this work. Color Lie superalgebras appear in various branches of mathematics (e.g., topology, algebraic groups, etc.); they are generalized Lie superalgebras. The Hilbert series of a color Lie superalgebra is a generating function which encodes crucial information about the superalgebra's representation. In particular, it provides a way to count the number of states in a given degree. We present a dimension formula that resembles Witt's formula for free color Lie superalgebras and for a specific class of color Lie \(p\)-superalgebras.
**Keywords:** Hilbert series, color Lie superalgebras, free color Lie superalgebras, restricted color Lie superalgebras, superalgebra representations, vector space.
## 1 Introduction
Hilbert series of common Lie superalgebra representations is a topic in the field of algebra and representation theory. A Lie superalgebra is a mathematical structure that generalizes the concept of a Lie algebra, a vector space equipped with a binary operation that satisfies specific properties. Lie superalgebras are used in various mathematical and physical applications, including quantum mechanics, differential geometry, and string theory. The Hilbert series of an algebra is a special case of the Hilbert-Poincare series of a graded vector space [1]. Suppose \(V=\bigoplus_{k=0}^{\infty}V_{k}\) is a graded vector space such that all subspaces \(V_{k}\) are finite-dimensional. Its Hilbert-Poincare series is the formal power series in the indeterminate \(t\)
\[H(V,t)=\sum_{k=0}^{\infty}(\dim V_{k})t^{k}\]
Let \(V=\bigcup_{k=1}^{\infty}V^{k}\) be a filtered vector space such that \(\dim V^{k}<\infty\) for all \(k\in\mathbb{N}\). Set \(V^{0}=0\). The Hilbert-Poincare series of \(V\) is \(H(V)=H(V,t)=\sum_{k=1}^{\infty}\dim\left(V^{k}/V^{k-1}\right)t^{k}\). In other words, the Hilbert-Poincare series of a filtered space \(V\) is the same as that of the associated graded space: \(H(V,t)=H(grV,t)\).
Suppose \(L=\bigoplus_{n=1}^{\infty}L_{n}\) is a free Lie algebra of rank \(r\). The well-known Witt formula gives the dimensions of the homogeneous subspaces \(L_{n}\):
\[dimL_{n}=\frac{1}{n}\sum_{d\mid n}\mu(d)r^{\frac{n}{d}},\]
where \(\mu:\mathbb{N}\rightarrow\{-1,0,1\}\) is the Mobius function defined as follows. If \(n\) is divisible by the square of a prime number, we set \(\mu(n)=0\); otherwise, we set \(\mu(n)=(-1)^{k}\), where \(k\) is the number of prime divisors of \(n\) (with \(k=0\) for \(n=1\), so \(\mu(1)=1\)). Similar formulas exist for the homogeneous and multi-homogeneous components of free (color) Lie superalgebras. Petrogradsky discovered dimension formulas for free Lie \(p\)-algebras [2]. More broadly, suppose \(\Lambda\) is a countable abelian semigroup in which every element \(\lambda\in\Lambda\) can be written as a sum of other elements only in a finite number of ways. Let \(L=\bigoplus_{\lambda\in\Lambda}L_{\lambda}\) be a \(\Lambda\)-graded Lie algebra generated freely by \(X=\bigcup_{\lambda\in\Lambda}X_{\lambda}\). Kang and Kim discovered an analog of Witt's formula, known as the character formula, for the dimensions of the homogeneous components \(L_{\lambda}\), \(\lambda\in\Lambda\), in [3]. Shaqaqha also worked on free Lie superalgebras and their related formulas [4].
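As a quick illustration of Witt's formula above, take a free Lie algebra of rank \(r=2\) and \(n=6\):

\[\dim L_{6}=\frac{1}{6}\left(\mu(1)2^{6}+\mu(2)2^{3}+\mu(3)2^{2}+\mu(6)2\right)=\frac{64-8-4+2}{6}=9.\]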
According to Nielsen and Schreier's well-known Theorem, every subgroup of a free group is again free. Shirshov and Witt independently obtained a corresponding result for Lie algebras. On the other hand, subalgebras of the free associative algebra are not always free (for example, \(F[x^{2},x^{3}]\subseteq F[x]\) is not free) [5, 6].
According to Kukin [7], if an algebra \(A\) is generated by a finitely graded set \(X\), then \(A\) has a filtration (as an algebra) \(A=\bigcup_{i=1}^{\infty}A^{i}\), where \(A^{i}\) is spanned by all monomials of weight at most \(i\). We denote the corresponding series by \(H_{X}(A,t)\). If \(X\) generates \(A\) freely, then
\[H_{X}(A,t)=H(Y,t)=\sum_{i=1}^{\infty}|Y_{i}|t^{i},\]
where \(Y\) denotes the finitely graded set of all monomials in \(X\). If \(B\) is a subspace of \(A\), then the factor-space \(A/B\) gains a filtration as well:
\[(A/B)^{n}\ =\ (A^{n}+B)/B\ \cong\ A^{n}\ /(B\cap A^{n}\ ).\]
Petrogradsky defined an operator \(\mathcal{E}\) on \(Z[[t]]\) (the ring of formal power series in the indeterminate \(t\) over \(\mathbf{Z}\)) as follows in [8]:
\[\mathcal{E}:\sum_{i=0}^{\infty}a_{i}t^{i}\mapsto\prod_{i=0}^{\infty}\frac{1}{(1-t^{ i})^{a_{i}}}.\]
Then he presented a formal power series analogue of Schreier's formula for free Lie algebras. He demonstrated that if \(L\) is a free Lie algebra generated by a finitely graded set \(X\) and \(K\) is a subalgebra of \(L\), there exists a set of free generators \(Z\) of \(K\) such that.
\[H(Z)\ =\ (H(X)\ -\ 1)\ \mathcal{E}(H\ (L/K))\ +\ 1.\]
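For instance, if \(L\) is the free Lie algebra on \(X=\{x,y\}\) and \(K=[L,L]\) is its derived subalgebra, then \(H(X)=2t\) and \(H(L/K)=2t\), so the formula gives

\[H(Z)=(2t-1)\,\mathcal{E}(2t)+1=\frac{2t-1}{(1-t)^{2}}+1=\frac{t^{2}}{(1-t)^{2}}=\sum_{i=2}^{\infty}(i-1)t^{i},\]

that is, \(K\) has a set of free generators with \(i-1\) elements of degree \(i\) for each \(i\geq 2\) (one generator \([x,y]\) in degree \(2\), two in degree \(3\), and so on).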
## 2 Representations on Character Formulas for Color Lie Superalgebras
Let \(\Lambda\) be a countable additive abelian semigroup such that every element \(\lambda\in\Lambda\) can be written as a sum of other elements only in finitely many ways (the finiteness condition). In order to study (color) Lie (\(p\)-) superalgebras, we fix a homomorphism \(\kappa:\Lambda\ \rightarrow\ \mathbb{Z}_{2}=\{\pm 1\}\). This implies that \(\Lambda\) can be partitioned as
\[\Lambda\ =\ \Lambda_{+}\cup\ \Lambda_{-},\]
Where,
\[\Lambda_{\pm}=\{\lambda\ \in\ \Lambda\ |\ \kappa(\lambda)=\pm 1\}\,.\]
In this section, we consider \(\Lambda\)-graded color Lie superalgebras \(L=\bigoplus_{\lambda\in\Lambda}L_{\lambda}\), where for each \(g\in G_{+}\) (respectively, \(g\in G_{-}\)), we have \(L_{g}=\bigoplus_{\lambda\in\Lambda_{+}}\left(L_{\lambda}\cap L_{g}\right)\) (respectively, \(L_{g}=\bigoplus_{\lambda\in\Lambda_{-}}\left(L_{\lambda}\cap L_{g}\right)\)). The main purpose of this paper is to derive a dimension formula for the homogeneous subspaces of free color Lie superalgebras. We will also obtain similar results for a certain case of color Lie \(p\)-superalgebras.
### 2.1 Characters of Color Lie Superalgebras
Let \(U=\bigoplus_{\lambda\in\Lambda}\ U_{\lambda}\) be a \(\Lambda\)-graded space. The character of \(U\) is defined by
\[ch_{\Lambda}U=\sum_{\lambda\in\Lambda}\ (dimU_{\lambda})e^{\lambda}\,.\]
It is an element in \(\mathbb{Q}[[\Lambda]]\), the completion of the semigroup algebra \(\mathbb{Q}[\Lambda]\), whose basis consists of the symbols \(e^{\lambda}\), \(\lambda\in\Lambda\), with the multiplication \(e^{\lambda}e^{\mu}=e^{\lambda+\mu}\) for all \(\lambda,\mu\in\Lambda\). Gradings \(U=\bigoplus_{\lambda\in\Lambda}U_{\lambda}\) and \(V=\bigoplus_{\lambda\in\Lambda}V_{\lambda}\) induce gradings on the spaces \(U\oplus V\) and \(U\otimes V\):
\[(U\oplus V)_{\lambda}=U_{\lambda}\oplus V_{\lambda};\qquad(U\otimes V)_{\lambda}=\sum_{\lambda=u+v}(U_{u}\otimes V_{v}).\]
By the finiteness condition, the sum above is finite. The following Theorem holds.
**2.1.1. Theorem** \(ch_{\Lambda}(U\oplus V)=ch_{\Lambda}U+ch_{\Lambda}V\), and \(ch_{\Lambda}(U\otimes V)=ch_{\Lambda}U\,ch_{\Lambda}V\). A critical special case is \(\Lambda=\mathbb{N}\), where \(\mathbb{Q}[[\Lambda]]\) is the algebra of formal power series in one variable (without constant term).
### 2.2 Characters of Color Lie Superalgebras and Their Enveloping Algebras
Let \(L=L_{+}\oplus L_{-}\) be a free color Lie superalgebra generated by \(X\), where \(L_{\pm}=\bigoplus_{\lambda\in\Lambda_{\pm}}L_{\lambda}\) with \(\dim L_{\lambda}<\infty\) for all \(\lambda\in\Lambda\), over \(F\). The author considered a particular case of our grading in [9], namely the grading by \(\Lambda=\Gamma\times G\), where \(\Gamma\) is a countable additive abelian semigroup satisfying the following condition: every element \((\alpha,g)\in\Gamma\times G\) can be presented as a sum of other elements only in finitely many ways; moreover, \(\Lambda_{+}=\Gamma\times G_{+}\) and \(\Lambda_{-}=\Gamma\times G_{-}\). As before, the character of \(L\) with respect to the \(\Lambda\)-grading is
\[ch_{\Lambda}L=\sum_{\lambda\in\Lambda}(\dim L_{\lambda})e^{\lambda},\qquad ch_{\Lambda}L_{\pm}=\sum_{\lambda\in\Lambda_{\pm}}(\dim L_{\lambda})e^{\lambda}.\]
Note that the universal enveloping algebra is graded by \(\overline{\Lambda}=\Lambda\cup\{0\}\). We shall give here the proof of the following formula, established in [10], which relates the character of a color Lie superalgebra to that of its enveloping algebra.
**2.2.1. Lemma** Let \(L=L_{+}\ \bigoplus\ L_{-}\) be a \(\Lambda\)-graded color Lie superalgebra. Then
\[ch_{\overline{\Lambda}}U(L)=\frac{\prod_{\lambda\in\Lambda_{-}}\left(1+e^{\lambda}\right)^{\dim L_{\lambda}}}{\prod_{\lambda\in\Lambda_{+}}\left(1-e^{\lambda}\right)^{\dim L_{\lambda}}}.\]
Proof.: Let \(\{e_{\lambda}\ |\ \lambda\ \in\ \Lambda\}\) be a basis of the positive part \(L_{+}\) and \(\{f_{\mu}\ |\ \mu\ \in\ \Lambda\}\) be a basis of the negative part \(L_{-}\). \(U(L)\), as a vector space, is the tensor product of the polynomial algebra \(F[\dots,e_{\lambda},\dots]\) and the Grassmann algebra \(\Lambda[\dots,f_{\mu},\dots]\). Now, the result follows from Theorem 2.1.1.
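As a small illustration of Lemma 2.2.1, suppose \(L\) has a single even basis element \(u\) of degree \(\lambda\in\Lambda_{+}\) and a single odd basis element \(w\) of degree \(\mu\in\Lambda_{-}\). The PBW basis of \(U(L)\) consists of the monomials \(u^{a}\) and \(u^{a}w\) with \(a\geq 0\), so

\[ch_{\overline{\Lambda}}U(L)=\sum_{a\geq 0}\left(e^{a\lambda}+e^{a\lambda+\mu}\right)=\frac{1+e^{\mu}}{1-e^{\lambda}},\]

in agreement with the lemma.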
The super dimension of the homogeneous subspace \(L_{\lambda}\) is defined by
\[sdimL_{\lambda}=\kappa(\lambda)\dim L_{\lambda},\ \lambda\in\Lambda.\]
Note that
\[ch_{\Lambda}L=\sum_{\lambda\in\Lambda}\ (sdimL_{\lambda})E^{\lambda}\in \mathbb{Q}[[\Lambda]],\]
where \(E^{\lambda}=\kappa(\lambda)e^{\lambda}\). It is convenient to define the following operation, called the _twisted dilation_, on \(\mathbb{Q}[[\overline{\Lambda}]]\):
\[[m]:\sum_{\lambda\in\Lambda}f_{\lambda}E^{\lambda}\mapsto\sum_{\lambda\in\Lambda}f_{\lambda}E^{m\lambda},\qquad m\in\mathbb{N}.\]
**2.2.2. Lemma**
1. \(f^{\ [1]}=f\),
2. _the dilation_ \(f\mapsto f^{[m]}\) _is an endomorphism of the algebra_ \(\mathbb{Q}[[\overline{\Lambda}]]\) _,_
3. \(\left(f^{[m]}\ \right)^{[n]}=\left(f^{\ [n]}\ \right)^{[m]}=f^{[mn]}\) _for all_ \(m,n\ \in\mathbb{N}\)_._
_Let us define the following two operators over formal series:_
\[\begin{array}{l}\mathcal{E}:\mathbb{Q}[[\Lambda]]\to 1+\mathbb{Q}[[\Lambda]]:\ f\mapsto\exp\left(\sum_{m=1}^{\infty}\frac{1}{m}f^{[m]}\right),\\ \mathcal{L}:1+\mathbb{Q}[[\Lambda]]\to\mathbb{Q}[[\Lambda]]:\ f\mapsto\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\ln f^{[n]}.\end{array}\]
The following lemma, proved by Petrogradsky in [11], shows that the operators above are similar to the exponential and logarithm.
**2.2.3. Lemma**
1. The mappings \(\mathcal{E}\) and \(\mathcal{L}\) are well-defined and mutually inverse,
2. \(\mathcal{E}(f_{1}\ +\ f_{2})=\mathcal{E}(f_{1})\mathcal{E}(f_{2}),f_{1},f_{2}\in \mathbb{Q}[[\Lambda]]\),
3. \(\mathcal{L}(f_{1}f_{2})\ =\ \mathcal{L}(f_{1})\ +\ \mathcal{L}(f_{2}),f_{1},f_{2}\in 1 \ +\ \mathbb{Q}[[\Lambda]]\).
Lemma 2.2.1 was used by Petrogradsky to prove the following Theorem.
**2.2.4. Theorem**_Let \(L=\bigoplus_{\lambda\in\Lambda}L_{\lambda}\) be a \(\Lambda\)-graded color Lie superalgebra, and U(\(L\)) be its enveloping algebra [11]. Then_
1. \(ch_{\overline{\Lambda}}U(L)=\mathcal{E}(ch_{\Lambda}L)\)_,_
2. \(ch_{\Lambda}\ L=\mathcal{L}(ch_{\overline{\Lambda}}U(L))\)_._
### 2.3 \(G\)-Characters of Color Lie Superalgebras and Their Enveloping Algebra
Assume that the \(G\)-grading on \(L\) is determined by the \(\Lambda\)-grading in the following sense: there exists a homomorphism \(\kappa_{G}:\Lambda\to G\) such that \(L_{g}=\bigoplus_{\lambda\in\Lambda,\ \kappa_{G}(\lambda)=g}L_{\lambda}\) for every \(g\in G\). Define \(v:G\to\mathbb{Z}_{2}=\{\pm 1\}\) by
\(v(g)=1\) (respectively, \(-1\)) if \(g\in G_{+}\) (respectively, \(g\in G_{-}\)). In this case, we can define the \(G\)-character of \(L=\bigoplus_{\lambda\in\Lambda}L_{\lambda}\), where \(\dim L_{\lambda}<\infty\) for all \(\lambda\in\Lambda\), as follows
\[ch_{\Lambda}^{G}L=\sum_{\lambda\in\Lambda}(\dim L_{\lambda})\,\kappa_{G}(\lambda)\,e^{\lambda}\in\mathbb{Q}[G][[\Lambda]],\]
where \(\mathbb{Q}[\text{G}]\) is the group algebra of \(G\) with coefficients in \(\mathbb{Q}\) and \(\mathbb{Q}[G][[\Lambda]]\) is the completion of the semigroup algebra \(\mathbb{Q}[\text{G}][\Lambda]\). For \(\lambda\in\Lambda\), we set \(sdimL_{\lambda}=v(\kappa_{G}(\lambda))dimL_{\lambda}\) and color super dimension \(csdimL_{\lambda}=\kappa_{G}(\lambda)sdimL_{\lambda}\). Now, the twisted dilation is defined by
\[[m]:\sum_{\lambda\in\Lambda}r_{\lambda}g_{\lambda}E^{\lambda}\mapsto\sum_{\lambda\in\Lambda}r_{\lambda}g_{\lambda}^{m}E^{m\lambda},\qquad r_{\lambda}\in\mathbb{Q},\ g_{\lambda}\in G,\ m\in\mathbb{N},\]
where \(E^{\lambda}=v\big(\kappa_{G}(\lambda)\big)\kappa_{G}(\lambda)e^{\lambda}\). The \(G\)-character of \(L\) can also be written as
\[ch_{\Lambda}^{G}L=\sum_{\lambda\in\Lambda}(sdimL_{\lambda})E^{\lambda}.\]
We have the following properties of the twisted dilation operator.
**2.3.1. Lemma**
1. The dilation \(f\mapsto f^{[m]}\) is an endomorphism of the algebra Q[G][[\(\Lambda\)]],
2. \(\big{(}f^{[m]}\big{)}^{[n]}=\big{(}f^{[n]}\big{)}^{[m]}=f^{[mn]}\)_for all \(m,n\ \in\mathbb{N}.\)_
3. \(\big{(}\sum_{\lambda\in\Lambda}r_{\lambda}g_{\lambda}e^{\lambda}\big{)}^{[m]} =\ \sum_{\lambda\in\Lambda}r_{\lambda}g_{\lambda}^{m}\left(\nu\big{(} \kappa_{G}(\lambda)\big{)}\right)^{m+1}e^{m\lambda},r_{\lambda}\ \in\ \mathbb{Q},g_{\lambda}\ \in\ G.\)__
_Proof._ It is clear that the first two properties hold. Hence it remains to prove the last claim.
\[\begin{aligned}\left(\sum_{\lambda\in\Lambda}r_{\lambda}g_{\lambda}e^{\lambda}\right)^{[m]}&=\left(\sum_{\lambda\in\Lambda}r_{\lambda}\,v\big(\kappa_{G}(\lambda)\big)\,g_{\lambda}\big(\kappa_{G}(\lambda)\big)^{-1}E^{\lambda}\right)^{[m]}\\ &=\sum_{\lambda\in\Lambda}r_{\lambda}\,v\big(\kappa_{G}(\lambda)\big)\,g_{\lambda}^{m}\big(\kappa_{G}(\lambda)\big)^{-m}E^{m\lambda}\\ &=\sum_{\lambda\in\Lambda}r_{\lambda}\,v\big(\kappa_{G}(\lambda)\big)\,g_{\lambda}^{m}\big(\kappa_{G}(\lambda)\big)^{-m}\big(v\big(\kappa_{G}(\lambda)\big)\big)^{m}\big(\kappa_{G}(\lambda)\big)^{m}e^{m\lambda}\\ &=\sum_{\lambda\in\Lambda}r_{\lambda}\big(v\big(\kappa_{G}(\lambda)\big)\big)^{m+1}g_{\lambda}^{m}e^{m\lambda}.\end{aligned}\]
We introduce the following two operators over formal power series:
\[\begin{array}{l}\varepsilon_{G}\colon\mathbb{Q}[G][[\Lambda]]\to\,1\,+\, \mathbb{Q}[G][[\Lambda]]:\,f\mapsto exp\,(\sum_{m=1}^{\infty}\frac{1}{m}f^{[m]}), \\ \mathcal{L}_{G}:1\,+\mathbb{Q}[G][[\Lambda]]\,\to\,\mathbb{Q}[G][[\Lambda]]:f \mapsto\sum_{n=1}^{\infty}\frac{\mu(n)}{n}lnf^{[n]}.\end{array}\]
We can easily prove the following lemma.
**2.3.2. Lemma**
1. _The mappings_ \(\varepsilon_{G}\) _and_ \(\mathcal{L}_{G}\) _are well-defined and are mutually inverse._
2. \(\varepsilon_{G}(f_{1}\,+\,f_{2})=\,\varepsilon_{G}(f_{1})\,\varepsilon_{G}(f_ {2}),f_{1},f_{2}\in\mathbb{Q}[G][[\Lambda]],\)__
3. \(\mathcal{L}_{G}(f_{1}f_{2})=\,\mathcal{L}_{G}(f_{1})+\,\mathcal{L}_{G}(f_{2}), f_{1},f_{2}\in 1\,+\,\mathbb{Q}[G][[\Lambda]].\)__
**2.3.3. Theorem**_Let \(L=\bigoplus_{\lambda\in\Lambda}L_{\lambda}\) be a \(\Lambda\)-graded color Lie superalgebra and U(\(L\)) be its enveloping algebra. Then_
1. \(ch_{\overline{\Lambda}}^{G}U(L)=\varepsilon_{G}(ch_{\Lambda}^{G}L)\),
2. \(ch_{\Lambda}^{G}L=\mathcal{L}_{G}(ch_{\overline{\Lambda}}^{G}U(L))\).
_Proof._ According to PBW-Theorem, we have
\[ch_{\overline{\Lambda}}^{G}U(L)=\prod_{\lambda\in\Lambda}\left(1-E^{\lambda}\right)^{-sdimL_{\lambda}}.\]
Then we see that
\[ch_{\overline{\Lambda}}^{G}U(L)=\exp\left(-\sum_{\lambda\in\Lambda}\left(sdimL_{\lambda}\right)\ln\left(1-E^{\lambda}\right)\right).\]
Using \(ln(1\,+\,x)=\sum_{n=1}^{\infty}(-1)^{n+1}\,\frac{x^{n}}{n},\) we obtain
\[ch_{\overline{\Lambda}}^{G}U(L)=\exp\left(\sum_{\lambda\in\Lambda}\left(sdimL_{\lambda}\right)\sum_{m=1}^{\infty}\frac{E^{m\lambda}}{m}\right).\]
Then,
\[ch_{\overline{\Lambda}}^{G}U(L)=\exp\left(\sum_{m=1}^{\infty}\frac{1}{m}\sum_{\lambda\in\Lambda}\left(sdimL_{\lambda}\right)E^{m\lambda}\right)=\exp\left(\sum_{m=1}^{\infty}\frac{1}{m}\left(ch_{\Lambda}^{G}L\right)^{[m]}\right)=\varepsilon_{G}\left(ch_{\Lambda}^{G}L\right).\]
To prove the second relation, note that
\[ch_{\Lambda}^{G}L=\mathcal{L}_{G}\left(\varepsilon_{G}\left(ch_{\Lambda}^{G}L\right)\right)=\mathcal{L}_{G}\left(ch_{\overline{\Lambda}}^{G}U(L)\right).\]
### 2.4 Character Formula of Free Color Lie Superalgebras
By a \(\Lambda\)-graded set, we mean a disjoint union \(X=\bigcup_{\lambda\in\Lambda}X_{\lambda}.\) If in addition, we have \(|X_{\lambda}|<\infty\) for all \(\lambda\in\Lambda,\) then we define its character
\[ch_{\Lambda}X=\ \sum_{\lambda\in\Lambda}|X_{\lambda}|e^{\lambda}\in\mathbb{Q}[[ \Lambda]],\]
For an element\(x\in X_{\lambda}\subseteq X,\) we say \(\Lambda\)-weight of \(x\) is \(\lambda,\) and we write \(wt_{\Lambda}x=\lambda.\) We call such a set \(\Lambda\)_-finitely graded_ (if \(\Lambda=\mathbb{N},\) then we say \(X\) is a finitely graded set). For any monomial \(y=x_{1}\ldots x_{n},\) where\(x_{j}\in X,\) we set \(wt_{\Lambda}y=wt_{\Lambda}x_{1}+\ldots+wt_{\Lambda}x_{n}.\) Suppose \(Y\) is a set of all monomials (associative, Lie, \(\ldots\)) in \(X.\) We denote
\[Y_{\lambda}\ =\ \left\{y\ \in\ Y\ |wt_{\Lambda}y=\lambda\right\}.\]
Also, the \(\Lambda\)-generating function of \(Y\) is
\[ch_{\Lambda}Y=\ \sum_{\lambda\in\Lambda}|Y_{\lambda}|e^{\lambda}\in\mathbb{Q}[[ \Lambda]],\]
**2.4.1. Lemma** Let \(X=\bigcup_{\lambda\in\Lambda}X_{\lambda}\) be a \(\Lambda\)-graded set with \(|X_{\lambda}|<\infty,\lambda,\in\Lambda,\) and let \(F\)\((X)\) be the free associative algebra generated by \(X.\) Then
\[ch_{\overline{\Lambda}}F(X)=\sum_{n=0}^{\infty}(ch_{\Lambda}X)^{n}=\frac{1}{1-ch_{\Lambda}X}.\]
Petrogradsky proved the following Theorem in the context of Lie superalgebras in [11].
**2.4.2. Theorem**_Let \(L=L(X)\) be the free color Lie superalgebra generated by a \(\Lambda\)-graded set \(X=\bigcup_{\lambda\in\Lambda}X_{\lambda}\) with \(|X_{\lambda}|<\infty\) for all \(\lambda\in\Lambda.\) Then_
\[ch_{\Lambda}L(X)=-\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\ln\left(1-ch_{\Lambda}^{[n]}X\right).\]
_Proof._ The universal enveloping algebra \(U(L)\) is isomorphic to the free associative algebra \(F\)\((X)\) generated by \(X.\) Thus
\[ch_{\overline{\Lambda}}U(L)=\frac{1}{1-ch_{\Lambda}X}.\]
Applying Theorem 2.2.4, we have
\[ch_{\Lambda}L=\mathcal{L}\big(\mathcal{E}(ch_{\Lambda}L)\big)=\mathcal{L}\left(\frac{1}{1-ch_{\Lambda}X}\right)=-\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\ln\left(1-ch_{\Lambda}^{[n]}X\right),\]
as desired.
We are going to discuss several corollaries of the above result.
If \(|G|=r\), we can make any finite set \(X\) a \(\Lambda\)-graded set for \(\overline{\Lambda}=\mathbb{N}_{0}^{r}\). Suppose that \(G=G_{+}\cup G_{-}\), where \(G_{+}=\{g_{1},\ldots,g_{k}\}\) and \(G_{-}=\{g_{k+1},\ldots,g_{r}\}\) (of course, \(|G|=|G_{+}|\) or \(|G_{+}|=|G_{-}|\)), is an abelian group, and \(L\) is a free color Lie superalgebra freely generated by a set \(X=X_{g_{1}}\cup\cdots\cup X_{g_{r}}\), with \(|X_{g_{i}}|=s_{i}\geq 1\), \(i=1,\ldots,r\). Consider the case \(\Lambda=\mathbb{N}_{0}^{r}\). We define a weight function
\[wt:\ X\ \rightarrow\ \mathbb{N}_{0}^{r}:\ x\ \mapsto\ \lambda_{i}\,,for\ i\ =\ 1, \ldots,r\ and\ x\ \in\ X_{gi}\,,\]
where \(\lambda_{i}=(0,\ldots,0,1,0,\ldots,0)\) with \(1\) in the \(i\)th place. We define the homomorphism \(\kappa:\mathbb{N}_{0}^{r}\rightarrow\mathbb{Z}_{2}=\{\pm 1\}\) by \(\kappa(\lambda_{i})=1\) for \(1\leq i\leq k\) and \(\kappa(\lambda_{i})=-1\) for \(k+1\leq i\leq r\). We denote \(t_{i}=e^{\lambda_{i}}\), so the algebra \(\mathbb{Q}[[\Lambda]]\) turns into the formal power series ring \(\mathbb{Q}[[t]]=\mathbb{Q}[[t_{1},\ldots,t_{r}]]\). In this case, the character of a \(\Lambda\)-graded Lie superalgebra \(L\) is the multivariable Hilbert-Poincare series, \(H(L,t)=H(L;t_{1},\ldots,t_{r})\), of \(L\). We have the following result.
**2.4.3. Corollary** Suppose _that \(G=G_{+}\ \cup\ G_{-}\) is an abelian group, where \(G_{+}\ =\ \{g_{1},\ldots,g_{k}\}\) and \(G_{-}=\{g_{k+1},\ldots,g_{r}\}\)\((r=k\ or\ r=2k)\), and \(L\) is a free color Lie superalgebra freely generated by a set \(X=X_{g1}\ \cup\cdots\cup\ X_{gr}\) with \(|X_{gi}\ |\ =\ si\ \geq\ 1,i\ =\ 1,\ldots,r\). Then_
\[H(L;t_{1},\ldots,t_{k},t_{k+1},\ldots,t_{r})=-\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\ln\left(1-\sum_{i=1}^{k}s_{i}t_{i}^{n}+\sum_{j=k+1}^{r}s_{j}\big(-t_{j}\big)^{n}\right).\]
_Proof._ In this case \(ch_{\Lambda}X=\sum_{i=1}^{r}s_{i}t_{i}\), and so \(ch_{\Lambda}^{[n]}X=\sum_{i=1}^{k}s_{i}t_{i}^{n}-\sum_{j=k+1}^{r}s_{j}\big(-t_{j}\big)^{n}\).
The formula follows from Theorem 2.4.2
The weight function \(wt:X\rightarrow\mathbb{N}_{0}^{r}\) defines the multidegree \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\in\mathbb{N}_{0}^{r}\) for elements of \(L\), and the degree\(|\alpha|=\alpha_{1}+\cdots+\alpha_{r}\). Also, we write \(|\alpha|_{+}=\alpha_{1}+\cdots+\alpha_{k}\) and\(|\alpha|_{-}=\alpha_{k+1}\ +\cdots\ +\alpha_{r}\). By \(n|\alpha\) we denote that n divides all components \(\alpha_{i}\) of \(\alpha\). Then we have the following result.
**2.4.4. Corollary** Suppose \(G=G_{+}\ \cup\ G_{-}\) and \(L=L(X)\) as in Corollary 2.4.3. Then
\[\dim L_{\alpha}=\frac{(-1)^{|\alpha|_{-}}}{|\alpha|}\sum_{n\mid\alpha}\mu(n)\,\frac{\left(\frac{|\alpha|}{n}\right)!\;(-1)^{\frac{|\alpha|_{-}}{n}}}{\left(\frac{\alpha_{1}}{n}\right)!\cdots\left(\frac{\alpha_{r}}{n}\right)!}\;s_{1}^{\frac{\alpha_{1}}{n}}\cdots s_{r}^{\frac{\alpha_{r}}{n}}.\]
_In particular, if \(L\) is a free Lie algebra, we get the classical Witt's formula._
_Proof._ We apply the formula for \(H(L;t_{1},\ldots,t_{r})\) from the corollary above. We have
\[H(L;t)=-\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\ln\left(1-\sum_{i=1}^{k}s_{i}t_{i}^{n}+\sum_{j=k+1}^{r}s_{j}\big(-t_{j}\big)^{n}\right)=\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\sum_{s=1}^{\infty}\frac{\left(s_{1}t_{1}^{n}+\cdots+s_{k}t_{k}^{n}-s_{k+1}(-t_{k+1})^{n}-\cdots-s_{r}(-t_{r})^{n}\right)^{s}}{s}.\]
Applying the multinomial formula, we get
\[H(L;t)=\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\sum_{s=1}^{\infty}\frac{1}{s}\sum_{|\beta|=s}\frac{|\beta|!}{\beta_{1}!\cdots\beta_{r}!}\,(s_{1}t_{1}^{n})^{\beta_{1}}\cdots(s_{k}t_{k}^{n})^{\beta_{k}}\left((-s_{k+1})(-t_{k+1})^{n}\right)^{\beta_{k+1}}\cdots\left((-s_{r})(-t_{r})^{n}\right)^{\beta_{r}}.\]
Hence,
\[H(L;t)=\sum_{n=1}^{\infty}\frac{\mu(n)}{n}\sum_{s=1}^{\infty}\frac{1}{s}\sum_{|\beta|=s}\frac{|\beta|!\,(-1)^{(n+1)|\beta|_{-}}}{\beta_{1}!\cdots\beta_{r}!}\,s_{1}^{\beta_{1}}\cdots s_{r}^{\beta_{r}}\,t_{1}^{n\beta_{1}}\cdots t_{r}^{n\beta_{r}}=\sum_{\alpha\in\mathbb{N}_{0}^{r}\setminus\{0\}}\frac{1}{|\alpha|}\sum_{n\mid\alpha}\mu(n)\,\frac{\left(\frac{|\alpha|}{n}\right)!\,(-1)^{|\alpha|_{-}+\frac{|\alpha|_{-}}{n}}}{\left(\frac{\alpha_{1}}{n}\right)!\cdots\left(\frac{\alpha_{r}}{n}\right)!}\,s_{1}^{\frac{\alpha_{1}}{n}}\cdots s_{r}^{\frac{\alpha_{r}}{n}}\,t_{1}^{\alpha_{1}}\cdots t_{r}^{\alpha_{r}}.\]
On the other hand, \(H(L;t)=\sum_{\alpha\in\mathbb{N}_{0}^{r}\setminus\{0\}}\dim L_{\alpha}\,t^{\alpha}\). Therefore
\[dimL_{\alpha}=\ \frac{(-1)^{|\alpha|-}}{|\alpha|}\sum_{n|\alpha}\mu(n)\ \frac{\Big{(} \frac{|\alpha|}{n}\Big{)}!\ (-1)^{\frac{|\alpha|-}{n}}}{\Big{(}\frac{\alpha_{1}}{n}\big{)}!\...\ \Big{(}\frac{\alpha_{r}}{n}\big{)}!}\ s_{1}^{\frac{ \alpha_{1}}{n}}\...\ s_{r}^{\frac{\alpha_{r}}{n}},\]
as desired.
Let \(X\) be a finite generating set of the free color Lie superalgebra \(L(X)\) with the weight functions
\[wt:\ X\ \rightarrow\ \mathbb{N}^{2}\,\]
defined by
\[x\mapsto(1,0)if\ x\ \in\ X_{+}\ and\ x\mapsto(0,1)if\ x\ \in\ X_{-}.\]
If we denote \(t_{+}=e^{(1,0)}\) and \(t_{-}=e^{(0,1)}\), then the algebra \(\mathbb{Q}[[\mathbb{N}_{0}^{2}\ ]]\) is the formal power series ring \(\mathbb{Q}[[t_{+},t_{-}]]\). We have the following corollary.
**2.4.5 Corollary** Let \(L=L(X)\) be a free color Lie superalgebra freely generated by the set \(X=X_{+}\ \cup\ X_{-}\), where \(X_{+}=\{x_{1},\ldots,x_{k}\}\) and \(X_{-}=\{x_{k+1},\ldots,x_{r}\}\). Then
1. \(H(L;t_{+},t_{-})=\ -\ \sum_{n=1}^{\infty}\frac{\mu(n)}{n}ln\ (1\ -\ kt_{+}^{n}\ +(r\ -\ k)(-t_{-})^{n})\).
2. \(H(L,t)=\ H(L;t_{+},t_{-})|_{t_{+}=t_{-}=t}=-\sum_{n=1}^{\infty}\frac{\mu(n)}{n }\ln(1-(k-(-1)^{n}\,(r-k))t^{n})\).
**2.4.6 Corollary** _Let \(L=L(X)\) be a free color Lie superalgebra freely generated by the set \(X=X_{+}\cup X_{-}\), where \(X_{+}=\{x_{1},\ldots,x_{k}\}\) and \(X_{-}=\{x_{k+1},\ldots,x_{r}\}\). Consider the weight function \(wt:X\rightarrow\mathbb{N};\ x\mapsto 1\). Then_
\[dimL_{n}=\ \frac{1}{n}\ \sum_{m|n}\mu(m)\big{(}k-(-1)^{m}(r-k)\big{)}^{ \frac{n}{m}}.\]
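For example, with one even and one odd free generator (\(k=1\), \(r=2\)) the formula gives

\[\dim L_{2}=\frac{1}{2}\Big(\mu(1)\big(1-(-1)\big)^{2}+\mu(2)\big(1-1\big)\Big)=\frac{4}{2}=2,\]

which, in the ordinary Lie superalgebra case, corresponds to the two degree-two elements \([x,y]\) and \([y,y]\) (while \([x,x]=0\)).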
Let us return to the general setting. Let \(\Lambda\) and \(\Gamma\) be two additive abelian semigroups satisfying the finiteness condition, and let \(\kappa:\Lambda\rightarrow\mathbb{Z}_{2}\) and \(\kappa^{\prime}:\Gamma\rightarrow\mathbb{Z}_{2}\) be homomorphisms. Suppose that \(\varphi:\Lambda\rightarrow\Gamma\) is a semigroup homomorphism such that \(\kappa=\kappa^{\prime}\circ\varphi\) and for each \(\gamma\in\Gamma\) the set \(\{\lambda\in\Lambda\mid\varphi(\lambda)=\gamma\}\) is finite. Let \(L=L_{+}\oplus L_{-}\) be a free \(\Lambda\)-graded algebra generated by \(X=\bigcup_{\lambda\in\Lambda}X_{\lambda}\). Using the homomorphism \(\varphi\), we can also regard \(L\) as \(\Gamma\)-graded. Then
\[ch_{\Gamma}L=\sum_{\gamma\in\Gamma}\dim L_{\gamma}\,e^{\gamma}=\sum_{\gamma\in\Gamma}\Big(\sum_{\substack{\lambda\in\Lambda\\ \varphi(\lambda)=\gamma}}\dim L_{\lambda}\Big)e^{\gamma}.\tag{2.1}\]
For such grading, we will use superscripts instead of subscripts. As a result, we have the following corollary.
**2.4.7 Corollary** \(\dim L^{(n,g)}=\sum\limits_{\substack{\alpha_{1}+\cdots+\alpha_{r}=n\\ g_{1}^{\alpha_{1}}\cdots g_{r}^{\alpha_{r}}=g}}\dim L_{(\alpha_{1},\ldots,\alpha_{r})}\).
_Proof._ The result is the formula 2.1 applied to
\[\varphi:\ \mathbb{N}^{r}\ \rightarrow\ \mathbb{N}\ \times\ G:\ \lambda_{i}\ \mapsto\ (1,g_{i}).\]
**2.4.8 Example** Consider the free \((\mathbb{Z}_{2}\oplus\mathbb{Z}_{2},\gamma)\)-color Lie superalgebra \(L=L(X)\) over the field \(F=\mathbb{C}\), where
\[\gamma:(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2})\times(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2})\rightarrow\mathbb{C}^{*}\colon\big((a_{1},a_{2}),(b_{1},b_{2})\big)\mapsto(-1)^{(a_{1}+a_{2})(b_{1}+b_{2})}.\]
Hence, \(G_{+}=\{(0,0),(1,1)\}\) and \(G_{-}=\{(0,1),(1,0)\}\). Let \(g_{1}=(0,0)\), \(g_{2}=(1,1)\), \(g_{3}=(0,1)\), and \(g_{4}=(1,0)\), and let \(|X_{g_{1}}|=1\), \(|X_{g_{2}}|=2\), \(|X_{g_{3}}|=|X_{g_{4}}|=1\). According to Corollary 2.4.7, we have
\[dimL\,^{(3,(1,1))}\ =\ dimL_{(0,3,0,0)}\ +\ dimL_{(2,1,0,0)}\ +\ dimL_{(1,0,1,1)}.\]
Now, if we apply the formula given in Corollary 2.4.4, we have
\[\dim L_{(0,3,0,0)}=\frac{(-1)^{0}}{3}\bigg(\mu(1)\frac{(3!)(-1)^{0}}{3!}2^{3}+\mu(3)\frac{(1!)(-1)^{0}}{1!}2^{1}\bigg)=2.\]
Similarly, we obtain \(dimL_{(2,1,0,0)}=2\), and \(dimL_{(1,0,1,1)}=2\). Hence \(dimL^{(3,(1,1))}=2\ +\ 2\ +\ 2\ =\ 6\).
### 2.5 Characters of Free Restricted Color Lie Superalgebras
Let \(L=L_{+}\oplus L_{-}\) be a free restricted color Lie superalgebra generated by \(X\), where \(L_{\pm}=\bigoplus_{\lambda\in\Lambda_{\pm}}L_{\lambda}\) with \(\dim L_{\lambda}<\infty\) for all \(\lambda\in\Lambda\), over a field \(F\). We can now deduce the formula that relates the character of a color Lie \(p\)-superalgebra to that of its restricted enveloping algebra.
**2.5.1. Lemma**_Let \(L=L_{+}\ \bigoplus L_{-}\) be a \(\Lambda\)-graded color Lie \(p\)-superalgebra. Then_
\[ch_{\overline{\Lambda}}u(L)=\prod_{\lambda\in\Lambda_{-}}\left(1+e^{\lambda}\right)^{\dim L_{\lambda}}\prod_{\lambda\in\Lambda_{+}}\left(1+e^{\lambda}+\cdots+e^{(p-1)\lambda}\right)^{\dim L_{\lambda}}.\]
_Proof._ For color Lie p-superalgebras, the PBW-theorem must be used, as in Lemma 2.2.1. The specifics are omitted.
The remainder of this section will look at a \(\Lambda\)-graded color Lie \(p\)-superalgebra satisfying \(G=G_{+}\); recall that the ordinary restricted Lie algebra is a special case. (Recall also that color Lie \(p\)-superalgebras with \(G=G_{+}\) are known as color Lie \(p\)-algebras.)
Petrogradsky has defined functions \(1_{p},\mu_{p}:\mathbb{N}\rightarrow\mathbb{Z}\) by:
\[1_{p}(n)=\ \begin{cases}1,&\text{if }(p,n)=1\\ 1-p\text{ if }(p,n)=p,\end{cases}\]
and
\[\mu_{p}(n)=\ \begin{cases}\mu(n),&\text{if }(p,n)=1\\ \mu(m)(p^{s}-p^{s-1}),&\text{if }n=mp^{s},(p,m)=1,s\geq 1.\end{cases}\]
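For instance, taking \(p=2\) we have

\[1_{2}(3)=1,\quad 1_{2}(6)=1-2=-1,\quad\mu_{2}(3)=\mu(3)=-1,\quad\mu_{2}(6)=\mu(3)(2-1)=-1,\quad\mu_{2}(8)=\mu(1)(2^{3}-2^{2})=4.\]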
Recall that a function \(f:\mathbb{N}\rightarrow\mathbb{Z}\) is multiplicative if \(f(nm)=f(n)f(m)\) for any coprime \(n\), \(m\). One can easily show that \(1_{p}\) and \(\mu_{p}\) are multiplicative functions. In addition, we have the following property [12].
**2.5.2 Lemma**\(\ \sum_{ab=n}\ 1_{p}(b)\mu_{p}(a)=0\) for all \(n>1\).
_Proof._ We fill in the details of the proof given in [12]. First, we assume \(n\) is not divisible by \(p\). Let \(a,b\in\mathbb{N}\) with \(ab=n\). Then \(a\) and \(b\) are not divisible by \(p\). Hence \(1_{p}(b)=1\) and \(\mu_{p}(a)=\mu(a)\). Now, the statement follows from the property of the Mobius function. Next, we suppose \(n\) is divisible by \(p\). Write \(n=n^{\prime}p^{k}\), \(k\geq 1\), where \(n^{\prime}\) is not divisible by \(p\). For all \(a,b\in\mathbb{N}\) with \(ab=n\), we write accordingly \(a=a^{\prime}p^{r}\) and \(b=b^{\prime}p^{s}\), where \(r+s=k\). Then
\[\begin{aligned}\sum_{ab=n}1_{p}(b)\mu_{p}(a)&=\sum_{a^{\prime}b^{\prime}=n^{\prime}}1_{p}(b^{\prime})\mu_{p}(a^{\prime})\sum_{r+s=k}1_{p}(p^{s})\mu_{p}(p^{r})\\ &=\sum_{a^{\prime}b^{\prime}=n^{\prime}}\mu(a^{\prime})\Big{(}1\cdot(p^{k}-p^{k-1})+(1-p)(p^{k-1}-p^{k-2})+\cdots+(1-p)\cdot 1\Big{)}\\ &=\sum_{a^{\prime}b^{\prime}=n^{\prime}}\mu(a^{\prime})\Big{(}(p^{k}-p^{k-1})+(1-p)(p^{k-1}-1)+(1-p)\Big{)}\\ &=0,\end{aligned}\]
where in the first line, we used the fact that \(1_{p}\) and \(\mu_{p}\) are multiplicative functions.
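For instance (a direct numerical check added here, not part of the original argument), take \(p=2\) and \(n=4\). Then \(1_{2}(1)=1\), \(1_{2}(2)=1_{2}(4)=1-2=-1\), while \(\mu_{2}(1)=1\), \(\mu_{2}(2)=\mu(1)(2^{1}-2^{0})=1\) and \(\mu_{2}(4)=\mu(1)(2^{2}-2^{1})=2\), so
\[\sum_{ab=4}1_{2}(b)\mu_{2}(a)=1_{2}(4)\mu_{2}(1)+1_{2}(2)\mu_{2}(2)+1_{2}(1)\mu_{2}(4)=(-1)(1)+(-1)(1)+(1)(2)=0,\]
in agreement with Lemma 2.5.2.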
We introduce the following two operators on formal series, which were defined in the case \(\Lambda=\mathbb{N}_{0}^{m}\).
\[\begin{array}{l}\mathcal{E}_{p}:\ \mathbb{Q}[[\Lambda]]\ \to\ 1+\mathbb{Q}[[\Lambda]]:\ f\ \mapsto\ \exp\left(\sum_{m=1}^{\infty}\frac{1_{p}(m)}{m}\,f^{[m]}\right),\\ \mathcal{L}_{p}:\ 1+\mathbb{Q}[[\Lambda]]\ \to\ \mathbb{Q}[[\Lambda]]:\ f\ \mapsto\ \sum_{n=1}^{\infty}\frac{\mu_{p}(n)}{n}\ln f^{[n]}.\end{array}\]
Now we show that these operators are similar to the exponential and logarithm.
**2.5.3. Theorem**_Let \(L=L(X)\) be the free color Lie \(p\)-algebra (\(G=G_{+}\)) generated by a \(\Lambda\)-graded set \(X=\bigcup_{\lambda\in\Lambda}X_{\lambda}\), with \(|X_{\lambda}|<\infty\) for all \(\lambda\in\Lambda=\Lambda_{+}\). Then_
\[ch_{\Lambda}L(X)=-\ \sum_{n=1}^{\infty}\frac{\mu_{p}(n)}{n}\ln\left(1-ch_{\Lambda}^{[n]}X\right).\]
**2.5.4. Lemma**
1. \(\mathcal{L}_{p}(\mathcal{E}_{p}(f))=f\) for \(f\in\mathbb{Q}[[\Lambda]]\), and \(\mathcal{E}_{p}(\mathcal{L}_{p}(f))=f\) for \(f\in 1+\mathbb{Q}[[\Lambda]]\),
2. \(\mathcal{E}_{p}(f_{1}+f_{2})=\mathcal{E}_{p}(f_{1})\,\mathcal{E}_{p}(f_{2}),\ f_{1},f_{2}\in\mathbb{Q}[[\Lambda]]\),
3. \(\mathcal{L}_{p}(f_{1}f_{2})=\mathcal{L}_{p}(f_{1})+\mathcal{L}_{p}(f_{2}),\ f_{1},f_{2}\in 1+\mathbb{Q}[[\Lambda]]\).
_Proof._ It follows from the finiteness condition on \(\Lambda\) that \(\mathcal{E}_{p}\) and \(\mathcal{L}_{p}\) are well defined. Let \(f\in\mathbb{Q}[[\Lambda]]\). Then
\[\begin{aligned}\mathcal{L}_{p}\left(\mathcal{E}_{p}(f)\right)&=\mathcal{L}_{p}\left(\exp\left(\sum_{m=1}^{\infty}\tfrac{1_{p}(m)}{m}f^{[m]}\right)\right)&&\text{(definition of }\mathcal{E}_{p})\\ &=\sum_{n=1}^{\infty}\tfrac{\mu_{p}(n)}{n}\ln\left(\exp\left(\sum_{m=1}^{\infty}\tfrac{1_{p}(m)}{m}f^{[m]}\right)\right)^{[n]}&&\text{(definition of }\mathcal{L}_{p})\\ &=\sum_{n=1}^{\infty}\tfrac{\mu_{p}(n)}{n}\ln\left(\prod_{m=1}^{\infty}\exp\left(\tfrac{1_{p}(m)}{m}f^{[m]}\right)\right)^{[n]}\\ &=\sum_{n=1}^{\infty}\tfrac{\mu_{p}(n)}{n}\ln\left(\prod_{m=1}^{\infty}\exp\left(\tfrac{1_{p}(m)}{m}f^{[m]}\right)^{[n]}\right)\\ &=\sum_{n=1}^{\infty}\tfrac{\mu_{p}(n)}{n}\sum_{m=1}^{\infty}\tfrac{1_{p}(m)}{m}\left(f^{[m]}\right)^{[n]}\\ &=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\tfrac{f^{[mn]}}{mn}\,1_{p}(m)\mu_{p}(n)\\ &=\sum_{k=1}^{\infty}\tfrac{f^{[k]}}{k}\sum_{mn=k}1_{p}(m)\mu_{p}(n)\\ &=f^{[1]}&&\text{(Lemma 2.5.2)}\\ &=f.\end{aligned}\]
In a similar way, we can prove \(\mathcal{E}_{p}(\mathcal{L}_{p}(f))=f\), \(f\in 1+\mathbb{Q}[[\Lambda]]\). The relations (2) and (3) are clear.
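As a simple illustration (this example is ours and is not part of the original text), let \(L\) be spanned by a single even element of weight \(\lambda\), so that \(ch_{\Lambda}L=e^{\lambda}\). For \(p=2\),
\[\mathcal{E}_{2}(e^{\lambda})=\exp\Big(\sum_{m=1}^{\infty}\frac{1_{2}(m)}{m}e^{m\lambda}\Big)=\exp\big(-\ln(1-e^{\lambda})+\ln(1-e^{2\lambda})\big)=\frac{1-e^{2\lambda}}{1-e^{\lambda}}=1+e^{\lambda},\]
which is precisely \(ch_{\bar{\Lambda}}u(L)\) as given by Lemma 2.5.1, and applying \(\mathcal{L}_{2}\) recovers \(e^{\lambda}\), in accordance with the lemma above.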
**2.5.5 Theorem**_Let \(L=\bigoplus_{\lambda\in\Lambda}L_{\lambda}\) be a \(\Lambda\)-graded color Lie p-algebra (\(G=G_{+}\)) and \(u(L)\) be its restricted enveloping algebra. Then_
1. \(ch_{\bar{\Lambda}}u(L)=\mathcal{E}_{p}(ch_{\Lambda}L)\),
2. \(ch_{\Lambda}L=\mathcal{L}_{p}(ch_{\bar{\Lambda}}u(L))\).
_Proof._ (1) By Lemma 2.5.1, we have
\[ch_{\bar{\Lambda}}u(L)=\,\prod_{\lambda\in\Lambda}\big(1+e^{\lambda}+\cdots+e^{(p-1)\lambda}\big)^{dimL_{\lambda}}.\]
Now, as \(\big{(}1-e^{p\lambda}\big{)}=\big{(}1-e^{\lambda}\big{)}\big(1+e^{\lambda}+\cdots+e^{(p-1)\lambda}\big)\), \(ch_{\bar{\Lambda}}u(L)\) can be written as:
\[ch_{\bar{\Lambda}}u(L)=\ \prod_{\lambda\in\Lambda}\left(\frac{1-e^{p\lambda}}{1-e^{\lambda}}\right)^{dimL_{\lambda}}.\]
Therefore,
\[ch_{\bar{\Lambda}}u(L)=\exp\Big{(}\sum_{\lambda\in\Lambda}dimL_{\lambda}\big(-\ln\big{(}1-e^{\lambda}\big{)}+\ln(1-e^{p\lambda})\big)\Big{)}.\]
Using \(-\ln(1-x)=\sum_{n=1}^{\infty}\frac{x^{n}}{n}\), we obtain
\[ch_{\bar{\Lambda}}u(L)=\exp\Big{(}\sum_{\lambda\in\Lambda}dimL_{\lambda}\left(\sum_{n=1}^{\infty}\frac{e^{n\lambda}}{n}-\sum_{n=1}^{\infty}\frac{e^{pn\lambda}}{n}\right)\Big{)}.\]
Then we see that
\[\begin{aligned}ch_{\bar{\Lambda}}u(L)&=\exp\Big{(}\sum_{\lambda\in\Lambda}dimL_{\lambda}\Big{(}\sum_{n=1}^{\infty}\frac{e^{n\lambda}}{n}-\sum_{n=1}^{\infty}\frac{e^{pn\lambda}}{n}\Big{)}\Big{)}\\ &=\exp\Big{(}\sum_{\lambda\in\Lambda}dimL_{\lambda}\Big{(}\sum_{\begin{subarray}{c}n\geq 1\\ p\nmid n\end{subarray}}\frac{e^{n\lambda}}{n}+\sum_{n=1}^{\infty}\Big(\frac{e^{np\lambda}}{np}-\frac{e^{np\lambda}}{n}\Big)\Big{)}\Big{)}\\ &=\exp\Big{(}\sum_{\lambda\in\Lambda}dimL_{\lambda}\Big{(}\sum_{\begin{subarray}{c}n\geq 1\\ p\nmid n\end{subarray}}\frac{e^{n\lambda}}{n}+\sum_{n=1}^{\infty}\frac{e^{np\lambda}-p\,e^{np\lambda}}{np}\Big{)}\Big{)}\\ &=\exp\Big{(}\sum_{\lambda\in\Lambda}dimL_{\lambda}\sum_{n=1}^{\infty}\frac{1_{p}(n)}{n}e^{n\lambda}\Big{)}\\ &=\exp\Big{(}\sum_{n=1}^{\infty}\frac{1_{p}(n)}{n}\sum_{\lambda\in\Lambda}dimL_{\lambda}\,e^{n\lambda}\Big{)}\\ &=\exp\Big{(}\sum_{n=1}^{\infty}\frac{1_{p}(n)}{n}\,(ch_{\Lambda}L)^{[n]}\Big{)}\\ &=\mathcal{E}_{p}(ch_{\Lambda}L).\end{aligned}\]
2. This relation follows directly from Lemma 2.5.4 and (1):
\[ch_{\Lambda}L=\ \mathcal{L}_{p}(\mathcal{E}_{p}(ch_{\Lambda}L))=\mathcal{L}_{p}\big{(}ch_{\bar{\Lambda}}u(L)\big{)}.\]
**2.5.6. Remark** One can also extend the definition of \(\mathcal{E}_{p}\) to the general case \(\Lambda=\Lambda_{+}\cup\Lambda_{-}\) as follows:
\[\mathcal{E}_{p}\colon\mathbb{Q}[[\Lambda]]\to 1+\mathbb{Q}[[\Lambda]]\colon f=f_{+}+f_{-}\mapsto\exp\Big(\sum_{m=1}^{\infty}\frac{1_{p}(m)}{m}f_{+}^{[m]}\Big)\exp\Big(\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\,f_{-}^{[n]}\Big).\]
Again \(\mathcal{E}_{p}\) is a well-defined operator. Also, it is easy to see that
1. \(\mathcal{E}_{p}(f_{1}+f_{2})=\mathcal{E}_{p}(f_{1})\,\mathcal{E}_{p}(f_{2}),\ f_{1},f_{2}\in\mathbb{Q}[[\Lambda]]\),
2. \(ch_{\bar{\Lambda}}u(L)=\mathcal{E}_{p}(ch_{\Lambda}L)\).
**2.5.7. Theorem** _Let \(L=L(X)\) be the free color Lie \(p\)-algebra \((G=G_{+})\) generated by a \(\Lambda\)-graded set \(X=\cup_{\lambda\in\Lambda}X_{\lambda}\), with \(|X_{\lambda}|<\infty\) for all \(\lambda\in\Lambda=\Lambda_{+}\). Then_
\[ch_{\Lambda}L(X)=\ -\sum_{n=1}^{\infty}\frac{\mu_{p}(n)}{n}\ln\Big{(}1-\ ch_{\Lambda}^{[n]}X\Big{)}.\]
_Proof._ For the restricted color Lie superalgebra \(L=L(X)\), we denote the restricted enveloping algebra of \(L\) by \(u(L)\). Let \(F(X)\) be the free associative algebra on X. It is well known that \(u(L(X))\) is isomorphic to \(F(X)\)[13]. Thus,
\[ch_{\bar{\Lambda}}u(L)=\ \frac{1}{1-ch_{\Lambda}X}.\]
Using Theorem 2.5.5, we get
\[ch_{\Lambda}L=\mathcal{L}_{p}\,ch_{\bar{\Lambda}}u(L)=\mathcal{L}_{p}\Big(\frac{1}{1-ch_{\Lambda}X}\Big)=-\sum_{n=1}^{\infty}\frac{\mu_{p}(n)}{n}\ln\Big{(}1-\ ch_{\Lambda}^{[n]}X\Big{)}.\]
**2.5.8. Corollary**_Let \(L=L(X)\) be the free color Lie \(p\)-algebra generated by an at most countable set \(X=\{x_{i}\ |\ i\ \in\ I\}\). Then_
\[H(L;t_{i}\ |\ i\in I)=-\ \sum_{n=1}^{\infty}\frac{\mu_{p}(n)}{n}\ln\Big(1-\sum_{i\ \in\ I}t_{i}^{n}\Big).\]
_In particular, if \(L\) is generated by \(X=\ \{x_{1},\ldots,x_{r}\}\), then_
\[H(L;t_{1},\ldots,t_{r})=\ -\sum_{n=1}^{\infty}\frac{\mu_{p}(n)}{n}\ln(1-t_{1}^{n }-\cdots-t_{r}^{n}).\]
Consider the particular case \(\Lambda=\mathbb{N}\) and \(wt:X\rightarrow\mathbb{N}:x\mapsto 1\). Then we have the following result.
**2.5.9. Corollary** Let \(L\) be a free color Lie p-algebra freely generated by \(X=\{x_{1},\ldots,x_{r}\}.\) Then
\[H(L,t)=-\sum_{n=1}^{\infty}\frac{\mu_{p}(n)}{n}\ln(1-r\,t^{n}).\]
Suppose that \(L\) is a free color Lie p-superalgebra generated by \(X=\{x_{1},\ldots,x_{r}\}\), and is multihomogeneous with respect to the set \(X\). For elements of \(L\) we introduce the multidegree \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\in\mathbb{N}_{0}^{r}\) and the degree \(|\alpha|=\alpha_{1}+\cdots+\alpha_{r}\). We have the following analogue of the Witt formula for the dimensions of the multihomogeneous components of \(L\).
**2.5.10. Corollary**_Let \(L\) be a free color Lie p-algebra freely generated by \(X=\{x_{1},\ldots,x_{r}\}.\) Then_
\[dimL_{n}\,=\,\frac{1}{n}\sum_{m|n}\mu_{p}(m)\,r^{\frac{n}{m}},\]
\[dimL_{\alpha}=\frac{1}{|\alpha|}\sum_{m|\alpha}\mu_{p}(m)\frac{(|\alpha|/m)!}{(\alpha_{1}/m)!\cdots(\alpha_{r}/m)!}.\]
_When \(L\) is the ordinary free Lie p-algebra, we get Petrogradsky's formulas [12]._
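As a quick check (our example, not the authors'), take \(p=2\), \(r=2\) and \(n=4\). Using \(\mu_{2}(1)=1\), \(\mu_{2}(2)=1\), \(\mu_{2}(4)=2\),
\[dimL_{4}=\frac{1}{4}\sum_{m|4}\mu_{2}(m)\,2^{4/m}=\frac{1}{4}\big(1\cdot 2^{4}+1\cdot 2^{2}+2\cdot 2^{1}\big)=6,\]
which agrees with extracting the degree-4 coefficient from \(\prod_{n\geq 1}(1+t^{n})^{dimL_{n}}=1/(1-2t)\). For \(r=1\) the same formula gives \(dimL_{n}=1\) when \(n\) is a power of \(2\) and \(0\) otherwise, as expected for the free restricted Lie algebra on one generator.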
Petrogradsky initially proved the following Theorem for Lie superalgebras in [10, 11].
**2.5.11. Theorem** Let \(L=L(X)\,=\bigoplus_{n=1}^{\infty}L_{n}\) be a free color Lie p-algebra \((G=G_{+})\) generated by a \(\Lambda\)-graded set \(X=\bigcup_{\lambda\in\Lambda}X_{\lambda}.\) Then
\[ch_{\Lambda}L_{n}=\,\frac{1}{n}\sum_{k|n}\mu_{p}(k)\left(ch_{[k]}X\right)^{ \frac{n}{k}}.\]
_Proof._ We consider the new semigroup.
\[\Lambda^{\prime}\,=\,\Lambda\,\times\,\mathbb{N}.\]
Define a weight function
\[wt:\ X\rightarrow\Lambda^{\prime}:\ x\mapsto(\lambda,1),x\in X_{\lambda}.\]
Then, we consider \(L\) as a \(\Lambda^{\prime}\)-graded algebra. If we denote \(t=e^{(0,1)}\) and \(e^{\lambda}=e^{(\lambda,0)}\), then
\[ch_{\Lambda^{\prime}}X =\sum_{(\lambda,i)\in\Lambda^{\prime}}\left|X_{(\lambda,i)}\right| e^{(\lambda,i)}\] \[=\sum_{\lambda\in\Lambda}\left|X_{(\lambda,1)}\right|e^{(\lambda,1)}\] \[=\sum_{\lambda\in\Lambda}\left|X_{(\lambda,1)}\right|e^{(\lambda,0)}e^{(0,1)}\] \[=tch_{\Lambda}X.\]
Using Theorem 2.5.7 and the operator of dilation, we see that
\[ch_{\Lambda^{\prime}}L=-\sum_{k=1}^{\infty}\frac{\mu_{p}(k)}{k}\ln\left(1-\ ch_{\Lambda^{\prime}}^{[k]}X\right)=-\sum_{k=1}^{\infty}\frac{\mu_{p}(k)}{k}\ln\left(1-\ t^{k}ch_{\Lambda}^{[k]}X\right).\]
By the expansion of the logarithm, we have
\[ch_{\Lambda^{\prime}}L=\ \sum_{k=1}^{\infty}\frac{\mu_{p}(k)}{k}\sum_{m=1}^{ \infty}\frac{t^{mk}\left(ch_{[k]}X\right)^{m}}{m}.\]
Hence,
\[ch_{\Lambda^{\prime}}L=\ \sum_{n=1}^{\infty}\frac{t^{n}}{n}\sum_{k|n}\mu_{p}(k) \left(ch_{[k]}X\right)^{\frac{n}{k}}.\]
On the other hand, by definition,
\[ch_{\Lambda^{\prime}}L=\sum_{n=1}^{\infty}ch_{\Lambda}L_{n}\,t^{n}.\]
Therefore,
\[ch_{\Lambda}L_{n}=\frac{1}{n}\sum_{k|n}\mu_{p}(k)\left(ch_{\Lambda}^{[k]}X\right)^{\frac{n}{k}},\]
as desired
Suppose that \(G=G_{+}=\{g_{1},\ldots,g_{r}\}\) is an abelian group, and \(L\) is a free color Lie p-superalgebra freely generated by a set \(X=X_{g1}\ \cup\cdots\cup\ X_{gr}\) with \(|X_{gi}\ |=s_{i}\geq\ 1\ i=1,\ldots,r\). We define a weight function
\[wt:\ X\rightarrow\mathbb{N}^{r}:\ x\mapsto\lambda_{i}\,,for\ i\ =\ 1,\ldots,r\ and\ x\in X_{gi},\]
where \(\lambda_{i}=(0,\ldots,0,1,0,\ldots,0)\) with \(1\) in the \(i\)th place. Again, we denote \(t_{i}=e^{\lambda i}\), and so we have the following result.
**2.5.12. Theorem**_Suppose that \(G=G_{+}\ =\ \{g_{1},\ldots,g_{r}\}\) is an abelian group, and \(L\) is a free color Lie p-algebra freely generated by a set \(X=X_{g1}\ \cup\cdots\cup\ X_{gr}\) with \(|X_{gi}\ |=\ s_{i}\geq\ 1\ i=1,\ldots,r\). Then_
1. \(H(L;t_{1},\ldots,t_{r})=\ -\sum_{n=1}^{\infty}\frac{\mu_{p}(n)}{n}\ln(1-\sum_{i=1 }^{r}s_{i}t_{i}^{n}),\)__
2. \(dimL_{\alpha}=\frac{1}{|\alpha|}\sum_{n|\alpha}\mu_{p}(n)\frac{\left(\frac{|\alpha|}{n}\right)!}{\left(\frac{\alpha_{1}}{n}\right)!\cdots\left(\frac{\alpha_{r}}{n}\right)!}\,s_{1}^{\frac{\alpha_{1}}{n}}\cdots s_{r}^{\frac{\alpha_{r}}{n}}.\)
3. \(dimL\,^{(n,g)}=\sum_{\begin{subarray}{c}\alpha_{1}+\cdots+\alpha_{r}=n\\ g_{1}^{\alpha_{1}}\cdots g_{r}^{\alpha_{r}}=g\end{subarray}}dimL_{(\alpha_{1},\ldots,\alpha_{r})}.\)
**4. CONCLUSION**
The Hilbert series of a Lie superalgebra representation is a generating function that encodes essential information about the representation. In particular, it provides a way to count the number of states in the representation with a given degree or energy. For common Lie superalgebra representations, the Hilbert series has been computed explicitly in many cases, including for the basic representations of the Lie superalgebras. This series is a powerful tool for understanding the structure and properties of such representations. The computations rely on techniques from algebraic geometry and combinatorics, and they have important applications in mathematics.
## Recommendations
To strengthen the applications of Lie superalgebras, it would be helpful to give different methods for creating Hom-Lie superbialgebras. In addition, one could add research on triangular and coboundary Hom-Lie bialgebras. This class of algebras is a generalization of both restricted Hom-Lie algebras and restricted Lie superalgebras [14]. One could also show how to obtain restricted Hom-Lie superalgebras from classical restricted Lie superalgebras using algebra endomorphisms.
## Acknowledgements
The author wishes to express his sincere gratitude to Dr. Yuri Bahturin, who initially suggested the subject matter. This research would not have been possible without his invaluable ideas. Additionally, the author extends his appreciation to his Ph.D. supervisor, Dr. Mikhail Kotchetov, for his invaluable guidance, insightful conversations, and helpful recommendations throughout the research process. Also, the author would like to thank Dr. Victor Petrogradsky for his suggestions. Their contributions have been instrumental in the completion of this work, and the author is deeply thankful for their support.
|
2306.17735 | Exploring the Equivalence between Two-dimensional Classical and Quantum
Turbulence through Velocity Circulation Statistics | We study the statistics of velocity circulation in two-dimensional classical
and quantum turbulence. We perform numerical simulations of the incompressible
Navier-Stokes and the Gross-Pitaevskii (GP) equations for the direct and
inverse cascades. Our GP simulations display clear energy spectra compatible
with the double cascade theory of two-dimensional classical turbulence. In the
inverse cascade, we found that circulation intermittency in quantum turbulence
is the same as in classical turbulence. We compare GP data to Navier-Stokes
simulations and experimental data from [Zhu et al. Phys. Rev. Lett. 130,
214001(2023)]. In the direct cascade, for nearly incompressible GP-flows,
classical and quantum turbulence circulation displays the same self-similar
scaling. When compressible effects become important, quasi-shocks generate
quantum vortices and the equivalence of quantum and classical turbulence only
holds for low-order moments. Our results establish the boundaries of the
equivalence between two-dimensional classical and quantum turbulence. | Nicolás P. Müller, Giorgio Krstulovic | 2023-06-30T15:25:15Z | http://arxiv.org/abs/2306.17735v3 | # Are 2D classical and quantum turbulence equivalent? Insights from velocity circulation statistics
###### Abstract
We study the statistics of velocity circulation in two-dimensional classical and quantum turbulence. We perform numerical simulations of the incompressible Navier-Stokes and the Gross-Pitaevskii (GP) equations for the direct and inverse cascades. Our GP simulations display clear energy spectra compatible with the double cascade theory of two-dimensional classical turbulence. In the inverse cascade, we found that circulation intermittency in quantum turbulence is the same as in classical turbulence. We compare GP data to Navier-Stokes simulations and experimental data from [Zhu et al. _Phys. Rev. Lett._**130**, 214001(2023)]. In the direct cascade, classical and quantum turbulence circulation statistics coincide at low but strongly differ at high orders. We associate this difference with the presence of quantized vortices, which makes enstrophy ill-defined mathematically. Our results establish the boundaries of the equivalence between two-dimensional classical and quantum turbulence.
The chaotic spatiotemporal motion of turbulent flows is a complex multi-scale phenomenon ocurring in a wide variety of systems in nature [1; 2; 3]. One of the most fascinating properties of three-dimensional (3D) turbulence is that energy is transferred from large to small structures at a constant energy rate, in a process known as direct energy cascade. Some geophysical flows, like atmospheres or oceans, present a quasi two-dimensional (2D) behavior due to the suppression of motion in one direction induced by rotation or stratification [4; 5]. Contrary to the 3D case, two-dimensional (2D) turbulence exhibits an inverse energy cascade (IEC), in which energy is transferred towards large scales leading to the formation of large-scale coherent structures [6; 7]. Moreover, enstrophy \(\Omega\) -- defined as one-half the mean-squared vorticity \(\Omega=\langle\omega^{2}\rangle/2\) -- is transferred towards smaller scales in a process known as direct enstrophy cascade (DEC) [8; 9; 10].
Turbulence also takes place in superfluids, such as \({}^{4}\)He and Bose-Einstein condensates (BEC) [11; 12; 13]. Due to quantum mechanics, low-temperature superfluids are characterized by the complete absence of viscous effects. In 2D quantum fluids, vorticity is concentrated in topological point-like defects with a quantized circulation. The mutual interaction of these structures, known as quantum vortices, leads to the out-of-equilibrium state known as quantum turbulence (QT) [14]. Experiments in 2D BECs and quantum fluids of exciton-polaritons have shown evidence of an IEC through the formation of Onsager vortex clusters [15; 16; 17]. Direct numerical simulations (DNS) of 2D and quasi-2D quantum turbulence have shown the development of an IEC with the presence of a Kolmogorov energy spectrum [18; 19; 20]. The vorticity field in quantum fluids is a superposition \(\delta\)-Dirac supported terms, making enstrophy ill-defined mathematically. However, it can be phenomenologically related to the total number of vortices, which is not conserved due to vortex annihilation [21]. Despite these facts, studies on the point-vortex model observed the presence of a DEC [22].
Another very interesting property of 2D turbulence is the lack of intermittency in the IEC. Velocity increments \(\delta\mathbf{v}_{r}=\mathbf{v}(\mathbf{x}+\mathbf{r})-\mathbf{v}(\mathbf{x})\) at a length scale \(r\) in 2D turbulent flows follow close-to-Gaussian statistics [23; 24], in stark contrast with 3D turbulence where velocity fluctuations are strong [1; 25]. As a consequence, the structure functions of order \(p\) defined as \(S_{p}=\langle\delta v_{r}^{p}\rangle\) follow a self-similar scaling within the inertial range \(S_{p}\sim r^{\zeta_{p}}\) with \(\zeta_{p}^{\rm IEC}=p/3\). The DEC is also non-intermittent as the velocity field in this regime is smooth, and the scaling exponents follow \(\zeta_{p}^{\rm DEC}=p\)[26].
An alternative way of studying turbulence intermittency is through the velocity circulation around an area \(A\) enclosed by a loop \(\mathcal{C}\), defined as \(\Gamma=\oint_{\mathcal{C}}\mathbf{v}\cdot{\rm d}\mathbf{l}\). High-resolution DNS of 3D classical turbulence (CT) have shown that circulation moments in the inertial range are less intermittent than velocity increments when compared with the self-similar Kolmogorov prediction [1; 27; 28; 29; 25]. Recent experimental studies in quasi-2D CT showed that circulation in the DEC is non-intermittent, while in the IEC, it surprisingly presents anomalous deviations. The study of circulation in QT turns out to be very convenient due to the discrete nature of quantum vortices. Indeed, DNS and experiments of 3D QT have shown that circulation statistics is very similar to 3D CT [30; 31; 32]. This result implies that the nature of circulation at small scales becomes irrelevant in the inertial scales and motivates the use of quantum fluids and circulation statistics as a discrete system to understand intermittency in CT.
In this Letter, we compare the statistics of velocity circulation in two-dimensional quantum and classical turbulence, both in the inverse and direct cascades. By means of DNS, we characterize the intermittent behavior of these two regimes, finding differences and similarities
between 2D CT and QT.
The dynamics of an incompressible two-dimensional classical fluid is described by the Navier-Stokes (NS) equation, which in terms of the vorticity field \(\omega(\mathbf{r},t)=-\nabla^{2}\phi\) is written as
\[\partial_{t}\omega+\{\omega,\phi\}=\nu\nabla^{2}\omega-\alpha\omega+f \tag{1}\]
with \(\phi\) the stream function such that the velocity field is \((u,v)=(\partial_{y}\phi,-\partial_{x}\phi)\), the Poisson brackets are defined as \(\{\omega,\phi\}=\partial_{x}\omega\partial_{y}\phi-\partial_{y}\omega\partial _{x}\phi\), \(\nu\) is the kinematic viscosity, \(\alpha\) is a linear friction preventing the formation of a large-scale condensate, and \(f\) an external forcing. The dynamics of a quantum fluid composed by weakly interacting bosons at zero temperature is described by the GP equation
\[i\partial_{t}\psi=\frac{c}{\sqrt{2}\,\xi}\left(-\xi^{2}\nabla^{2}\psi+\frac{|\psi|^{2}}{n_{0}}\psi-\psi\right) \tag{2}\]
where \(\psi\) is the condensate wave function, \(n_{0}\) is the ground state particles density, \(c=\sqrt{gn_{0}/m}\) the speed of sound and \(\xi=\hbar/\sqrt{2mgn_{0}}\) the healing length, which is proportional to the quantum vortex core size. Here, \(m\) is the mass of the bosons, and \(g\) the coupling constant. It is important to notice that in the NS equation, circulation takes real values, while in the GP equation it is discrete as \(\Gamma=n\kappa\), with \(n\in\mathbb{Z}\) the vortex charge and \(\kappa=h/m=2\pi\sqrt{2}c\xi\) the quantum of circulation.
Equations (1) and (2) are solved using a standard pseudospectral method in a periodic two-dimensional domain. We use a Runge-Kutta temporal scheme of order 2 for NS and order 4 for GP. For each equation, we optimize parameters to achieve the largest possible scale separation for each cascade. For NS, we use \(6144^{2}\) grid points and \(8192^{2}\) for GP. To generate the IEC in NS, we force at small scales and dissipate by the friction term and by viscous dissipation. For the DEC, forcing is applied at large scales and no friction is included. For both cascades, we average several hundred fields from the stationary state. For the GP equation, total energy is conserved, but incompressible energy (vortices) is irreversibly converted into sound. Therefore, GP simulations can be seen as decaying turbulent runs. We analyze data when turbulence is the strongest. For both cascades, we generate an ensemble of initial conditions with most of their energy concentrated at a target wave number. These flows are obtained by a minimization method that reduces the acoustic contribution [33, 20] (see Supplemental Material (SM) for details on parameter values and initial conditions). Relevant length scales in the turbulent regimes are shown in Table 1. Figure 1 shows a typical visualization of the vorticity in a two-dimensional classical and quantum flow in the DEC regime. Both systems display the typical large-scale thin elongated structures of the enstrophy cascade, in spite of the fundamental small-scale difference of vortices.
According to the Kraichnan-Leith-Batchelor (KLB) theory [8, 9, 10], the energy spectra in the inverse and direct cascade regimes, neglecting logarithmic corrections,
Figure 1: Visualization of vorticity in classical and quantum turbulence in the enstrophy cascade. For Navier–Stokes, we show the vorticity field \(\omega(x,y)\) (RUN NS-dir). For Gross–Pitaevskii we show the sign and position of individual vortices (RUN GP-dir).
| RUN | \(N\) | \(L_{\text{I}}/L_{0}\) | \(L_{0}/\eta\) | \(\ell/L_{0}\) | \(L_{0}/\xi\) |
| --- | --- | --- | --- | --- | --- |
| NS-inv | 6144 | 176 | - | - | - |
| NS-dir | 6144 | 0.65 | 3788 | - | - |
| GP-inv | 8192 | 25.1 | - | 1.18 | 28.57 |
| GP-dir | 8192 | 5.30 | - | 0.042 | 1250 |

Table 1: Typical length scales of the numerical simulations of the NS and GP equations, with \(N\) the linear number of collocation points. \(L_{0}\) corresponds to the forcing scale \(L_{f}\) in NS, and to the characteristic length scale of the initial condition \(L_{\text{IC}}\) in GP. \(L_{\text{I}}\) is the integral length scale, \(\eta\) the Kolmogorov length scale, \(\ell\) the inter-vortex distance and \(\xi\) the healing length.
follow
\[E(k) = C_{E}\epsilon^{2/3}k^{-5/3}\quad\text{for}\quad k_{\text{I}}<k<k_{f} \tag{3}\] \[E(k) = C_{\Omega}\beta^{2/3}k^{-3}\quad\text{for}\quad k_{f}<k<k_{\eta}, \tag{4}\]
where \(\epsilon\) and \(\beta\) are the energy and enstrophy dissipation rates, respectively, and \(C_{E}\) and \(C_{\Omega}\) are dimensionless universal constants. The inertial range for the IEC lays between the integral scale wave number \(k_{\text{I}}=2\pi/L_{\text{I}}\), with \(L_{\text{I}}=2\pi\int k^{-1}E(k)\text{d}k/\int E(k)\text{d}k\), and the forcing wave number \(k_{f}\). The DEC takes place between the forcing and the dissipation wave numbers \(k_{\eta}\), with \(\eta\) the enstrophy dissipation length scale \(\eta=\nu^{1/2}/\beta^{1/6}\). Figure 2(a) shows the energy spectra of all four simulations. The subscript \({}_{0}\) denotes the forcing scale for NS or the initial condition scale for GP. For small wave numbers \(k/k_{0}<1\), we observe the \(k^{-5/3}\) scaling law of the IEC in both, classical and quantum 2D turbulence. For large wave numbers \(k/k_{0}>1\) the energy spectra exhibit a \(k^{-3}\) scaling law corresponding to the DEC, which in quantum turbulence takes place between \(k_{0}<k<k_{\ell}\), with \(\ell=2\pi/k_{\ell}\) the inter-vortex distance. In the GP case, we also observe the development of two other scaling laws. Between the inter-vortex distance \(\ell\) and healing length \(\xi\) (\(k_{\ell}\) and \(k_{\xi}\) wave numbers, respectively), the dynamics is governed by single quantum vortices having an azimuthal velocity field \(v(r)=\kappa/(2\pi r)\), which leads to a \(k^{-1}\) energy spectrum [34; 18]. Note that in 3D QT, this is the range of scales in which Kelvin waves are observed [35; 36; 37]. For \(k>k_{\xi}\), there is a \(k^{-3}\) scaling law due to the core of quantum vortices [34].
We now focus on the statistics of velocity circulation in 2D classical and quantum turbulence. We compute the circulation \(\Gamma_{r}=\oint_{\mathcal{C}_{r}}\mathbf{v}\cdot\text{d}\mathbf{l}\) around square planar loops of linear size \(r\). Integrals are performed in Fourier space to take advantage of the spectral accuracy of the simulations [38]. For the GP equation, we obtain the velocity field from the condensate wave function as \(\mathbf{v}=-\sqrt{2}c\xi\text{Im}(\psi\mathbf{\nabla}\psi^{*})/\rho\), after performing a Fourier interpolation of \(\psi\) to a resolution \(32768^{2}\) to better resolve the vortex density profiles [30]. Figure 2(b) shows the circulation variance \(\langle\Gamma_{r}^{2}\rangle\) as a function of \(r/L_{0}\). For small scales \(r/L_{0}<1\), the circulation variance in CT follows the \(r^{4}\) scaling expected for a smooth field, which extends over the DEC and the diffusive scales. In QT, it follows the \(r^{4}\) scaling for \(\ell<r<L_{0}\), and there is a second \(r^{2}\) scaling given by the probability of finding a quantum vortex inside a loop for \(r<\ell\) [30]. The IEC inertial range takes place at large scales \(L_{0}<r<L_{\text{I}}\), where the circulation variance follows a \(r^{8/3}\) scaling consistent with KLB theory.
To characterize the intermittent behavior of the inverse and direct cascades, we compute the circulation moments \(\langle|\Gamma_{r}|^{p}\rangle\) up to order \(p=16\) in QT (see Fig. 3) and CT (see SM). The good statistical convergence of high-order moments is shown in the SM. For the IEC in the inertial range \(L_{0}<r<L_{\text{I}}\), circulation moments display scaling laws that deviate from the self-similar prediction of \(\lambda_{p}^{\text{IEC}}=4p/3\), obtained by dimensional arguments. This behavior is better observed in the local slopes displayed in the insets, defined as the logarithmic derivatives \(\text{d}\log(|\Gamma_{r}|^{p})/\text{d}\log r\), which become flat in the inertial range. For the largest scales of the system \(r>L_{\text{I}}\), circulation moments follow a scaling \(r^{p/2}\), which is smaller than the scaling of a system of randomly distributed vortices [31]. Such an exponent suggests an anti-correlation between vortices that could be induced by a gas of vortex dipoles. The behavior in this range of scales might also depend on the initial conditions, and is likely to be non-universal. Further studies of this regime are left for a future work. In the DEC, we plot the extended-self-similar (ESS) moments with respect to the circulation variance, and obtain clear deviations from the self-similar scaling \(\lambda_{p}^{\text{DEC}}=2p\).
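For reference, the self-similar predictions quoted here follow from standard dimensional estimates (spelled out by us): in the IEC, \(\Gamma_{r}\sim\delta v_{r}\,r\) with \(\delta v_{r}\sim(\epsilon r)^{1/3}\), while in the DEC the velocity field is smooth and \(\Gamma_{r}\sim\omega r^{2}\), so that
\[\langle|\Gamma_{r}|^{p}\rangle\sim\epsilon^{p/3}r^{4p/3}\ \ \text{(IEC)},\qquad\langle|\Gamma_{r}|^{p}\rangle\sim r^{2p}\ \ \text{(DEC)},\]
which gives \(\lambda_{p}^{\rm IEC}=4p/3\) and \(\lambda_{p}^{\rm DEC}=2p\).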
The circulation scaling exponents of our QT and CT simulations are presented in Fig. 4. For the IEC, both systems follow the same intermittent behavior within error bars, defined as the maximum and minimum value of the local slopes in the inertial range. These results are consistent with recent experimental measurements in quasi-2D turbulence [39], also reported in the figure. Moreover, the dotted line shows the monofractal fit \(\lambda_{p}^{\text{fit}}=1.14p+0.58\) for \(p>3\), with Holder exponent \(h=1.14\) and fractal dimension \(D=1.42\) proposed in [39]. Similar to 3D turbulence [30; 31; 32], CT and QT share the same statistics in 2D for the IEC. On the contrary,
Figure 2: (a) Energy spectra and (b) circulation variance in NS and GP for both the IEC and DEC runs. \(L_{0}\) indicates the forcing scale in NS and the initial injection length scale in GP.
the circulation exponents in the DEC clearly differ. In CT, they follow a self-similar scaling, as observed in experiments [39] and in our NS simulations, while in QT, they strongly deviate from this prediction at high orders. We recall that, enstrophy is not properly defined in quantum fluids due to the singular character of the vorticity field. As a consequence, discrete vortices have an impact at high-order moments leading to the breakdown of the QT and CT equivalence.
An alternative multifractal interpretation of the intermittent behavior of velocity circulation was given in [31] by introducing a modified version of Obukhov-Kolmogorov 1962 (mOK62) theory [40; 41]. Circulation scaling exponents are proposed to follow \(\lambda_{p}=(h+1)p+\tau((h+1)p/4)\), where \(h\) is the Holder exponent of the velocity field, which can be related to vortex polarization [31]. For the IEC, \(h=1/3\) and for the DEC \(h=1\). The correction to the self-similar scaling \(\tau(\cdot)\) are introduced through the anomalous scaling of the coarse-grained energy dissipation moments \(\langle\epsilon_{r}^{p}\rangle\sim r^{\tau(p)}\)[42]. Here, we use the random-\(\beta\) model of fractal dimension \(D\), which reads \(\tau(p)=(2-D)\left[(\beta-1)p+1-\beta^{p}\right]\), with \(0<\beta<1\) a free parameter [43; 44]. For the inverse cascade, a best fit [45] leads to \(D=1.4\), in agreement with the monofractal fit of [39]. For the DEC in QT, the fit yields \(D=0\), suggesting isolated quantum vortices are responsible for the intermittent behavior in this regime. Note that the self-similar behavior of the CT DEC corresponds to \(D=2\).
In this Letter, we reported numerical simulations of classical and quantum 2D turbulence in the direct and inverse cascade settings. Whereas a number of studies have been devoted to studying the inverse energy cascade in quantum turbulence [15; 16; 18; 19; 20], the enstrophy cascade has only been observed using a dissipative version of the point-vortex model [22]. Here we used the Gross-Pitaevskii equation, which naturally includes vortex annihilation and interaction with sound. The observation of the DEC in GP simulations was possible thanks to the use of very high resolutions and well-controlled initial conditions. Indeed, the enstrophy cascade only makes sense in a coarse-grained manner, as the enstrophy is not mathematically defined. Therefore, it requires a large number of vortices arranged to produce a large-scale flow. Moreover, we studied high-order statistics of velocity circulation in 2D classical and quantum turbulence. For the IEC, the intermittent behaviors of CT and QT are equivalent, reminiscent of recent studies in 3D turbulence [30; 31; 32]. However, for the DEC, the equivalence only holds for low-order statistics, while the singular character of quantum vortices strongly affects high orders. Based on a multifractal theory, we obtained that the fractal dimension of the most dissipative structures for this cascade is \(D=0\) (points), which is in strong contrast with the DEC in CT, for which self-similarity implies \(D=2\) (space-filling structures). The characterization of these differences and similarities between 2D quantum and classical turbulence could be useful for the development of future theories of intermittency.
We are grateful to Guido Boffetta for providing Navier-Stokes numerical data on the inverse energy cascade that we used in preliminary studies. This work was supported by the Agence Nationale de la
Figure 3: Circulation moments in two-dimensional quantum turbulence for (a) the inverse energy cascade as a function of \(r/L_{0}\) and (b) the direct enstrophy cascade as a function of the circulation variance. The insets display the local slopes defined as \(\mathrm{d}\log\langle|\Gamma_{r}|^{p}\rangle/\mathrm{d}\log x\), with \(x=r\) or \(x=\langle\Gamma_{r}^{2}\rangle\).
Figure 4: Circulation scaling exponents in the inverse and direct cascade inertial ranges for both classical and quantum turbulence. Black dashed and solid lines correspond to the self-similar scaling for the inverse and direct cascade regimes, respectively. Experimental data and the dotted-line fit were extracted from [39]. Dotted-dashed lines show the fit based on the mOK62 theory of [31] using the random-\(\beta\) model [43].
Recherche through the project GIANTE ANR-18-CE30-0020-01. GK acknowledges financial support from the Simons Foundation Collaboration grant Wave Turbulence (Award No. 651471). This work was granted access to the HPC resources of CINES, IDRIS and TGCC under the allocation 2019-A0072A11003 made by GENCI. Computations were also carried out at the Mesocentre SIGAMM hosted at the Observatoire de la Cote d'Azur.
|
2310.06860 | Asymptote-based scientific animation | This article discusses a universal way to create animation using Asymptote
the language for vector graphics. The Asymptote language itself has a built-in
library for creating animations, but its practical use is complicated by an
extremely brief description in the official documentation and unstable
execution of existing examples. The purpose of this article is to eliminate
this gap. The method we describe is based on creating a PDF file with frames
using Asymptote, with further converting it into a set of PNG images and
merging them into a video using FFmpeg. All stages are described in detail,
which allows the reader to use the described method without being familiar with
the used utilities. | Migran N. Gevorkyan, Anna V. Korolkova, Dmitry S. Kulyabov | 2023-09-30T16:12:30Z | http://arxiv.org/abs/2310.06860v1 | # Asymptotic-based scientific animation
###### Abstract
This article discusses a universal way to create animation using Asymptote the language for vector graphics. The Asymptote language itself has a built-in library for creating animations, but its practical use is complicated by an extremely brief description in the official documentation and unstable execution of existing examples. The purpose of this article is to eliminate this gap. The method we describe is based on creating a PDF file with frames using Asymptote, with further converting it into a set of PNG images and merging them into a video using FFmpeg. All stages are described in detail, which allows the reader to use the described method without being familiar with the used utilities.
vector graphics, TeX, asymptote, scientific graphics
Introduction
In this paper we study the creation of animation using the vector graphics language Asymptote [1; 2; 3; 4].
Asymptote is an interpreted language that translates into the PostScript vector graphics language. It is designed to create vector images for mathematical publications. It is closely integrated with the TeX system and is an integral part of the TeX Live [5] distribution. It has a C-like syntax, supports the creation of functions, custom data structures, and comes with an extensive set of modules for various tasks. Unlike PGF/TikZ [6], Asymptote is more imperative, so it is easier to implement complex program logic on it.
In the official documentation of this language, only a few paragraphs are devoted to the animation creation process and the user is referred to the source code examples located in the animations directory.
Asymptote creates animation in two steps. At the first step, a multi-page PDF file is created containing images that will become frames of the future animation. Then, using the external utility ImageMagick [7] (the convert command), this PDF file is converted into a GIF image. If the ImageMagick utility is not installed on the user's system, all examples will stop at creating a multi-page PDF file with a set of images, and a GIF image with the animation will not be produced.
In this article, we are considering a universal way to create animation in video format using the FFmpeg [8; 9] and Ghostscript [10] utilities. All external programs will be called explicitly from the command line. With the help of Asymptote, only a multi-page PDF file with frames for the future video will be created.
The reader should be familiar with the basic capabilities of the Asymptote language. For an introduction to the basics of the language, we recommend the manual [11]. The information from it will be enough to understand this work.
### Article structure
As an example, we chose the animation of the process of constructing epitrochoids and hypotrochoids. In the first part of the work, we will recall the definitions of these curves, some of their properties and reduce their construction to a composition of two rotations. In the second part of the article, we will describe in detail the implementation of their construction using Asymptote. And in the third part we will focus on the technical side of the issue and describe the process of creating a multi-page PDF file, converting it into PNG images using Ghostscript and converting these images into video using ffmpeg.
## II Task description
Consider the task of animating the process of constructing cycloidal curves, namely hypotrochoids and epitrochoids. We will not use the parametric equation of these curves, but will reduce everything to the composition of two rotations applied to the starting point of the curve. This will better illustrate the capabilities of the Asymptote language.
### Definition of epitrochoids
_Epitrochoid_ is defined as a trajectory plotted by a fixed point \(P\) lying on a radial line of circle with radius \(r\), which rolls along the _outer side_ of the circle with radius \(R\) (fig. 1). The parametric equation of the curve has the following form:
\[\begin{cases}x(t)=(R+r)\cos(\varphi)-d\cos\Big{(}\frac{R+r}{r}\varphi\Big{)}, \\ y(t)=(R+r)\sin(\varphi)-d\sin\Big{(}\frac{R+r}{r}\varphi\Big{)},\end{cases}\]
where \(d\) is the distance from the center of the rolling circle to the point of the curve, \(\varphi\) is the angle of rotation of the rolling circle relative to the axis \(Ox\).
Let use introduce the coefficient \(k=r/R\), then it will possible to change the parameterization and the equation will take the form:
\[\begin{cases}x(t)=R(k+1)\cos(kt)-d\cos((k+1)t),\\ y(t)=R(k+1)\sin(kt)-d\sin((k+1)t),\end{cases}\]
where the parameters \(t\) and \(\varphi\) are related as \(\varphi=kt\).
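As a quick illustration (this snippet is ours and is not part of the article's program), the epitrochoid can be drawn directly from the parametric equation above; the parameter values R=3, r=1, d=1 are chosen only for the example:

```asy
// Standalone sketch: draw an epitrochoid straight from its parametric equation.
import graph;
size(6cm);

real R = 3, r = 1, d = 1;   // illustrative values
real k = r/R;               // phi = k*t, as in the text

pair f(real t) {
  return (R*(k+1)*cos(k*t) - d*cos((k+1)*t),
          R*(k+1)*sin(k*t) - d*sin((k+1)*t));
}

// For R=3, r=1 the curve closes after one revolution of the center, t in [0, 2pi/k].
draw(graph(f, 0, 2pi/k, n=400), blue);
```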
Some special cases of epitrochoids have proper names. So for \(r=R\), _Pascal's snail_ is obtained, for \(d=R+r\)_--rosy curve_ or _rose_, and for \(d=r\)_--epicycloid_ (fig. 2).
### Definition of a hypotrochoid
_Hypotrochoid_ -- the trajectory described by a fixed point \(P\) lying on a radial line of a circle of radius \(r\), which rolls along the _inner_ side of the circle of radius \(R\) (fig. 3). The parametric equation of the curve has the following form:
\[\begin{cases}x(t)=(R-r)\cos(\varphi)+d\cos\Big{(}\dfrac{R-r}{r}\varphi\Big{)},\\ y(t)=(R-r)\sin(\varphi)-d\sin\Big{(}\dfrac{R-r}{r}\varphi\Big{)},\end{cases}\]
where, as in the case of the epitrochoid, \(d\) is the distance from the center of the rolling circle to the point \(P\). In particular, for \(d=r\), _hypocycloid_ is obtained (fig. 4).
It is also possible to parametrize \(\varphi=kt\), where \(k=r/R\), then the equation will take the form:
\[\begin{cases}x(t)=R(1-k)\cos(kt)+d\cos((1-k)t),\\ y(t)=R(1-k)\sin(kt)-d\sin((1-k)t),\end{cases}\]
### Reducing the problem to a composition of turns
The construction of cycloidal curves begins by specifying two circles: a fixed circle of radius \(R\) centered at point \(O_{R}\) and a moving circle of radius \(r\) centered at point \(O_{r}\).
A fixed circle will be conventionally called "large", and a moving one -- "small", since usually \(R>r\). On the radial line of a small circle, we fix the point of the curve \(P_{0}\).
From the definition of hypotrochoids and epitrochoids, it follows that a motion \(T(\varphi)\) is performed over the point \(P_{0}\), consisting of a composition of two turns (fig. 6-7).
1. \(T_{1}(\varphi)\) -- rotation around the point \(O_{R}\) by the angle \(\varphi\), at which the point \(O_{r}\) turns into \(O_{r}^{\prime}\), and the point \(P_{0}\) into the point \(P_{1/2}\).
2. \(T_{2}(\theta(\varphi))\) -- rotation around the point \(O_{r}^{\prime}\) by the angle \(\theta\), at which \(P_{1/2}\) turns into \(P_{1}\).
The rotation angle \(\theta\) is related to the angle \(\varphi\). A small circle must travel a distance equal to the length of the arc \(PP_{1/2}\), which means that the lengths of the arcs \(PP_{1/2}\) and \(P_{1/2}P_{1}\) are equal.
\[|PP_{1/2}|=R\varphi=|P_{1/2}P_{1}|=\theta r\Rightarrow\theta=\frac{R\varphi}{ r}=\frac{\varphi}{k},\;k=r/R.\]
Thus, to construct a curve, it is enough to set the parameters \(R\), \(r\) and \(d\), the initial positions of the circles and the points \(P_{0}\). It is usually assumed that the center of \(O_{R}\) coincides with the origin, and the center of \(O_{r}\) lies on the \(Ox\) axis. Then the coordinates of the center \(O_{r}\) are calculated as:
\[\mathbf{OO}_{r}=\mathbf{OO}_{R}+(R+s\cdot r,0)^{T},\quad s=\begin{cases}+1,&\text{for an epitrochoid},\\ -1,&\text{for a hypotrochoid}.\end{cases} \tag{1}\]
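The listing reproduced below begins at the line marked (20). The opening part of the program (the helper transforms and the parameter definitions referred to as markers (2)-(18) in the discussion that follows) is not shown in this text; a minimal reconstruction consistent with that discussion might look as follows. The function bodies, the starting points, the pen names and the helper assign are our assumptions, not the authors' code:

```asy
// Hypothetical reconstruction of the omitted beginning of the listing (markers (2)-(18)).
transform T1(real phi, pair O_R) { return rotate(phi, O_R); }   // (2) rotation about O_R by phi degrees
transform T2(real phi, real sign, real k, pair O_r) {           // (4) rotation about O_r by theta = phi/k
  return rotate(sign*phi/k, O_r);                               // sign convention assumed
}

real R = 3, r = 1, d = 1, sign = +1;  // (6) sign=+1: epitrochoid, sign=-1: hypotrochoid
int  N = 100;                         // (8) number of computed points (frames)
real turns = 1;                       // (10) number of full turns around O_R
usersetting();                        // (12) allow overriding the values above via -u

real k = r/R;
pair O_R = (0, 0);
pair O_r = O_R + (R + sign*r, 0);     // (14) center of the moving circle
pair P   = O_r - sign*(d, 0);         // (16) starting point of the curve
pair Q_r = O_R + (R, 0);              // (18) initial point of tangency (assumed)

pen BigCircle = gray, littleCircle = blue, curve = red;          // pens used below (assumed)
string assign(real s) { return s > 0 ? "+" : "-"; }              // label helper (assumed)
```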
guide xcycloid; transform T;
animation A; // (20)
A.global = true;
draw(circle(c=O_R, r=R), p=BigCircle); // (22)
dot(O_R, p=BigCircle);
for(real phi: uniform(0, 360turns, N)) {
save(); // (24)
T = T1(phi, O_R) * T2(phi, sign, k, O_r); // (26)
xcycloid = xcycloid -- T*P; // (28)
draw(xcycloid, p=1bp+curve); // (30)
dot(T*P, L=Label("P", align=NW)); // (32)
draw(O_R -- T*O_r, L=Label("R"+assign(sign)+"r")); // (34)
draw(T*Q_r -- T*P, L=Label("d"));
draw(circle(c=T*Q_r, r=r), p=littleCircle); // (36)
dot(T*Q_r, p=littleCircle); // (38)
include "axes.asy"; // (40)
A.add(); // (42)
restore(); // (44)
}
A.movie(); // (46)
currentpicture.erase();
This program creates a multi-page PDF file, each page of which is a future frame of the video. The main work on calculating the points of the curve is performed by the functions T1 (2) and T2 (4). These functions are defined for convenience, so that the code reflects the above formulas as much as possible. All the work is done by the built-in function rotate, which allows you to determine rotation around an arbitrary point (argument z) by an arbitrary angle value in degrees (argument angle).
Next, we define a set of parameter variables (6). The variable sign is \(s\) from formula (1), and the rest correspond to their mathematical notation. The variable N (8) sets the number of calculated points and, as a result, frames in the future video. The variable turns (10) sets the number of complete turns around the center \(O_{R}\). The built-in function usersetting (12) is called so that the value of any variable specified above can be overridden via the command line argument -u.
Then, based on the above-defined parameters, the coordinates of the initial position of the center of the moving circle \(O_{r}\) (14), the point of the curve \(P\) (16) and the point of tangency \(Q\) of the moving circle with the stationary one (18) are calculated.
Next, an object A (20) is created, into which animation frames will be recorded (objects of the type picture or frame). A has several fields; in particular, the global field of the bool type allows you to enable and disable keeping the created images as an array in RAM and writing them to disk as files only after they are all built.
The curve points are calculated in a loop, but before that, the fixed circle (22) and its center are drawn. Then, at the beginning of each iteration of the loop, all the current stationary elements of the image are saved (object picture) (24), all movable elements are built, the resulting image is added to the structure A (42), and the image state is reset (44) to the one it had at (24). The process continues until all frames are drawn and saved to A.
As the loop progresses, the angle \(\varphi\) changes from 0 to 360\(\cdot\)turns degrees. At each step, the rotation transformation \(T(\varphi)\) is calculated (26), applied to the starting point \(P\) and added to the path (guide) xcycloid (28). With each iteration of the loop, new points are added to the path xcycloid and the curve grows.
The following drawing commands follow:
* of the already calculated part of the curve (30);
* of the new point position \(P\) (32);
* of a segment of length \(R+s\cdot r\) (34) connecting the center of \(O_{R}\) to the new position of the center of \(O_{r}\), as well as a segment of length \(d\) connecting the new center of \(O_{r}\) to the point \(P\) of the curve;
* directly the moving circle itself in its new position (36) and its center;
* touch point \(Q\) (38);
* coordinate grid, the settings of which are placed in a separate file (40).
Finally, after the loop finishes, all created frames are written to a PDF file. To do this, Asymptote sequentially creates a separate PDF file for each frame, then adds text processed by LaTeX (in our case LuaTeX) to them. It is this procedure that takes most of the program's running time; the calculations themselves take practically no time in comparison.
We also note the peculiarity of the Asymptote syntax, which allows omitting the * operator when multiplying numeric literal constants and variables, for example 360turns (24).
## IV Creating a video clip
### Launching Asymptote
To run the program discussed above, run the following command.
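The command line itself is not reproduced in this text; based on the options described below and the output file video/xcycloid.pdf, it presumably had a form close to the following (the output-name option -o and the exact -u string are our assumptions):

asy -noV -nobatchView -f pdf -globalwrite -o video/xcycloid -u "R=3; r=1; d=1; N=100" xcycloid.asy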
The source code file xcycloid.asy is executed and, as a result, the file xcycloid.pdf is created. Consider the options used.
* Options -noV and -nobatchView prevent the newly created image from opening automatically. The -noV option disables this function when executed from the command line, and -nobatchView when executing the script (as in our case).
* Option -f pdf indicates that you should immediately create a PDF file, bypassing the postscript file stage.
* option -globalwrite makes it possible to save the file xcycloid.pdf to any directory (in our case video), and not only to the one where the source file xcycloid.asy is located.
* Option -u allows you to interact with the usersetting() function and pass the values of variables inside the program. So we pass the values R=3, r=1, d=1 and N=100. This feature allows you to use a single source code file to build multiple images, flexibly adjusting any parameters. Note that this parameter takes exactly a text string, which is then processed by the usersetting() function, so the passed parameters must be taken in quotation marks.
### Converting to PNG using GhostScript
To convert the resulting multi-page file into a video format, it is necessary to convert its pages into bitmaps. To do this, we suggest using the Ghostscript [10] program. It is available for both Windows and Unix systems (GNU/Linux, macOS). It also comes with the TeX Live [5] distribution, as does Asymptote.
To convert a PDF file, run the command
* gs -sDEVICE=png16m -r600 -o video/xcycloid-%04d.png video/xcycloid.pdf
In the case of using GhostScript from the TeXlive distribution, you should call gs using the rungs script, which is located
* in the directory texlive\(\backslash\)2023\(\backslash\)bin\(\backslash\)win32 in the case of Windows OS,
* in the texlive/2023/bin/x86_64-linux in the case of GNU/Linux.
The 2023 directory corresponds to the version of the TeXlive distribution and may differ.
### Creating a video using FFmpeg
The process of gluing the resulting bitmap images into one video clip is carried out using FFmpeg [9]. This program is a command-line utility and has extensive functionality and, as a result, a huge number of options and settings. Let's give an example of creating a video clip from the PNG images generated in the previous step and give an explanation of the parameters used.
ffmpeg -r 30 -f image2 -start_number 1 -i video/xcycloid-%04d.png -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" video/xcycloid.mp4
* parameter -r sets the frame rate.
* parameter -f sets the format of the input file.
* Since a lot of files are submitted to the input, you should specify the format of their names. The same notation is used as in the case of gs. The -start_number parameter sets the starting number.
* Parameter -c:v allows you to specify the video encoder used. In our case libx264, but many other formats are supported.
* The important parameter -vf sets the filter that is applied to the processed frame. In our case, we round the width and height of the frame in pixels to an even number. After converting to PNG, the width and height of the image may be odd, which is unacceptable for the vast majority of encoders. The specified filter allows you to avoid this error and rescale the frame by ffmpeg.
At the output we will get a video packed in an mp4 container. The H.264 format we have chosen (produced by the libx264 encoder) is widespread and can be played by any modern browser, not to mention video player programs.
## V Conclusion
We have analyzed in detail a way to create vector graphics animation on a plane using the Asymptote language. This aspect of the language is poorly covered in the official manual and, in our opinion, this article fills that gap. Although the result is a video clip containing bitmaps, thanks to the vector source (PDF) the resolution of the video can be increased almost without limit.
It should also be noted that this method of creating animation is universal, since almost any data visualization tool can be used to create a set of image frames. FFmpeg does all the work on creating a video file.
###### Acknowledgements.
This paper has been supported by the RUDN University Strategic Academic Leadership Program.
|
2309.12976 | Inverse-designed broadband low-loss grating coupler on thick
lithium-niobate-on-insulator platform | A grating coupler on 700-nm-thick Z-cut lithium-niobate-on-insulator platform
with high coupling efficiency, large bandwidth, and high fabrication tolerance
is designed and optimized by inverse design method. The optimized grating
coupler is fabricated with a single set of e-beam lithography and etching
process, and it is experimentally characterized to possess peak coupling
efficiency of -3.8 dB at 1574.93 nm, 1-dB bandwidth of 71.7 nm, and 3-dB
bandwidth of over 120 nm. | Yijun Xie, Mingming Nie, Shu-Wei Huang | 2023-09-22T16:18:08Z | http://arxiv.org/abs/2309.12976v1 | # Inverse-designed broadband low-loss grating coupler on thick lithium-niobate-on-insulator platform
###### Abstract
A grating coupler on 700-nm-thick Z-cut lithium-niobate-on-insulator platform with high coupling efficiency, large bandwidth, and high fabrication tolerance is designed and optimized by inverse design method. The optimized grating coupler is fabricated with a single set of e-beam lithography and etching process, and it is experimentally characterized to possess peak coupling efficiency of -3.8 dB at 1574.93 nm, 1-dB bandwidth of 71.7 nm, and 3-dB bandwidth of over 120 nm.
## 1 Introduction
Lithium niobate (LN) is a widely used material in different domains due to its large refractive indices, large transparency window, high second (\(\chi^{(2)}\)) and third (\(\chi^{(3)}\)) order nonlinearities as well as its excellent electro-optic (EO) property. The recently developed lithium niobate on insulator (LNOI) platform has brought new vigor and vitality to integrated photonics, which creates more opportunities for better performance and lower power consumption in various applications such as atomic clock, frequency synthesizer, LIDAR, and OCT-based bio-imaging [1]. These integrated devices are mainly demonstrated on thick LNOI platform with thickness ranging from 600 nm to 800 nm, where modes can be strongly confined and dispersion engineering can be flexibly achieved [2-9].
As for real-world applications, input and output couplers that are compatible with such integrated LNOI devices can help further improve the performance in different ways. Couplers with high coupling efficiency are essential to deliver high on-chip power for efficient nonlinear applications such as second harmonic generation (SHG) and electro-optic (EO) modulation [2-4]. Besides, highly efficient couplers are critical for chip-scale quantum applications demanding low loss. In addition, broadband couplers provide the capability of retaining spectral information and accommodating tunability for broadband applications such as supercontinuum generation, femtosecond pulse generators, quadratic/Kerr comb/soliton generation and tunable lasers [5-9]. Therefore, couplers on such a thick LNOI platform with not only high coupling efficiency but also large bandwidth are fundamentally important to fulfill the potential of those integrated LNOI devices.
Compared to the edge coupler requiring end-facet dicing, polishing and additional customized lensed fiber, grating coupler (GC) can typically provide high coupling efficiency with large placement flexibility. More importantly, GCs can be used along with fiber arrays which is convenient for multi-device testing and operation and thereby offers compelling advantages for high-volume production. In order to achieve high coupling efficiency, deep etching is usually required for GCs to increase the contrast between grating and trench regions, which is intrinsically difficult for LN. Conventionally, GCs with uniform periodicity and filling factor of the grating structures can only provide limited coupling efficiency with narrow bandwidth. Although lots of efforts have been made with manipulating the over cladding and forward design method where linear apodization and chirping is introduced and tuned [10-19], the demonstrated GCs cannot simultaneously provide high coupling efficiency and large bandwidth even with the compatibility with those photonic devices mentioned above sacrificed. The best GC that has been
experimentally demonstrated on thick LNOI platform via direct LN etch so far exhibits only -6.3 dB peak coupling efficiency with 1-dB bandwidth and 3-dB bandwidth of 40 nm and 90 nm, respectively [20].
In this paper, we design and optimize the performance of the GC by inverse design method. In contrast to forward design method where only limited parameter space is explored to provide provisional optimization, inverse design method based on the gradient descent algorithm comprehensively optimizes the figure of merit (FOM), which is usually defined as the overall coupling efficiency in a given bandwidth, by updating the structure iteratively according to the gradient of FOM in much expanded parametric space. Our final design is based on a 700 nm-thick Z-cut LNOI platform with -2.94 dB peak coupling efficiency at 1572.4 nm, 1-dB bandwidth of 69 nm, and 3-dB bandwidth of 113 nm from the fundamental transverse electric (TE) mode input of a single mode fiber at telecom band (SMF-28), which has comprehensive improvement compared to the forward design. The proposed GC is subsequently fabricated on a 700-nm-thick z-cut LNOI chip and experimentally characterized with -3.8 dB peak coupling efficiency at 1574.9 nm, 1-dB bandwidth of 71.7 nm, and 3-dB bandwidth of over 120 nm, which agrees well with the designed performance. We believe the proposed GC that is perfectly compatible with those integrated devices can benefit various emergent applications.
## 2 Design and Optimization
We first perform an initial optimization by sweeping the apodization and filling factors to obtain an initial design, which is then further optimized using the inverse design method by iteratively and independently adjusting the widths of each grating pillar and trench. The length of the grating pillars \(l\) is fixed at 12 \(\upmu\)m in order to match the mode from the SMF-28 along the out-of-plane direction (\(y\)). Due to the large aspect ratio of the grating pillars, they can be treated as infinitely long slab waveguides along the \(y\) direction, so the result from the three-dimensional FDTD (3D FDTD) method (Lumerical, Ansys) is almost identical to that of the two-dimensional FDTD (2D FDTD) method. Therefore, 2D FDTD is used for all simulations throughout the design and optimization section to save time and computational resources.
Some general parameters of the GC are kept constant throughout the design section (unless otherwise stated): the number of periods is 15, and the last period ends with a straight waveguide of 6 \(\upmu\)m width; the total LN thickness \(h\) and the etched depth \(e\) are 700 nm and 450 nm, respectively. The base angle of the gratings along the light propagation direction (\(x\)) is set to 70 degrees based on our estimation from SEM measurements, while the base angle along the \(y\) direction is neglected. The SMF-28 is always located at the center of the GC along the \(y\) direction and around 2.5 \(\upmu\)m above the top surface of the GC along the \(z\) direction.
Figure 1: (a) 2D Schematics of the grating coupler simulation, (b) flow chart of inverse design method.
The schematic of the simulation is shown in figure 1 (a). In order to obtain a good initial design for further optimization, we introduce linear apodization and chirping to the GC, whose grating periods (\(\Lambda_{i}\)) and filling factors (\(F_{i}\)) are given by Eq. (1) and Eq. (2):
\[\Lambda_{i} = \frac{\lambda_{c}}{a+b\times F_{i}}, \tag{1}\] \[F_{i+1} = F_{1}-R\times x_{i}, \tag{2}\]
where \(F_{1}\) is the filling factor of the first grating period, fixed at 0.35; \(R\) is the apodization factor, fixed at 0.01/\(\upmu\)m; \(x_{i}\) is the end position of the current grating; and \(\lambda_{c}\) is the central wavelength, set to 1575 nm. The GC starts at \(x=0\), and each period consists of the etched trench followed by the grating pillar, with the trench width \(t_{i}\) and grating width \(w_{i}\) given by \(\Lambda_{i}-w_{i}\) and \(\Lambda_{i}\times F_{i}\), respectively. The initial optimization is conducted by sweeping the unitless parameters \(a\) and \(b\), together with a sweep of the \(x\) position \(d_{f}\) of the SMF-28 and the angle \(\theta\) of the SMF-28 with respect to the normal direction of the GC (\(z\)) for each pair of \(a\) and \(b\). The selected optimized design has a peak coupling efficiency of -4.48 dB at a central wavelength of 1571 nm, with 1-dB and 3-dB bandwidths of 34 nm and 84 nm, respectively, and the GC parameters are listed in table 1.
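As an illustrative sketch (not part of the original implementation), the recursion in Eq. (1) and Eq. (2) and the resulting trench and pillar widths can be evaluated directly in Python/NumPy, using the parameter values listed in table 1:

```python
# Illustrative reconstruction of the apodized/chirped grating geometry from
# Eqs. (1)-(2).  Parameter values follow table 1; all lengths are in microns.
import numpy as np

lam_c = 1.575          # central wavelength (um)
a, b = 1.75, 0.02      # swept unitless parameters
F1 = 0.35              # filling factor of the first period
R = 0.01               # apodization factor (1/um)
n_periods = 15

F, x_end = F1, 0.0
periods, fills, trenches, pillars = [], [], [], []
for _ in range(n_periods):
    Lam = lam_c / (a + b * F)   # Eq. (1): period of the current grating
    w = Lam * F                 # grating pillar width
    t = Lam - w                 # etched trench width
    periods.append(Lam); fills.append(F); trenches.append(t); pillars.append(w)
    x_end += Lam                # x_i: end of the current grating
    F = F1 - R * x_end          # Eq. (2): filling factor of the next period

print(np.round(periods, 3))     # each period comes out near 0.9 um
print(np.round(fills, 3))       # filling factor decreases slowly along the GC
```

With these values, each period is close to 0.9 \(\upmu\)m and the filling factor decreases gradually along the grating, reflecting the weak apodization factor \(R\).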
For further optimization, the GC described above is taken as the initial design, and full freedom is granted to the width of each grating pillar \(w_{i}\) and trench \(t_{i}\), while the \(x\) position \(d_{f}\) and angle \(\theta\) of the fiber, the grating length \(l\), and the etched depth \(e\) remain fixed at 7 \(\upmu\)m, 6 degrees, 12 \(\upmu\)m, and 450 nm, respectively. The FOM is defined as the negative of the average coupling efficiency in the wavelength range of 1555 nm to 1595 nm, as expressed in Eq. (3):
\[FOM = -\frac{\sum_{i=1}^{N}CE_{i}}{N}, \tag{3}\]
where \(CE_{i}\) stands for the coupling efficiency at the \(i\)-th reference wavelength and \(N\) is the number of reference wavelengths, set to 51 so that the transmission data is recorded every 0.8 nm. Moreover, a minimum-feature-size constraint of 150 nm is imposed to avoid excessively small structures that would pose fabrication challenges, and the stopping criterion is set to FOM = -0.5. To start, the FOM of the initial design is calculated by 2D FDTD simulation. If the stopping criterion is not satisfied, the gradient of the FOM with respect to each grating pillar and trench is evaluated by the adjoint method, which allows fast gradient evaluation with only two FDTD simulations regardless of the number of parameters. All widths are then updated accordingly by the minimizer, and the FOM of the updated structure is calculated iteratively. The iteration ends once the stopping criterion is met or a minimal FOM is found, and the updated widths of each grating pillar and trench are returned along with the optimal FOM, as illustrated in figure 1 (b). The widths of the gratings and trenches of the optimized GC are given in table 2.
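The optimization workflow of figure 1 (b) can also be summarized in code. The sketch below is schematic only: `simulate_spectrum` and `adjoint_gradient` are hypothetical placeholders standing in for the 2D FDTD forward solve and the two-simulation adjoint gradient evaluation, and a simple projected gradient-descent step stands in for the minimizer.

```python
# Schematic of the inverse-design loop in figure 1 (b): minimize the FOM of
# Eq. (3) over the trench/pillar widths using an adjoint-based gradient.
import numpy as np

def fom(widths, wavelengths, simulate_spectrum):
    """Eq. (3): negative mean coupling efficiency over the target band."""
    ce = simulate_spectrum(widths, wavelengths)   # CE at each reference wavelength
    return -np.mean(ce)

def inverse_design(widths0, simulate_spectrum, adjoint_gradient,
                   step=1e-9, min_feature=150e-9, fom_target=-0.5, max_iter=200):
    wavelengths = np.linspace(1555e-9, 1595e-9, 51)   # recorded every 0.8 nm
    widths = np.asarray(widths0, dtype=float)
    current = fom(widths, wavelengths, simulate_spectrum)
    for _ in range(max_iter):
        if current <= fom_target:                     # stopping criterion
            break
        grad = adjoint_gradient(widths, wavelengths)  # d(FOM)/d(width_i), adjoint method
        widths = np.maximum(widths - step * grad,     # gradient-descent update, with the
                            min_feature)              # 150 nm minimum-feature constraint
        current = fom(widths, wavelengths, simulate_spectrum)
    return widths, current
```

In practice, the step size and iteration control would be handled by the minimizer; the snippet only illustrates the loop structure.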
By sweeping the \(x\) position \(d_{f}\) and the angle \(\theta\) of the SMF-28, the optimal fiber position is found to be 6 \(\upmu\)m from the onset of the first trench at an angle of 8 degrees, and a performance of -2.94 dB peak coupling efficiency at 1572.4 nm, 1-dB bandwidth of 69 nm, and 3-dB bandwidth of 113 nm is predicted for the fundamental TE mode input of the SMF-28. As listed in table 2, the proposed GC contains no features limited by the minimum-size constraint and no irregularly shaped structures.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(F_{1}\) & \(a\) & \(b\) & \(R\) (\(\mu m^{-1}\)) & \(d_{f}\) (\(\mu m\)) & \(\theta\) (deg) \\ \hline
0.35 & 1.75 & 0.02 & 0.01 & 7 & 6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameters of the optimized GC with linear apodization and chirping and the SMF-28
In order to investigate the fabrication tolerance of the proposed GC, the variations of the peak coupling efficiency and 1-dB bandwidth are calculated with the etched depth \(e\) ranging from 400 nm to 500 nm while the base angle is fixed at 70 degrees, and with the base angle varying from 60 degrees to 80 degrees while the etched depth \(e\) is fixed at its target value of 450 nm, considering that different etching tools and recipes might render different base angles of the gratings. The results are shown in figure 2.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{\begin{tabular}{c} Trench \\ size (nm) \\ \end{tabular} } & \(t_{1}\) & \(t_{2}\) & \(t_{3}\) & \(t_{4}\) & \(t_{5}\) & \(t_{6}\) & \(t_{7}\) & \(t_{8}\) \\ \cline{2-10} & 285 & 542 & 859 & 726 & 684 & 681 & 685 & 670 \\ \cline{2-10} & \(t_{9}\) & \(t_{10}\) & \(t_{11}\) & \(t_{12}\) & \(t_{13}\) & \(t_{14}\) & \(t_{15}\) & \\ \cline{2-10} & 601 & 678 & 634 & 644 & 606 & 659 & 722 & \\ \hline \multirow{4}{*}{
\begin{tabular}{c} Grating \\ size (nm) \\ \end{tabular} } & \(w_{1}\) & \(w_{2}\) & \(w_{3}\) & \(w_{4}\) & \(w_{5}\) & \(w_{6}\) & \(w_{7}\) & \(w_{8}\) \\ \cline{2-10} & 302 & 157 & 161 & 230 & 272 & 248 & 257 & 295 \\ \cline{2-10} & \(w_{9}\) & \(w_{10}\) & \(w_{11}\) & \(w_{12}\) & \(w_{13}\) & \(w_{14}\) & \(w_{15}\) \\ \cline{2-10} & 253 & 280 & 252 & 210 & 196 & 175 & 6000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameters of The Optimized GC by Inverse Design
Figure 2: The effect of different etched depth \(e\) on (a) peak coupling efficiency, and (b) 1-dB bandwidth; the effect of base angle on (c) peak coupling efficiency, and (d) 1-dB bandwidth; and the effect of random dimension variation on (e) peak coupling efficiency, and (f) 1-dB bandwidth.
Furthermore, random dimension variations from \(\pm\)1% to \(\pm\)5% of the designed values, which are common for the E-beam lithography process, are also introduced to the size of each trench width \(t_{i}\) and grating width \(w_{i}\) separately, while the etched depth \(e\) and the base angle are both set to their target values, in order to evaluate how random dimension variations affect the performance of the optimized GC. The results are obtained from 100 individual simulations for each level of variation. The effects of different etched depths, base angles, and random dimension variations on the peak coupling efficiency and 1-dB bandwidth are shown in figure 2 (a) and (b), figure 2 (c) and (d), and figure 2 (e) and (f), respectively.
As shown in figure 2 (a) and (b), when the base angle is fixed at 70 degrees, the peak coupling efficiency of the GC varies from -3.9 dB to -2.85 dB while the 1-dB bandwidth varies from 66 nm to 82 nm, following the exact opposite trend. The performance of the proposed GC remains consistent when the base angle of the gratings varies from 65 degrees to 75 degrees with the etched depth \(e\) kept at 450 nm, as illustrated in figure 2 (c) and (d). However, the peak coupling efficiency decays significantly, down to -3.8 dB, as the base angle approaches 80 degrees, while the 1-dB bandwidth narrows by around 6 nm as the gratings get flatter.
On the other hand, the performance of the proposed GC is not sensitive to random dimension variations, as depicted in figure 2 (e) and (f). Even in the most extreme case, where the random dimension variation reaches \(\pm\)5% of the size of each trench and grating pillar, the peak coupling efficiency is reduced by only about 0.3 dB and the 1-dB bandwidth by about 8 nm, which indicates the overall high fabrication tolerance of the proposed GC.
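The random-dimension-variation study amounts to a Monte Carlo sweep around the nominal geometry. The sketch below illustrates the procedure; `peak_ce_and_bw` is a hypothetical placeholder for a 2D FDTD run plus spectrum post-processing, and the sampling simply mirrors the description in the text (independent uniform perturbations of each trench and pillar width, 100 samples per level).

```python
# Monte Carlo fabrication-tolerance sketch for figure 2 (e) and (f).
import numpy as np

def tolerance_study(t_nom, w_nom, peak_ce_and_bw,
                    levels=(0.01, 0.02, 0.03, 0.04, 0.05), n_samples=100, seed=0):
    """Perturb every trench width t_i and pillar width w_i independently by a
    uniform random fraction in [-p, +p] and collect peak-CE / 1-dB-BW statistics."""
    rng = np.random.default_rng(seed)
    t_nom, w_nom = np.asarray(t_nom, float), np.asarray(w_nom, float)
    results = {}
    for p in levels:
        peak_ce, bw_1db = [], []
        for _ in range(n_samples):
            t = t_nom * (1.0 + rng.uniform(-p, p, size=t_nom.size))
            w = w_nom * (1.0 + rng.uniform(-p, p, size=w_nom.size))
            ce, bw = peak_ce_and_bw(t, w)   # hypothetical FDTD + post-processing call
            peak_ce.append(ce)
            bw_1db.append(bw)
        results[p] = {"ce_mean": np.mean(peak_ce), "ce_std": np.std(peak_ce),
                      "bw_mean": np.mean(bw_1db), "bw_std": np.std(bw_1db)}
    return results
```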
## 3 Fabrication and Measurement
The proposed GC is fabricated on a 1 cm\({}^{2}\) 700-nm-thick Z-cut LNOI (NanoLN) chip. After thoroughly cleaning the chip, a 1.2-um-thick layer of AR-P 6200 (Allresist) is spin-coated and baked at 180 \({}^{\circ}\)C for 2 minutes. E-beam lithography is performed using a JEOL JBX-6300FS to define the soft mask, and a subsequent argon ion milling is carried out for 12 minutes and 40 seconds to achieve the etched depth of 450 nm. After the residual mask is removed with remover PG, the LNOI chip is soaked in RCA solution (1:1:5 of H\({}_{2}\)O\({}_{2}\), NH\({}_{4}\)OH, and DI water) overnight at room temperature to remove re-deposition and reduce the sidewall roughness. Each device consists of a 100-um-long straight waveguide with a top width of 1.8 um, two 300-um-long tapers whose top width adiabatically expands from 1.8 um to 12 um on both sides, and two of the proposed GCs. In order to compare the performance of the GC before and after the inverse design optimization, two GCs of the initial design (before the inverse design optimization), with the exact same waveguide and tapers, are also fabricated on the same chip. SEM images of the GC optimized by the inverse design method are shown in figure 3.
Figure 3: SEM images of the fabricated GC (a) Top view, (b) Side view.
For the coupling efficiency characterization, two SMF-28 fibers are cleaved and clamped in fiber holders mounted on rotational stages so that the fibers can be set to the desired angle with respect to the normal direction of the GCs. The rotational stages are then fixed onto 3-axis translation stages placed on opposite sides of the fabricated device. A polarization controller connects the input SMF-28 to the tunable laser (Santec TSL-710), while the output SMF-28 is connected to a photodetector whose output signal is collected by a digitizer (CSE1222 Razor Express, Gage) at a 100 kS/s sampling rate for data acquisition. The laser is continuously scanned from 1500 nm to 1640 nm at a 100 nm/s scan rate with 5 dBm nominal optical output power. The acquired data is converted to optical power through electrical-optical signal conversion and calibration, from which the coupling efficiency of the GC is extracted. The simulated and measured coupling efficiencies, after a moving-average process, of the GC with linear apodization and chirping and of the GC optimized by inverse design are illustrated in figures 4 (a) and 4 (b), respectively.
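The post-processing of the swept-wavelength trace can be sketched as follows. This is an illustrative reconstruction rather than the exact routine used here: it assumes the per-coupler efficiency is obtained by halving the smoothed fiber-to-fiber insertion loss of the two nominally identical GCs (neglecting waveguide and taper loss), and it extracts the 1-dB and 3-dB bandwidths directly from the smoothed spectrum.

```python
# Illustrative post-processing of a swept-wavelength transmission trace.
import numpy as np

def per_coupler_efficiency_db(p_out_dbm, p_in_dbm, window=51):
    """Moving-average smoothing of the fiber-to-fiber transmission (dB), halved to
    estimate one GC (two identical GCs and negligible on-chip loss are assumed)."""
    insertion_db = np.asarray(p_out_dbm) - p_in_dbm
    kernel = np.ones(window) / window
    return np.convolve(insertion_db, kernel, mode="same") / 2.0

def bandwidth_nm(wavelength_nm, ce_db, drop_db):
    """Width of the wavelength band where CE stays within drop_db of its peak."""
    mask = ce_db >= ce_db.max() - drop_db
    return wavelength_nm[mask].max() - wavelength_nm[mask].min()

# Toy usage with a synthetic spectrum:
wl = np.linspace(1500.0, 1640.0, 1401)        # nm
ce = -3.8 - 0.001 * (wl - 1575.0) ** 2        # synthetic CE curve (dB)
print(bandwidth_nm(wl, ce, 1.0), bandwidth_nm(wl, ce, 3.0))
```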
As shown in figure 4(a), the GC with linear apodization and chirping performs similarly to the simulation, with a 1-dB bandwidth of 45.7 nm and a 3-dB bandwidth of 85.8 nm, except for the peak coupling efficiency. The measured peak coupling efficiency is -6.33 dB (-7.1\(\pm\)0.77 dB from the moving-average process) at 1569.3 nm, almost 2 dB below the simulation result; we attribute this discrepancy mainly to fabrication imperfections caused by the proximity of this device to the edge of the chip. Compared to the results with linear apodization and chirping, the performance of the GC has
Figure 4: Simulated and measured coupling efficiency of (a) the GC with linear apodization and chirping, (b) the GC optimized by inverse design method.
comprehensively improved after optimization, with the peak coupling efficiency rising from -6.33 dB to -3.8 dB (-4.5\(\pm\)0.7 dB from the moving-average process), the 1-dB bandwidth expanding from 45.7 nm to 71.7 nm, and the 3-dB bandwidth from 85.8 nm to over 120 nm, as plotted in figure 4(b). Moreover, the measured results of the inverse-designed GC match the simulation well, with even slightly larger bandwidths but a lower peak coupling efficiency, which can be explained by the random dimension variations of the grating structures discussed at the end of the design and optimization section.
## 4 Conclusion and outlook
In summary, a GC on 700-nm-thick Z-cut LNOI with high coupling efficiency and large bandwidth is proposed and optimized by the inverse design method, and the optimized GC is fabricated with only a single E-beam lithography and etching step. Performance comparable to the simulation prediction is experimentally demonstrated, with a peak coupling efficiency of -3.8 dB at a central wavelength of 1574.9 nm, a 1-dB bandwidth of 71.7 nm, and a 3-dB bandwidth of over 120 nm. Comprehensive improvement is observed in both peak coupling efficiency and bandwidth compared to the initial GC design with linear apodization and chirping.
This is the first GC design optimized by the inverse design method on the LNOI platform, as well as the first experimental demonstration of a GC on the thick LNOI platform that simultaneously exhibits high coupling efficiency and large 1-dB and 3-dB bandwidths. Moreover, the proposed GC has high fabrication tolerance without any additional over-cladding patterning or the need for a bottom reflector. In principle, GCs with improved performance on LNOI platforms with different crystalline orientations, total thicknesses, and etched depths, and even on other material platforms, can be achieved following the same optimization strategy. One can also target maximum bandwidth or manipulate the central wavelength by customizing the FOM in future research.
**Funding.** NSF OMA 2016244; ONR N00014-22-1-2224.
**Acknowledgements.** Y. Xie thanks L. Rukh, C. Tang, and V. Babicheva for fruitful discussions and A. R. James, J. Nogan, and D. Webb for process advice and N. Prakash, J. Musgrave, J. Bartos for the help. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy (DOE) Office of Science by Los Alamos National Laboratory (Contract 89233218CNA000001) and Sandia National Laboratories (Contract DE-NA-0003525).
**Disclosures.** Authors declare no conflicts of interest.
**Data availability.** Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2308.16811 | A predictive model for fluid-saturated, brittle granular materials
during high-velocity impact events | Granular materials -- aggregates of many discrete, disconnected solid
particles -- are ubiquitous in natural and industrial settings. Predictive
models for their behavior have wide ranging applications, e.g. in defense,
mining, construction, pharmaceuticals, and the exploration of planetary
surfaces. In many of these applications, granular materials mix and interact
with liquids and gases, changing their effective behavior in non-intuitive
ways. Although such materials have been studied for more than a century, a
unified description of their behaviors remains elusive.
In this work, we develop a model for granular materials and mixtures that is
usable under particularly challenging conditions: high-velocity impact events.
This model combines descriptions for the many deformation mechanisms that are
activated during impact -- particle fracture and breakage; pore collapse and
dilation; shock loading; and pore fluid coupling -- within a thermo-mechanical
framework based on poromechanics and mixture theory. This approach allows for
simultaneous modeling of the granular material and the pore fluid, and includes
both their independent motions and their complex interactions. A general form
of the model is presented alongside its specific application to two types of
sands that have been studied in the literature. The model predictions are shown
to closely match experimental observation of these materials through several
GPa stresses, and simulations are shown to capture the different dynamic
responses of dry and fully-saturated sand to projectile impacts at 1.3 km/s. | Aaron S. Baumgarten, Justin Moreno, Brett Kuwik, Sohanjit Ghosh, Ryan Hurley, K. T. Ramesh | 2023-08-31T15:40:17Z | http://arxiv.org/abs/2308.16811v1 | A predictive model for fluid-saturated, brittle granular materials during high-velocity impact events
###### Abstract
Granular materials -- aggregates of many discrete, disconnected solid particles -- are ubiquitous in natural and industrial settings. Predictive models for their behavior have wide ranging applications, e.g. in defense, mining, construction, pharmaceuticals, and the exploration of planetary surfaces. In many of these applications, granular materials mix and interact with liquids and gases, changing their effective behavior in non-intuitive ways. Although such materials have been studied for more than a century, a unified description of their behaviors remains elusive.
In this work, we develop a model for granular materials and mixtures that is usable under particularly challenging conditions: high-velocity impact events. This model combines descriptions for the many deformation mechanisms that are activated during impact -- particle fracture and breakage; pore collapse and dilation; shock loading; and pore fluid coupling -- within a thermo-mechanical framework based on poromechanics and mixture theory. This approach allows for simultaneous modeling of the granular material and the pore fluid, and includes both their independent motions and their complex interactions. A general form of the model is presented alongside its specific application to two types of sands that have been studied in the literature. The model predictions are shown to closely match experimental observation of these materials through several GPa stresses, and simulations are shown to capture the different dynamic responses of dry and fully-saturated sand to projectile impacts at 1.3 km/s.
keywords: constitutive behavior, granular material, porous material, mixture model, impact testing
## 1 Introduction
The severity of landslides, subterranean explosions, earthquakes, and high-velocity impact events is significantly influenced by the strength and dynamic response of granular materials -- e.g., sands, soils, snow, and lunar dust. Despite their relatively simple composition, this class of materials exhibits surprisingly complex behaviors, especially during
dynamic loading events. These behaviors include phenomena such as pore collapse (Mandl et al., 1977), dilation or bulking (Rudnicki and Rice, 1975), liquefaction (Lade, 1994; Sawicki and Mierczynski, 2006), and material ejection (Melosh, 1989; Housen and Holsapple, 2011); each of which directly influences the motion and deformation of these materials as well as the loads they place on surrounding structures.
A key component of their behavior, particularly during dynamic loading, is the interaction of the bulk granular material with the interstitial (or pore) fluids that fill the space between individual particles (Lade, 1994; Jackson, 2000; Coussy, 2004; Boyer et al., 2011). Although granular materials are frequently considered in isolation, most granular sediments are porous, with a significant fraction of their apparent volume occupied by pore liquids and gases. Under quasi-steady loading conditions of laboratory-scale specimens, the presence of these interstitial fluids and their effects on the material response can be easily accounted for. However, under dynamic loading conditions, the motion and deformation of these pore fluids can have large, non-intuitive effects on the behavior of the bulk material (e.g., see Pailha and Pouliquen, 2009; Baumgarten and Kamrin, 2019a).
For more than a century, researchers have been developing models for the behavior of granular materials and fluid-sediment mixtures, producing many analytical and mathematical descriptions for simple loading conditions. For example, there are models for the pressure dependent yield strength of these materials during shear and tri-axial loading (e.g., Drucker and Prager, 1952; Lade and Duncan, 1975; Jop et al., 2006); for the tendency of granular materials to dilate toward a critical (steady) state while shearing (e.g., Roux and Radjai, 1998; Pailha and Pouliquen, 2009); for the drag force acting against pore fluids as they move through packed granular beds (e.g., Darcy, 1856; Carman, 1937); and for the evolution of material stresses within settling sediments (e.g., Biot, 1941; Terzaghi, 1943; Terzaghi et al., 1996). Despite the predictive power of these models, they are generally limited to specific geometries and regimes of material motion: frequently providing limited information to engineers about the complex dynamics of these materials.
In recent years, significant research has focused on the development of models for more general, dynamic loading conditions. Models of this type are constructed using the framework of continuum mechanics: where the bulk material is described using smoothly varying, aggregated fields (e.g., stress, density, and velocity) rather than modeling the motion and deformation of each individual grain and pore. Such models combine constitutive descriptions of the stress-strain response of the granular media with numerical simulation frameworks capable of solving the continuum equations of motion -- namely, conservation of mass, momentum, and energy. Examples of such models include low-pressure granular flow models (e.g., Dunatunga and Kamrin, 2015, 2022); high-pressure granular breakage and compaction models (e.g., Rubin and Einav, 2011; Cil et al., 2020; Herbold et al., 2020), as well as poromechanics and mixture theory models (e.g., Bandara and Soga, 2015; Gao et al., 2018; Baumgarten and Kamrin, 2019b). In the current literature, however, there are no models for the dynamic behavior of brittle granular materials -- including their complex interaction with pore fluids -- through their transition from dense, compacted, high-pressures states all the way to gaseous, stress-free states.
In this work, we are particularly interested in models that can be applied under conditions
relevant to high-velocity impact events (\(<\) 3 km/s; see Signetti and Heine, 2022). This condition presents a unique challenge in the field of granular material modeling due to the number of grain-scale mechanisms that become important -- shown in Figure 1. At the point of impact, pressures and strain-rates may exceed 10\({}^{9}\) Pa and 10\({}^{6}\) s\({}^{-1}\), respectively, leading to rapid compaction; particle fracture and fragmentation; and pore fluid compression. Moving outward from the impact site, within a radius of \(\sim\)10-20 times the size of the impactor, the process of crater formation leads to extreme material deformations at much lower stresses and strain-rates: 10\({}^{1}\)-10\({}^{7}\) Pa and 10\({}^{0}\)-10\({}^{3}\) s\({}^{-1}\), respectively. These deformations are accommodated by frictional granular shearing, dilation, pore fluid flow, and eventually material ejection. The transition between these different regimes is determined, in part, by the microscopic elastic deformations of the individual particles, which allow the transmission of stresses through the particle contact network (Radjai et al., 1999) and drive the formation of the stress concentrations that lead to particle fracture (Hurley et al., 2018).
The present work develops a predictive model for granular materials that can be used to study high-velocity impact events. This model combines elements from a range of disciplines in mechanics, including classical theories in soil mechanics (e.g., Carman, 1937), poromechanics
Figure 1: Illustration of dominant deformation mechanisms during high-velocity impact into fluid-saturated granular material. In the impact zone, stresses and strain-rates may exceed 10\({}^{9}\) Pa and 10\({}^{6}\) s\({}^{-1}\), respectively, leading to pulverization and compaction of the granular particles. (Although impact heating and thermal effects are modeled in this work, melting and solid phase changes are not.) In the cratering region, the magnitude of stresses and strain-rates decreases significantly, leading to granular flow; shear dilation; material ejection (expansion and re-consolidation); and importantly, coupling with interstitial (pore) fluids.
(e.g., Terzaghi, 1943), and granular flow (e.g., Drucker and Prager, 1952), with more contemporary theories for mixtures (e.g., Bedford and Drumheller, 1983) and shock physics (e.g., Drumheller, 1998). Further, we discuss the grain-scale deformation mechanisms that are activated during these impact events, and we build mathematical descriptions of these mechanisms into this multi-component continuum model.
## 2 Governing Equations
The model developed in this work is constructed within the framework of continuum mechanics, which describes the motion and deformation of materials using continuous fields rather than following individual particles and pores. Within this broad framework, we are particularly interested in the governing equations derived from the theory of mixtures and poromechanics. In this section, we present the kinematic rules and balance laws associated with these theories, and apply these governing equations to fluid-saturated sediments.
Consider a representative volume of granular media -- e.g. Figure 2a,b. This volume consists of two primary components: (i) a volume occupied by the solid material that composes the individual particles and (ii) a volume occupied by the interstitial fluid that fills the pore spaces between particles. In mixture theories, these two components are considered immiscible and are homogenized into separate, overlapping continua (e.g., see Baumgarten and Kamrin, 2019, and Figure 2c-f), each with their own density, velocity, stress, and internal energy fields.
The total volume \(V\) within a representative volume element or RVE is the sum of the volumes occupied by either solid material \(V_{s}\) or interstitial fluid \(V_{f}\): \(V=V_{s}+V_{f}\). The fraction of the total volume occupied by each material represents the respective volume fraction: \(\phi_{s}=V_{s}/V\), the solid volume fraction; and \(\phi_{f}=V_{f}/V\), the fluid volume fraction or _porosity_ (so that \(\phi_{s}+\phi_{f}=1\)). These volume fractions fundamentally connect the _effective
Figure 2: Illustration of fluid-saturated, granular material and associated representative volume elements (RVEs). The representative volume \(\Omega\) shown in (b) is separable into the solid component \(\Omega_{s}\) shown in (c) and the fluid component \(\Omega_{f}\) shown in (e). These two components of the RVE, their external boundaries \(\partial\Omega_{s}\) and \(\partial\Omega_{f}\), and their interior surface \(\partial\Omega^{*}\) are used to construct _effective_ material fields, which are continuous in the volume \(\Omega\) and shown in (d) and (f).
material fields of the continua shown in Figures 2d,f with the _true_ material fields shown in Figures 2c,e of the constituents we are interested in modeling. For example, we relate the effective densities (\(\bar{\rho}_{s}\) and \(\bar{\rho}_{f}\)) of the continua in Figures 2d,f with the true densities (\(\rho_{s}\) and \(\rho_{f}\)) of the constituents in Figures 2c,e:
\[\bar{\rho}_{s}=\phi_{s}\rho_{s},\qquad\bar{\rho}_{f}=\phi_{f}\rho_{f}. \tag{1}\]
Thus the solid particles shown in Figure 2c are homogenized into the continuous material called the _solid_ (or _granular_) _phase_ shown in Figure 2d, with an effective density, \(\bar{\rho}_{s}\); velocity, \(\mathbf{v}_{s}\); and specific (per unit mass) internal energy, \(\varepsilon_{s}\). Similarly, the interstitial fluid shown in Figure 2e is homogenized into the continuous material shown in Figure 2f -- called the _fluid phase_ -- which has an effective density, \(\bar{\rho}_{f}\); velocity, \(\mathbf{v}_{f}\); and specific internal energy, \(\varepsilon_{f}\).
Following the fluid-sediment mixture theories of Bedford and Drumheller (1983) and Jackson (2000), we express the conservation of mass, momentum, and energy at an arbitrary _spatial_ point \(\mathbf{x}\) within the two overlapping continua with the following governing equations. These equations are expressed in the _material_ reference frame for each continua and make use of their respective material time derivatives:
\[d^{s}/dt \equiv\partial/\partial t+\mathbf{v}_{s}\cdot\nabla, \tag{2a}\] \[d^{f}/dt \equiv\partial/\partial t+\mathbf{v}_{f}\cdot\nabla, \tag{2b}\]
with \(\nabla\) the _spatial_ gradient operator (i.e., \(\nabla\equiv\partial/\partial\mathbf{x}\)). Conservation of mass is therefore expressed as follows and defines the time-rate of change of the effective mass densities:
\[\frac{d^{s}\bar{\rho}_{s}}{dt} =-\bar{\rho}_{s}\ \text{div}(\mathbf{v}_{s}), \tag{3a}\] \[\frac{d^{f}\bar{\rho}_{f}}{dt} =-\bar{\rho}_{f}\ \text{div}(\mathbf{v}_{f}), \tag{3b}\]
with \(\text{div}()\) the _spatial_ divergence operator (i.e., \(\text{div}(\mathbf{v}_{s})\equiv\nabla\cdot\mathbf{v}_{s}\)).
Conservation of momentum defines the time-rate of change of the effective velocities:
\[\bar{\rho}_{s}\frac{d^{s}\mathbf{v}_{s}}{dt} =\text{div}(\mathbf{\sigma}_{s})+\bar{\rho}_{s}\mathbf{g}-\mathbf{f}_{d}-\phi _{s}\nabla p_{f}, \tag{4a}\] \[\bar{\rho}_{f}\frac{d^{f}\mathbf{v}_{f}}{dt} =\text{div}(\mathbf{\tau}_{f})+\bar{\rho}_{f}\mathbf{g}+\mathbf{f}_{d}-\phi_{ f}\nabla p_{f}. \tag{4b}\]
In these equations, \(\mathbf{\sigma}_{s}\) represents the _effective granular stress_ tensor, \(\mathbf{\tau}_{f}\) denotes the _effective fluid shear stress_ tensor, and \(p_{f}\) is the _fluid pore pressure_. These effective stresses are generally analogous to the Cauchy stress in classical continuum mechanics; however, unlike the Cauchy stress, the pore fluid pressure contributes to the motion of _both_ materials, not only the fluid continuum. Interactions between the two constituent materials along their shared interfaces (\(\partial\Omega^{*}\) in Figure 2c,e) give rise to internal interaction forces -- namely, the buoyant force (e.g., see Drumheller, 2000) and the _inter-phase drag force_, \(\mathbf{f}_{d}\). In addition to these internal forces, the motion of each material is affected by \(\mathbf{g}\), the gravitational acceleration vector.
Conservation of energy defines the time-rate of change of the internal energies:
\[\bar{\rho}_{s}\frac{d^{s}\varepsilon_{s}}{dt} =\mathbf{\sigma}_{s}:\nabla\mathbf{v}_{s}+\frac{\phi_{s}p_{f}}{\rho_{s}} \bigg{(}\frac{d^{s}\rho_{s}}{dt}\bigg{)}-\text{div}(\mathbf{q}_{s})+q_{s}-q_{i}, \tag{5a}\] \[\bar{\rho}_{f}\frac{d^{f}\varepsilon_{f}}{dt} =\mathbf{\tau}_{f}:\nabla\mathbf{v}_{f}+\frac{\phi_{f}p_{f}}{\rho_{f}} \bigg{(}\frac{d^{f}\rho_{f}}{dt}\bigg{)}-\text{div}(\mathbf{q}_{f})+q_{f}+q_{i}+\mathbf{f}_{d}\cdot(\mathbf{v}_{s}-\mathbf{v}_{f}). \tag{5b}\]
Finally, we may propose expressions for the imbalance of entropy, which defines the time-rate of change of the specific internal entropies, \(s_{s}\) and \(s_{f}\), within each continua:
\[\bar{\rho}_{s}\frac{d^{s}s_{s}}{dt} \geq-\text{div}(\mathbf{q}_{s}/T_{s})+q_{s}/T_{s}-q_{i}/T_{f}, \tag{6a}\] \[\bar{\rho}_{f}\frac{d^{f}s_{f}}{dt} \geq-\text{div}(\mathbf{q}_{f}/T_{f})+q_{f}/T_{f}+q_{i}/T_{s}. \tag{6b}\]
Equations (5) and (6) include the true internal temperatures \(T_{s}\) and \(T_{f}\) of the constituent materials; the heat flow vectors, \(\mathbf{q}_{s}\) and \(\mathbf{q}_{f}\); scalar rates of internal heat generation, \(q_{s}\) and \(q_{f}\); and the _inter-phase heat flow_ per unit volume, \(q_{i}\).
Together, (3)-(6) define a system of governing equations for modeling the motion and deformation of fluid-saturated granular materials. Solving this system of equations is only possible when they are coupled with specific constitutive models for the effective granular stress \(\mathbf{\sigma}_{s}\), the effective fluid shear stress \(\mathbf{\tau}_{f}\), the fluid pore pressure \(p_{f}\), the inter-phase drag force \(\mathbf{f}_{d}\), and the rates of heat flow and heat generation -- \(\mathbf{q}_{s}\), \(\mathbf{q}_{f}\), \(q_{s}\), \(q_{f}\), and \(q_{i}\). Further discussion and derivation of these equations can be found in C, as well as in Chapter 2 of Baumgarten (2021).
## 3 Granular Constitutive Model
The constitutive model proposed in this section is formulated using the theory of breakage mechanics (Einav, 2007), and incorporates the effects of non-linear elasticity (Nguyen and Einav, 2009); dilation and compaction (Rubin and Einav, 2011; Cil et al., 2020); critical state behavior (Pailha and Pouliquen, 2009; Tengattini et al., 2016); shock compression (Herbold et al., 2020); and importantly, coupling with pore fluids (Baumgarten and Kamrin, 2019).
We use a thermodynamic formulation founded on the _specific Helmholtz free energies_, \(\psi_{s}=\varepsilon_{s}-T_{s}s_{s}\) and \(\psi_{f}=\varepsilon_{f}-T_{f}s_{f}\), which describe the amount of available energy -- or _strain energy_ -- stored in each material. Combining constitutive equations for \(\psi_{s}\) and \(\psi_{f}\) with the first and second laws of thermodynamics in (5) and (6), we formulate models for \(\mathbf{\sigma}_{s}\), \(\mathbf{\tau}_{f}\), \(p_{f}\), and \(\mathbf{f}_{d}\) that are thermodynamically sound across the range of loading conditions experienced during high-velocity impact events.
### Kinematics
Consider the motion of the overlapping continua shown in Figure 2d,f. In poromechanics and mixture theory, each continuum material moves through space following its own independent velocity field -- here, \(\mathbf{v}_{s}\) and \(\mathbf{v}_{f}\) (in general, \(\mathbf{v}_{s}\neq\mathbf{v}_{f}\)). We define the _effective
mesoscopic distortion rates_, \(\mathbf{L}_{s}\) and \(\mathbf{L}_{f}\), of each continuum material in terms of the gradient of their respective velocity fields:
\[\mathbf{L}_{s} \equiv\nabla\mathbf{v}_{s},\quad\text{and} \tag{7a}\] \[\mathbf{L}_{f} \equiv\nabla\mathbf{v}_{f}, \tag{7b}\]
with \(\nabla\mathbf{v}_{s}\) and \(\nabla\mathbf{v}_{f}\) the second-order, velocity gradient tensors. The corresponding _effective mesoscopic strain-rates_\(\mathbf{D}_{s}\) and \(\mathbf{D}_{f}\) are the symmetric parts of \(\mathbf{L}_{s}\) and \(\mathbf{L}_{f}\).
This mesoscopic picture of material deformation is identical to the standard picture of deformation in continuum mechanics. However, in poromechanics, soil mechanics, and breakage mechanics, we are also interested in _microscopic_ strains and distortions, which must be inferred from the _mesoscopic_ states of both continua.
### Granular Micromechanics
This microscopic-to-mesoscopic connection is developed by considering the behavior of the _material neighborhoods_ that surround individual particles. Figure 3 illustrates one such material neighborhood as it is taken from an (assumed) initially stress-free reference state in the body \(\mathcal{B}_{0}^{*}\) to its current deformed state in the body \(\mathcal{B}_{t}^{*}\). We characterize the deformation of this material neighborhood using the _deformation gradient_ tensor \(\mathbf{F}\), which admits the local multiplicative decomposition \(\mathbf{F}=\mathbf{F}^{e}\mathbf{F}^{p}\)(Lee, 1968), and obeys the following evolution rule: \(d^{s}\mathbf{F}/dt=\mathbf{L}_{s}\mathbf{F}\).
Figure 3: Illustration of a granular material in (a) the stress-free reference configuration \(\mathcal{B}_{0}^{*}\) and (b) the current, deformed configuration \(\mathcal{B}_{t}^{*}\). Highlighted region shows the material point \(\mathbf{X}^{*}\) in \(\mathcal{B}_{0}^{*}\), which maps to the spatial point \(\mathbf{x}\) in \(\mathcal{B}_{t}^{*}\). The illustrations in (c)–(e) show the proposed elasto-plastic mapping of vectors in the reference neighborhood around \(\mathbf{X}^{*}\) in (c) to an intermediate, stress-free neighborhood in (d) to the current, deformed neighborhood in (e). The plot in (f) shows a way to characterize the evolving shapes of particles shown in (c)–(d) using breakage mechanics theory: the _cumulative particle size distribution_.
The _elastic_ deformation gradient tensor \(\mathbf{F}^{e}\) describes the part of the total mesoscopic deformation that stores strain energy in the material. In principle, this deformation is completely reversible and associated with microscopic, elastic strains concentrated at particle-particle contact points (Figure 3d-e). The _plastic_ deformation gradient tensor \(\mathbf{F}^{p}\), on the other hand, describes the mesoscopic deformations that are _inelastic_ -- i.e., that do not store strain energy in the material. These deformations can be significant and are associated with inelastic mechanisms such as granular rearrangement, pore dilation and compaction, and particle fragmentation (Figure 3c-d).
Constitutive models for granular materials are frequently expressed in terms of strains and volume ratios (e.g., Nguyen and Einav, 2009; Rubin and Einav, 2011; Tengattini et al., 2016; Cil et al., 2020) which can be easily defined in terms of the deformation gradient tensor. For example, the _elastic volume ratio_\(J^{e}\), and a mesoscopic elastic strain tensor \(\mathbf{E}^{e}\) and its primary strain invariants -- the _elastic volumetric strain_\(\epsilon_{v}^{e}\), and the _elastic shear strain_\(\epsilon_{s}^{e}\) -- may be obtained from the elastic deformation gradient \(\mathbf{F}^{e}\):
\[\mathbf{E}^{e}\equiv\tfrac{1}{2}(\mathbf{F}^{e\top}\mathbf{F}^{e}-\mathbf{1}),\quad\epsilon_{ v}^{e}\equiv\operatorname{tr}(\mathbf{E}^{e}),\quad\epsilon_{s}^{e}\equiv\sqrt{ \tfrac{2}{3}\mathbf{E}_{0}^{e}:\mathbf{E}_{0}^{e}},\quad\text{and}\quad J^{e}\equiv \det(\mathbf{F}^{e}). \tag{8}\]
Here, the trace of a tensor \(\mathbf{A}\) is denoted by \(\operatorname{tr}(\mathbf{A})\), its transpose by \(\mathbf{A}^{\top}\), its determinant by \(\det(\mathbf{A})\), and its deviatoric part by \(\mathbf{A}_{0}\). Assuming an additive decomposition of the mesoscopic strain-rates in (7), \(\mathbf{F}^{e}\) and \(\mathbf{E}^{e}\) evolve according to the following rule: \(d^{s}\mathbf{F}^{e}/dt=\mathbf{L}^{e}\mathbf{F}^{e}\) with \(\mathbf{L}^{e}=\mathbf{L}_{s}-\mathbf{\tilde{D}}^{p}\), where \(\mathbf{\tilde{D}}^{p}\) denotes the _inelastic deformation rate_ tensor, which is defined later in this section.
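As a small worked example of Eq. (8), the snippet below (illustrative only, not part of the original model implementation) computes the elastic strain invariants and the elastic volume ratio from a given elastic deformation gradient:

```python
# Worked example of Eq. (8): elastic strain measures from F^e.
import numpy as np

def elastic_invariants(Fe):
    Ee = 0.5 * (Fe.T @ Fe - np.eye(3))              # Green-Lagrange elastic strain E^e
    eps_v = np.trace(Ee)                            # elastic volumetric strain
    Ee0 = Ee - (eps_v / 3.0) * np.eye(3)            # deviatoric part of E^e
    eps_s = np.sqrt(2.0 / 3.0 * np.sum(Ee0 * Ee0))  # elastic shear strain
    Je = np.linalg.det(Fe)                          # elastic volume ratio
    return eps_v, eps_s, Je

# Example: 1% uniaxial elastic compression along the x-axis.
print(elastic_invariants(np.diag([0.99, 1.0, 1.0])))
```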
The final component of this micromechanical picture of granular materials is highlighted in Figure 3f: the _cumulative particle size distribution_. In breakage mechanics theory (Einav, 2007), this is characterized by the auxiliary variable \(B\), the _relative breakage_. Originally introduced in Hardin (1985), \(B\) measures the pulverization of granular particles by comparing the current distribution of particle sizes with the hypothetical initial and ultimate distributions shown in Figure 3f (see C). In addition to the mesoscopic picture of inelastic deformations provided by \(\mathbf{F}^{p}\) and \(\mathbf{\tilde{D}}^{p}\), the relative breakage \(B\) allows us to incorporate information about the evolving sizes of the individual particles into our model.
All together, this micromechanical picture of granular materials allows us to predict the amount of elastic deformation experienced within the individual grains -- through \(B\), \(\epsilon_{v}^{e}\), \(\epsilon_{s}^{e}\), and \(J^{e}\) -- by measuring the rate at which the material is deforming (\(\mathbf{L}_{s}\) and \(\mathbf{D}_{s}\)), determining how much of that deformation is inelastic (\(\mathbf{\tilde{D}}^{p}\)), and connecting this inelastic deformation with changes in the particle size distribution (\(d^{s}B/dt\)). A thorough discussion of these strains, strain-rates, and auxiliary variables can be found in C.
### Helmholtz Free Energy
Following the thermomechanical models developed in Herbold et al. (2020) and Baumgarten et al. (2021), we propose expressions for the mass-specific Helmholtz free energies, \(\psi_{s}\) and \(\psi_{f}\). These are assumed to be functions of the mesoscopic elastic deformations as described by \(\epsilon_{v}^{e}\) and \(\epsilon_{s}^{e}\); the distribution of particle sizes, as captured by \(B\); the true densities
of the constituent materials \(\rho_{s}\) and \(\rho_{f}\); and their absolute temperatures, \(T_{s}\) and \(T_{f}\). We assume that these Helmholtz free energies can be written as
\[\psi_{s} =\hat{\psi}_{c}(\epsilon_{v}^{e},\epsilon_{s}^{e},B)+\hat{\psi}_{g }(\rho_{s},T_{s}), \tag{9a}\] \[\psi_{f} =\hat{\psi}_{f}(\rho_{f},T_{f}). \tag{9b}\]
Here \(\psi_{c}\) represents the component of \(\psi_{s}\) that is associated with strain energy stored at particle-particle contact points (Figure 4a; e.g., see Hiramatsu and Oka, 1966), while \(\psi_{g}\) and \(\psi_{f}\) represent the mechanically distinct _densification_ strain energies associated with volume and temperature changes within constituent grains and interstitial fluid (Figures 4b,c).
Together with the assumed expressions for \(\psi_{s}\) and \(\psi_{f}\) in (9), Equations (5) and (6) provide a set of thermodynamic constraints for the constitutive model presented here. First, the free energy functions \(\psi_{s}\) and \(\psi_{f}\) must satisfy,
\[\bar{\rho}_{s}\frac{d^{s}\psi_{s}}{dt} =\mathbf{\sigma}_{s}:\nabla\mathbf{v}_{s}+\frac{\phi_{s}p_{f}}{\rho_{s}} \bigg{(}\frac{d^{s}\rho_{s}}{dt}\bigg{)}-\bar{\rho}_{s}s_{s}\bigg{(}\frac{d^{ s}T_{s}}{dt}\bigg{)}-D_{s}, \tag{10a}\] \[\bar{\rho}_{f}\frac{d^{f}\psi_{f}}{dt} =\mathbf{\tau}_{f}:\nabla\mathbf{v}_{f}+\frac{\phi_{f}p_{f}}{\rho_{f}} \bigg{(}\frac{d^{f}\rho_{f}}{dt}\bigg{)}-\bar{\rho}_{f}s_{f}\bigg{(}\frac{d^{ f}T_{f}}{dt}\bigg{)}-D_{f}, \tag{10b}\]
with \(D_{s}\geq 0\) and \(D_{f}\geq 0\) the positive rates of _mechanical dissipation_. Second, the second law requires that heat flows from hot to cold regions in both materials and that drag forces act against relative motions -- i.e.,
\[\mathbf{q}_{s}\cdot\nabla T_{s}\leq 0,\quad\mathbf{q}_{f}\cdot\nabla T_{f}\leq 0, \quad q_{i}(T_{s}-T_{f})\geq 0,\quad\text{and}\quad\mathbf{f}_{d}\cdot(\mathbf{v}_{s}- \mathbf{v}_{f})\geq 0. \tag{11}\]
These conditions constrain the form of the constitutive equations for \(\mathbf{\sigma}_{s}\), \(\mathbf{\tau}_{f}\), \(p_{f}\), \(\mathbf{f}_{d}\), \(\mathbf{q}_{s}\), \(\mathbf{q}_{f}\), and \(q_{i}\) -- ensuring that our model predictions are physically reasonable and dissipative across the full range of conditions experienced during high-velocity impact events. The derivation of these equations can be found in C together with further discussions.
Figure 4: Illustration of strain energy storage mechanisms in fluid-saturated granular materials. (a) Under low–moderate confining stresses, highly porous granular materials store strain energy near particle–particle contact points. (b) After crushing, low porosity granular materials store energy more uniformly through direct compression of the solid material that composes the individual grains. (c) Within a connected pore network, the interstitial fluid primarily stores strain energy through bulk compression and internal temperature changes.
### Mechanical Model for Porosity
The porosity is constrained according to (1) with \(\phi_{s}\in[0,1]\); \(\phi_{f}\in[0,1]\); and \(\phi_{s}+\phi_{f}=1\). Here, we discuss two useful measures of porosity: (i) the _inelastic porosity_, \(\phi_{p}\); and (ii) the _true porosity_, \(\phi_{f}=1-\phi_{s}\).
The inelastic porosity, \(\phi_{p}\), defines the porosity of the granular sediment in the intermediate, inelastically deformed space shown in Figure 3d. It represents the porosity of the material in the absence of significant elastic deformations -- which tend to squeeze particles into open pore spaces. This measure of porosity is primarily used to model dilation and critical state behavior in the breakage mechanics literature (e.g., Tengattini et al., 2016; Cil et al., 2020) and follows the deformation theory from Collins et al. (2010):
\[d^{s}\phi_{p}/dt=-(1-\phi_{p})\ \text{tr}(\mathbf{\tilde{D}}^{p}). \tag{12}\]
Although broadly useful for characterizing how inelastic deformations change the available pore space, the inelastic porosity is mathematically distinct from the true porosity \(\phi_{f}\) that appears in (1)-(6).
The true porosity \(\phi_{f}\), on the other hand, is primarily used in the geomechanics literature and defines the porosity of the granular material in its current deformed space (shown in Figure 3e). This measure of porosity is frequently determined using the solid volume fraction \(\phi_{s}\), which is calculated from the effective density \(\bar{\rho}_{s}\) and true solid density \(\rho_{s}\)(e.g., see Danielson and Sutherland, 1986):
\[\phi_{f}=1-\phi_{s},\quad\text{and}\quad\phi_{s}=\bar{\rho}_{s}/\rho_{s}. \tag{13}\]
Here, the effective density \(\bar{\rho}_{s}\) is easily determined by bulk kinematics in (3) and (7); however, the true solid density \(\rho_{s}\) must be determined using a constitutive equation. At low-moderate confining stresses (Figure 4a), the solid constituent is generally treated as _incompressible_ -- i.e., \(\rho_{s}\) has a constant value \(\rho_{0}\) -- greatly simplifying the calculation in (13). At high confining stresses (Figure 4b), on the other hand, this assumption is no longer valid, and the constituent solid must be treated as _compressible_.
To complete the model for \(\phi_{f}\) in (13), we propose the following mechanical model for the solid density \(\rho_{s}\):
\[\rho_{s}/\rho_{0}=1+\hat{\alpha}(\phi_{s})(J^{e-1}-1), \tag{14}\]
with \(\rho_{0}\) the stress-free solid reference density (at room temperature) and \(\hat{\alpha}(\phi_{s})\in[0,1]\) a constitutive function that depends on the solid volume fraction. At relatively high porosities (Figure 4a), the solid constituent is reasonably modeled as incompressible with \(\hat{\alpha}(\phi_{s})\approx 0\). On the other hand, at relatively low porosities (Figure 4b), the solid constituent must compress to accommodate mesoscopic deformations with \(\hat{\alpha}(\phi_{s})\approx 1\). In this work, we assume a simple power law model for \(\hat{\alpha}\):
\[\hat{\alpha}(\phi_{s})=\phi_{s}^{b}, \tag{15}\]
with \(b\) a fitting parameter. Together with (1) and (14), this mechanical model uniquely determines the solid volume fraction -- and thus the porosity -- in terms of the effective
density \(\bar{\rho}_{s}\), the mesoscopic volume ratio \(J^{e}\), and the reference density \(\rho_{0}\) according to the implicit equation:
\[\phi_{s}=\bar{\rho}_{s}/\rho_{0}-\phi_{s}^{b+1}(J^{e-1}-1). \tag{16}\]
Note that this purely mechanical model for porosity does not consider thermal expansion or the influence of extreme pore fluid pressures (e.g., Biot, 1941).
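Because (16) is implicit in \(\phi_{s}\), it must be solved numerically at each material point. A minimal sketch using fixed-point iteration is given below; the example values, including the exponent \(b\), are placeholders for illustration rather than calibrated constants:

```python
# Fixed-point solution of the implicit relation (16) for the solid volume fraction.
def solve_phi_s(rho_bar_s, rho_0, Je, b, tol=1e-12, max_iter=200):
    phi = min(rho_bar_s / rho_0, 1.0)    # incompressible-grain initial guess
    for _ in range(max_iter):
        phi_new = rho_bar_s / rho_0 - phi ** (b + 1) * (1.0 / Je - 1.0)
        if abs(phi_new - phi) < tol:
            return phi_new
        phi = phi_new
    return phi

# Example: effective density 1600 kg/m^3, grain reference density 2650 kg/m^3,
# 2% elastic compression (Je = 0.98), and a placeholder exponent b = 5.
print(solve_phi_s(1600.0, 2650.0, 0.98, b=5))
```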
### Effective Granular Stress
The constitutive equation for the effective granular stress \(\mathbf{\sigma}_{s}\) can be deduced from (7), (8), (9), (10), and (14) following the Coleman-Noll procedure (Coleman and Noll, 1963):
\[\mathbf{\sigma}_{s} =\frac{\bar{q}}{3\epsilon_{s}^{e}}\mathbf{B}_{0}^{e}\mathbf{B}^{e}-\bar{p} \mathbf{B}^{e}-\phi_{s}p^{*}\hat{A}(\phi_{s},J^{e})\mathbf{1}, \tag{17a}\] \[\bar{p} =-\bar{\rho}_{s}\frac{\partial\hat{\psi}_{c}}{\partial\epsilon_{ v}^{e}},\quad\bar{q}=\bar{\rho}_{s}\frac{\partial\hat{\psi}_{c}}{\partial \epsilon_{s}^{e}},\quad\text{and}\quad p^{*}=\rho_{s}^{2}\frac{\partial\hat{ \psi}_{g}}{\partial\rho_{s}}, \tag{17b}\]
with \(\mathbf{B}^{e}=\mathbf{F}^{e}\mathbf{F}^{e\top}\), and \(\hat{A}(\phi_{s},J^{e})=-(J^{e}/\rho_{s})(d\rho_{s}/dJ^{e})|_{\bar{\mathbf{D}}^{p} =\mathbf{0}}\). An explicit expression for \(\hat{A}(\phi_{s},J^{e})\) is provided in C. In (17), \(\bar{p}\) and \(\bar{q}\) denote the pressure and shear stresses associated with elastic deformations at the particle-particle contact points.
We assume the nonlinear free energy function \(\psi_{c}\) (e.g., see Nguyen and Einav, 2009) to be of the form:
\[\hat{\psi}_{c}(\epsilon_{v}^{e},\epsilon_{s}^{e},B) =(1-\theta B)\frac{p_{r}}{\rho_{0}}\biggl{(}\frac{-\bar{K}^{2} \epsilon_{v}^{e3}}{12}-\frac{3\bar{G}\bar{K}\epsilon_{v}^{e}\epsilon_{s}^{e2}} {4}+\frac{\bar{G}\sqrt{3\bar{G}\bar{K}}\epsilon_{s}^{e3}}{4}\biggr{)}, \tag{18a}\] \[\bar{p} =\frac{\bar{\rho}_{s}}{\rho_{0}}(1-\theta B)p_{r}\biggl{(}\frac{ \bar{K}^{2}\epsilon_{v}^{e2}}{4}+\frac{3\bar{G}\bar{K}\epsilon_{s}^{e2}}{4} \biggr{)},\] (18b) \[\bar{q} =\frac{\bar{\rho}_{s}}{\rho_{0}}(1-\theta B)3\bar{G}p_{r}\biggl{(} \frac{-\bar{K}\epsilon_{v}^{e}\epsilon_{s}^{e}}{2}+\frac{\sqrt{3\bar{G}\bar{K} }\epsilon_{s}^{e2}}{4}\biggr{)}. \tag{18c}\]
where \(\theta\) denotes the constant grading index from Einav (2007); \(p_{r}\) denotes the non-linear reference pressure; \(\bar{K}\) and \(\bar{G}\) denote the dimensionless, reference bulk modulus and shear modulus from Nguyen and Einav (2009); and the remaining variables have already been defined.
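For reference, the contact stresses in (18b) and (18c), together with the breakage energy \(E_{B}=-\bar{\rho}_{s}\,\partial\hat{\psi}_{c}/\partial B\) obtained by differentiating (18a), can be evaluated directly. The snippet below is an illustrative sketch; the numerical parameter values are placeholders, not calibrated constants from this work:

```python
# Evaluation of the breakage elasticity model, Eq. (18).
import numpy as np

def breakage_elasticity(eps_v, eps_s, B, rho_bar_s, rho_0, p_r, K, G, theta):
    g = (-K**2 * eps_v**3 / 12.0
         - 3.0 * G * K * eps_v * eps_s**2 / 4.0
         + G * np.sqrt(3.0 * G * K) * eps_s**3 / 4.0)
    psi_c = (1.0 - theta * B) * (p_r / rho_0) * g                      # Eq. (18a)
    p_bar = (rho_bar_s / rho_0) * (1.0 - theta * B) * p_r * (
        K**2 * eps_v**2 / 4.0 + 3.0 * G * K * eps_s**2 / 4.0)          # Eq. (18b)
    q_bar = (rho_bar_s / rho_0) * (1.0 - theta * B) * 3.0 * G * p_r * (
        -K * eps_v * eps_s / 2.0 + np.sqrt(3.0 * G * K) * eps_s**2 / 4.0)  # Eq. (18c)
    E_B = rho_bar_s * theta * (p_r / rho_0) * g        # -rho_bar_s * d(psi_c)/dB
    return psi_c, p_bar, q_bar, E_B

# Placeholder parameters (illustrative only, not values calibrated in this work):
print(breakage_elasticity(eps_v=-0.01, eps_s=0.005, B=0.2,
                          rho_bar_s=1600.0, rho_0=2650.0,
                          p_r=1.0e6, K=4000.0, G=2500.0, theta=0.85))
```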
The third term in \(\mathbf{\sigma}_{s}\) in (17a) is the solid pressure term, \(p^{*}\), which is associated with strain energy stored during direct compression of the constituent solid material and is defined by the constituent free energy function, \(\psi_{g}\) (e.g., see Herbold et al., 2020). Any thermodynamically valid equation of state (EOS) may be used, but here we use the Mie-Gruneisen EOS to define \(\psi_{g}\) (Mie, 1903; Gruneisen, 1912):
\[\hat{\psi}_{g}(\rho_{s},T_{s}) =\hat{e}_{c}(\rho_{s})+c_{v}T_{s}-T_{s}\biggl{[}c_{v}\text{ln} \biggl{(}\frac{T_{s}}{T_{0}}\biggr{)}-c_{v}\Gamma_{0}\biggl{(}1-\frac{\rho_{0} }{\rho_{s}}\biggr{)}\biggr{]}, \tag{19a}\] \[p^{*} =\hat{p}_{H}(\rho_{s})\biggl{[}1-\frac{\Gamma_{0}}{2}\biggl{(}1- \frac{\rho_{0}}{\rho_{s}}\biggr{)}\biggr{]}+\rho_{0}\Gamma_{0}\bigl{(}\hat{e}_ {c}(\rho_{s})+c_{v}(T_{s}-T_{0})\bigr{)}, \tag{19b}\]
where (19b) is computed using the solid heat capacity \(c_{v}\); the reference temperature \(T_{0}\); the Gruneisen parameter \(\Gamma_{0}\); and the constitutive functions, \(\hat{e}_{c}(\rho_{s})\) and \(\hat{p}_{H}(\rho_{s})\), which define the cold energy and shock Hugoniot curves, respectively. In the first-order Mie-Gruneisen EOS, these functions are determined by the reference sound speed, \(C_{0}\), and the slope of the \(U_{s}\)-\(U_{p}\) curve, \(S_{0}\). Explicit expressions for these functions are provided in D.
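A sketch of the pressure evaluation in (19b) is given below. The exact cold-energy and Hugoniot functions \(\hat{e}_{c}(\rho_{s})\) and \(\hat{p}_{H}(\rho_{s})\) are specified in D; here, as an assumption, the standard first-order forms implied by a linear \(U_{s}\)-\(U_{p}\) relation are used (with \(\eta=1-\rho_{0}/\rho_{s}\), \(p_{H}=\rho_{0}C_{0}^{2}\eta/(1-S_{0}\eta)^{2}\), and the Hugoniot energy \(p_{H}\eta/2\rho_{0}\) standing in for the cold energy), so the details may differ from D. All parameter values in the example are placeholders.

```python
# Evaluation of the Mie-Gruneisen pressure p* in Eq. (19b), using assumed
# standard first-order forms for the Hugoniot pressure and cold energy.
def mie_gruneisen_pressure(rho_s, T_s, rho_0, C0, S0, Gamma0, c_v, T_0):
    eta = 1.0 - rho_0 / rho_s                           # compression measure
    p_H = rho_0 * C0**2 * eta / (1.0 - S0 * eta)**2     # assumed Hugoniot pressure
    e_c = p_H * eta / (2.0 * rho_0)                     # assumed cold/Hugoniot energy
    return (p_H * (1.0 - 0.5 * Gamma0 * eta)
            + rho_0 * Gamma0 * (e_c + c_v * (T_s - T_0)))   # Eq. (19b)

# Example with placeholder parameter values (illustrative only):
print(mie_gruneisen_pressure(rho_s=2900.0, T_s=320.0, rho_0=2650.0,
                             C0=3750.0, S0=1.6, Gamma0=0.9, c_v=700.0, T_0=298.0))
```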
### Yielding and Dissipation
The inelastic response of the granular material is considered in terms of the inelastic deformation rate \(\mathbf{\tilde{D}}^{p}\), and the rate of change of the relative breakage, \(d^{s}B/dt\). The inelastic flow rules that define \(\mathbf{\tilde{D}}^{p}\) and \(d^{s}B/dt\) are _non-associative_ -- i.e., they are defined by both a _yield function_ and by a _flow direction_.
As in Baumgarten and Kamrin (2019a), we define the flow direction for \(\mathbf{\tilde{D}}^{p}\) using the _stress conjugate to yielding_, \(\mathbf{\sigma}_{y}\), and the stress invariants \(p_{y}\) and \(q_{y}\) as follows:
\[\mathbf{\tilde{D}}^{p}=\frac{3\xi_{s}^{p}}{2q_{y}}\mathbf{\sigma}_{y0}+\tfrac{1}{3} \big{(}\xi_{v}^{p}+\xi_{2}^{p}+\xi_{3}^{p}\big{)}\mathbf{1},\quad\text{with}\quad p _{y}=-\tfrac{1}{3}\text{tr}(\mathbf{\sigma}_{y}),\quad q_{y}=\sqrt{\tfrac{3}{2}( \mathbf{\sigma}_{y0}:\mathbf{\sigma}_{y0})}. \tag{20}\]
Here, \(\mathbf{\sigma}_{y0}\) denotes the deviatoric component of \(\mathbf{\sigma}_{y}\) and the scalar rates \(\xi_{s}^{p}\), \(\xi_{v}^{p}\), \(\xi_{2}^{p}\), and \(\xi_{3}^{p}\) describe the four dominant, inelastic deformation mechanisms shown in Figure 1: the _granular shear rate_, \(\xi_{s}^{p}\); the _dilation/compaction rate_, \(\xi_{v}^{p}\); the _free expansion rate_, \(\xi_{2}^{p}\); and the _consolidation rate_, \(\xi_{3}^{p}\).
In (20), the stress conjugate to yielding, \(\mathbf{\sigma}_{y}\), defines the stress components that dissipate energy during inelastic flow (\(\mathbf{\tilde{D}}^{p}\neq\mathbf{0}\)). This stress measure is nearly identical to the effective granular stress, \(\mathbf{\sigma}_{s}\), and is defined as follows:
\[\mathbf{\sigma}_{y}=\frac{\bar{q}}{3\epsilon_{s}^{e}}\mathbf{B}_{0}^{e}\mathbf{B}^{e}-\bar {p}\mathbf{B}^{e}-\phi_{s}p^{*}\hat{C}(\phi_{s},J^{e})\mathbf{1}, \tag{21}\]
with \(\bar{q}\), \(\bar{p}\), and \(p^{*}\) defined in (17) and \(\hat{C}(\phi_{s},J^{e})=-(J^{e}/\rho_{s})(d\rho_{s}/dJ^{e})|_{\mathbf{D}_{s}=\mathbf{0}}\). The distinction between the stress conjugate to yielding \(\mathbf{\sigma}_{y}\) and the effective granular stress \(\mathbf{\sigma}_{s}\) is required to satisfy the second law of thermodynamics and is discussed further in C.
To complete the flow rule in (20), we define a set of yield functions that uniquely determine the scalar inelastic rates, \(\xi_{s}^{p}\), \(\xi_{v}^{p}\), \(\xi_{2}^{p}\), \(\xi_{3}^{p}\), and \(d^{s}B/dt\), subject to the following dissipation condition from (10a):
\[D_{s}=q_{y}\xi_{s}^{p}-p_{y}(\xi_{v}^{p}+\xi_{2}^{p}+\xi_{3}^{p})+E_{B}\frac{d ^{s}B}{dt},\quad\text{with}\quad D_{s}\geq 0,\quad E_{B}=-\bar{\rho}_{s} \frac{\partial\hat{\psi}_{c}}{\partial B}, \tag{22}\]
with \(E_{B}\) the _breakage energy_ that is dissipated through particle pulverization (Einav, 2007).
### Onset of Yielding
First, we determine the granular shear rate \(\xi_{s}^{p}\), the dilation/compaction rate \(\xi_{v}^{p}\), and the rate of breakage \(d^{s}B/dt\) using the scalar multiplier \(\lambda_{1}\) (i.e., \(\xi_{s}^{p}\), \(\xi_{v}^{p}\), and \(d^{s}B/dt\propto\lambda_{1}\)) according to the yield function proposed in Rubin and Einav (2011):
\[y_{1}=\frac{E_{B}(1-B)^{2}}{E_{c}}+\frac{q_{y}^{2}}{(Mp_{y})^{2}}-1,\quad\text{ with}\quad y_{1}\leq 0,\quad\lambda_{1}\geq 0,\quad y_{1}\lambda_{1}=0. \tag{23}\]
Here, \(E_{c}\) denotes the _critical breakage energy_, and \(M\) denotes the _internal friction coefficient_. This form of the yield function combines mathematical models for frictional granular flow (\(q_{y}=Mp_{y}\); Drucker and Prager, 1952) and particle fragmentation (\(E_{B}(1-B)^{2}=E_{c}\); Einav, 2007) into a single rule for dense, inelastic deformation.
The second yield function determines the rate of free expansion \(\xi_{2}^{p}\) using the scalar multiplier \(\lambda_{2}\) (i.e., \(\xi_{2}^{p}\propto\lambda_{2}\)) according to the yield function proposed in Baumgarten and Kamrin (2019a):
\[y_{2}=-p_{y},\quad\text{with}\quad y_{2}\leq 0,\quad\lambda_{2}\geq 0,\quad y _{2}\lambda_{2}=0. \tag{24}\]
This form of the second yield function ensures pressure positivity and defines a granular material in which the particles are free to separate and are unable to support tension.
Finally, the third yield function determines the consolidation rate \(\xi_{3}^{p}\) using the scalar multiplier \(\lambda_{3}\) (i.e., \(\xi_{3}^{p}\propto\lambda_{3}\)) according to a modified yield function from Baumgarten and Kamrin (2019a):
\[y_{3}=\begin{cases}\phi_{f}-\phi_{\text{max}}&\text{if}\quad\phi_{f}\leq\phi_{ \text{max}},\\ 0&\text{if}\quad\phi_{f}>\phi_{\text{max}},\end{cases}\quad\text{with}\quad y _{3}\leq 0,\quad\lambda_{3}\geq 0,\quad y_{3}\lambda_{3}=0, \tag{25}\]
with \(\phi_{\text{min}}=\phi_{l}(1-B)^{l}\) and \(\phi_{\text{max}}=\phi_{u}(1-B)^{u}\) the limiting inelastic porosities proposed in Rubin and Einav (2011). This form of the third yield function ensures that disconnected granular sediments (\(\phi_{f}\geq\phi_{\text{max}}\)) are unable to support any stresses.
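Operationally, checking the onset of yielding amounts to evaluating the three yield functions for the current state. The sketch below is illustrative; \(E_{B}\), \(p_{y}\), and \(q_{y}\) are assumed to have been computed from (21) and (22) beforehand, and the numerical values are placeholders:

```python
# Evaluation of the yield functions y1 (Eq. 23), y2 (Eq. 24), and y3 (Eq. 25).
def yield_functions(E_B, B, p_y, q_y, phi_f, E_c, M, phi_u, u):
    y1 = E_B * (1.0 - B)**2 / E_c + q_y**2 / (M * p_y)**2 - 1.0   # breakage + friction
    y2 = -p_y                                                     # pressure positivity
    phi_max = phi_u * (1.0 - B)**u                                # limiting porosity
    y3 = phi_f - phi_max if phi_f <= phi_max else 0.0             # disconnected sediment
    return y1, y2, y3

# The state is elastic only while all three yield functions remain non-positive
# and no inelastic flow is required by the complementarity conditions.
print(yield_functions(E_B=2.0e3, B=0.2, p_y=5.0e6, q_y=4.0e6,
                      phi_f=0.38, E_c=4.0e3, M=1.3, phi_u=0.45, u=0.1))
```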
### Inelastic Flow
At the onset of dense, inelastic yielding (i.e., \(y_{1}=0\)), \(d^{s}B/dt\), \(\xi_{v}^{p}\), and \(\xi_{s}^{p}\) are allowed to have non-zero values obeying the modified flow rules from Tengattini et al. (2016) and Cil et al. (2020). These flow rules are determined by the scalar multiplier \(\lambda_{1}\geq 0\), which ensures that \(y_{1}=0\) while the material is yielding:
\[\frac{d^{s}B}{dt} =\lambda_{1}\frac{E_{B}(1-B)^{2}}{E_{B}E_{c}}\text{cos}^{2}( \omega), \tag{26a}\] \[\xi_{v}^{p} =\lambda_{1}\frac{E_{B}(1-B)^{2}}{E_{c}}\frac{-p_{y}}{(p_{y}^{2}+ q_{y}^{2})}\text{sin}^{2}(\omega)+\lambda_{1}M_{d}\frac{q_{y}}{(Mp_{y})^{2}},\] (26b) \[\xi_{s}^{p} =\lambda_{1}\frac{E_{B}(1-B)^{2}}{E_{c}}\frac{q_{y}}{(p_{y}^{2}+ q_{y}^{2})}\text{sin}^{2}(\omega)+\lambda_{1}\frac{q_{y}}{(Mp_{y})^{2}}. \tag{26c}\]
Here, \(\omega\) is the _coupling angle_ from Einav (2007), and \(M_{d}\) is the _dilation coefficient_. The first terms in (26a), (26b), and (26c) define the components of inelastic deformation associated with particle fragmentation and are coupled together by the coupling angle \(\omega\). These terms capture how fragmentation simultaneously changes the distribution of particle sizes (\(d^{s}B/dt\)) _and_ relaxes stress concentrations at particle-particle contact points (\(\xi_{v}^{p}\) and \(\xi_{s}^{p}\)). The second terms in (26b) and (26c), on the other hand, are associated with frictional granular rearrangement and shear dilation (\(\xi_{v}^{p}\) and \(\xi_{s}^{p}\)).
An important component of (26) is the incorporation of the critical state theories of Pailha and Pouliquen (2009); Tengattini et al. (2016); and Cil et al. (2020). In particular, \(\omega\), \(M\), and \(M_{d}\) are all functions of the _relative density_, \(\tau\), and its critical value, \(\tau_{cs}\):
\[\tau=\frac{\phi_{\rm max}-\phi_{p}}{\phi_{\rm max}-\phi_{\rm min}},\quad\mbox{ and}\quad\tau_{cs}=\sqrt{\frac{E_{B}}{E_{c}}}\frac{(1-B)}{\gamma} \tag{27}\]
with \(\phi_{\rm min}\) and \(\phi_{\rm max}\) defined in Rubin and Einav (2011), and \(\gamma\in[0,1]\) the constant dilation parameter from Tengattini et al. (2016). From these, we define:
\[M=M_{0}+M_{d},\quad M_{d}=\gamma(\tau-\tau_{cs})\bigg{(}\frac{6\ \sin(\theta_{p})}{3 -\sin(\theta_{p})}-M_{0}\bigg{)},\quad\mbox{and}\quad\omega=\frac{\pi}{2}(1- \tau), \tag{28}\]
where \(M_{0}\) is the _critical state friction coefficient_ and \(\theta_{p}\) is the _peak dilation angle_. Here, \(\theta_{p}=\pi/15+\sin^{-1}(3M_{0}/(6+M_{0}))\) is assumed from Cil et al. (2020).
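The mapping from the current state to \(M\), \(M_{d}\), and \(\omega\) in (27)-(28) is purely algebraic. A minimal sketch follows (Python; our own transcription, with \(\phi_{p}\) the porosity measure entering (27) and all other symbols as defined above).

```python
import math

def critical_state_coefficients(phi_p, B, E_B, E_c, gamma, M_0,
                                phi_l, phi_u, l, u):
    """Evaluate Eqs. (27)-(28): relative density, friction/dilation
    coefficients, and the breakage coupling angle omega."""
    phi_min = phi_l * (1.0 - B) ** l      # limiting inelastic porosities,
    phi_max = phi_u * (1.0 - B) ** u      # Rubin and Einav (2011)
    tau = (phi_max - phi_p) / (phi_max - phi_min)        # relative density
    tau_cs = math.sqrt(E_B / E_c) * (1.0 - B) / gamma    # critical value
    # Peak dilation angle assumed from Cil et al. (2020).
    theta_p = math.pi / 15.0 + math.asin(3.0 * M_0 / (6.0 + M_0))
    M_d = gamma * (tau - tau_cs) * (6.0 * math.sin(theta_p)
                                    / (3.0 - math.sin(theta_p)) - M_0)
    M = M_0 + M_d
    omega = 0.5 * math.pi * (1.0 - tau)   # coupling angle, Eq. (28)
    return M, M_d, omega
```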
At the onset of tensile yielding (i.e., \(y_{2}=0\)), \(\xi_{2}^{p}\) is allowed to have a non-zero value obeying the flow rule from Baumgarten and Kamrin (2019a). This flow rule is determined by the scalar multiplier \(\lambda_{2}\geq 0\), which ensures that \(y_{2}=0\) while the material is yielding:
\[\xi_{2}^{p}=\lambda_{2}. \tag{29}\]
Similarly, at the onset of disconnected yielding (i.e., \(y_{3}=0\)), \(\xi_{3}^{p}\) is allowed to have a non-zero value determined by the scalar multiplier \(\lambda_{3}\geq 0\), which ensures that \(y_{3}=0\) while the material is yielding:
\[\xi_{3}^{p}=-\lambda_{3}. \tag{30}\]
All together, (20)-(26), (29), and (30) define the inelastic behavior of the granular material model proposed in this work. Further discussion of these flow rules, along with proof that the dissipation inequality in (22) is satisfied, is provided in C.
### Fluid Equation of State and Viscous Stresses
The behavior of the interstitial fluid is dominated by the pore fluid stresses as captured by the effective fluid shear stress, \(\boldsymbol{\tau}_{f}\), and the pore fluid pressure, \(p_{f}\). The constitutive equations for these stresses are deduced from (7), (9), and (10) following the Coleman-Noll procedure:
\[p_{f}=\rho_{f}^{2}\frac{\partial\psi_{f}}{\partial\rho_{f}},\quad\mbox{and} \quad\boldsymbol{\tau}_{f}:\boldsymbol{D}_{f}\geq 0. \tag{31}\]
For the effective fluid shear stress, \(\boldsymbol{\tau}_{f}\), there are many admissible constitutive equations available in the literature (e.g., Eilers, 1941; Krieger and Dougherty, 1959; Morris and Boulay, 1999), and here, we adopt the simple model proposed in Einstein (1906):
\[\boldsymbol{\tau}_{f}=2\eta_{0}(1+5\phi_{s}/2)\boldsymbol{D}_{f0}, \tag{32}\]
with \(\eta_{0}\) the _fluid viscosity_ and \(\mathbf{D}_{f0}\) the deviator of the effective fluid strain-rate \(\mathbf{D}_{f}\). For the fluid pore pressure \(p_{f}\) -- which depends on the density and temperature of the fluid constituent, \(\rho_{f}\) and \(T_{f}\) -- we use the Tillotson EOS (Tillotson, 1962) as implemented in Brundage (2013); however any thermodynamically valid EOS may also be used (e.g., the Mie-Gruneisen EOS or the Sackur-Tetrode EOS; see Gruneisen, 1912; Sackur, 1913). Note that the choice of the Tillotson EOS here is motivated by the possibility of vaporization in the fluid phase during high-velocity impact.
This model constructs the fluid pressure using constitutive equations that depend on the specific internal energy \(\varepsilon_{f}\), which is defined as follows:
\[\varepsilon_{f}=\hat{e}_{cf}(\rho_{f})+c_{vf}(T_{f}-T_{0}), \tag{33}\]
where \(\hat{e}_{cf}(\rho_{f})\) defines the fluid cold energy curve and \(c_{vf}\) denotes the _specific fluid heat capacity_. The fluid pore pressure \(p_{f}\) is then defined piece-wise in energy-density space. The full form of this EOS is provided in D; however, for compressed states, the following expression may be used:
\[p_{f1}=\bigg{[}a_{f}+\frac{b_{f}}{\varepsilon_{f}/(E_{0}\eta^{2})+1}\bigg{]} \rho_{f}\varepsilon_{f}+A_{f}\mu+B_{f}\mu^{2},\quad\text{for}\quad\rho_{f} \geq\rho_{f0}, \tag{34}\]
with \(\eta=\rho_{f}/\rho_{f0}\) and \(\mu=\eta-1\). In this equation, \(a_{f}\), \(b_{f}\), \(A_{f}\), \(B_{f}\), and \(E_{0}\) are constant fitting parameters, while \(\rho_{f0}\) denotes the stress-free, reference density at \(T_{0}\). The full form of the model includes the additional fitting parameters \(\alpha_{f}\) and \(\beta_{f}\), along with the density of _incipient vaporization_, \(\rho_{\text{IV}}\); the energy of incipient vaporization, \(E_{\text{IV}}\); the energy of _complete vaporization_\(E_{\text{CV}}\); and the _cavitation pressure_\(P_{\text{cav}}\). Further discussion of the model and its implementation may be found in Brundage (2013).
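As an illustration only, the compressed branch in (34) can be evaluated directly from the Table 1 parameters for water. The sketch below (Python; our own construction, covering only the \(\rho_{f}\geq\rho_{f0}\) branch and none of the tension or vaporization branches described in Brundage (2013)):

```python
def tillotson_compressed_pressure(rho_f, eps_f,
                                  rho_f0=998.0, a_f=0.7, b_f=0.15,
                                  A_f=2.18e9, B_f=1.325e10, E_0=7.0e6):
    """Pore-fluid pressure on the compressed branch, Eq. (34), valid only for
    rho_f >= rho_f0.  rho_f in kg/m^3, eps_f (specific internal energy) in J/kg."""
    eta = rho_f / rho_f0
    mu = eta - 1.0
    return (a_f + b_f / (eps_f / (E_0 * eta ** 2) + 1.0)) * rho_f * eps_f \
        + A_f * mu + B_f * mu ** 2


# Example: mildly compressed water at a modest specific internal energy.
print(tillotson_compressed_pressure(rho_f=1050.0, eps_f=2.0e5))  # ~3e8 Pa
```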
### Inter-phase Drag Force
The flow of a viscous fluid through the interstitial space between particles produces a volumetric drag force which is represented in this model using the inter-phase drag force vector, \(\mathbf{f}_{d}\). The constitutive equation for this force is constrained by the thermodynamic restrictions in (11); in particular: \(\mathbf{f}_{d}\cdot(\mathbf{v}_{s}-\mathbf{v}_{f})\geq 0\). There are many admissible constitutive equations available in the literature (e.g., Darcy, 1856; van der Hoef et al., 2005; Baumgarten, 2021), and here we adopt the Carman-Kozeny drag model from Carman (1937):
\[\mathbf{f}_{d}=\frac{180\phi_{s}^{2}\eta_{0}}{d_{50}^{2}(1-\phi_{s})}(\mathbf{v}_{s}- \mathbf{v}_{f}). \tag{35}\]
In this equation, \(\eta_{0}\) is the pore fluid viscosity, and \(d_{50}\) is a characteristic grain size. Here, we let \(d_{50}\) denote the sieve diameter that allows 50% (by mass) of the granular particles to pass through.
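A direct transcription of (35) is straightforward; because the drag is linear in the velocity difference, the restriction \(\mathbf{f}_{d}\cdot(\mathbf{v}_{s}-\mathbf{v}_{f})\geq 0\) holds automatically for \(0<\phi_{s}<1\). The Python sketch below is our own illustration, with \(\eta_{0}\) and \(d_{50}\) defaults taken from Table 1.

```python
import numpy as np

def carman_kozeny_drag(phi_s, v_s, v_f, eta_0=8.9e-4, d_50=3.0e-4):
    """Volumetric inter-phase drag force vector, Eq. (35) (Carman, 1937).

    phi_s : solid volume fraction, 0 < phi_s < 1
    v_s, v_f : solid and fluid velocity vectors [m/s]
    eta_0 : fluid viscosity [Pa*s];  d_50 : characteristic grain size [m]
    """
    v_s, v_f = np.asarray(v_s, float), np.asarray(v_f, float)
    coeff = 180.0 * phi_s ** 2 * eta_0 / (d_50 ** 2 * (1.0 - phi_s))
    return coeff * (v_s - v_f)


# Example: fluid moving downward relative to a stationary skeleton.
f_d = carman_kozeny_drag(0.63, v_s=[0.0, 0.0, 0.0], v_f=[0.0, 0.0, -1.0])
rel = np.array([0.0, 0.0, 1.0])                  # v_s - v_f
assert float(np.dot(f_d, rel)) >= 0.0            # drag dissipates energy
```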
### Adiabatic and Isothermal Behavior
The final constitutive equations that must be defined in order to solve the governing equations in (3)-(5) are the heat flow rates \(\mathbf{q}_{s}\), \(\mathbf{q}_{f}\), and \(q_{i}\). Any model for these heat flows is constrained by the thermodynamic restrictions in (11): namely, \(\mathbf{q}_{s}\cdot\nabla T_{s}\leq 0\); \(\mathbf{q}_{f}\cdot\nabla T_{f}\leq 0\); and \(q_{i}(T_{s}-T_{f})\geq 0\). Although there are many admissible models that could be chosen, here we consider two limiting conditions: _isothermal_ conditions with \(T_{s}=T_{f}=T_{0}\); and _adiabatic_ conditions with \(\mathbf{q}_{s}=\mathbf{q}_{f}=\mathbf{0}\) and \(q_{i}=0\).
For many laboratory tests performed on granular media (e.g., oedometer testing, triaxial loading, etc.), the loading rate is relatively slow, which allows temperatures to equilibrate during an experiment. For model fitting and comparison with laboratory data, the isothermal limit should be used with \(T_{s}=T_{f}=T_{0}\). In these cases, \(\mathbf{q}_{s}\), \(\mathbf{q}_{f}\), and \(q_{i}\) are defined implicitly to satisfy the conditions in (5), (19), and (33).
On the other hand, during the high-velocity impact events considered in this work, the loading rate is relatively high, and likely does not allow for significant heat flow or thermal dissipation. For predicting the material response under high-velocity impact conditions, the adiabatic limit should be used with \(\mathbf{q}_{s}=\mathbf{q}_{f}=\mathbf{0}\) and \(q_{i}=0\). In these cases, \(T_{s}\) and \(T_{f}\) must be computed by integrating (5) and solving for the thermal component of the internal energies: \(\varepsilon_{s}=\hat{e}_{c}(\rho_{s})+c_{v}T_{s}\), and \(\varepsilon_{f}=\hat{e}_{cf}(\rho_{f})+c_{vf}(T_{f}-T_{0})\).
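In the adiabatic limit, the constituent temperatures follow directly from the caloric relations quoted above once the internal energies have been integrated from (5). A minimal sketch (Python; our own illustration, with the cold-energy curves \(\hat{e}_{c}\) and \(\hat{e}_{cf}\) passed in as user-supplied callables because their forms are given elsewhere in the paper; heat-capacity defaults are from Table 1):

```python
def solid_temperature(eps_s, rho_s, e_cold_solid, c_v=736.0):
    """Invert eps_s = e_c(rho_s) + c_v * T_s for T_s (adiabatic limit)."""
    return (eps_s - e_cold_solid(rho_s)) / c_v

def fluid_temperature(eps_f, rho_f, e_cold_fluid, c_vf=3690.0, T_0=298.0):
    """Invert eps_f = e_cf(rho_f) + c_vf * (T_f - T_0) for T_f (adiabatic limit)."""
    return T_0 + (eps_f - e_cold_fluid(rho_f)) / c_vf
```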
## 4 Model Summary and Example Parameters
The model described in this work provides a unified constitutive description of fluid-saturated granular materials that can be used for impact applications. This model is governed by the system of physical conservation laws in (3)-(6) and is formulated to capture the complex, coupled dynamics of granular sediments and interstitial fluids during impact.
The constitutive equations for the effective granular stress \(\mathbf{\sigma}_{s}\) are based on a multiplicative decomposition of deformation (i.e., \(\mathbf{F}=\mathbf{F}^{e}\mathbf{F}^{p}\)) with an elastic stress response based on a strain-energy formulation -- i.e., \(\psi_{s}=\hat{\psi}_{c}(\epsilon_{v}^{e},\epsilon_{s}^{e},B)+\hat{\psi}_{g}(\rho_{s},T_{s})\). To determine the elastic deformations used in the model, the corresponding additive decomposition of the deformation rates (i.e., \(\mathbf{L}_{s}=\mathbf{L}^{e}+\mathbf{\tilde{D}}^{p}\)) is used, which incorporates analytical models for the four dominant, inelastic deformation mechanisms: granular shearing; dilation and compaction; free granular separation; and disconnected reconsolidation. The corresponding model parameters for two sands (Dog's Bay and Ottawa) are presented in Table 1.
The remaining constitutive equations for the interstitial fluid are based on a dissipative, energy-density formulation -- i.e., \(\psi_{f}=\hat{\psi}_{f}(\rho_{f},T_{f})\) with \(\mathbf{\tau}_{f}:\mathbf{D}_{f}\geq 0\) and \(\mathbf{f}_{d}\cdot(\mathbf{v}_{s}-\mathbf{v}_{f})\geq 0\). These constitutive equations incorporate analytical models for the compression of the pore fluid and its viscous interaction with surrounding particles, which are primarily based on available models from the literature. The pore fluid pressure \(p_{f}\), the viscous shear stress \(\mathbf{\tau}_{f}\), and the drag force \(\mathbf{f}_{d}\) are characterized for water using the model parameters presented in Table 1.
### Example: Uniaxial Strain Compression of Ottawa Sand
The basic capabilities of the model are demonstrated through an example compression test on dry Ottawa sand using the model parameters from Table 1. Here, an initially monodisperse sample of Ottawa sand (\(d_{50}=300\)\(\mu\)m, \(\rho_{0}=2650\) kg/m\({}^{3}\), \(\phi_{s}=0.65\), \(B_{0}=0.0\)) is compressed along one axis while the other two axes remain fixed -- i.e., uniaxial strain compression, a condition that is typically attained in plate impact experiments. For this example, we consider two possible implementations of the constitutive model: a _compressible_ version and an _incompressible_ version. The compressible version is the complete model, including both strain-energy contributions, \(\psi_{c}\) and \(\psi_{g}\) from (9). The incompressible version
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Parameter & Dog’s Bay & Ottawa & Units & Description \\ \hline \multicolumn{4}{l}{Elastic Stress Parameters} \\ \hline \(\theta\) & 0.83 & 0.83 & – & grading index (Einav, 2007) \\ \(p_{r}\) & 1000 & 1000 & Pa & reference pressure (Nguyen and Einav, 2009) \\ \(\bar{K}\) & 16330 & 15340 & – & dimensionless, reference bulk modulus \\ \(\bar{G}\) & 25710 & 9200 & – & dimensionless, reference shear modulus \\ \(b\) & – & 10 & – & density–elasticity coupling parameter \\ \(\rho_{0}\) & 2710 & 2650 & kg/m\({}^{3}\) & reference solid density \\ \(T_{0}\) & 298 & 298 & K & reference temperature \\ \(c_{v}\) & – & 736 & J/kg\(\cdot\)K & solid specific heat capacity \\ \(\Gamma_{0}\) & – & 0.67 & – & reference Grüneisen parameter (Boehler et al., 1979) \\ \(C_{0}\) & – & 3630 & m/s & reference solid bulk sound speed \\ \(S_{0}\) & – & 0.89 & – & \(U_{s}\)–\(U_{p}\) slope for Mie–Grüneisen EOS (Wackerle, 1962) \\ \hline \multicolumn{4}{l}{Inelastic Deformation Parameters} \\ \hline \(E_{c}\) & 280 & 5.0\(\times 10^{5}\) & J/m\({}^{3}\) & critical breakage energy (Einav, 2007) \\ \(M_{0}\) & 1.65 & 1.02 & – & critical state friction coefficient \\ \(\gamma\) & 0.95 & 0.15 & – & dilation parameter (Tengattini et al., 2016) \\ \(\phi_{l}\) & 0.80 & 0.31 & – & lower porosity at \(B=0\) (Rubin and Einav, 2011) \\ \(\phi_{u}\) & 0.90 & 0.45 & – & upper porosity at \(B=0\) (Rubin and Einav, 2011) \\ \(l\) & 0.26 & 0.22 & – & lower porosity power law parameter \\ \(u\) & 0.21 & 0.17 & – & upper porosity power law parameter \\ \hline \multicolumn{4}{l}{Fluid EOS Parameters (Water; Brundage, 2013)} \\ \hline \(\eta_{0}\) & – & 8.9\(\times 10^{-4}\) & Pa\(\cdot\)s & dynamic fluid viscosity \\ \(\rho_{f0}\) & – & 998 & kg/m\({}^{3}\) & reference fluid density \\ \(c_{vf}\) & – & 3690 & J/kg\(\cdot\)K & fluid specific heat capacity \\ \(E_{0}\) & – & 7\(\times 10^{6}\) & J/kg & reference internal energy \\ \(a_{f}\) & – & 0.7 & – & Tillotson EOS parameter \\ \(b_{f}\) & – & 0.15 & – & Tillotson EOS parameter \\ \(A_{f}\) & – & 2.18\(\times 10^{9}\) & Pa & Tillotson EOS parameter \\ \(B_{f}\) & – & 1.325\(\times 10^{10}\) & Pa & Tillotson EOS parameter \\ \(\alpha_{f}\) & – & 10 & – & Tillotson EOS parameter \\ \(\beta_{f}\) & – & 5 & – & Tillotson EOS parameter \\ \(\rho_{\rm IV}\) & – & 958 & kg/m\({}^{3}\) & density of incipient vaporization \\ \(E_{\rm IV}\) & – & 4.2\(\times 10^{5}\) & J/kg & energy of incipient vaporization \\ \(E_{\rm CV}\) & – & 2.5\(\times 10^{6}\) & J/kg & energy of complete vaporization \\ \(P_{\rm cav}\) & – & -2.5\(\times 10^{7}\) & Pa & cavitation pressure (Herbert et al., 2006) \\ \hline \multicolumn{4}{l}{Inter-phase Drag Parameters} \\ \hline \(d_{50}\) & – & 3\(\times 10^{-4}\) & m & characteristic grain size \\ \hline \end{tabular}
\end{table}
Table 1: List of constitutive model parameters, with values presented for Dog’s Bay and Ottawa sands.
is a simplified implementation which only considers the contribution of \(\psi_{c}\) to the effective granular stress \(\boldsymbol{\sigma}_{s}\). The predicted material response to this loading condition is shown in Figure 5, and highlights important characteristic behaviors of the model.
Figure 5a,b show that the model exhibits three distinct stages of compaction, which are analogous to the canonical stages of powder compaction described in Reed (1995). Stage I is the low-pressure stage, dominated by granular rearrangement and the growth of contact stresses (see Figures 4a and 5a). In this stage, the model predicts almost no evolution of the relative breakage, \(B\), or compaction of pore space (Figure 5c). Stage II is the compaction stage, dominated by particle fracture, fragmentation, and pore collapse (Figure 5b,c). Almost all of the particle fracture predicted by the model occurs in this stage. Finally, stage III is the fully compacted stage, dominated by elastic compression of the constituent solid (Figures 4b and 5a,b).
In this final stage, we begin to see important differences between the compressible and incompressible model implementations. Below 100 MPa, there is negligible difference between the predictions of these two models, and the individual sand grains may be reasonably modeled as elastically incompressible -- i.e., \(\rho_{s}=\rho_{0}\) and \(p^{*}=0\). However, at higher confining stresses, the model predictions begin to deviate, and the simplified incompressible implementation becomes non-physical (Figure 5c). Under these conditions the full compressible model should be used -- i.e., \(\rho_{s}=\rho_{0}+\rho_{0}\)\(\hat{\alpha}(\phi_{s})(J^{e-1}-1)\) and \(p^{*}\geq 0\) -- to avoid erroneous predictions of available pore space between individual particles.
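The practical difference between the two implementations reduces to how the solid grain density is updated. A minimal sketch is given below (Python; our own reading of the expression above, interpreting \(J^{e-1}\) as \((J^{e})^{-1}\), with \(\hat{\alpha}(\phi_{s})\) the density-elasticity coupling function defined earlier in the paper and supplied here as a callable).

```python
def solid_density_incompressible(rho_0=2650.0):
    """Incompressible grains: rho_s = rho_0 and p* = 0 (adequate below ~100 MPa)."""
    return rho_0

def solid_density_compressible(J_e, phi_s, alpha_hat, rho_0=2650.0):
    """Compressible grains: rho_s = rho_0 + rho_0*alpha_hat(phi_s)*((J_e)^-1 - 1).

    J_e       : elastic volume ratio (J_e < 1 in compression)
    alpha_hat : density-elasticity coupling function (not reproduced here)
    """
    return rho_0 + rho_0 * alpha_hat(phi_s) * (1.0 / J_e - 1.0)
```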
Although the uniaxial strain compression curves presented in this section highlight key features of the model, they of course do not capture the full range of potential applications of
Figure 5: Example model predictions for uniaxial compression of an initially monodisperse Ottawa sand sample (\(\phi_{s0}=0.65\), \(B_{0}=0.0\)). (a) Comparison of predicted stress-strain response for incompressible and complete model implementations; black-dashed lines separate three canonical compaction stages from Reed (1995). (b) Comparison of predicted stress-breakage response for incompressible and complete model implementations. (c) Comparison of predicted stress-porosity response for both model implementations; black-dashed line denotes theoretical minimum value. The incompressible model predicts a non-physical porosity above 700 MPa.
the model. In Section 6, we implement this constitutive model in the full system of governing equations from Section 2 to predict the response of fluid-saturated granular materials along multi-stage loading paths and under the complex, dynamic conditions of impact.
## 5 Materials and Methods
Laboratory data for two sands (Dog's Bay and Ottawa) are considered to calibrate and validate this constitutive model. Dog's Bay sand is a weak, biogenic carbonate sand from western Ireland that consists of foraminifera and mollusc shells (Coop, 1990), and Ottawa sand is a quartz-based sand mined from deposits located in Ottawa, Illinois (Erdogan et al., 2017). The model parameters that characterize these two sands are presented in Table 1 and calibrated using the procedure described in A.
The primary data sources for Dog's Bay sand are the isotropic compression and bender element tests performed in Jovicic and Coop (1997), which are used to calibrate \(p_{r}\), \(\bar{K}\), and \(\bar{G}\); the compression tests performed in Coop (1990), used to calibrate \(E_{c}\); and the multi-stage triaxial tests performed in Bandini and Coop (2011), used to calibrate \(M_{0}\) and \(\gamma\). Calibration of \(\theta\), \(\phi_{u}\), \(\phi_{l}\), \(u\), and \(l\) is based on previous modeling work from Tengattini et al. (2016). There is insufficient data to calibrate model parameters for Dog's Bay sand in the high-pressure, low-porosity regime -- namely, \(b\), \(\rho_{0}\), \(c_{v}\), \(\Gamma_{0}\), \(C_{0}\), and \(S_{0}\). Future uniaxial strain compression experiments or plate impact experiments could be used for this purpose.
As discussed in Section 4.1, the individual sand grains may be modeled as elastically incompressible for pressures up to 100 MPa. Although the model parameter \(b\) could not be calibrated for Dog's Bay sand, we apply the model to problems within the domain of calibration and assume that \(\rho_{s}=\rho_{0}\). Several triaxial loading experiments reported in Bandini and Coop (2011) for Dog's Bay sand are used to validate the model in this low-pressure, high-porosity regime. (Note that select data points from these experiments were also used to calibrate \(M_{0}\) and \(\gamma\); here the complete loading path is used for model validation.)
The primary data sources for Ottawa sand are the grading tests performed in Youd (1973), which are used to calibrate \(\phi_{u}\), \(\phi_{l}\), \(u\), and \(l\); the bender element tests performed in Robertson et al. (1995), used to calibrate \(p_{r}\) and \(\bar{G}\); the uniaxial compression tests performed in Kuwik et al. (2022), used to calibrate \(\bar{K}\), \(E_{c}\), and \(b\); the ring shear tests performed in Wijewickreme (1986) and Dakoulas and Sun (1992), used to calibrate \(M_{0}\); the ring shear tests performed in Sadrekarimi and Olson (2011), used to calibrate \(\gamma\); and the quartz crystal characterization tests performed in Wackerle (1962), Boehler et al. (1979), Lyzenga et al. (1983), and Heyliger et al. (2003), used to calibrate \(\rho_{0}\), \(c_{v}\), \(\Gamma_{0}\), \(C_{0}\), and \(S_{0}\).
To validate the application of this model to Ottawa sand in conditions relevant for high-velocity impact events, additional experimental data are considered -- including original data collected in this study. The uniaxial compression experiments reported in Kuwik et al. (2022) are used to calibrate the stress-strain response of the model and validate the stress-breakage (\(B\)) behavior. The triaxial loading experiments reported in Shahin and Hurley (2022) are used to validate the model along a two-stage loading path in the high-pressure, low-porosity regime. Additional projectile penetration and cratering data were collected for several high-velocity impact experiments performed on Ottawa sand at the
Hopkins Extreme Materials Institute HyFIRE (Hypervelocity Facility for Impact Research Experiments) facility.
### Experimental Methods
The high-velocity impact experiments reported in this study were performed using a two-stage light gas gun in the HyFIRE facility (shown in Figure 6). Target specimens of Ottawa sand (\(d_{50}=300\)\(\mu\)m, \(\rho_{0}=2650\) kg/m\({}^{3}\), \(\phi_{s}=0.37\); Shahin and Hurley, 2022) were prepared in 7.62 cm diameter polycarbonate tubes measuring 15.24 cm in length. Dry samples were prepared with 1.16-1.17 kg of material and sealed with Parafilm\({}^{\mathrm{TM}}\) M wax film. Saturated samples were prepared with 1.16-1.17 kg of material added to 0.26-0.27 liters of water and sealed with Parafilm\({}^{\mathrm{TM}}\) M wax film. To remove trapped gases, the saturated samples were also vibrated for 30 minutes and allowed to settle for 48 hours before the experiment. Each specimen was mounted horizontally in the target chamber and impacted by a 3 mm, 440C stainless steel sphere at between 1.16 and 1.70 km/s (average 1.28 km/s). A summary of the impact test conditions is presented in Table 2, including experiments previously reported in Kuwik et al. (2023).
During the impact event, synchronous and asynchronous flash X-ray images were captured using a two-channel, dual-output X-ray system from L3 Communications, with images captured on Carestream Industrex digital imaging plates. These plates were scanned using a ScanX Discover HC scanner. Simultaneously, high-speed imaging of the impact was captured using a Shimadzu HPV-X2 high-speed video camera, which was synchronized with an AMOtronics Saturn System test sequencer and a Physics Applications International break beam sensor. An illustration of the experimental configuration within HyFIRE is shown in Figure 6.
### Numerical Methods
The model predictions presented in this work are numerical approximations of solutions to the system of governing equations in (3)-(5). For the uniaxial and triaxial tests simulated
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Test \# & Condition & Chamber Pressure (Torr) & Right X-Ray Flash (\(\mu\)s) & Left X-Ray Flash (\(\mu\)s) & Velocity (km/s) \\ \hline \(1^{*}\) & Dry & 100 & 50 & 50 & 1.22 \\ \(2^{*}\) & Dry & 100 & 150 & 150 & 1.32 \\ \(3^{*}\) & Dry & 500 & 50 & 50 & 1.21 \\ \(4^{*}\) & Dry & 760 & 89 & 89 & 1.17 \\ \(5^{*}\) & Saturated & 760 & 54 & 54 & 1.25 \\ \(6^{*}\) & Saturated & 760 & 90 & 90 & 1.70 \\ \(7^{*}\) & Saturated & 760 & 150 & 150 & 1.16 \\ \(8^{\dagger}\) & Saturated & 500 & 54 & 154 & 1.26 \\ \(9^{\dagger}\) & Saturated & 500 & 23 & 98 & 1.28 \\ \(10^{\dagger}\) & Dry & 500 & 28 & 103 & 1.27 \\ \(11^{\dagger}\) & Dry & 500 & 29 & 154 & 1.29 \\ \hline \multicolumn{6}{l}{\({}^{*}\)Samples interspersed with 2mm lead spheres to visualize deformations (see Kuwik et al., 2023).} \\ \multicolumn{6}{l}{\({}^{\dagger}\)Asynchronous X-ray flashes used for multiple depth and crater measurements.} \\ \end{tabular}
\end{table}
Table 2: List of impact tests for Ottawa sand performed at the Hopkins Extreme Materials Institute.
in Section 4.1 and Sections 6.1-6.3, isothermal, quasi-steady solutions are computed for single material points using a MATLAB implementation of the stress update algorithm discussed in B. For the dynamic simulations presented in Section 6.4, we use a custom implementation of the material point method (MPM) based on the numerical algorithm previously published in Baumgarten and Kamrin (2019a).
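The quasi-steady, single-material-point calculations referenced above amount to a strain-driven loop around the stress update algorithm of B. The skeleton below (Python; `stress_update` is a hypothetical stand-in for that algorithm and is not reproduced here, and the state dictionary keys are our own naming) illustrates the control flow assumed in Sections 4.1 and 6.1-6.3.

```python
def drive_material_point(stress_update, state, L_s, dt, n_steps):
    """Strain-driven loop for a single material point.

    stress_update : callable (state, L_s, dt) -> updated state, standing in
                    for the stress update algorithm described in B
    state         : dict of internal variables; 'sigma_s' is assumed to be a
                    3x3 numpy array, 'B' and 'phi_f' scalars
    L_s           : prescribed 3x3 velocity gradient for this loading stage
    """
    history = []
    for _ in range(n_steps):
        state = stress_update(state, L_s, dt)
        history.append((state["sigma_s"].copy(), state["B"], state["phi_f"]))
    return state, history
```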
## 6 Results
In this section, we use the numerical methods summarized in Section 5.2 to validate the model and study its application in conditions relevant to high-velocity impact events. Using the model parameters presented in Table 1, we simulate the triaxial compression and shearing of Dog's Bay sand from Bandini and Coop (2011) as well as the uniaxial compression and triaxial shearing of Ottawa sand from Kuwik et al. (2022) and Shahin and Hurley (2022), respectively. Additionally, we evaluate the predictive capabilities of the model by simulating both projectile dynamics and crater development during 1.3 km/s impacts into dry and water-saturated Ottawa sand.
### Triaxial Loading of Dog's Bay Sand
In Bandini and Coop (2011), a series of triaxial tests are performed on Dog's Bay sand (\(d_{50}=200\)\(\mu\)m, \(\rho_{0}=2710\) kg/m\({}^{3}\), \(\phi_{s}=0.35\)-0.39) to study the combined crushing and critical state response of the material. During these tests, samples were isotropically compressed to pressures between 500 kPa and 4 MPa, leading to significant inelastic compaction of the bulk material. After this initial compression, samples were sheared, unloaded, reconfigured, and sheared a second time. This multi-stage loading path allows for a unique evaluation of model predictions: starting from the same initial state, we assess how well the model follows experimental observations during the entire loading cycle. Although we fit the
Figure 6: (a) Illustration of the experimental configuration for the tests listed in Table 2 at the Hopkins Extreme Materials Institute (HEMI) Hypervelocity Facility for Impact Research Experiments (HyFIRE). (b) Digital photograph of Ottawa sand target sample mounted in target chamber; X-ray image plates are shown mounted in casings on the lower left and right of image.
parameters \(M_{0}\) and \(\gamma\) to data presented in Bandini and Coop (2011), no additional fitting is performed to match the multi-stage loading path reported here.
In this work, we are particularly interested in six of the drained tests reported in Bandini and Coop (2011): OR2, OR7, OR8, OR9, OR10, and OR11, which are summarized in Table 3. Each of these tests involves an initial compression followed by two shearing stages: one at the initial confining pressure and a second at a reduced confining stress. This multi-stage loading path is used to highlight the general applicability and robustness of the model, which is able to closely follow the compression, shearing, and volumetric changes observed in the experimental data (see Figure 7).
To simulate these six experiments, we implement the stress update algorithm described in B and model a single material point under drained loading conditions -- i.e., without confined, interstitial fluid. The deformation rate \(\mathbf{L}_{s}\) is assigned to follow the loading histories summarized in Table 3, using a numerically computed stress-gradient to satisfy the constant pressure and constant radial stress boundary conditions. In both the experiments and the simulations, the only differences between the six tests are the initial void ratio (\(e=\phi_{f}/\phi_{s}\)), the confining stresses, and the applied deformations. All six samples have the same initial particle size distribution (\(d_{50}=200\)\(\mu\)m, \(B_{0}=0.52\)) and are simulated using the same model parameters.
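For these drained triaxial stages, the constant-radial-stress boundary condition can be enforced by adjusting the radial strain rate at each step until the computed radial stress matches the target, which is one way to realize the numerically computed stress-gradient mentioned above. The sketch below (Python; our own schematic, reusing the hypothetical `stress_update` stand-in from Section 5.2 and a simple secant iteration) is illustrative only, not the authors' implementation.

```python
import numpy as np

def triaxial_step(stress_update, state, eps_dot_ax, sigma_r_target, dt,
                  guess=0.0, tol=1.0e3, max_iter=20):
    """One strain-controlled triaxial step at (approximately) fixed radial stress.

    The axial strain rate eps_dot_ax is prescribed; the radial strain rate is
    found by a secant iteration so that sigma_s[1,1] matches sigma_r_target [Pa].
    """
    def radial_stress(eps_dot_r):
        L_s = np.diag([-eps_dot_ax, eps_dot_r, eps_dot_r])
        trial = stress_update(dict(state), L_s, dt)   # shallow copy; a real code
        return trial["sigma_s"][1, 1], trial          # would deep-copy the state

    x0, x1 = guess, guess + 1.0e-6
    f0, trial = radial_stress(x0)
    for _ in range(max_iter):
        f1, trial = radial_stress(x1)
        if abs(f1 - sigma_r_target) < tol:
            break
        # Secant update on the radial strain rate.
        x0, x1, f0 = x1, x1 - (f1 - sigma_r_target) * (x1 - x0) / (f1 - f0), f1
    return trial
```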
In Figure 7a,b, the compaction curves reported in Bandini and Coop (2011) are compared with the predictions of the constitutive model proposed in this work. The void ratios
\begin{table}
\begin{tabular}{l l l l} \hline \hline Test & Initial Breakage, \(B_{0}\) & Final Breakage, \(B_{\mathrm{exp}}\) & Final Breakage, Model \\ \hline OR2 & 0.52 & 0.61 & 0.85 \\ OR7 & 0.52 & 0.74 & 0.94 \\ OR8 & 0.52 & 0.75 & 0.94 \\ OR9 & 0.52 & 0.88 & 0.99 \\ OR10 & 0.52 & 0.68 & 0.98 \\ OR11 & 0.52 & 0.89 & 0.98 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of the breakage, \(B\), measured before the triaxial compression tests reported in Bandini and Coop (2011), \(B_{0}\); after the completion of these tests, \(B_{\mathrm{exp}}\); and as simulated using the proposed model.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Test & Initial Void & Initial Loading & Axial Strain (\%), & Second Loading & Axial Strain (\%), \\ & Ratio (–) & (kPa) & First Shearing & (kPa) & Second Shearing \\ \hline OR2 & 1.758 & 500 & 21.07 & 100\({}^{\dagger}\) & 19.71 \\ OR7 & 1.591 & 1000 & 29.19 & 350\({}^{\dagger}\) & 14.75 \\ OR8 & 1.807 & 1000 & 25.07 & 500 & 21.79 \\ OR9 & 1.787 & 4000 & 19.56 & 1600\({}^{\dagger}\) & 19.49 \\ OR10 & 1.695 & 4000 & 22.28 & 1600 & 15.15 \\ OR11 & 1.737 & 4000 & 22.59 & 800 & 14.29 \\ \hline \hline \end{tabular}
\end{table}
Table 3: List of simulated, triaxial compression tests for Dog’s Bay sand, following Bandini and Coop (2011).
measured at the end of each test -- highlighted by enlarged markers in both plots -- are strikingly similar between the experiments and the simulations across the range of loading paths considered. Importantly, these final points are not fit: they are predicted using the parameters calibrated in A, following the loading paths in Table 3. This level of agreement along such multi-stage loading paths is unmatched by similar models proposed in the literature.
In Figure 7c, we compare the predicted pressure-shear-stress path simulated for test OR8 with the experimentally measured path reported in Bandini and Coop (2011). Here, the shear stress (\(q=\sqrt{(3/2)\boldsymbol{\sigma}_{s0}:\boldsymbol{\sigma}_{s0}}\)) develops as the samples are sheared at constant pressure to the same final axial strain (shown in Table 3). Although the compression curves shown in Figure 7a,b are remarkably similar, the simulated shear-stress response generally under-predicts experimental measurements. Additionally, as shown in Table 4, the relative
Figure 7: (a) Experimental compression curves for the Dog’s Bay sand tests listed in Table 3 from Bandini and Coop (2011). (b) Simulated compression curves following experimental loading paths. (c) Comparison of experimental (black, solid line) and simulated (red, dashed line) stress path for test OR8 — shearing stages are strain controlled at a fixed pressure. The measured and simulated values of relative breakage (\(B\)) at the end of each test in (a),(b) are listed in Table 4.
breakage (\(B\)) predicted at the end of each simulation is significantly larger than the relative breakage measured at the end of the experiments (\(B_{\rm exp}\)). This trend has been observed with similar breakage mechanics models (e.g., see Kuwik et al., 2022) and suggests that the assumed coupling angle \(\omega\) in (26) and (28) is generally too small -- over-predicting the evolution of \(B\). These differences are discussed further in Section 7.
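The invariants used in this comparison -- the pressure \(p\) and the deviatoric stress \(q=\sqrt{(3/2)\,\boldsymbol{\sigma}_{s0}:\boldsymbol{\sigma}_{s0}}\) -- are computed from the effective stress tensor in the usual way. A small helper is sketched below (Python; our own, using the compression-positive sign convention \(p=-\mathrm{tr}(\boldsymbol{\sigma}_{s})/3\) quoted later in Section 6.4).

```python
import numpy as np

def stress_invariants(sigma_s):
    """Return (p, q) for a 3x3 effective granular stress tensor.

    p = -tr(sigma_s)/3 (positive in compression);
    q = sqrt(3/2 * sigma_0 : sigma_0), with sigma_0 the stress deviator.
    """
    sigma_s = np.asarray(sigma_s, dtype=float)
    p = -np.trace(sigma_s) / 3.0
    dev = sigma_s + p * np.eye(3)          # deviator: sigma_s - (tr/3) I
    q = np.sqrt(1.5 * np.sum(dev * dev))
    return p, q


# Example: ~1.17 MPa mean pressure with a 0.5 MPa deviatoric stress.
p, q = stress_invariants(np.diag([-1.5e6, -1.0e6, -1.0e6]))
```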
### Uniaxial Loading of Ottawa Sand
We continue the analysis of the model by considering the behavior of a second material system: Ottawa sand. In Kuwik et al. (2022), a series of uniaxial compression tests -- or _oedometer_ tests -- are performed on Ottawa sand (\(d_{50}=300\)\(\mu\)m, \(\rho_{0}=2650\) kg/m\({}^{3}\), \(\phi_{s}=0.63\), \(B_{0}=0.14\)) to study the stress-strain-breakage response of the material. During these tests, samples were prepared in a thick-walled test cell and subjected to axial compression up to 30%, leading to significant inelastic compaction of the bulk material and axial stresses measured up to 1.2 GPa. This series of tests allows us to evaluate the high-pressure predictions of the model in a simple geometry: starting from a stress-free state through to a fully compressed, pulverized sediment.
The compression data reported in Kuwik et al. (2022) are some of the first to highlight the three canonical stages of powder compaction (Figures 5a,b and 8a,b; see Reed, 1995) for Ottawa sand and cover strain rates from \(10^{-3}\) to \(10^{3}\) s\({}^{-1}\) and pressures between \(10^{6}\) and \(10^{9}\) Pa. In this work, we are particularly interested in the data collected on the MTS Criterion 43, which have complete stress-strain histories. From this set of experiments, we highlight the stress-strain data collected at \(10^{-1}\) s\({}^{-1}\) (shown in Figure 8a,b), which is used for partial model calibration. This curve is chosen to fit \(\bar{K}\), \(E_{c}\), and \(b\) following the procedure discussed in A. The remaining model parameters in Table 1 are calibrated from other data sources, as listed in Section 5.
To simulate this experiment, we implement the stress update algorithm described in B and model a single material point under uniaxial loading conditions -- i.e., \(\mathbf{L}_{s}=-\dot{\epsilon}_{11}\ \mathbf{e}_{1}\otimes\mathbf{e}_{1}\) with \(\dot{\epsilon}_{11}\) the compressive strain-rate and \(\mathbf{e}_{1}\) the unit vector aligned with the loading axis. Here, we again consider two possible implementations of the constitutive model: a compressible version and an incompressible version. As discussed in Section 4.1, below 100 MPa, the individual sand grains may be reasonably modeled as elastically incompressible. However, the predictions of this simplified implementation are expected to be poor and potentially non-physical at higher pressures. Under these conditions the full compressible model should be used.
In Figure 8a,b, the compaction curves reported in Kuwik et al. (2022) are compared with the predictions of both the compressible and incompressible implementations described above. In both plots, the three canonical stages of compaction are labeled (see Reed, 1995). Stage I indicates the low-pressure stage, dominated by granular rearrangement and the growth of contact stresses (see Figure 4a). Stage II indicates the compaction stage, dominated by particle fracture, fragmentation, and pore collapse. Finally, stage III indicates the compacted stage, dominated by elastic compression of the constituent solid (see Figure 4b). In stages I and II, the predicted behavior of both model implementations closely matches
the data reported in Kuwik et al. (2022); however, in stage III, the incompressible model begins to deviate, highlighting its limited application to pressures below 100 MPa.
Additionally, in Figure 8c, we compare the relative breakage (\(B\)) predicted by both model implementations with the particle size distribution data reported in Kuwik et al. (2022). As in Section 6.1, both models over-predict the measured relative breakage while following a similar general trend. This is discussed further in Section 7 and suggests that the assumed coupling angle \(\omega\) in (26) and (28) should be investigated further.
### Triaxial Loading of Ottawa Sand
To validate the calibration of the model from the previous section, we analyze data from a set of experiments reported in Shahin and Hurley (2022). There, a series of triaxial
Figure 8: Comparison of predicted stress–strain–breakage response of Ottawa sand with uniaxial compression (oedometer) data reported in Kuwik et al. (2022). (a) Plot of predicted axial stress against engineering strain — in compression — for compressible (red, dashed line) and incompressible (red, dotted line) models, overlaid on experimental measurements. (b) Plot of predicted theoretical density (\(\bar{\rho}_{s}/\rho_{0}\)) against compressive axial stress highlighting three canonical stages of compaction (Reed, 1995). (c) Comparison of simulated and experimental relative breakage \(B\) along uniaxial compression path.
compression and shearing tests are performed on Ottawa sand (\(d_{50}=175\)-\(300\)\(\mu\)m, \(\rho_{0}=2650\) kg/m\({}^{3}\), \(\phi_{s}=0.65\)-\(0.70\)) to study the stress-strain-dilation response of the material, including analysis of shear band formation. During these tests, samples were prepared in a high-pressure triaxial compression instrument, which is capable of applying several MPa pressures and shear stresses simultaneously. At the start of each test, samples are compressed isotropically to pressures between 10 and 45 MPa. The compressed samples are then sheared at fixed radial stresses to between 10% and 16% axial compression. This series of experiments allows us to evaluate the predictive capabilities of the model when no additional calibration of model parameters is performed.
Here, we focus on the stress-strain-dilation data reported in Shahin and Hurley (2022) for tests OS-10, OS-15, OS-20, OS-25, OS-30, OS-35, and OS-45, which are summarized in Table 5. To simulate these seven experiments, we implement the stress update algorithm described in B and model a single material point under drained loading conditions. The deformation rate \(\mathbf{L}_{s}\) is assigned to follow the loading histories summarized in Table 5, using a numerically computed stress-gradient to satisfy the constant radial stress boundary conditions. In both the experiments and the simulations, the only differences between the seven tests are the initial porosity (\(\phi_{f}\)), the initial confining stress, and the initial particle size distribution (\(d_{50}\) and \(B_{0}\)). All seven samples are simulated using the same model parameters.
In Figure 9a,b, the deviatoric stresses (\(q=\sqrt{(3/2)\mathbf{\sigma}_{s0}:\mathbf{\sigma}_{s0}}\)) reported in Shahin and Hurley (2022) are compared with the predicted stress-strain response of the constitutive model. The stress drops observed in Figure 9a are due to stress relaxation during pauses in the experimental loading (for X-ray scanning of the samples) and are not features of the steady loading conditions simulated in Figure 9b. Although the simulated stresses shown in Figure 9b tend to under-predict the measurements shown in Figure 9a, similar strain-hardening and pressure-dependent yielding trends are observed (e.g., compare tests OS-10 and OS-45).
Additionally, in Figure 9c,d, we compare the measured volumetric strains reported in Shahin and Hurley (2022) with the predicted volumetric strains determined from the simulations. Here, the model predictions are much closer to the experimentally measured values, with dilation angles (\(\epsilon_{v}/\epsilon_{s}\)) ranging from -0.15 to -0.48 for the data reported in Shahin and Hurley (2022) and from -0.13 to -0.36 in the simulations reported here.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Test & Initial Porosity (–) & Initial Breakage (–) & Initial \(d_{50}\) (\(\mu\)m) & Initial Loading (MPa) \\ \hline OS-10 & 0.30 & 0.25 & 300 & 10 \\ OS-15 & 0.35 & 0.125 & 175 & 15 \\ OS-20 & 0.32 & 0.25 & 300 & 20 \\ OS-25 & 0.34 & 0.125 & 175 & 25 \\ OS-30 & 0.31 & 0.25 & 300 & 30 \\ OS-35 & 0.34 & 0.125 & 175 & 35 \\ OS-45 & 0.32 & 0.123 & 175 & 45 \\ \hline \hline \end{tabular}
\end{table}
Table 5: List of simulated, triaxial compression tests for Ottawa sand, following Shahin and Hurley (2022).
### High-Velocity Impact into Ottawa Sand
We now consider the application of the model to the primary problem of interest in this work: prediction of projectile dynamics and crater development during high-velocity impact events. To validate these model predictions, we also report a series of tests performed on Ottawa sand (\(d_{50}=300~{}\mu\)m, \(\rho_{0}=2650~{}\)kg/m\({}^{3}\), \(\phi_{s}=0.63\), \(B_{0}=0.14\)) that studies the influence of confined interstitial fluids on the cratering process and the penetration depth of high-speed projectiles. In this section, no data fitting or calibration is performed. All model predictions are determined from simulations that use the model parameters listed in Table 1, which are calibrated following the procedure described in A.
As discussed in Section 5.1, the experiments reported in this section were performed at the Hopkins Extreme Materials Institute (HEMI) using a two-stage light gas gun to launch 3 mm, 440C stainless steel spheres into dry and water-saturated Ottawa sand. The
Figure 9: (a) Experimental deviatoric stress–strain curves for the Ottawa sand tests listed in Table 5 from Shahin and Hurley (2022). (b) Simulated deviatoric stress–strain curves for the same loading conditions. (c) Experimental dilation curves comparing the volumetric strain to axial compression for tests reported in Shahin and Hurley (2022). (d) Corresponding simulated dilation curves comparing predicted volumetric strain during axial compression.
complete set of tests performed in this work are listed in Table 2 and use the experimental configuration shown in Figure 6. (See Kuwik et al., 2023, for further details.)
During each test, collected data included the initial projectile velocity; in-situ, stereo or asynchronous flash X-ray images; and high-speed scattered light images of the impact event. The position of the projectile within the sample was determined visually and measured using pixel counts in the GNU Image Manipulation Program (GIMP). Example X-ray and scattered light images from test #8 are shown in Figure 10b,c.
To simulate these impact events, we implement the stress update algorithm described in B into the axisymmetric, two-phase material point method (MPM) from Baumgarten and Kamrin (2019a). The simulated domain measures 3.81 cm \(\times\) 20.0 cm and is discretized into 15,159 triangular elements with a minimum side length of 300 \(\mu\)m: the \(d_{50}\) of Ottawa sand. The Ottawa sand samples (\(\bar{\rho}_{s}=1670\) kg/m\({}^{3}\)) are represented with 51,623 material points, and an additional layer of 51,623 material points is added to the water-saturated simulations to represent the interstitial fluid (\(\bar{\rho}_{f}=370\) kg/m\({}^{3}\)). The 3 mm, stainless steel projectile (\(\rho=8000\) kg/m\({}^{3}\), \(E=195\) GPa, \(\nu=0.3\)) is represented by 326 material points initially positioned 6 mm above the target surface. An example snapshot from a simulated impact into water-saturated Ottawa sand is shown in Figure 10a.
For this analysis, we examine two impact simulations in particular: one into dry sand and another into water-saturated sand. Both simulations consider impacts at 1.3 km/s: the average velocity for the tests reported in Table 2. The vertical position of the projectile
Figure 10: Comparison of high-velocity impact simulation with flash X-ray imaging and high-speed scattered light imaging of impact into water-saturated Ottawa sand. (a) Simulation snapshot taken 154 \(\mu\)s after impact using two-phase material point method (MPM; see Baumgarten and Kamrin, 2019a). (b) Scanned, in-situ X-ray image from test #8 in Table 2 captured 154 \(\mu\)s after impact (inset brightness and contrast adjusted using GNU Image Manipulation Program, GIMP; “L” indicates use of left-side X-ray flash). (c) High-speed video frame taken during test #8 from Table 2 captured 154 \(\mu\)s after impact. Note that the dispersion of the ejecta visible in (c) is greater than what is apparent in (a), highlighting a potential model limitation.
relative to the target surface is computed during both simulations and is plotted over time in Figure 11. These two time histories highlight an interesting result, which is also observed in the experimental data: the presence of interstitial water appears to weaken the Ottawa sand targets, allowing the projectiles to penetrate further into the samples than when the dry sand is impacted alone. This observation is further highlighted in Figures 12-14.
Figure 12: Comparison of crater shape, projectile position, and pressure contours approximately 25 \(\mu\)s after impact of 3 mm, stainless steel sphere into dry (a,b) and water-saturated (c–e) Ottawa sand at 1.3 km/s. (a) Flash X-ray image from dry test #10 at 28 \(\mu\)s (inset brightness and contrast adjusted). (b) Pressure contours from dry impact simulation at 25 \(\mu\)s. (c) Flash X-ray image from saturated test #9 at 23 \(\mu\)s (inset brightness and contrast adjusted). (d) Solid pressure contours, \(p_{s}=-\mathrm{tr}(\boldsymbol{\sigma}_{s})/3\), from saturated impact simulation at 25 \(\mu\)s. (e) Fluid pressure contours, \(p_{f}\), from saturated impact simulation at 25 \(\mu\)s.
Figure 13: Comparison of crater shape, projectile position, and pressure contours approximately 100 \(\mu\)s after impact of 3 mm, stainless steel sphere into dry (a,b) and water-saturated (c–e) Ottawa sand at 1.3 km/s. (a) Flash X-ray image from dry test #10 at 103 \(\mu\)s (inset brightness and contrast adjusted). (b) Pressure contours from dry impact simulation at 100 \(\mu\)s. (c) Flash X-ray image from saturated test #9 at 98 \(\mu\)s (inset brightness and contrast adjusted). (d) Solid pressure contours, \(p_{s}=-\mathrm{tr}(\boldsymbol{\sigma}_{s})/3\), from saturated impact simulation at 100 \(\mu\)s. (e) Fluid pressure contours, \(p_{f}\), from saturated impact simulation at 100 \(\mu\)s.
Figure 14: Comparison of crater shape, projectile position, and predicted porosity during impact of 3 mm, stainless steel sphere into dry (a,c,e) and water-saturated (b,d,f) Ottawa sand at 1.3 km/s. (a,b) Simulation snapshots at 10 \(\mu\)s. (c,d) Simulation snapshots at 25 \(\mu\)s. (e,f) Simulation snapshots at 50 \(\mu\)s. The presence of interstitial water in the saturated sand simulations on the right appears to inhibit pore collapse, which is apparent in the dry simulations on the left. This may explain the apparent fluidizing effect of the water as it inhibits the formation of the granular contacts required to develop frictional stresses.
Although the presence of water in the pore space between grains increases the combined density of the material ahead of the projectile, it also appears to reduce the sand's ability to absorb and dissipate kinetic energy. This behavior is consistent with the observed fluidization of loosely packed sediments during submarine landslides (e.g., \(\phi_{s}\leq 0.55\); see Pailha and Pouliquen, 2009). However, densely packed sediments (e.g., \(\phi_{s}\geq 0.60\)) and fine-particle suspensions (see Hoffman, 1974; Waitukaitis and Jaeger, 2012; Baumgarten and Kamrin, 2019b) generally exhibit the opposite trend, becoming stronger when saturated with water.
To understand this counter-intuitive result, we examine the cratering process and internal pressures predicted by these two simulations. In Figure 12 and Figure 13, the predicted pressure contours and crater shapes are compared with flash X-ray images taken during tests #9 and #10. Figure 12 shows this comparison at 25 \(\mu\)s after impact, and Figure 13 shows this comparison 100 \(\mu\)s after impact.
At 25 \(\mu\)s, the granular pressures, \(p_{s}=-\mathrm{tr}(\boldsymbol{\sigma}_{s})/3\), in both simulations exceed 100 MPa in the impact zone immediately ahead of the projectile. However, in the water-saturated simulation (shown in Figure 12d), this region of high granular pressure is much smaller than in the dry simulation and does not extend as far laterally. In this second simulation, the fluid pore pressure \(p_{f}\) appears to drive the combined response of the mixture, exceeding the granular pressure \(p_{s}\) throughout much of the domain. This transfer of stresses from the granular material to the interstitial water appears to inhibit the dominant mechanisms of energy dissipation in the granular sediment: frictional sliding, particle fragmentation, and pore collapse.
This hypothesis that the interstitial water inhibits the evolution of granular stress is further supported by the simulation snapshots shown in Figure 14. Here, the evolution of the porosity \(\phi_{f}\) in the absence of pore fluid (shown in Figure 14a,c,e) is compared with the evolution in the presence of interstitial water (shown in Figure 14b,d,f). In the dry simulation, the granular material is free to compact ahead of the projectile, as the individual grains are pulverized and the pore space collapses. In the water-saturated simulation, on the other hand, the presence of water in this pore space inhibits bulk compaction, reduces the stresses normally carried by the particles, and effectively fluidizes the material in the early stages of the cratering process.
## 7 Discussion
The model proposed in this work combines soil mechanics, poromechanics, and shock physics into a system of governing equations and constitutive expressions that predicts the behavior of fluid-saturated sediments in extreme loading environments. Key to this modeling approach is the use of granular micromechanics in defining the mathematical descriptions of granular elasticity; granular rearrangement; particle fracture and fragmentation; and pore fluid coupling. Using this approach, the model is shown to have unique predictive capabilities. In this section, we further discuss the results presented in this work and highlight important model limitations.
One advantage of the micromechanics modeling approach is that it allows us to isolate model equations and evaluate their applicability to hypothetical problems. For each of the
model mechanisms listed above, we may define an associated microscopic time-scale: i.e.,
\[\tau_{\sigma}\propto d_{50}/C_{0},\quad\tau_{i}\propto\sqrt{\rho_{s}d_{50}^{2}/p_ {s}},\quad\tau_{v}\propto\eta_{0}/p_{s},\quad\text{and}\quad\tau_{B}\propto d_{5 0}/v_{c}. \tag{36}\]
Here, \(\tau_{\sigma}\) denotes the time-scale of granular elasticity and is determined by the time required for stress-waves to propagate across individual particles. This time-scale is associated with the model equations in (17), (18), (19), and (21). Similarly, \(\tau_{i}\) denotes the time-scale of inertia-dominated granular rearrangement and is determined by the time required to move particles around one another. This time-scale is associated with the model equations in (20), (22), (23), (26b), (26c), and (28). On the other hand, \(\tau_{v}\) denotes the time-scale of viscous-dominated granular rearrangement and is associated with (23), (28), (32), and (35). Finally, \(\tau_{B}\) denotes the time-scale of fracture and fragmentation and is determined by the time required for _critical_ cracks to span individual particles. This time-scale is associated with the model equations in (22), (23), (26a), and (27). Note that \(v_{c}\) denotes the critical crack propagation speed in the constituent solid; however, a sub-critical time-scale may also be considered. (See Jop et al., 2006; Boyer et al., 2011; Zhang and Buscarnera, 2017, for additional discussion.)
An implicit assumption in the model is that each micromechanism behaves similarly across the range of temperatures and strain-rates considered in this work (i.e., 180 to 1000 K and \(10^{-1}\) to \(10^{6}\) s\({}^{-1}\), respectively). The temperature component is a clear model limitation, as we do not consider melting or possible brittle-ductile transitions of the constituent material. The strain-rate component, on the other hand, can be reasonably evaluated by comparing the microscopic time-scales above with a mesoscopic deformation time-scale: i.e.,
\[\tau_{\epsilon}\propto 1/\sqrt{\boldsymbol{D}_{s}:\boldsymbol{D}_{s}}, \tag{37}\]
with \(\boldsymbol{D}_{s}\) the mesoscopic strain-rate from (7). Importantly, we assume that the time-scales in (36) are fast relative to the mesoscopic timescale in (37), such that strain-rate effects can be neglected in their mathematical description.
Following such an analysis for water-saturated Ottawa sand (\(d_{50}=300\)\(\mu\)m; \(C_{0}=3630\) m/s; \(v_{c}\approx 1000\) m/s; \(\eta_{0}=8.91\times 10^{-4}\) Pa\(\cdot\)s) near the impact zone in Figure 1 (\(p_{s}\approx 1\) GPa), we predict that \(\tau_{\epsilon}\gg\tau_{\sigma}\), \(\tau_{i}\), \(\tau_{v}\), \(\tau_{B}\) for mesoscopic strain-rates between \(10^{-1}\) to \(10^{5}\) s\({}^{-1}\). Above this rate -- very near the impacting body in Section 6.4 -- we anticipate that the model will likely under-predict mechanical stresses in (19) and particle fragmentation in (26) due to dynamic fracture and the inability of particles to rearrange fast enough: \(\tau_{\epsilon}\approx\tau_{B}\), \(\tau_{i}\gg\tau_{\sigma}\), \(\tau_{v}\).
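This separation-of-scales argument is easy to reproduce numerically. The sketch below (Python; the parameter values are those quoted above for water-saturated Ottawa sand near the impact zone) evaluates (36) and (37) and prints the ratios \(\tau/\tau_{\epsilon}\), which remain well below unity up to roughly \(10^{5}\) s\({}^{-1}\).

```python
import math

def micro_time_scales(d_50=300e-6, C_0=3630.0, rho_s=2650.0,
                      p_s=1.0e9, eta_0=8.91e-4, v_c=1000.0):
    """Order-of-magnitude micro time-scales from Eq. (36)."""
    tau_sigma = d_50 / C_0                       # elastic wave transit across a grain
    tau_i = math.sqrt(rho_s * d_50 ** 2 / p_s)   # inertial rearrangement
    tau_v = eta_0 / p_s                          # viscous rearrangement
    tau_B = d_50 / v_c                           # critical crack propagation
    return tau_sigma, tau_i, tau_v, tau_B


def meso_time_scale(strain_rate):
    """Eq. (37): tau_eps ~ 1/|D_s| for a scalar strain-rate magnitude [1/s]."""
    return 1.0 / strain_rate


scales = micro_time_scales()
for rate in (1.0e-1, 1.0e5):
    tau_eps = meso_time_scale(rate)
    print(rate, [tau / tau_eps for tau in scales])   # all ratios << 1
```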
In addition to this time-scale evaluation, we can independently evaluate several of the model equations by considering their prediction of secondary model quantities, including the relative breakage variable, \(B\). As shown in Table 4 and Figure 8c, the model equations in (26) appear to systematically over-predict the amount of particle pulverization that is occurring within the simulated samples in Sections 6.1 and 6.2. These model equations are adjusted from similar forms proposed in Tengattini et al. (2016) and Cil et al. (2020), which exhibit similar deficiencies (e.g., see Kuwik et al., 2022). Adjusting these equations, more
carefully modeling the coupling angle \(\omega\), or changing the interpretation of the variable \(B\) in (23) may be particularly interesting directions for future model development.
All together, the model is robust and reasonably accurate across a range of conditions: from low-pressure testing of highly porous sediments (shown in Section 6.1) to high-pressure compression of silicate sands (shown in Sections 6.2 and 6.3) to high-rate dynamic loading of water-saturated samples (shown in Section 6.4). The results presented in this work highlight the importance of modeling the combined response of the solid sediment particles alongside the interstitial fluid using an appropriate mixture theory. Additionally, as shown in Section 6.2, the proposed decomposition of solid strain energy (\(\psi_{s}\)) into a contact component (\(\psi_{c}\)) and a compressible granular component (\(\psi_{g}\)) enhances model predictions at pressures above 100 MPa. Below this stress level, however, a more traditional treatment of the constituent solid appears sufficient for modeling the response of these materials.
The ability of the model to capture the multi-stage loading response of laboratory samples (shown in Figures 7 and 9) appears unmatched by similar models in the literature, which are usually calibrated to a single loading condition (e.g., see Figure 8). Additionally, the close agreement between the numerical simulations and experimental measurements reported in Section 6.4 (shown in Figure 11) indicate that the model for Ottawa sand has a well-calibrated, predictive capability over a wide range of stresses and strain-rates.
## 8 Conclusion
In this work, we have proposed, calibrated, and validated a predictive constitutive model for fluid-saturated, brittle granular materials and used this model to study the dynamics of projectile impact into dry and water-saturated sediments. Model parameters are provided for an example carbonate sand (Dog's Bay sand), which is calibrated to low pressures, and for an example silicate sand (Ottawa sand), which is calibrated to stresses between \(10^{3}\) and \(10^{9}\) Pa. Numerical simulations of the model highlight its predictive capabilities and allow study of the apparent fluidizing effect of interstitial water during a 1.3 km/s impact into Ottawa sand.
Application of the model is limited to conditions where the granular particles exhibit brittle, solid-like behavior and stresses have time to propagate during loading. However, the model is formulated to continue making physically realistic predictions somewhat outside of this range.
## 9 CRediT Authorship Contribution Statement
**A. S. Baumgarten:** Conceptualization, Methodology, Software, Formal Analysis, Investigation, Writing - Original Draft, Visualization. **J. Moreno:** Investigation, Resources, Data Curation, Writing - Review & Editing. **B. Kuwik:** Methodology, Formal Analysis, Investigation, Data Curation, Writing - Review & Editing. **S. Ghosh:** Investigation, Data Curation, Writing - Review & Editing. **R. Hurley:** Conceptualization, Methodology, Resources, Writing - Review & Editing, Supervision, Project Administration. **K.T. Ramesh:** Conceptualization, Methodology, Resources, Writing - Review & Editing, Supervision, Project Administration, Funding Acquisition.
## 10 Acknowledgements
The project or effort depicted was or is sponsored by the Department of Defense, Defense Threat Reduction Agency under the MSEE URA, HDTRA1-20-2-0001. The content of the information does not necessarily reflect the position or the policy of the federal government, and no official endorsement should be inferred. A. S. Baumgarten, J. Moreno, B. Kuwik, S. Ghosh, R. Hurley, and K.T. Ramesh gratefully acknowledge helpful discussions with all participants of the Material Constitutive Laws focus area of the Material Science in Extreme Environments University Research Alliance.
|
2310.20131 | Metasurface-based Mueller Matrix Microscope | In conventional optical microscopes, image contrast of objects mainly results
from the differences in light intensity and/or color. Mueller matrix optical
microscopes (MMMs), on the other hand, can provide significantly enhanced image
contrast and rich information about objects by analyzing their interactions
with polarized light. However, state-of-art MMMs are fundamentally limited by
bulky and slow polarization state generators and analyzers. Here, we
demonstrated the feasibility of applying metasurfaces to enable a fast and
compact MMM, i.e., Meta-MMM. We developed a dual-color MMM, in both reflection
and transmission modes, based on a chip-integrated high-speed (>20fps)
metasurface polarization state analyzer (Meta-PSA) and realized high
measurement accuracy for Mueller matrix (MM) imaging. We then applied our
Meta-MMM to nanostructure characterization, surface morphology analysis and
discovered birefringent structures in honeybee wings. Our meta-MMMs hold the
promise to revolutionize various applications from biological imaging, medical
diagnosis, material characterization to industry inspection and space
exploration. | Jiawei Zuo, Ashutosh Bangalore Aravinda Babu, Mo Tian, Jing Bai, Shinhyuk Choi, Hossain Mansur Resalat Faruque, Sarah Holloway, Michael N. Kozicki, Chao Wang, Yu Yao | 2023-10-31T02:51:30Z | http://arxiv.org/abs/2310.20131v3 | # Metasurface-based Mueller Matrix Microscope
###### Abstract
In conventional optical microscopes, image contrast of objects mainly results from the differences in light intensity and/or color. Mueller matrix optical microscopes (MMMs), on the other hand, can provide significantly enhanced image contrast and rich information about objects by analyzing their interactions with polarized light. However, state-of-art MMMs are fundamentally limited by bulky and slow polarization state generators and analyzers. Here, we demonstrated the feasibility of applying metasurfaces to enable a fast and compact MMM, i.e., Meta-MMM. We developed a dual-color MMM, in both reflection and transmission modes, based on a chip-integrated high-speed (\(>\)20fps) metasurface polarization state analyzer (Meta-PSA) and realized high measurement accuracy for Mueller matrix (MM) imaging. We then applied our Meta-MMM to nanostructure characterization, surface morphology analysis and discovered birefringent structures in honeybee wings. Our meta-MMMs hold the promise to revolutionize various applications from biological imaging, medical diagnosis, material characterization to industry inspection and space exploration. |
2309.14491 | Unsupervised 3D Perception with 2D Vision-Language Distillation for
Autonomous Driving | Closed-set 3D perception models trained on only a pre-defined set of object
categories can be inadequate for safety critical applications such as
autonomous driving where new object types can be encountered after deployment.
In this paper, we present a multi-modal auto labeling pipeline capable of
generating amodal 3D bounding boxes and tracklets for training models on
open-set categories without 3D human labels. Our pipeline exploits motion cues
inherent in point cloud sequences in combination with the freely available 2D
image-text pairs to identify and track all traffic participants. Compared to
the recent studies in this domain, which can only provide class-agnostic auto
labels limited to moving objects, our method can handle both static and moving
objects in the unsupervised manner and is able to output open-vocabulary
semantic labels thanks to the proposed vision-language knowledge distillation.
Experiments on the Waymo Open Dataset show that our approach outperforms the
prior work by significant margins on various unsupervised 3D perception tasks. | Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott Ettinger, Dragomir Anguelov | 2023-09-25T19:33:52Z | http://arxiv.org/abs/2309.14491v1 | # Unsupervised 3D Perception with 2D Vision-Language Distillation
###### Abstract
Closed-set 3D perception models trained on only a predefined set of object categories can be inadequate for safety critical applications such as autonomous driving where new object types can be encountered after deployment. In this paper, we present a multi-modal auto labeling pipeline capable of generating amodal 3D bounding boxes and tracklets for training models on open-set categories without 3D human labels. Our pipeline exploits motion cues inherent in point cloud sequences in combination with the freely available 2D image-text pairs to identify and track all traffic participants. Compared to the recent studies in this domain, which can only provide class-agnostic auto labels limited to moving objects, our method can handle both static and moving objects in the unsupervised manner and is able to output open-vocabulary semantic labels thanks to the proposed vision-language knowledge distillation. Experiments on the Waymo Open Dataset show that our approach outperforms the prior work by significant margins on various unsupervised 3D perception tasks.
## 1 Introduction
In autonomous driving, most existing 3D detection models [63, 23, 43] have been developed with the prior assumption that all possible categories of interest should be known and annotated during training. While significant progress has been made in this supervised closed-set setting, these methods still struggle to fully address the safety concerns that arise in high-stakes applications. Specifically, in the dynamic real-world environment, it is unacceptable for autonomous vehicles to fail to handle a category that is not present in the training data. To address this safety concern, a recent development by Najibi [36] proposed an unsupervised auto labeling pipeline that uses motion cues from point cloud sequences to localize 3D objects. However, by design, this method does not localize static objects which constitute a significant portion of traffic participants. Moreover, it only models the problem in a class-agnostic way and fails to provide semantic labels for scene understanding. This is suboptimal as semantic information is essential for downstream tasks such as motion planning, where category-specific safety protocols are deliberately added to navigate through various traffic participants.
Recently, models trained with large-scale image-text datasets have demonstrated robust flexibility and generalization capabilities for open-vocabulary image-based classification [39, 20, 34], detection [21, 12, 25, 60] and semantic segmentation [24, 11] tasks. Yet, open-vocabulary recognition in the 3D domain [9, 16, 41] is in its early stages. In the context of autonomous driving it is even more underexplored. In this work, we fill this gap by leveraging a pre
Figure 1: An illustration of three interesting urban scene examples of open-vocabulary perception. Left: our method can faithfully detect objects based on user-provided text queries during inference, without the need for 3D human supervision. Red points are points matched with the text queries. Right: camera images for readers’ reference. Note that the inference process solely relies on LiDAR points and does not require camera images.
trained vision-language model to realize open-vocabulary 3D perception in the wild.
We propose a novel paradigm of Unsupervised 3D Perception with 2D Vision-Language distillation (UP-VL). Specifically, by incorporating a pre-trained vision-language model, UP-VL can generate auto labels with substantially higher quality for objects in arbitrary motion states, compared to the latest work by Najibi _et al_. [36].
With our auto labels, we propose to co-train a 3D object detector with a knowledge distillation task, which can achieve two goals simultaneously, _i.e_. improving detection quality and transferring semantic features from 2D image pixels to 3D LiDAR points. The perception model therefore is capable of detecting all traffic participants and thanks to the distilled open-vocabulary features, we can flexibly query the detector's output embedding with text prompts, for preserving specific types of objects at inference time (see Figure 1 for some examples).
We summarize the contributions of UP-VL as follows:
* UP-VL achieves state-of-the-art performance on unsupervised 3D perception (detection and tracking) of moving objects for autonomous driving.
* UP-VL introduces semantic-aware unsupervised detection for objects in any motion state, a first in the field of autonomous driving. This breakthrough eliminates the information bottleneck that has plagued previous work [36], where class-agnostic auto labels were used, covering only moving objects with a speed above a predetermined threshold.
* UP-VL enables 3D open-vocabulary detection of novel objects in the wild, with queries specified by users at inference time, therefore removing the need to re-collect data or re-train models.
## 2 Related works
Vision-language training.Contrastive vision language training on billions of image-text training pairs resulted in impressive improvements in the tasks of open-set and zero-shot image classification and language related applications [39, 20, 58]. More recently, open-set object localization in 2D images has been shown to benefit from such abundant image-text data as well. Specifically, [21, 12, 25, 60, 61, 33] used image-text training to improve the open-set capability of 2D object detectors and [24, 11] explored the use of large-scale scene-level vision-language data for the task of open-set 2D semantic segmentation. Recent research [31, 22, 51, 13, 18] has begun to explore the application of 2D vision-language pre-training in 3D perception tasks. However, these studies focused on static indoor scenario where the scene is small-scale and the RGB-D data is captured in high-resolution. Here we design a multi-modal pipeline that leverages vision-language pre-training for unsupervised open-set 3D perception in complex, sparse, and occlusion-rich environments for autonomous driving.
Unsupervised 3D object detection.Unsupervised 3D object detection from LiDAR data is largely under-explored [7, 54, 50, 28, 36]. Dewan _et al_. [7] proposed a model-free method to detect and track the visible part of objects, by using the motion cues from LiDAR sequences. However, this approach is incapable of generating amodal bounding boxes which is essential for autonomous driving. Cen _et al_. [3] relied on a supervised detector to produce proposals of unknown categories. However, this approach requires full supervision to train the base detector and has limited generalization capability to only semantically similar categories. Wong _et al_. [54] identified unknown instances via supervised segmentation and clustering, which by design cannot generate amodal boxes from partial observations. Most recently, Najibi _et al_. [36] developed an unsupervised auto meta labeling pipeline to generate pseudo labels for moving objects, which can be used to train real-time 3D detection models. This approach fails to provide semantics to detection boxes and ignores static objects, which limits its practical utility. Compared to all previous efforts, we realize open-vocabulary unsupervised 3D detection for both static and moving objects, by leveraging vision-language pre-training, and benchmark our system on the realistic and challenging scenario of autonomous driving. While utilizing 2D vision-language models that may have been pre-trained with human annotations, we avoid the need for any additional 3D labels within our paradigm, thereby creating a pragmatically unsupervised setting.
LiDAR 3D object detection.Most previous works focused on developing performant model architectures in the fully supervised setting, without considering the generalization capability to long-tail cases and unknown object types that are prevalent in the dynamic real world. These methods can be categorized into point based [43, 38, 56, 44, 35, 26], voxelization based [8, 52, 46, 37, 55, 45, 63, 23, 53, 57, 59, 5, 30], perspective projection based [32, 2, 10], and feature fusion [49, 6, 62, 14, 42]. Recent research also explore transferring knowledge from image for 3D point cloud understanding [40, 29, 19, 4]. Our method is compatible with any 3D detector, extending it to handle the open-set settings.
## 3 Method
We present UP-VL, a new approach for unsupervised open-vocabulary 3D detection and tracking of traffic participants. UP-VL advances the previous state-of-the-art [36] which was limited to _class-agnostic_ detection of _moving-only_ objects in two main directions: 1) It enables _class-aware_ open-set 3D detection by incorporating open-vocabulary text queries at inference time, and 2) It is able
to detect objects in _all motion states_ as opposed to moving-only objects in the previous study. To achieve these goals, we deploy a multi-modal approach and combine intrinsic motion cues [36] available from the LiDAR sequences with the semantics captured by a vision-language model [11] trained on generic image-text pairs from the Internet. An overview of our approach is shown in Figure 2. As illustrated on the left, our training pipeline involves two main stages. First, our auto labeling method uses these motion and semantic cues to automatically label the raw sensor data, yielding class-agnostic 3D bounding boxes and tracklets as well as point-wise semantic features. Then, in the second stage, we use these auto labels to train open-vocabulary 3D perception models. The right side of the figure illustrates our inference pipeline where given raw LiDAR point clouds, our detector is able to perform open-vocabulary 3D detection given a set of text queries.
### Background
The key challenges in unsupervised 3D perception are twofold: 1) generating high-quality 3D amodal bounding boxes and consistent tracklets for all open-set traffic participants, and 2) inferring per-object semantics. Najibi [36] developed an auto labeling technique to address the first challenge partially. Their approach focuses on moving objects only. Specifically, their method takes LiDAR sequences as input, and removes ground points. It then breaks down the scene into individual connected components (point clusters). Next, it calculates local flow between pairs of point clusters from two adjacent frames and retains only clusters with speed above a predefined threshold. It then tracks each cluster across frames and aggregates points to obtain a more comprehensive view of the object, which enables the derivation of a faithful 3D amodal bounding box. Finally, the resulting 3D amodal boxes and tracklets can serve as auto labels for training 3D perception models.
While the previous work [36] has shown promising results, it suffers from significant limitations: 1) it can only deal with moving objects; and 2) it is unable to output semantics. These limitations hinder its practical utility for safety-critical applications such as autonomous driving.
### Unsupervised Multi-modal Auto Labeling
In contrast to the traditional way of training a detection model by presenting box geometries and closed-set semantics, our unsupervised multi-modal auto labeling approach produces box geometries and point-wise semantic feature embeddings, where the former teaches the detector to localize all traffic participants and the latter informs the model to preserve certain types of objects based on the inference-time text queries.
Figure 3 shows an overview of the auto labeling pipeline and Algorithm 1 presents its details. Specifically, our system leverages multiple modalities as input, namely camera images, LiDAR point sequences, and natural language. It also employs a pre-trained vision-language model [11] to extract feature embeddings from images and texts, which naturally complements the 3D depth information and motion cues with rich semantics, compared to [36]. We begin by detailing the feature extraction process. We then describe how we utilize the extracted vision-language information in combination with the inherent motion cues from LiDAR sequences to generate auto labels in an unsupervised manner.
Figure 2: Overview of the proposed UP-VL framework. During training (left), our method taps into multi-modal inputs (LiDAR, camera, text) and produces high-quality auto supervisions, via Unsupervised Multi-modal Auto Labeling, including 3D point-level features, 3D object-level bounding boxes and tracklets. Our auto labels are then used to supervise a class-agnostic open-vocabulary 3D detector. Besides, our 3D detector distills the features extracted from a pre-trained 2D vision-language model. At inference time (right), our trained 3D detector produces class-agnostic boxes and per-point features in the embedding space of the pre-trained vision-language model. We then use the text encoder to map queries to the embedding space and compute the per-point similarity scores between the predicted feature and the text embeddings (\(\otimes\) refers to cosine similarity). These per-point scores are then aggregated to assign semantic labels to boxes.
### Feature Extraction
As the first step to our approach, we start by extracting open-vocabulary features from all available cameras and then transfer these 2D features to 3D LiDAR points using known sensor calibrations. Specifically, at each time \(t\), we have a set of images \(\{\mathbf{I}_{t}^{k}\in\mathbb{R}^{H_{k}\times W_{k}\times 3}\}_{t}\) captured by \(K\) cameras, where \(H_{k}\) and \(W_{k}\) are image dimensions of the camera \(k\). We also have a collection of point cloud, \(\{\mathbf{P}_{t}\in\mathbb{R}^{N_{t}\times 3}\}\), captured over time using LiDAR sensors. Here, \(N_{t}\) denotes the number of points at time \(t\). We use a pre-trained open-vocabulary 2D image encoder \(\mathcal{E}^{img}\) to extract the pixel-wise visual features for each image, denoted as \(\{\mathbf{V}_{t}^{k}\in\mathbb{R}^{H_{k}\times W_{k}\times D}\}\), where \(D\) represents the feature dimension. Next, we build the mapping between 3D LiDAR points and their corresponding image pixels using the camera and LiDAR calibration information. Once this mapping is created, we can associate each 3D point with its corresponding image feature vector. As a result, we obtain vision-language features for all the 3D points as \(\mathbf{F}_{t}^{vl}\in\mathbb{R}^{N_{t}\times D}\), where \(N_{t}\) is the number of points at time \(t\).
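As a rough illustration of this lifting step (a simplified sketch of our reading of the text, not the authors' implementation; the pinhole projection, function names, and argument names are our assumptions), the following shows how per-pixel features from one camera can be assigned to the LiDAR points that project into it:

```python
import numpy as np

def lift_image_features_to_points(points_xyz, pixel_features, T_cam_from_lidar, K):
    """Assign each LiDAR point the vision-language feature of the pixel it projects to.

    points_xyz:       (N, 3) LiDAR points in the LiDAR frame.
    pixel_features:   (H, W, D) per-pixel features from the 2D vision-language image encoder.
    T_cam_from_lidar: (4, 4) extrinsic transform from the LiDAR frame to the camera frame.
    K:                (3, 3) pinhole camera intrinsics.
    Returns (N, D) per-point features and an (N,) mask of points visible in this camera.
    """
    H, W, D = pixel_features.shape
    N = points_xyz.shape[0]

    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.concatenate([points_xyz, np.ones((N, 1))], axis=1)   # (N, 4)
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]                 # (N, 3)

    # Keep only points in front of the camera, then project to pixel coordinates.
    in_front = pts_cam[:, 2] > 1e-3
    uvw = (K @ pts_cam.T).T
    u = np.round(uvw[:, 0] / np.clip(uvw[:, 2], 1e-3, None)).astype(int)
    v = np.round(uvw[:, 1] / np.clip(uvw[:, 2], 1e-3, None)).astype(int)
    valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    point_features = np.zeros((N, D), dtype=pixel_features.dtype)
    point_features[valid] = pixel_features[v[valid], u[valid]]
    return point_features, valid
```

Repeating this over the \(K\) cameras and over time yields the per-point features \(\mathbf{F}_{t}^{vl}\) used throughout the pipeline.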
Additionally, we leverage motion signals as another crucial representation that can substantially aid in deducing the concept of objectness for moving instances in the open-set environment. Specifically, we employ the NSFP++ algorithm [36] to compute the scene flow \(\mathbf{F}_{t}^{sf}\in\mathbb{R}^{N_{t}\times 3}\) of points at each time \(t\), which is a set of flow vectors corresponding to each point in \(\mathbf{P}_{t}\).
### Bounding Box Proposal Generation
At each time step, we generate initial bounding box proposals \(\{\mathbf{B}_{t}^{vis}\in\mathbb{R}^{M_{t}\times 7}\}\) by clustering the points, where \(M_{t}\) is the number of boxes at time \(t\), and each box is parameterized as (center \(x\), center \(y\), center \(z\), length, width, height, heading). Note that \(vis\) indicates that each box only covers the visible portion of an object. To cluster each point, we leverage a set of features which includes the point locations \(\mathbf{P_{t}}\), scene flow \(\mathbf{F}_{t}^{sf}\), and the vision-language features \(\mathbf{F}_{t}^{vl}\).
We design our pipeline to flexibly generate auto labels for objects in desired motion states. Given scene flow \(\mathbf{F}_{t}^{sf}\), we introduce a velocity threshold \(\epsilon^{sf}\) to select points whose speed is greater than or equal to the threshold (_e.g._, 1.0 m/s). To capture objects in all motion states, we set \(\epsilon^{sf}=0\).
One major challenge of auto labeling objects in all motion states is how to automatically distinguish traffic participants (_e.g._, vehicles, pedestrians, _etc._) from irrelevant scene elements (_e.g._, street, fence, _etc._). We propose to leverage an a priori list of _background_ object categories to exclude irrelevant scene elements from labeling. Specifically, we use the text encoder, \(\mathcal{E}^{txt}\), from the pre-trained 2D vision-language model [11], to encode each background category name \(c\) into its feature embedding \(\mathcal{E}^{txt}(c)\in\mathbb{R}^{D}\). We further define a per-point binary background mask, denoted as \(\mathbf{M}_{t}^{bg}\in\{0,1\}^{N_{t}}\), that takes on a value of 1 if a point is assigned to one of the a priori background categories, or 0 otherwise. See Algorithm 1 for the definition of \(\mathbf{M}_{t}^{bg}\)
Figure 3: Overview of our unsupervised multi-modal auto labeling approach. This pipeline first extracts vision-language and motion features from multiple modalities, then proposes, tracks and completes bounding boxes of objects. The resulting pointwise VL features, 3D bounding boxes and tracklets will serve as automatic supervisions to train the perception model.
where \((\cdot)_{i}\) denotes the \(i\)-th row of a matrix and \(\mathbbm{1}(\cdot)\) represents the indicator function. We use this background mask to mark scene elements which are not of interest.
We then proceed to cluster the point cloud into neighboring regions using a spatio-temporal clustering algorithm, modified from [36], followed by calculating the tightest bounding box around each cluster. In addition to clustering points by their locations and motions, we also use \(\mathbf{M}_{t}^{bg}\) to eliminate bounding boxes which are likely to be background. To be precise, we discard any bounding box in which the ratio of background points exceeds a threshold of \(r^{bg}\) (which is set to 99%). This process results in the initial set of bounding box proposals \(\{\mathbf{B}_{t}^{vis}\}\). Note that in this step, the box dimensions are determined based on the _visible_ portion of each object, which can be significantly underestimated compared to the human labeled amodal box, due to ubiquitous occlusions and sparsity.
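The sketch below illustrates the two semantic filters described above: the per-point background mask and the background-ratio test on box proposals. It is a plausible reading of the text (the exact rule is given by the paper's Algorithm 1, which is not reproduced here); the threshold names \(\epsilon^{bg}\) and \(r^{bg}\) follow the text, while the function names and the clustering output format are our assumptions.

```python
import numpy as np

def background_mask(point_features, bg_text_embeddings, eps_bg=0.02):
    """Per-point background mask M_t^bg via cosine similarity with background text embeddings.

    point_features:     (N, D) per-point vision-language features F_t^vl.
    bg_text_embeddings: (C, D) text-encoder embeddings of the a priori background categories.
    """
    f = point_features / (np.linalg.norm(point_features, axis=1, keepdims=True) + 1e-8)
    t = bg_text_embeddings / (np.linalg.norm(bg_text_embeddings, axis=1, keepdims=True) + 1e-8)
    cos_sim = f @ t.T                        # (N, C) cosine similarities
    return cos_sim.max(axis=1) >= eps_bg     # a point is background if any category matches

def keep_foreground_proposals(box_point_indices, bg_mask, r_bg=0.99):
    """Discard proposals whose fraction of background points exceeds r_bg."""
    kept = []
    for idx in box_point_indices:            # idx: indices of the points inside one proposal
        ratio = bg_mask[idx].mean() if len(idx) > 0 else 1.0
        if ratio <= r_bg:
            kept.append(idx)
    return kept
```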
#### Amodal Auto Labeling
In autonomous driving, perception downstream tasks desire _amodal_ boxes that encompass both the visible and occluded parts of the objects. To transform our visible-only proposals to amodal auto labels, we follow [36] by adopting a tracking-by-detection paradigm with Kalman filter state updates to link all proposals over time. We then perform shape registration for each object track of \(\{\mathbf{T}_{t}\}\) using ICP [1]. Within each track, we leverage the intuition that different viewpoints contain complementary information and temporal aggregation of the registered points from proposals would allow us to obtain a complete shape of the object. Hence, we fit a new box to the aggregated points to yield the amodal box. Finally, we undo the registration from aggregated points to individual frames and replace the original visible box proposal at each time step with the amodal box, which produces auto labeled 3D boxes and the tracklet.
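A heavily simplified sketch of this completion step is shown below. It treats the per-frame registration transforms as given (the paper obtains them with ICP [1]) and fits only an axis-aligned box in the shared object frame, whereas the paper also estimates a heading; all names are ours.

```python
import numpy as np

def amodal_box_from_track(cluster_points_per_frame, to_object_frame):
    """Aggregate a tracked object's partial views into a single amodal box (simplified).

    cluster_points_per_frame: list of (N_i, 3) arrays, the object's observed points per frame.
    to_object_frame:          list of (4, 4) transforms mapping each frame's points into a
                              shared, registered object frame (assumed given here).
    Returns the axis-aligned box (center, size) enclosing all aggregated points.
    """
    aggregated = []
    for pts, T in zip(cluster_points_per_frame, to_object_frame):
        pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)
        aggregated.append((T @ pts_h.T).T[:, :3])
    aggregated = np.concatenate(aggregated, axis=0)

    lo, hi = aggregated.min(axis=0), aggregated.max(axis=0)
    return (lo + hi) / 2.0, hi - lo   # tightest axis-aligned box around the completed shape
```

Mapping this box back through the inverse transforms gives the per-frame amodal auto label described above.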
In practice, background point filtering, point cloud registration and temporal aggregation may contain noise, leading to spurious boxes, _e.g_., tiny and sizable boxes and overlapping boxes. We apply non-maximum suppression (NMS) to clean the auto label boxes. This final set of unsupervised amodal auto labels \(\{\mathbf{B}_{t}\}\), their track IDs \(\{\mathbf{T}_{t}\}\), together with the extracted vision-language embeddings \(\{\mathbf{F}_{t}^{vl}\}\), are then used to train open-vocabulary 3D object detection model as described in Sec. 3.3.
### Open-vocabulary 3D Object Detection
In this subsection, we describe how the unsupervised auto labels, can be used to train a 3D object detector capable of localizing open-set objects and assigning open-vocabulary semantics to them, all without using any 3D human annotations during training.
#### 3.3.1 Model Architecture
Our design, as depicted in Figure 2, is based on decoupling object detection into class-agnostic object localization and semantic label assignment. For class-agnostic bounding box prediction, we add a branch to a 3D point cloud encoder backbone to generate 3D bounding box center, dimensions, and heading. This branch is accompanied by a binary classification branch which outputs a foreground/background class-agnostic per-box objectness score. To supervise these two branches, we treat our unsupervised auto labels (see Sec. 3.2) as ground-truth and add bounding box regression and classification losses to our learning objective. We would like to highlight that our pipeline is independent of a specific 3D point-cloud encoder [23, 62, 48] and the detection paradigm (either anchor-based or anchor-free detection). Here, we adopt an anchor-based PointPillars backbone [23] with Huber loss for box residual regression and Focal Loss [27] for objectness classification to have a fair comparison with prior work [36].
#### 3.3.2 Vision-Language Knowledge Distillation
Besides class-agnostic bounding box generation, our 3D detector pipeline also distills the semantic knowledge from the per-point vision-language features provided by our auto labeling pipeline (_i.e_. \(\{\mathbf{F}_{t}^{vl}\}\), introduced in the vision-language feature extraction in Sec. 3.2). In our method, we directly distill these features, which, as will be discussed in the next subsection, unlocks text query-based open-vocabulary category assignment at inference time. More precisely, as shown in the left side of Figure 2, we add a new linear branch to the model to predict per-point \(D\)-dimensional features (here \(D\) is the dimensionality of the vision-language embedding space). As the input to this branch, we scatter the computed voxelized features in our backbone back into the points and concatenate them with the available per-point input features (_i.e_. 3D point locations and LiDAR intensity and elongation features). We then train the network to predict the feature vector \(\mathbf{F}_{p}^{vl}\in\mathbf{F}_{t}^{vl}\) for any point \(\mathbf{p}\) visible in the camera images and add the following loss to the training objective:
\[\mathcal{L}_{\text{distill}}(\mathbf{p})=\text{CosineDist}(\mathbf{y}_{p}, \mathbf{F}_{p}^{vl}) \tag{1}\]
where \(\mathbf{y}_{p}\) is the distillation prediction by the model for point \(\mathbf{p}\). This, together with the bounding box regression and the objectness classification losses (based on our auto labels as discussed in Sec. 3.3.1), forms our final training objective.
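In PyTorch-like pseudocode, Eq. (1) and its place in the overall objective can be sketched as follows (a minimal sketch, not the authors' code; the loss weighting shown in the comment is an illustrative assumption):

```python
import torch
import torch.nn.functional as F

def distillation_loss(pred_features, target_vl_features, valid_mask):
    """Cosine-distance distillation loss of Eq. (1), averaged over camera-visible points.

    pred_features:      (N, D) per-point features predicted by the distillation branch.
    target_vl_features: (N, D) auto-labeled vision-language features F_t^vl.
    valid_mask:         (N,) bool, True for points visible in at least one camera image.
    """
    cos = F.cosine_similarity(pred_features, target_vl_features, dim=-1)  # (N,)
    per_point = 1.0 - cos                                                  # cosine distance
    return per_point[valid_mask].mean()

# The full training objective also includes the auto-label supervised terms, e.g.
#   loss = huber_box_regression + focal_objectness + w_distill * distillation_loss(...)
# where w_distill is an illustrative weight, not a value reported in the paper.
```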
#### 3.3.3 Open-Vocabulary Inference
So far, we have introduced how to train a detector to simultaneously localize all objects in a class-agnostic manner and predict vision-language features for all LiDAR points. Here, we discuss how we assign open-vocabulary semantics to the predicted boxes during inference. This process is depicted in the right side of Figure 2. The pre-trained 2D vision-language model [11] contains an image encoder and a text encoder, which are jointly trained to map text and image data to a shared embedding space. As described in Sec. 3.3.2, we add a feature distillation branch that maps 3D input point clouds to the 2D image encoder embedding space, which essentially bridges the gap between point clouds and semantic text queries. As a result, at inference time we can encode arbitrary open-vocabulary categories presented as text queries and compute their similarities with the observed 3D points. This can be achieved by computing the cosine similarity between the text query embeddings and the vision-language features predicted by our model for each 3D point. Finally, we assign open-vocabulary categories to boxes based on majority voting. Specifically, we associate with each point the category with the highest computed cosine similarity, and then assign to each box the most common category of its enclosed points.
We would like to emphasize that our approach does not need to process images at inference time, since we have distilled image encoder features to the point cloud. Therefore, the only added computation is a simple linear layer for predicting per-point vision-language embeddings, which is negligible compared to the rest of the detector architecture.
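A compact sketch of this inference-time assignment is given below (an illustrative reading of the description above, not the authors' code; all names are ours):

```python
import numpy as np

def assign_box_categories(point_pred_features, text_query_embeddings, box_point_indices):
    """Open-vocabulary semantic assignment by per-point similarity and per-box majority vote.

    point_pred_features:   (N, D) features predicted by the distillation branch for each point.
    text_query_embeddings: (Q, D) text-encoder embeddings of the user-provided queries.
    box_point_indices:     list of index arrays, the points enclosed by each predicted box.
    Returns one query index per box.
    """
    f = point_pred_features / (np.linalg.norm(point_pred_features, axis=1, keepdims=True) + 1e-8)
    t = text_query_embeddings / (np.linalg.norm(text_query_embeddings, axis=1, keepdims=True) + 1e-8)
    per_point_query = np.argmax(f @ t.T, axis=1)      # best-matching query per point

    labels = []
    for idx in box_point_indices:
        votes = np.bincount(per_point_query[idx], minlength=t.shape[0])
        labels.append(int(np.argmax(votes)))          # most common category among enclosed points
    return labels
```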
## 4 Experiments
Our UP-VL approach advances the previous state-of-the-art in unsupervised 3D perception for autonomous driving [36] in two main directions: 1) enabling open-vocabulary category semantics and 2) detecting objects in all motion states (as opposed to moving-only objects in the previous study). In this section, we perform extensive evaluations with respect to each of these innovations. Note that unsupervised open-set 3D detection is still at an early stage in the research community with few published works. Therefore, to fairly compare with the state-of-the-art [36], we perform our detection experiments first following the same setting as [36] (_i.e._ detecting class-agnostic moving objects) and then showcase our new capabilities (_i.e_. detecting objects in any motion state with semantics).
Sec. 4.2 studies the performance of our system in the class-agnostic setting. This allows us to compare our approach with the existing state-of-the-art method on detecting moving-only objects, showing large improvements. Sec. 4.3 moves the needle beyond the capability of the previous class-agnostic state-of-the-art methods and reports results under open-vocabulary class-aware setting for detecting moving-only objects (Sec. 4.3.1) and the most challenging setting of open-vocabulary detection of objects in all motion states (Sec. 4.3.2). Finally, Sec. 4.4 reports the open-set tracking quality of our auto labels and Sec. 4.5 presents qualitative results. See supplementary materials for more ablation studies and error analyses.
### Experimental Setting
We evaluate our framework using the challenging Waymo Open Dataset (WOD) [47], which provides a large collection of run segments captured by multi-modal sensors in diverse environment conditions. To define moving-only objects in Sec. 4.2, we follow [36] and apply a threshold of 1.0 m/s (_i.e_. \(\epsilon^{sf}\) = 1.0). We set the cosine similarity threshold for background categories at \(\epsilon^{bg}=0.02\) to achieve best performance in practice. The background categories \(C^{bg}\) we exclude from auto labeling are "vegetation", "road", "street", "sky", "tree", "building", "house", "skyscaper", "wall", "fence", and "sidewalk". The WOD [47] has three common object categories, _i.e_. vehicle, pedestrian, and cyclist. In the class-aware 3D detection experiments (Sec. 4.3), we follow [36] and combine pedestrian and cyclist into one VRU (vulnerable road users) category, which contains a similar number of labels as the vehicle category. As in [36], we also train and evaluate the detectors on a 100m \(\times\) 40m rectangular region around the ego vehicle. We use the popular PointPillars detector [23] for all our detection experiments and set an intersection over union, IoU=0.4, for evaluations unless noted otherwise. Please refer to Sec. 1 of supplementary materials for a more detailed description of all experimental settings.
### Class-agnostic Unsupervised 3D Detection of Moving Objects
For fair comparison, we follow the same setting as [36] and tailor our approach to class-agnostic moving-only 3D detection. Specifically, we perform auto labeling as introduced in 3.2 with speed threshold \(\epsilon^{sf}=1.0\)m/s and train a class-agnostic detector with feature distillation as described in 3.3.1. However, we disable text queries at inference time. Note that [36] only considered detection of moving objects.
\begin{table}
\begin{tabular}{c|c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Box Type} & \multicolumn{2}{c|}{3D AP@0.4} & \multicolumn{2}{c}{3D AP@0.5} \\ & & L1 & L2 & L1 & L2 \\ \hline MI-UP [36] & \multirow{2}{*}{Auto labels} & 36.9 & 35.5 & 27.4 & 26.4 \\ UP-VL (ours) & & **39.9** & **38.4** & **34.2** & **32.0** \\ \hline \hline MI-UP [36] & \multirow{2}{*}{Detections} & 42.1 & 40.4 & 29.6 & 28.4 \\ UP-VL (ours) & & **49.9** & **48.1** & **38.4** & **36.9** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the methods on class-agnostic unsupervised 3D detection of _moving_ objects. Top: Auto label boxes. Bottom: Detection boxes.
We leave the study of more challenging settings to Sec. 4.3.
Table 1 shows our results and compares them with MI-UP [36]. The top part of the table compares the auto labeling quality. The bottom part compares the detector performance between our UP-VL approach and MI-UP. We use the exact same detection backbone and hyper-parameters to ensure a fair comparison. When evaluating at IoU=0.4 as suggested by [36], UP-VL significantly outperforms MI-UP, both in terms of the auto label as well as the detection performance. To better demonstrate our improved auto label quality, we also evaluate with a higher localization criterion at IoU=0.5, where our improvement becomes even more pronounced. We should also point out that in both methods, the final detection quality is superior to the auto label quality. We hypothesize that this is due to the network being able to learn a better objectness scoring function for ranking as well as its ability to denoise the auto labels given the inductive bias of the model [17].
### Class-aware Unsupervised Open-vocabulary 3D Detection
In this section, we evaluate the capability of our UP-VL pipeline in class-aware open-vocabulary 3D detection of objects in different motion states. Please note that we don't use any 3D human annotations during training and only use the available human labeled categories for evaluation. Moreover, it should be noted that the previous state-of-the-art [36], as a class-agnostic approach, falls short in this new setting, making comparisons not possible. In all experiments in this section, we assign labels to boxes by querying category names as text at inference time in an open-vocabulary fashion as described in Sec. 3.3.3 (see Sec. 1 of supplementary for a detailed list of text queries used).
#### 4.3.1 Moving-only Objects
Table 2 reports the class-aware open-vocabulary 3D detection results on the moving-only objects. Since [36] is no longer applicable in this setting, we construct two baselines for comparison: _i.e_. geometric clustering [36] which additionally uses our extracted scene flow features (\(\mathbf{F}_{t}^{sf}\)) and its variant which leverages both the scene flow features and the vision-language features (\(\mathbf{F}_{t}^{vl}\)). 3D point-wise semantics for the baselines are extracted directly by projecting the 2D image features of the pre-trained vision-language model. We report per-category AP as well as the mAP of these baselines in the top two rows of Table 2. The bottom of the table presents the results for our unsupervised auto labels and our final UP-VL detections. Our auto labels and UP-VL detector both outperform baselines constructed from prior approaches. As discussed in Sec. 3.3.2, unlike the baselines that requires applying the image encoder to all camera images at inference time, our detector directly predicts image features extracted by our auto labeling pipeline for 3D point clouds and consequently is more efficient.
#### 4.3.2 Objects in All Motion States
Finally in this section, we report results on the most challenging setting: unsupervised class-aware open-vocabulary 3D detection for all objects with arbitrary motion states. Like Sec. 4.3.1, since [36] falls short in this setting, we construct three clustering baselines using different combinations of our features. More specifically, the first row only uses point locations (\(\mathbf{P}_{t}\)), the second row uses both point locations and our vision-language features (\(\mathbf{F}_{t}^{vl}\)), and the third row leverages all the features including our scene flow features (\(\mathbf{F}_{t}^{sf}\)). As an ablation on the effectiveness of the introduced feature distillation in UP-VL, we also add a baseline called "Our detector w/o feature distillation", where we remove the distillation head and its loss from our detector, and like the baselines in the first three rows, we directly project the vision-language features from camera images to the point cloud for semantic label assignment. As summarized in Table 3, our auto labels significantly outperform other baselines listed in the first three rows. Moreover, comparing the last two rows, we observe that the proposed vision-language feature distillation leads to significant performance improvement across all metrics. For example, our approach with feature distillation outperforms the counterpart without distillation by more than 8 points in mAP.
### Tracking
The UP-VL exhibits a high performance not only in detection, but also in tracking - a critical task in autonomous driving. We employ the motion-based tracker from [36], and conduct experiments in the tracking-by-detection manner. We evaluate tracking performance for moving objects and compare our UP-VL detector trained with feature distillation as outlined in Table 2 against two baselines: MI-UP detector from Table 1 and another open-set baseline from Table 2. To measure the effectiveness of our model, we employ the widely used MOTA and MOTP metrics, both in the class-agnostic and class-aware open-vocabulary settings. Our experimental results (Table 4) demonstrate that UP-VL outperforms both baselines by a significant margin.
### Qualitative Results
Our UP-VL enables open-vocabulary detection of arbitrary object types beyond the few human annotated categories in the autonomous driving datasets. Figure 4 illustrates some examples. In each row, we present the camera image on the right for readers' reference. On the left, we show the corresponding 3D point cloud and the predicted 3D bounding box by our model based on the open-vocabulary text query provided at inference time.
## 5 Conclusions
In this paper, we study the problem of unsupervised 3D object detection and tracking in the context of autonomous driving. We present a cost-efficient pipeline using multi-sensor information and an off-the-shelf vision-language model pre-trained on image-text pairs. Core to our approach is a multi-modal auto labeling pipeline, capable of generating class-agnostic amodal box annotations, tracklets, and per-point semantic features extracted from vision-language models. By combining the semantic information and motion cues observed from the LiDAR point clouds, our auto labeling pipeline can identify and track open-set traffic participants based on the raw sensory inputs. We have evaluated our auto labels by training a 3D open-vocabulary object detection model on the Waymo Open Dataset without any 3D human annotations. Strong results have been demonstrated on the task of open-vocabulary 3D detection with categories specified during inference by text queries which we believe opens up new directions towards more scalable software stacks for autonomous driving.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{MOTA (\(\uparrow\)) / MOTP (\(\downarrow\))} \\ \cline{2-4} & Veh & VRU & Cls. ag. \\ \hline MI-UP [36] & N/A & N/A & 12.8/45.5 \\ \hline MI-UP-C [36] & \multirow{2}{*}{39.6/37.4} & \multirow{2}{*}{13.5/53.7} & \multirow{2}{*}{22.8/43.4} \\ + OpenSeg [11] & & & \\ \hline UP-VL detector & **65.3/31.0** & **24.0/46.8** & **41.3/37.4** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of tracking methods for moving objects with evaluations in class-agnostic (Cls. ag.) and class-aware settings. “MI-UP-C” refers to class-agnostic MI-UP clustering approach, which is unable to be evaluated in the class-aware setting.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Representations} & \multirow{2}{*}{Box type} & \multicolumn{2}{c|}{3D AP} & \multirow{2}{*}{mAP} \\ \cline{2-2} \cline{5-6} & Motion & \multicolumn{1}{c|}{Vision-Language} & & & Veh & VRU \\ \hline Clustering [36] & ✓ & & visible & N/A & N/A & 32.4* \\ Clustering [36] + OpenSeg [11] & ✓ & ✓ & visible & 47.8 & 21.5 & 34.7 \\ \hline
**Our auto labels** & ✓ & ✓ & amodal & 57.5 & **29.8** & 43.7 \\
**Our UP-VL detector w. feature distillation** & ✓ & ✓ & amodal & **76.9** & 28.6 & **52.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of methods on unsupervised class-aware _moving_ object detection. (*since semantics are not available, we report class agnostic AP for the first row, given that vehicle and VRU contain similar number of samples.)
Figure 4: Open-vocabulary detection of both static and moving objects via user-provided text queries. Note that in the open-vocabulary setting, the text queries of interested object types are not given in either auto labeling or model training.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Representations} & \multirow{2}{*}{Box type} & \multicolumn{2}{c|}{3D AP} & \multirow{2}{*}{mAP} \\ \cline{2-2} \cline{5-6} & Motion & \multicolumn{1}{c|}{Vision-Language} & & & \multicolumn{1}{c|}{Veh} & VRU \\ \hline Clustering [36] & & & visible & N/A & N/A & 11.6* \\ Clustering [36] + OpenSeg [11] & & ✓ & visible & 15.8 & 9.9 & 12.9 \\ Clustering [36] + OpenSeg [11] & ✓ & ✓ & visible & 16.1 & 10.0 & 13.1 \\ \hline
**Our auto labels** & ✓ & ✓ & amodal & 30.2 & 14.7 & 22.4 \\
**Our detector w/o feature distillation** & ✓ & ✓ & amodal & 40.0 & 15.2 & 27.6 \\
**Our UP-VL detector w. feature distillation** & ✓ & ✓ & amodal & **52.0** & **19.7** & **35.8** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of methods on unsupervised class-aware detection of objects in _all motion states_. (*since semantics are not available, we report class-agnostic AP for the first row, given that vehicle and VRU contain similar number of samples.) |
2306.16417 | On the Jacobson radical and semisimplicity of a semiring | Based on the minimal and simple representations, we introduce two
Jacobson-type Hoehnke radicals, m-radical and s-radical, of a semiring $S$.
Every minimal (simple) $S$-semimodule is a quotient of $S$ by a regular right
congruence (maximal) $\mu$ on $S$ such that $[0]_\mu$ is a maximal
$\mu$-saturated right ideal in $S$. Thus the m(s)-radical becomes an
intersection of some regular congruences. Finally, every semisimple semiring is
characterized as a subdirect product of primitive semirings; and every
s-primitive semiring is represented as a 1-fold transitive subsemiring of the
semiring of all endomorphisms on a semimodule over a division semiring. | A. K. Bhuniya, Puja Sarkar | 2023-04-30T17:42:20Z | http://arxiv.org/abs/2306.16417v1 | # On the Jacobson radical and semisimplicity of a semiring
###### Abstract
Based on the minimal and simple representations, we introduce two Jacobson-type Hoehnke radicals, m-radical and s-radical, of a semiring \(S\). Every minimal (simple) \(S\)-semimodule is a quotient of \(S\) by a regular right congruence (maximal) \(\mu\) on \(S\) such that \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\). Thus the m(s)-radical becomes an intersection of some regular congruences. Finally, every semisimple semiring is characterized as a subdirect product of primitive semirings; and every s-primitive semiring is represented as a 1-fold transitive subsemiring of the semiring of all endomorphisms on a semimodule over a division semiring.
Department of Mathematics, Visva-Bharati,
Santiniketan-731235, India.
[email protected]; [email protected]
2000 Mathematics Subject Classification: 16Y60; 16N99; 16D99.
_Key words and phrases_: Semiring; idempotent; Jacobson radical; Hoehnke radical; primitive.
## 1 Introduction
A semiring is an algebraic structure satisfying all the axioms of a ring, but one that every element has an additive inverse. The absence of additive inverses forces a semiring to deviate radically from behaving like a ring. For example, ideals are not in bijection with the congruences on a semiring. Further, the presence of the additively idempotent semirings makes the class of the semirings aberrant. Now semirings have become a part of mainstream mathematics for their importance in theoretical computer science [37] and automata theory [9]; and for the surprising 'characteristic one analogy' of the usual algebra over fields [10, 11, 12, 13]; and for the role of additively idempotent semirings in tropical mathematics [1, 7, 14, 23, 24, 31].
There are many papers on the concrete radicals [6, 17, 18, 22, 28, 29, 32] as well as on abstract theory of radicals [34, 35, 36, 39, 40, 41] on semirings. Bourne [6] characterized Jacobson radical of a semiring \(S\) internally as the sum of all right semiregular ideals in \(S\). Ilzuka [22] defined the same in terms of the irreducible representations of a semiring. Also, he introduced quasi-regularity in semirings and characterized the Jacobson radical of a semiring \(S\) as the intersection of all strongly closed primitive ideals in \(S\). If \(S\) is an additively idempotent semiring, then every right ideal in
\(S\) is semiregular and quasi-regular. Thus according to both Bourne and Ilzuka, every additively idempotent semiring \(S\) is a radical semiring, i.e., \(J(S)=S\). In a recent paper, [28] Katsov and Nam introduced and studied an external Kurosh-Amitsur Jacobson radical theory for a semiring \(S\) based on representations of \(S\). In search of a suitable analogue of the Jacobson radical that will work for additively idempotent semirings also, they introduced another radical \(J_{s}(S)\) of a semiring \(S\) in terms of the simple representations of \(S\) and characterized finite additively idempotent \(J_{s}\)-semisimple semirings. Mai and Tuyen [32] continued to study this radical \(J_{s}(S)\) and \(J_{s}\)-semisimplicity within the class of zerosumfree semirings.
Thus even though the ideals are the fundamental objects of the radical theory of rings, they can not fulfil the same role in the radical theory of semirings. At the same time, replacing ideals with the more general notion of congruences on semirings exhibits many excellent properties and several analogies with classical results on the rings [3, 26, 27]. Here we define and study two Jacobson type radicals of a semiring \(S\) as congruences on \(S\), and also show that these radicals act for additively idempotent semirings.
In our approach, the annihilator of an \(S\)-semimodule \(M\) is considered as a congruence on \(S\). Similarly to the semirings, subsemimodules are not in bijection with the congruences on a semi-module; which produces three variants of 'irreducibility' of semimodules - minimal semimodules, elementary semimodules, and simple semimodules [8], [25]. Based on the classes of minimal semi-modules and simple semimodules, two different Jacobson type Hoehnke radicals of a semiring \(S\) are defined in terms of annihilator of \(S\)-semimodules, which are called m-radical and s-radical of \(S\) respectively. The absence of additive inverse makes it difficult to define the regularity of an ideal in a semiring analogously to the rings, whereas in the case of a congruence, it can be defined naturally. An \(S\)-semimodule \(M\) is (simple) minimal if and only if \(M\simeq S/\mu\) where \(\mu\) is a (maximal) regular right congruence on \(S\) such that \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\). Thus to make the quotient semimodule \(S/\mu\) 'irreducible', the maximality of the regular right congruence \(\mu\) is not sufficient; indeed, it is shared with its zero class \([0]_{\mu}\). Such a sharing among the right congruences and the right ideals happens in most of the theorems of this article. Also, considering radical as a congruence makes it possible to present the \(J\)-semisimple semirings as a subdirect product of primitive semirings.
This paper is organized as follows. Section 2 briefly recaps necessary definitions and associated facts on semirings and semimodules. Section 3 introduces the Jacobson m-radical and s-radical externally. Each of these radicals is Hoehnke radical; and can be expressed as an intersection of a suitable class of regular right congruences. We conclude this section with the characterization of the radicals of the product semirings. Section 4 introduces m-primitive and s-primitive semirings and characterizes Jacobson semisimple semirings as a subdirect product of primitive semirings. Every commutative (s)m-primitive semiring is a (congruence simple) semifield. Since every congruence simple semifield \(S\) with \(|S|>2\) is a field, every commutative s-semisimple semiring is a subdirect product of a family of semirings, each of which is either the 2-element Boolean algebra or a field. Finally, every s-primitive semiring is represented as a 1-fold transitive subsemiring of the semiring
of all endomorphisms on a semimodule over a division semiring.
## 2 Preliminaries
In this paper by a _semiring_\((S,+,\cdot)\) we mean a nonempty set \(S\) with two binary operations '\(+\)' and '\(\cdot\)' satisfying:
* \((S,+)\) is a commutative monoid with identity element \(0\);
* \((S,\cdot)\) is a semigroup;
* \(a(b+c)=ab+ac\) and \((a+b)c=ac+bc\) for all \(a,b,c\in S\).
Following Golan [15], in the literature, it is generally assumed that a semiring has both an additive identity and a multiplicative identity and the additive identity \(0\) is absorbing, i.e., \(0s=s0=0\) for all \(s\in S\). However, we follow the convention of Hebisch and Weinert [16]. Also, there are many articles on semirings where existence of unity is not assumed [4], [5], [21], [25], [38], [42]. In this paper, we only assume that every semiring has an additive identity element \(0\) which is absorbing. If a semiring \(S\) with multiplicative identity is such that every nonzero element has a multiplicative inverse, then \(S\) is called a _division semiring_. A commutative division semiring is called a _semifield_. A semiring \(S\) is said to be an _additively idempotent semiring_ if \(a+a=a\) for all \(a\in S\). Both the two element Boolean algebra \(\mathbb{B}\) and the max-plus algebra \(\mathbb{R}_{max}\) are additively idempotent semifields.
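For concreteness (recalling standard definitions that are not spelled out here): \(\mathbb{B}=(\{0,1\},+,\cdot)\) with \(1+1=1\), and \(\mathbb{R}_{max}=(\mathbb{R}\cup\{-\infty\},\max,+)\), where the additive identity is \(-\infty\) (absorbing for the 'multiplication' \(+\)) and the multiplicative identity is \(0\). In both structures, \(a+a=a\) for every element, the operations are commutative, and every element other than the additive identity has a multiplicative inverse, so each is an additively idempotent semifield.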
A _right congruence_ \(\rho\) on a semiring \(S\) is an equivalence relation on \(S\) such that for all \(a,b,c\in S\), if \((a,b)\in\rho\) then we have \((a+c,b+c)\in\rho\) and \((ac,bc)\in\rho\). A _left congruence_ on a semiring is defined similarly, and a _congruence_ is both a left and a right congruence. Similar to the rings, the ideals and homomorphisms of semirings are defined in the usual way. Also, it is assumed that every semiring homomorphism \(\phi:S_{1}\longrightarrow S_{2}\) satisfies \(\phi(0_{1})=0_{2}\). A bijective homomorphism is called an _isomorphism_; if there is an isomorphism \(\phi:S_{1}\longrightarrow S_{2}\), then the semirings \(S_{1}\) and \(S_{2}\) are said to be _isomorphic_, which is denoted by \(S_{1}\simeq S_{2}\). The kernel of a semiring homomorphism \(\phi:S_{1}\longrightarrow S_{2}\) is defined by \(\ker\phi=\{(a,b)\in S_{1}\times S_{1}\mid\phi(a)=\phi(b)\}\). Then \(\ker\phi\) is a congruence on \(S_{1}\) and \(S_{1}/\ker\phi\simeq\phi(S_{1})\).
We denote \(\Delta_{S}=\{(s,s)\mid s\in S\}\) and \(\nabla_{S}=S\times S\). A semiring \(S\) is said to be _congruence-simple_ if it has no congruences other than \(\Delta_{S}\) and \(\nabla_{S}\). If \(\rho\) is a (left, right) congruence on \(S\), then for every \(s\in S\), the \(\rho\)-class containing \(s\) is denoted by \([s]_{\rho}\) or briefly by \([s]\) if there is no scope of ambiguity.
For every (left, right) ideal \(I\) in \(S\), the Bourne (left, right) congruence \(\sigma_{I}\) on \(S\) is defined by: for \(x,y\in S\), \(x\sigma_{I}y\) if \(x+i_{1}=y+i_{2}\) for some \(i_{1},i_{2}\in I\). The Bourne (left, right) congruence \(\sigma_{I}\) is the smallest (left, right) congruence on \(S\) such that \(I\subseteq[0]_{\sigma_{I}}\). In general \([0]_{\sigma_{I}}\neq I\). In fact, \([0]_{\sigma_{I}}=\{x\in S\mid x+i\in I\) for some \(i\in I\}\), which is denoted by \(\overline{I}\) and is called the _saturation of \(I\)_. A (left, right) ideal \(I\) is called _saturated_ if \(\overline{I}=I\) [30].
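As a small illustrative example (ours, using only the definitions above): let \(S=\mathbb{N}\) with the usual operations and \(I=\{0\}\cup\{n\in\mathbb{N}\mid n\geq 3\}\). For any \(x,y\in\mathbb{N}\) we have \(x+(y+3)=y+(x+3)\) with \(y+3,x+3\in I\), so \(\sigma_{I}=\nabla_{\mathbb{N}}\) and \(\overline{I}=[0]_{\sigma_{I}}=\mathbb{N}\neq I\); thus \(I\) is an ideal which is not saturated. By contrast, the ideal \(2\mathbb{N}\) is saturated, and \(\sigma_{2\mathbb{N}}\) relates two natural numbers exactly when they have the same parity.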
A _right \(S\)-semimodule_ is a commutative monoid \((M,+,0_{M})\) equipped with a right action \(M\times S\longrightarrow M\) that satisfies for all \(m,m_{1},m_{2}\in M\) and \(r,r_{1},r_{2}\in S\):
* \((m_{1}+m_{2})r=m_{1}r+m_{2}r\);
* \(m(r_{1}+r_{2})=mr_{1}+mr_{2}\);
* \(m(r_{1}r_{2})=(mr_{1})r_{2}\);
* \(m0=0_{M}=0_{M}r\).
Unless stated otherwise, by an \(S\)_-semimodule_\(M\), we mean a right \(S\)-semimodule.
Subsemimodules and congruences on a semimodule are defined in the usual way. The set of all subsemimodules and congruences on an \(S\)-semimodule \(M\) will be denoted by \(\mathcal{S}_{S}(M)\) and \(\mathcal{C}_{S}(M)\) respectively.
Following Chen et. al. [8] we define:
**Definition 2.1**.: _Let \(M\) be an \(S\)-semimodule such that \(MS\neq 0\). Then \(M\) is called,_
* _minimal if_ \(M\) _has no subsemimodules other than_ \((0)\) _and_ \(M\)_;_
* _simple if it is minimal and the only congruences on_ \(M\) _are_ \(\Delta_{M}\) _and_ \(\nabla_{M}\) _where_ \(\Delta_{M}\) _is the equality relation on_ \(M\) _and_ \(\nabla_{M}=M\times M\)_._
In [25], simple semimodules have been termed as irreducible semimodules. We denote the class of all minimal and simple \(S\)-semimodules by \(\mathcal{M}(S)\) and \(\mathcal{S}(S)\), respectively.
It was remarked in [25] that in a minimal semimodule \(M\), every \(m\neq 0\) generates \(M\). However, for the sake of completeness, we include a proof here.
**Lemma 2.2**.: _A nonzero \(S\)-semimodule \(M\) is minimal if and only if \(M=mS\) for all \(m(\neq 0)\in M\)._
Proof.: Let \(M\) be minimal. Then \(MS\neq 0\) implies that \(N=\{m\in M\ |\ mS=0\}\) is a proper subsemimodule of \(M\), and hence it must be equal to \(0\). So for every \(m\neq 0\), \(mS\) is a nonzero subsemimodule of \(M\), and it follows that \(mS=M\).
The converse is trivial.
Let \(R\) be a ring and \(M\) be a minimal semimodule over \(R\). If \(m(\neq 0)\in M\), then, by Lemma 2.2, there exists \(s\in R\) such that \(m=ms\). Consider \(m(-s)\in M\). Then \(ms+m(-s)=m(s-s)=m0=0\) implies that every element of \(M\) has an additive inverse. Hence \((M,+)\) is a group, and so \(M\) is a module over \(R\). Thus over a ring \(R\), minimal semimodules and simple semimodules coincide.
The _annihilator_ of an \(S\)-semimodule \(M\) is defined by
\[ann_{S}(M)=\{(s_{1},s_{2})\in S\times S\ |\ ms_{1}=ms_{2}\ \text{for all}\ m\in M\}.\]
Then \(ann_{S}(M)\) is a congruence on \(S\).
The right action of \(S\) on \(M\) induces an \(S\)-endomorphism \(\psi_{s}:M\longrightarrow M\) ; \(m\mapsto ms\) on \(M\) for each \(s\in S\). Denote the semiring of all \(S\)-endomorphisms on \(M\) by \(End_{S}(M)\). Thus we get a representation \(\psi:S\longrightarrow End_{S}(M)\); \(s\mapsto\psi_{s}\) of \(S\). In this case, \(M\) is called a representation module of \(S\). Note that \(\ker\psi=\{(s_{1},s_{2})\in S\times S\ |\ \psi(s_{1})=\psi(s_{2})\}=ann_{S}(M)\).
An \(S\)-semimodule \(M\) is said to be _faithful_ if \(ann_{S}(M)=\Delta_{S}\), equivalently \(\ker\psi=\Delta_{S}\).
Let \(\rho\) be a right congruence on \(S\). Define \(S/\rho\times S\longrightarrow S/\rho\) by \(([a]_{\rho},s)\mapsto[as]_{\rho}\); this action is well defined since \(\rho\) is a right congruence, so \(a\,\rho\,b\) implies \(as\,\rho\,bs\). Then \(S/\rho\) is a right \(S\)-semimodule. Also, if \(M\) is a right \(S\)-semimodule then for every congruence \(\rho\) on \(S\) with \(\rho\subseteq ann_{S}(M)\), the scalar multiplication \(m[s]_{\rho}=ms\) makes \(M\) an \(S/\rho\)-semimodule; here the condition \(\rho\subseteq ann_{S}(M)\) guarantees that \(ms\) does not depend on the chosen representative \(s\) of \([s]_{\rho}\).
In the sequel, we will have several occasions to use the following result, which can be proved easily, and so we omit the proof.
**Lemma 2.3**.: _Let \(S\) be a semiring and \(\rho\) be a congruence on \(S\)._
1. _If_ \(M\) _is an_ \(S/\rho\)_-semimodule, then_ \(M\) _becomes an_ \(S\)_-semimodule under the scalar multiplication_ \(ms=m[s]\)_._ _Moreover,_ \(\rho\subseteq ann_{S}(M)\)_._
2. _Let_ \(M\) _be an_ \(S\)_-semimodule and_ \(\rho\subseteq ann_{S}(M)\)_. Then_ 1. \(ann_{S/\rho}(M)=ann_{S}(M)/\rho\)_._ 2. \(\mathcal{S}_{S}(M)=\mathcal{S}_{S/\rho}(M)\)_._ 3. \(\mathcal{C}_{S}(M)=\mathcal{C}_{S/\rho}(M)\)_._
The reader is referred to [15] for the undefined terms and notions concerning semirings and semimodules over semirings.
## 3 Jacobson radical of a semiring
In this section, we define Jacobson m-radical and s-radical of a semiring based on the classes of minimal semimodules and simple semimodules, respectively, in a way that is familiar from the radical theory of rings. Also, we find two suitable classes of regular right congruences on a semiring \(S\) to characterize these two radicals internally without any reference to the semimodules over \(S\).
**Definition 3.1**.: _Let \(S\) be a semiring. We define_
1. \(m\)_-radical of_ \(S\) _by_ \(rad_{m}(S)=\cap_{M\in\mathcal{M}(S)}ann_{S}(M)\)_;_
2. \(s\)_-radical of_ \(S\) _by_ \(rad_{s}(S)=\cap_{M\in\mathcal{S}(S)}ann_{S}(M)\)_._
_If there are no minimal semimodules over \(S\), then we define \(rad_{m}(S)=\nabla_{S}\). Similarly, we define \(rad_{s}(S)=\nabla_{S}\) if there are no simple \(S\)-semimodules._
_A semiring \(S\) is said to be \(m\)-semisimple if \(rad_{m}(S)=\Delta_{S}\), and \(s\)-semisimple if \(rad_{s}(S)=\Delta_{S}\)._
Recall that a right module \(M\) over a ring \(R\) is called irreducible if \(MR\neq\{0\}\) and \(M\) has no submodules other than \(\{0\}\) and \(M\). The Jacobson radical of a ring \(R\) is defined by \(J(R)=\cap Ann_{R}(M)\) where the intersection runs over all irreducible \(R\)-modules and \(Ann_{R}(M)=\{r\in R\mid mr=0\) for all \(m\in M\}\)[19]. Since every ideal in a ring is saturated, it follows that \(ann_{R}(M)=\sigma_{Ann_{R}(M)}\) and \([0]_{ann_{R}(M)}=Ann_{R}(M)\) for every right \(R\)-module \(M\).
**Example 3.2**.: _Let \(R\) be a ring (possibly without 1). If \(M\) is a minimal semimodule over \(R\), then, by Lemma 2.2, \((M,+)\) is a group, and so is a module over \(R\). An \(R\)-semimodule \(M\) is minimal if and only if it is simple; equivalently, \(M\) is an irreducible module over \(R\). Hence it follows that \(rad_{m}(R)=rad_{s}(R)\). Also we have \(rad_{s}(R)=\cap_{M\in\mathcal{S}(R)}ann_{R}(M)=\cap_{M\in\mathcal{S}(R)} \sigma_{Ann_{R}(M)}=\sigma_{\cap_{M\in\mathcal{S}(R)}Ann_{R}(M)}=\sigma_{J(R)}\) and \([0]_{rad_{s}(R)}=J(R)\)._
An assignment from the collection of all semirings to the collection of all congruences over semirings \(S\longmapsto r(S)\) is said to be a _Hoehnke radical_ if for every onto homomorphism \(f:S\to f(S)\),
1. \(f(r(S))\subseteq r(f(S))\) where \(f(r(S))=\{(f(a),f(b))\mid(a,b)\in r(S)\}\);
2. \(r(S/r(S))=\Delta_{(S/r(S))}\).
We show that both the m-radical and the s-radical are Hoehnke radicals on \(S\).
**Theorem 3.3**.: _Let \(S\) be a semiring. Then both the assignments \(S\longmapsto rad_{m}(S)\) and \(S\longmapsto rad_{s}(S)\) are Hoehnke radicals on \(S\)._
Proof.: We prove the result for the m-radical and the proof for the s-radical is similar. Let \(f:S\to f(S)\) be an onto homomorphism. Then the first isomorphism theorem for semirings implies \(S/\ker f\cong f(S)\).
Let \(M\) be a minimal \(f(S)\)-semimodule. Then, by Lemma 2.3, \(M\) becomes a minimal \(S\)-semimodule under the scalar multiplication \(ms=mf(s)\). Hence it follows that
\[\begin{split} f(rad_{m}(S))&=\{(f(a),f(b))\mid(a,b) \in rad_{m}(S)\}\\ &=\{(f(a),f(b))\mid(a,b)\in ann_{S}(M)\text{ for all }M\in\mathcal{M}(S)\}\\ &\subseteq\{(f(a),f(b))\mid(a,b)\in ann_{S}(M)\text{ for all }M\in\mathcal{M}(f(S))\}\\ &=\{(f(a),f(b))\mid(f(a),f(b))\in ann_{f(S)}(M)\text{ for all }M\in\mathcal{M}(f(S))\}\\ &=rad_{m}(f(S)).\end{split}\]
Also, by Lemma 2.3, we have
\[\begin{split} rad_{m}(S/rad_{m}(S))&=\cap_{M\in \mathcal{M}(S/rad_{m}(S))}ann_{S/rad_{m}(S)}(M)\\ &=\cap_{M\in\mathcal{M}(S/rad_{m}(S))}ann_{S}(M)/rad_{m}(S)\\ &=\cap_{M\in\mathcal{M}(S)}ann_{S}(M)/rad_{m}(S)\\ &=\Delta_{S/rad_{m}(S)}.\end{split}\]
Therefore the assignment \(rad_{m}(S)\) is a Hoehnke radical.
**Example 3.4**.: _Let \(G\) be a finite group and \(\mathbb{B}G\) its group semiring over the two-element Boolean algebra \(\mathbb{B}\). Let \(M\) be a minimal \(\mathbb{B}G\)-semimodule. Then, by Theorem 3.3 of [8], \(M\) is isomorphic to the trivial semimodule \(\mathbb{B}\). Thus \(ann_{\mathbb{B}G}(M)=\nabla_{\mathbb{B}G}\). Therefore \(rad_{m}(\mathbb{B}G)=\nabla_{\mathbb{B}G}\)._
_Also, a \(\mathbb{B}G\)-semimodule \(M\) is minimal if and only if it is simple [8]. Hence \(rad_{s}(\mathbb{B}G)=rad_{m}(\mathbb{B}G)=\nabla_{\mathbb{B}G}\)._
A right congruence \(\mu\) on \(S\) is said to be a _regular right congruence_ if there exists \(e\in S\) such that \((es,s)\in\mu\) for every \(s\in S\).
If \(\rho\) is a regular right congruence on \(S\), then \(M=S/\rho\) is a right \(S\)-semimodule such that \(MS\neq 0\). Suppose that \(N\) is a subsemimodule of \(M\). Then \(I(N)=\{s\in S\mid[s]_{\rho}\in N\}\) is a right ideal in \(S\) that satisfies the property: for every \(s\in S\) and \(i\in I\), \((s,i)\in\rho\) implies that \(s\in I\). Conversely, a right ideal \(I\) of \(S\) that satisfies the above property induces a subsemimodule \(N(I)=\{[s]_{\rho}\mid s\in I\}\) of the right \(S\)-semimodule \(S/\rho\).
Let \(I\) be a (left, right) ideal of a semiring \(S\) and \(\mu\) be a (left, right) congruence relation on \(S\). Then \(I\) is said to be a \(\mu\)_-saturated (left, right) ideal_ of \(S\) if for every \(s\in S\) and \(i\in I\), \((s,i)\in\mu\) implies that \(s\in I\).
An ideal \(I\) is \(\mu\)-saturated if and only if \(I=\cup_{a\in I}[a]_{\mu}\). Also, \(I\) is a saturated ideal if and only if it is \(\sigma_{I}\)-saturated. Thus the \(\mu\)-saturated ideals generalize the notion of saturated ideals.
The subsequent two theorems characterize the regular congruences \(\mu\) on \(S\) such that the quotient semimodule \(S/\mu\) is a minimal or a simple semimodule over \(S\).
**Theorem 3.5**.: _Let \(M\) be an \(S\)-semimodule. Then \(M\) is minimal if and only if there exists a regular right congruence \(\mu\) on \(S\) such that \(S/\mu\simeq M\) and \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\)._
Proof.: Let \(M\) be minimal. Consider \(m\in M,m\neq 0\). Then, by Lemma 2.2, we have \(mS=M\). Define \(\phi:S\to M\) by \(\phi(s)=ms\) for all \(s\in S\). Then \(\phi\) is an onto module homomorphism. Hence \(\mu=\ker\phi=\{(s_{1},s_{2})\in S\times S\mid\ ms_{1}=ms_{2}\}\) is a right congruence on \(S\); and we have \(M\simeq S/\mu\) as \(S\)-semimodules. Since \(mS=M\), there exists an element \(e\in S\) such that \(me=m\), which implies that \(mes=ms\) for all \(s\in S\). Therefore \((es,s)\in\mu\) for all \(s\in S\). Hence, \(\mu\) is a regular right congruence on \(S\). Let \(I\) be a \(\mu\)-saturated right ideal in \(S\) such that \([0]_{\mu}\subsetneq I\). Then \(J=\{[s]_{\mu}\mid\ s\in I\}\) is a subsemimodule of the \(S\)-semimodule \(S/\mu\). Now \(S/\mu\) is minimal and \(J\neq\{[0]_{\mu}\}\) implies that \(J=S/\mu\). Since \(I\) is \(\mu\)-saturated, it follows that \(I=S\). Thus \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal.
Conversely assume that \(\mu\) is a regular right congruence on \(S\) such that \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\). Then \(S/\mu\) is a right \(S\)-semimodule. Since \(\mu\) is regular, we have an element \(e\in S\) such that \([e]_{\mu}s=[s]_{\mu}\) for all \(s\in S\). Hence \([e]_{\mu}S=S/\mu\) and so \((S/\mu)S=S/\mu\). Now take any nonzero subsemimodule \(N\) of \(S/\mu\). Then \(I(N)=\{s\in S\mid\ [s]_{\mu}\in N\}\) is a \(\mu\)-saturated right ideal in \(S\) containing \([0]_{\mu}\). Since \(I(N)\neq[0]_{\mu}\), we have \(I(N)=S\). Thus \(N=S/\mu\) implies that \(S/\mu\) is minimal.
Every simple semimodule is congruence-simple. Therefore, if \(S/\mu\) is simple for a right congruence \(\mu\) on \(S\), then \(\mu\) is a maximal right congruence. The following theorem characterizes the regular right congruences \(\mu\) on \(S\) that make the right \(S\)-semimodule \(S/\mu\) simple.
**Theorem 3.6**.: _Let \(M\) be a \(S\)-semimodule. Then \(M\) is simple if and only if there exists a maximal regular right congruence \(\mu\) on \(S\) such that \(S/\mu\simeq M\) and \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\)._
Proof.: Let \(M\) be simple. Then, by Theorem 3.5, there exists a regular right congruence \(\mu\) on \(S\) such that \(S/\mu\simeq M\) and \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\). Now for any regular right congruence \(\phi\) on \(S\) containing \(\mu\) we have a right congruence \(\phi/\mu=\{([a]_{\mu},[b]_{\mu})\mid\ (a,b)\in\phi\}\) on the \(S\)-semimodule \(S/\mu\). Since \(S/\mu\) is simple, \(\phi/\mu\) is either \(\Delta_{S/\mu}\) or \(\nabla_{S/\mu}\). Therefore \(\phi\) is either \(\mu\) or \(\nabla_{S}\). Thus \(\mu\) is a maximal regular right congruence on \(S\).
Conversely, by Theorem 3.5, it follows that \(S/\mu\) is a minimal \(S\)-semimodule for every maximal regular right congruence \(\mu\) on \(S\) where \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\). Since every maximal regular right congruence is also a maximal right congruence, \(S/\mu\) is simple.
Theorem 3.5 and Theorem 3.6 motivate two further definitions. A regular right congruence \(\mu\) on \(S\) is said to be _m-regular_ if \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\) and said to be _s-regular_ if it is a maximal regular right congruence such that \([0]_{\mu}\) is a maximal \(\mu\)-saturated right ideal in \(S\). We denote the set of all m-regular right congruences on \(S\) by \(\mathcal{RC}_{m}(S)\) and the set of all s-regular right congruences on \(S\) by \(\mathcal{RC}_{s}(S)\).
Now m-radical and s-radical of a semiring \(S\) are characterized in terms of the m-regular and the s-regular right congruences, respectively. First, we must associate the regular right congruences on a semiring \(S\) with the semimodules over \(S\). Let \(M\) be a right \(S\)-semimodule. For every \(m\in M\), we define
\[\delta_{m}=\{(a,b)\in S\times S\mid\ ma=mb\}.\]
Then \(\delta_{m}\) is a right congruence on \(S\). The following result, which is analogous to [20], shows that every regular right congruence is of this form.
**Lemma 3.7**.: _Let \(S\) be semiring and \(M\) be a right \(S\)-semimodule. Then_
1. \(ann_{S}(M)=\cap_{m\in M}\delta_{m}\)_._
2. _for every regular right congruence_ \(\mu\) _on_ \(S\)_, there exists an element_ \(e\in S\) _such that_ \(\delta_{[e]_{\mu}}=\mu\) _where_ \([e]_{\mu}\) _is an element of the right_ \(S\)_-semimodule_ \(S/\mu\)_._
3. _if moreover,_ \(M\) _is minimal then for each_ \(m(\neq 0)\in M\)_,_ \(\delta_{m}\) _is an m-regular right congruence on_ \(S\)_._
4. _if moreover,_ \(M\) _is simple then for each_ \(m(\neq 0)\in M\)_,_ \(\delta_{m}\) _is an s-regular right congruence on_ \(S\)_._
Proof.: (i) Follows trivially.
(ii) Since \(\mu\) is a regular right congruence on \(S\), there exists an element \(e\in S\) such that for every \(s\in S\), \((es,s)\in\mu\) and so \([e]_{\mu}s=[s]_{\mu}\) in the right \(S\)-semimodule \(S/\mu\).
Hence for every, \(s,t\in S\), \((s,t)\in\mu,\Leftrightarrow[s]_{\mu}=[t]_{\mu}\Leftrightarrow[e]_{\mu}s=[e]_{ \mu}t\Leftrightarrow(s,t)\in\delta_{[e]_{\mu}}\) which implies that \(\mu=\delta_{[e]_{\mu}}\).
(iii) Let \(M\) be a minimal \(S\)-semimodule and \(m(\neq 0)\in M\). Then \(mS=M\), by Lemma 2.2. Hence there exists an element \(a\in S\) such that \(ma=m\). Then for every \(s\in S\), we have \(mas=ms\), which
implies that \((as,s)\in\delta_{m}\). Thus \(\delta_{m}\) is a regular right congruence where \([0]_{\delta_{m}}=\{s\in S\mid ms=0\}\). Now \(s\delta_{m}t\) and \(t\in[0]_{\delta_{m}}\) implies that \(ms=mt=0\) and so \(s\in[0]_{\delta_{m}}\). Thus \([0]_{\delta_{m}}\) is a \(\delta_{m}\)-saturated right ideal in \(S\).
Let \(I\) be a \(\delta_{m}\)-saturated right ideal in \(S\) such that \([0]_{\delta_{m}}\subsetneq I\). Consider an element \(x\in I\backslash[0]_{\delta_{m}}\). Then \(mx\neq 0\), which implies that \((mx)S=M\). So for each \(s\in S\), there exists an element \(t\in S\) such that \(mxt=ms\), which implies that \((xt,s)\in\delta_{m}\). Since \(xt\in I\), it follows that \(s\in I\). Hence \(S=I\) and so \([0]_{\delta_{m}}\) is a maximal \(\delta_{m}\)-saturated ideal in \(S\).
(iv) Let \(M\) be a simple \(S\)-semimodule and \(m(\neq 0)\in M\). Then by (3), we have \(\delta_{m}\) is a regular right congruence on \(S\) such that \([0]_{\delta_{m}}\) is a maximal \(\delta_{m}\)-saturated ideal in \(S\). Let \(\phi\) be a right congruence on \(S\) containing \(\delta_{m}\). We define \(\phi_{M}=\{(ms,mt)\in M\times M\mid(s,t)\in\phi\}\). Then for every \(m\neq 0\) in \(M\), \(mS=M\), implies that \(\phi_{M}\) is reflexive. Also, it follows from the definition that \(\phi_{M}\) is a congruence on \(M\). Hence \(\phi_{M}\) is either \(\Delta_{M}\) or \(\nabla_{M}\) which implies that \(\phi\) is either \(\delta_{m}\) or \(\nabla_{S}\). Therefore \(\delta_{m}\) is an s-regular right congruence on \(S\).
If \(0\) is the zero element of an \(S\)-semimodule \(M\), then \(\delta_{0}=S\times S\), and so \(ann_{S}(M)=\cap_{m\neq 0}\delta_{m}\). Thus Lemma 3.7 tells us that the annihilator of every minimal (simple) \(S\)-semimodule \(M\) can be expressed as the intersection of the family \(\{\delta_{m}\mid m(\neq 0)\in M\}\) of m-regular (s-regular) right congruences on \(S\). This characterization of the annihilators of the minimal and simple semimodules gives the following characterization of the m-radical and s-radical of a semiring.
**Theorem 3.8**.: _Let \(S\) be a semiring. Then_
1. \(rad_{m}(S)=\cap_{\mu\in\mathcal{RC}_{m}(S)}\mu\)__
2. \(rad_{s}(S)=\cap_{\mu\in\mathcal{RC}_{s}(S)}\mu\)__
Proof.: (i) If \(M\) is a minimal \(S\)-semimodule then for all \(m\in M,m\neq 0\), \(\delta_{m}\in\mathcal{RC}_{m}(S)\) by Lemma 3.7. Hence \(\cap_{\mu\in\mathcal{RC}_{m}(S)}\mu\subseteq\cap_{m(\neq 0)\in M}\delta_{m}=ann _{S}(M)\). Thus it follows that \(\cap_{\mu\in\mathcal{RC}_{m}(S)}\mu\subseteq\cap_{M\in\mathcal{M}(S)}ann_{S}(M)\).
Now let \((a,b)\in\cap_{M\in\mathcal{M}(S)}ann_{S}(M)\). Then Theorem 3.5 implies that \((a,b)\in ann_{S}(S/\mu)\) for every \(\mu\in\mathcal{RC}_{m}(S)\). For each \(\mu\in\mathcal{RC}_{m}(S)\), we have \(ann_{S}(S/\mu)=\cap_{s\in S}\delta_{[s]_{\mu}}\). Also there exists an element \(e\in S\) such that \(\delta_{[e]_{\mu}}=\mu\) which implies that \((a,b)\in\mu\). Hence \((a,b)\in\cap_{\mu\in\mathcal{RC}_{m}(S)}\mu\) and it follows that \(ann_{S}(M)\subseteq\cap_{\mu\in\mathcal{RC}_{m}(S)}\mu\). Thus we have \(rad_{m}(S)=\cap_{\mathcal{M}(S)}ann_{S}(M)=\cap_{\mu\in\mathcal{RC}_{m}(S)}\mu\).
(ii) If \(M\) is a simple \(S\)-semimodule, then for all \(m\in M(m\neq 0)\), \(\delta_{m}\in\mathcal{RC}_{s}(S)\) by Lemma 3.7. Thus \(\cap_{\mu\in\mathcal{RC}_{s}(S)}\mu\subseteq\cap_{m(\neq 0)\in M}\delta_{m}=ann _{S}(M)\). Therefore \(\cap_{\mu\in\mathcal{RC}_{s}(S)}\mu\subseteq\cap_{M\in\mathcal{S}(S)}ann_{S}(M)\).
The reverse inclusion follows by Theorem 3.6 and Lemma 3.7, similarly as in the proof of (i). Therefore \(rad_{s}(S)=\cap_{\mu\in\mathcal{RC}_{s}(S)}\mu\).
**Example 3.9**.: _Let \(F\) be a semifield. Then \(F\) has no nontrivial proper ideals. Hence \(\Delta_{F}\in\mathcal{RC}_{m}(F)\) which implies that \(rad_{m}(F)=\Delta_{F}\)._
_In particular, the max-plus algebra \(\mathbb{R}_{\text{max}}=(\mathbb{R}\cup\{-\infty\},max,+)\) is a semifield. Hence \(rad_{m}(\mathbb{R}_{\text{max}})=\Delta_{\mathbb{R}_{\text{max}}}\)._
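For readers less familiar with the max-plus algebra, the semifield axioms can be read off directly from the operations \(a\oplus b=\max(a,b)\) and \(a\otimes b=a+b\) (this verification is ours and is included only as an illustration):
\[a\oplus(-\infty)=\max(a,-\infty)=a,\qquad a\otimes 0=a+0=a,\qquad a\otimes(-a)=a+(-a)=0\quad(a\neq-\infty),\]
so \(-\infty\) is the additive zero, \(0\) is the multiplicative identity, and every element other than \(-\infty\) has a multiplicative inverse; hence \(\mathbb{R}_{\text{max}}\) is indeed a semifield and the argument of Example 3.9 applies.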
A subsemiring of an \(m\)-semisimple semiring need not be \(m\)-semisimple, as we see in the following example.
**Example 3.10**.: _Consider the subsemiring \(S=\mathbb{R}^{+}\cup\{0,-\infty\}\) of the max-plus algebra \(\mathbb{R}_{max}\) where \(\mathbb{R}^{+}\) is the set of all positive reals. Then \(S\) is an additively idempotent semiring with \(1_{S}=0\). Hence every right congruence on \(S\) is a regular congruence._
_Let \(\rho\neq\Delta_{S},\nabla_{S}\) be a congruence on \(S\). Since \(\{-\infty\}\) and \(S\) are the only two saturated ideals in \(S\), we have \([-\infty]_{\rho}=\{-\infty\}\). Then there exist \(x,y\in\mathbb{R}^{+}\) and \(x<y\) such that \((x,y)\in\rho\). If \(x<y\) and \((x,y)\in\rho\) then for \(a=y-x\) we have \(x\rho(y+na)\) for every positive integer \(n\); and so \([x,\infty)\times[x,\infty)\subseteq\rho\). Hence we have \(\rho=(J\times J)\cup\Delta_{S}\) where \(J\) is either \([a,\infty)\) or \((a,\infty)\), by the completeness property of \(\mathbb{R}\). If \(J\neq[0,\infty)\) then \(J\cup\{-\infty\}\) is a \(\rho\)-saturated proper ideal containing \([-\infty]_{\rho}\), which implies that \(\rho\notin\mathcal{RC}_{m}(S)\). Hence \(\rho=([0,\infty)\times[0,\infty))\cup\Delta_{S}\) is the only m-regular congruence on \(S\), and it follows that \(rad_{m}(S)=([0,\infty)\times[0,\infty))\cup\Delta_{S}\)._
If \(\rho\) is a right congruence on \(S\) then \(S/\rho\) is a right S-semimodule such that \(ann_{S}(S/\rho)=\{(x,y)\in\nabla_{S}\mid\ (sx,sy)\in\rho\ \text{for all}\ s\in S\}\). Define
\[(\rho:\nabla_{S})=\{(x,y)\in\nabla_{S}\mid\ (sx,sy)\in\rho\ \text{for all}\ s\in S\}.\]
Then \((\rho:\nabla_{S})\) is a congruence on \(S\). Furthermore, if \(\rho\) is regular, there exists an element \(e\in S\) such that \((es,s)\in\rho\) for all \(s\in S\). Let \((x,y)\in(\rho:\nabla_{S})\). Then \((ex,ey)\in\rho\), together with \((ex,x)\in\rho\) and \((ey,y)\in\rho\), implies that \(x\ \rho\ ex\ \rho\ ey\ \rho\ y\), and so \((x,y)\in\rho\). Thus \((\rho:\nabla_{S})\subseteq\rho\). In fact, \((\rho:\nabla_{S})\) is the largest congruence on \(S\) contained in \(\rho\) for every regular right congruence \(\rho\) on \(S\).
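The maximality claim can be seen as follows: for any congruence \(\sigma\) on \(S\) with \(\sigma\subseteq\rho\),
\[(x,y)\in\sigma\ \Longrightarrow\ (sx,sy)\in\sigma\subseteq\rho\ \text{ for all }s\in S\ \Longrightarrow\ (x,y)\in(\rho:\nabla_{S}),\]
so every congruence contained in \(\rho\) is contained in \((\rho:\nabla_{S})\).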
Now \((\rho:\nabla_{S})=ann_{S}(S/\rho)\) together with Theorem 3.5 and Theorem 3.6 turn out to be yet another characterization of the m-radical and s-radical of a semiring.
**Theorem 3.11**.: _For every semiring \(S\), we have_
1. \(rad_{m}(S)=\cap_{\rho\in\mathcal{RC}_{m}(S)}(\rho:\nabla_{S})\)_, and_
2. \(rad_{s}(S)=\cap_{\rho\in\mathcal{RC}_{s}(S)}(\rho:\nabla_{S})\)_._
Let \(R\) and \(S\) be two semirings. Then \(R\times S\) is a semiring where the addition and the multiplication are defined componentwise. Consider two right congruences \(\sigma\) and \(\eta\) on \(R\) and \(S\) respectively. Define
\[\sigma\times\eta=\{((r_{1},s_{1}),(r_{2},s_{2}))\mid(r_{1},r_{2})\in\sigma\ \text{and}\ (s_{1},s_{2})\in\eta\}.\]
Then \(\sigma\times\eta\) is a right congruence on the semiring \(R\times S\) where \([(0,0)]_{\sigma\times\eta}=\{(r,s)\in R\times S\mid(r,0)\in\sigma\ \text{and}\ (0,s)\in\eta\}=[0]_{ \sigma}\times[0]_{\eta}\). If moreover, \(\sigma\) and \(\eta\) are regular, then \(\sigma\times\eta\) is also regular on \(R\times S\).
For every right congruence \(\rho\) on \(R\times S\), define
\[\rho_{R} =\{(r_{1},r_{2})\in R\times R\mid\exists\ s_{1},s_{2}\in S\ \text{such that}\ (r_{1},s_{1})\rho(r_{2},s_{2})\}\] \[\text{and}\ \rho_{S} =\{(s_{1},s_{2})\in S\times S\mid\exists\ r_{1},r_{2}\in R\ \text{such that}\ (r_{1},s_{1})\rho(r_{2},s_{2})\}.\]
Then \(\rho\subseteq\rho_{R}\times\rho_{S}\). The following result shows that the equality holds if \(\rho\) is a \(m\)-regular congruence on \(R\times S\).
**Lemma 3.12**.: _Let \(R\) and \(S\) be two semirings. Then \(\rho\in\mathcal{RC}_{m}(R\times S)\) if and only if \(\rho=\sigma\times\nabla_{S}\) or \(\rho=\nabla_{R}\times\delta\) where \(\sigma\in\mathcal{RC}_{m}(R)\) and \(\delta\in\mathcal{RC}_{m}(S)\)._
Proof.: First assume that \(\rho\in\mathcal{RC}_{m}(R\times S)\). Then both \(\rho_{R}\) and \(\rho_{S}\) are regular right congruences on \(R\) and \(S\), respectively. Also \([0]_{\rho_{R}}\times[0]_{\rho_{S}}\) is a proper right ideal of \(R\times S\) such that \([(0,0)]_{\rho}\subseteq[0]_{\rho_{R}}\times[0]_{\rho_{S}}\).
Now consider \((r,s)\in[0]_{\rho_{R}}\times[0]_{\rho_{S}}\) and \((r^{\prime},s^{\prime})\in R\times S\) such that \((r^{\prime},s^{\prime})\rho(r,s)\). Then \(r^{\prime}\rho_{R}r\rho_{R}0\) and \(s^{\prime}\rho_{S}s\rho_{S}0\) implies that \((r^{\prime},s^{\prime})\in[0]_{\rho_{R}}\times[0]_{\rho_{S}}\). Hence \([0]_{\rho_{R}}\times[0]_{\rho_{S}}\) is \(\rho\)-saturated right ideal of \(R\times S\). Since \([(0,0)]_{\rho}\) is a maximal \(\rho\)-saturated right ideal of \(R\times S\), it follows that \([(0,0)]_{\rho}=[0]_{\rho_{R}}\times[0]_{\rho_{S}}\). Therefore either \([0]_{\rho_{R}}\neq R\) or \([0]_{\rho_{S}}\neq S\).
Suppose that \([0]_{\rho_{R}}\neq R\). Let \(I\) be a \(\rho_{R}\)-saturated proper right ideal of \(R\) such that \([0]_{\rho_{R}}\subseteq I\). Then \(I\times S\) is a proper \(\rho\)-saturated right ideal of \(R\times S\) such that \([0]_{\rho_{R}}\times[0]_{\rho_{S}}\subseteq I\times S\). Then maximality of \([(0,0)]_{\rho}\) implies that \([0]_{\rho_{R}}\times[0]_{\rho_{S}}=I\times S\), and hence \([0]_{\rho_{R}}=I\) and \([0]_{\rho_{S}}=S\). Therefore \([0]_{\rho_{R}}\) is a maximal \(\rho_{R}\)-saturated right ideal of \(R\), which implies that \(\rho_{R}\in\mathcal{RC}_{m}(R)\). Also \([0]_{\rho_{S}}=S\) implies that \(\rho_{S}=\nabla_{S}\).
Now \([(0,0)]_{\rho}=[0]_{\rho_{R}}\times S\) implies that \((0,s_{1})\rho(0,s_{2})\) and so \((r,s_{1})\rho(r,s_{2})\) for all \(r\in R\), \(s_{1},s_{2}\in S\). Let \((r_{1},s_{1})\rho_{R}\times\nabla_{S}(r_{2},s_{2})\). Then \(r_{1}\rho_{R}r_{2}\) which implies that \((r_{1},s)\rho(r_{2},s^{\prime})\) for some \(s,s^{\prime}\in S\). Hence \((r_{1},s_{1})\rho(r_{1},s)\rho(r_{2},s^{\prime})\rho(r_{2},s_{2})\) and so \(\rho_{R}\times\nabla_{S}\subseteq\rho\). Also \(\rho\subseteq\rho_{R}\times\nabla_{S}\). Therefore \(\rho=\rho_{R}\times\nabla_{S}\).
If \([0]_{\rho_{S}}\neq S\), then similarly it follows that \(\rho=\nabla_{R}\times\rho_{S}\).
Conversely let \(\sigma\in\mathcal{RC}_{m}(R)\). Then \(\sigma\times\nabla_{S}\) is a regular right congruence on \(R\times S\) with \([(0,0)]_{\sigma\times\nabla_{S}}=[0]_{\sigma}\times S\). Let \(J\) be a proper \(\sigma\times\nabla_{S}\)-saturated ideal in \(R\times S\) containing \([0]_{\sigma\times\nabla_{S}}\). Denote \(J_{R}=\{r\in R\mid\exists s\in S\text{ such that }(r,s)\in J\}\). Then \(J=J_{R}\times S\) and \(J_{R}\) is a proper \(\sigma\)-saturated right ideal in \(R\) containing \([0]_{\sigma}\). Therefore \(J_{R}=[0]_{\sigma}\) and so \(J=[0]_{\sigma}\times S\). Hence \(\sigma\times\nabla_{S}\in\mathcal{RC}_{m}(R\times S)\). Similarly it follows that \(\nabla_{R}\times\eta\in\mathcal{RC}_{m}(R\times S)\) for every \(\eta\in\mathcal{RC}_{m}(S)\).
The following theorem characterizes the Jacobson m-radical of the product semiring \(R\times S\) in terms of the Jacobson m-radicals of the component semirings \(R\) and \(S\).
**Theorem 3.13**.: _Let \(R\) and \(S\) be two semirings. Then \(rad_{m}(R\times S)=rad_{m}(R)\times rad_{m}(S)\)._
Proof.: We have,
\[rad_{m}(R\times S) =\cap_{\rho\in\mathcal{RC}_{m}(R\times S)}\rho\] \[=(\cap_{\rho_{R}\in\mathcal{RC}_{m}(R)}(\rho_{R}\times\nabla_{S}) )\cap(\cap_{\rho_{S}\in\mathcal{RC}_{m}(S)}(\nabla_{R}\times\rho_{S}))\] \[=((\cap_{\rho_{R}\in\mathcal{RC}_{m}(R)}\rho_{R})\times\nabla_{S}) \cap(\nabla_{R}\times(\cap_{\rho_{S}\in\mathcal{RC}_{m}(S)}\rho_{S}))\] \[=(rad_{m}(R)\times\nabla_{S})\cap(\nabla_{R}\times rad_{m}(S))\] \[=rad_{m}(R)\times rad_{m}(S).\]
Similarly, analogous lemmas for the s-radical can be proved to obtain the following result, whose proof is omitted.
**Theorem 3.14**.: _Let \(R\) and \(S\) be two semirings. Then \(rad_{s}(R\times S)=rad_{s}(R)\times rad_{s}(S)\)._
## 4 Jacobson semisimple semirings
In this section, we study the structure of Jacobson m-semisimple and s-semisimple semirings.
A semiring \(S\) is Jacobson m-semisimple if \(\cap_{M\in\mathcal{M}(S)}ann_{S}(M)=\Delta_{S}\). Hence every m-semisimple semiring is a subdirect product of the family of semirings \(\{S/ann_{S}(M)\mid M\in\mathcal{M}(S)\}\). Also, it follows from Lemma 2.3 that if \(M\) is an \(S\)-semimodule, then \(M\) is a faithful \(S/ann_{S}(M)\)-semimodule. If, moreover, \(M\) is a minimal \(S\)-semimodule, then \(M\) is so as an \(S/ann_{S}(M)\)-semimodule. Similarly, every \(s\)-semisimple semiring \(S\) is a subdirect product of the family of semirings \(\{S/ann_{S}(M)\mid M\in\mathcal{S}(S)\}\), where each quotient semiring \(S/ann_{S}(M)\) has the property that \(M\) is a faithful and simple semimodule over it. Intending to characterize the structure of semisimple semirings, we introduce the following two notions.
**Definition 4.1**.: _Let \(S\) be a semiring. Then \(S\) is called_
1. \(m\)_-primitive if there is a faithful minimal_ \(S\)_-semimodule_ \(M\)_;_
2. \(s\)_-primitive if there is a faithful simple_ \(S\)_-semimodule_ \(M\)_._
If \(S\) is an \(m\)-primitive semiring, then there is a minimal \(S\)-semimodule \(M\) such that \(ann_{S}(M)=\Delta_{S}\). Hence \(rad_{m}(S)=\cap_{M\in\mathcal{M}(S)}ann_{S}(M)=\Delta_{S}\) and so \(S\) is \(m\)-semisimple. Similarly, every \(s\)-primitive semiring is \(s\)-semisimple.
A congruence \(\sigma\) on \(S\) is said to be an \(m\)_-primitive (\(s\)-primitive) congruence_ if the quotient semiring \(S/\sigma\) is an \(m\)-primitive (\(s\)-primitive) semiring. Thus \(\sigma\) is \(m\)-primitive(\(s\)-primitive) if and only if there exists a faithful minimal(simple) \(S/\sigma\)-semimodule \(M\).
**Lemma 4.2**.: _Let \(\sigma\) be a congruence on \(S\). Then the following conditions are equivalent:_
1. \(\sigma\) _is_ \(m\)_-primitive (s-primitive);_
2. \(\sigma=ann_{S}(M)\) _for some minimal (simple)_ \(S\)_-semimodule_ \(M\)_;_
3. \(\sigma=(\rho:\nabla_{S})\) _for some_ \(\rho\in\mathcal{RC}_{m}(S)\) _(_\(\rho\in\mathcal{RC}_{s}(S)\)_)._
Proof.: We prove the result for \(m\)-primitive congruences; the other cases are similar.
\((i)\Rightarrow(ii):\) Let \(\sigma\) be an \(m\)-primitive congruence on \(S\). Then there exists a faithful minimal \(S/\sigma\)-semimodule \(M\). Hence, by Lemma 2.3, \(M\) is also a minimal \(S\)-semimodule such that \(\sigma\subseteq ann_{S}(M)\) and \(\Delta_{S/\sigma}=ann_{S/\sigma}(M)=ann_{S}(M)/\sigma\), i.e., \(\sigma=ann_{S}(M)\).
\((ii)\Rightarrow(iii):\) Let \(M\) be a minimal \(S\)-semimodule and \(\sigma=ann_{S}(M)\). Then \(M\) is a minimal and faithful right \(S/\sigma\)-semimodule. Theorem 3.5 implies that there exists \(\rho\in\mathcal{RC}_{m}(S)\) such that
\(M\simeq S/\rho\); and hence \(ann_{S}(M)=(\rho:\nabla_{S})\). Then \(ann_{(S/\sigma)}(M)=ann_{S}(M)/\sigma=\Delta_{(S/\sigma)}\) implies that \(ann_{S}(M)=\sigma\). Hence \((\rho:\nabla_{S})=ann_{S}(M)=\sigma\).
\((iii)\Rightarrow(i):\) Let \(\rho\in\mathcal{RC}_{m}(S)\) and \(\sigma=(\rho:\nabla_{S})\). Then \(S/\rho\) is a minimal right \(S\)-semimodule and \(ann_{S}(S/\rho)=(\rho:\nabla_{S})=\sigma\). Since \(\sigma\) is a semiring congruence on \(S\) and \(\sigma=(\rho:\nabla_{S})\), it follows that \(S/\rho\) is a minimal right \(S/\sigma\)-semimodule. Also, by Lemma 2.3, we have \(ann_{S/\sigma}(S/\rho)=ann_{S}(S/\rho)/\sigma\)\(=\Delta_{S/\sigma}\). Hence \(S/\rho\) is a minimal and faithful right \(S/\sigma\)-semimodule, so \(\sigma\) is a \(m\)-primitive congruence on \(S\).
From the definition, it follows that a semiring \(S\) is an \(m\)-primitive (\(s\)-primitive) semiring if and only if \(\Delta_{S}\) is an \(m\)-primitive (\(s\)-primitive) congruence on \(S\). Thus we have:
**Corollary 4.3**.: _Let \(S\) be a semiring. Then \(S\) is_
1. \(m\)_-primitive if and only if there exists_ \(\rho\in\mathcal{RC}_{m}(S)\) _such that_ \((\rho:\nabla_{S})=\Delta_{S}\)_;_
2. \(s\)_-primitive if and only if there exists_ \(\rho\in\mathcal{RC}_{s}(S)\) _such that_ \((\rho:\nabla_{S})=\Delta_{S}\)_._
A division semiring is a noncommutative generalization of a semifield. The following result shows that primitive semirings provide further noncommutative generalizations of semifields: the m-primitive semirings generalize the semifields, whereas the s-primitive semirings generalize the congruence-simple semifields.
**Theorem 4.4**.: _Let \(S\) be a commutative semiring. Then \(S\) is_
1. \(m\)_-primitive if and only if it is a semifield._
2. \(s\)_-primitive if and only if it is a congruence-simple semifield._
Proof.: (i) Let \(S\) be a commutative \(m\)-primitive semiring. Then, by Corollary 4.3, there is a regular right congruence \(\rho\) in \(\mathcal{RC}_{m}(S)\) such that \((\rho:\nabla_{S})=\Delta_{S}\). Since \(S\) is a commutative semiring, \(\rho\) becomes a congruence on \(S\) and so \(\rho=(\rho:\nabla_{S})=\Delta_{S}\). Therefore \(\Delta_{S}\in\mathcal{RC}_{m}(S)\) and there is an element \(e\in S\) such that \(es=s=se\) for all \(s\in S\). Thus \(e\) is a multiplicative identity in \(S\). Also \(\rho=\Delta_{S}\in\mathcal{RC}_{m}(S)\) implies that \((0)\) is maximal \(\Delta_{S}\)-saturated ideal in \(S\). Since every ideal in \(S\) is \(\Delta_{S}\)-saturated, it follows that \((0)\) and \(S\) are the only two ideals in \(S\). Now for each non-zero element \(a\in S\), \(aS\) is a non-zero ideal in \(S\). Hence \(aS=S\), which implies that there exists an element \(b\in S\) such that \(ab=e=ba\). Thus \(S\) is a semifield.
Conversely, let \(S\) be a semifield. Then \(M=S\) is a minimal \(S\)-semimodule and \(ann_{S}(M)=\{(s_{1},s_{2})\in S\times S\mid ss_{1}=ss_{2}\) for all \(s\in S\}=\Delta_{S}\). Therefore \(S\) is \(m\)-primitive.
(ii) Let \(S\) be a commutative \(s\)-primitive semiring. Then \(S\) is \(m\)-primitive, and so, by (i), it is a semifield. Also, by Corollary 4.3, there exists a right congruence \(\rho\in\mathcal{RC}_{s}(S)\) such that \((\rho:\nabla_{S})=\Delta_{S}\). Since \(S\) is commutative, it follows that \((\rho:\nabla_{S})=\rho\). Hence \(\Delta_{S}=\rho\in\mathcal{RC}_{s}(S)\) which implies that \(M=S/\rho\simeq S\) is a simple \(S\)-semimodule. Hence the semifield \(S\) is congruence-simple.
Conversely, if \(S\) is a congruence-simple semifield, then \(S\) itself is a faithful simple \(S\)-semimodule. Hence \(S\) is \(s\)-primitive.
Theorem 4.4 tells us that the congruence-simple semifields constitute an important subclass of the semifields. As with fields, the Krull dimension of a congruence-simple semifield is 0, whereas there are semifields, for example \(\mathbb{R}_{max}\), having Krull dimension 1 [26]. A semiring \(S\) is called _zerosumfree_ if for every \(a,b\in S\), \(a+b=0\) implies that \(a=0\) and \(b=0\). It is well known that a semifield \(S\) is either zerosumfree or a field [15, Proposition 4.34]. Every field is a congruence-simple semifield. If \(S\) is zerosumfree, then \(\rho=\{(s,t)\in S\times S\mid s\neq 0\neq t\}\cup\{(0,0)\}\) is a congruence on \(S\), which is neither \(\Delta_{S}\) nor \(\nabla_{S}\) when \(|S|>2\). So for \(S\) to be congruence-simple, we must have \(|S|=2\), and then \(S\) is the 2-element Boolean algebra \(\mathbb{B}\). Thus a congruence-simple semifield is either the 2-element Boolean algebra \(\mathbb{B}\) or a field.
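The claim that \(\rho\) is a congruence can be checked directly; the short argument below is our own expansion of this step and uses only that \(S\) is zerosumfree and, being a semifield, has no zero divisors. For nonzero \(s,t,s^{\prime},t^{\prime}\in S\),
\[s+s^{\prime}\neq 0\neq t+t^{\prime}\quad(\text{zerosumfree}),\qquad ss^{\prime}\neq 0\neq tt^{\prime}\quad(\text{no zero divisors}),\]
so sums and products of \(\rho\)-related pairs of nonzero elements remain \(\rho\)-related, while pairs involving \((0,0)\) are preserved trivially; hence \(\rho\) is a nontrivial congruence whenever \(|S|>2\).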
However, in the following, we include an independent proof.
**Theorem 4.5**.: _Let \(S\) be a semiring with \(|S|>2\). Then \(S\) is a congruence-simple semifield if and only if it is a field._
Proof.: First, assume that \(S\) is a congruence-simple semifield. Denote \(Z(S)=\{x\in S\mid x+y=0\text{ for some }y\in S\}\). Then \(Z(S)\) is an ideal of \(S\); and so \(Z(S)\) is either \(\{0\}\) or \(S\). If \(Z(S)=\{0\}\), then \(S\) is zerosumfree. So \(\{(0,0)\}\cup\{(s,t)\in S\times S\mid s\neq 0\neq t\}\) induces a nontrivial congruence on \(S\), which contradicts that \(S\) is congruence-simple. Hence \(Z(S)=S\) which implies that \((S,+)\) is a group. Thus \(S\) is a field.
Converse follows trivially.
Thus a zerosumfree semifield \(S\) with \(|S|>2\) can not be congruence-simple. So, in particular, the max-plus algebra \(\mathbb{R}_{max}\) is a semifield but not congruence-simple. Hence \(\mathbb{R}_{max}\) is m-primitive but not s-primitive.
The 2-element Boolean algebra \(\mathbb{B}\) and the field \(\mathbb{Z}_{2}\) of all integers modulo 2 are the only semifields of order two up to isomorphism. Hence it turns out to be the following specific characterization of the commutative s-primitive semirings.
**Corollary 4.6**.: _A commutative semiring \(S\) is s-primitive if and only if it is either the 2-element Boolean algebra \(\mathbb{B}\) or a field._
A semiring \(S\) is called a _subdirect product_ of a family \(\{S_{\alpha}\}_{\Delta}\) of semirings if there is a one-to-one semiring homomorphism \(\phi:S\longrightarrow\prod_{\Delta}S_{\alpha}\) such that for each \(\alpha\in\Delta\), the composition \(\pi_{\alpha}\circ\phi:S\longrightarrow S_{\alpha}\) is onto, where \(\pi_{\alpha}:\prod_{\Delta}S_{\alpha}\longrightarrow S_{\alpha}\) is the projection mapping.
It is well known that a semiring \(S\) is a subdirect product of a family \(\{S_{\alpha}\}_{\Delta}\) of semirings if and only if there is a family \(\{\rho_{\alpha}\}_{\Delta}\) of congruences on \(S\) such that \(S/\rho_{\alpha}\simeq S_{\alpha}\) for every \(\alpha\in\Delta\) and \(\cap_{\Delta}\rho_{\alpha}=\Delta_{S}\).
**Theorem 4.7**.: _A semiring \(S\) is \(m\)-semisimple (\(s\)- semisimple) if and only if it is a subdirect product of \(m\)-primitive (\(s\)-primitive) semirings._
Proof.: We prove the result for \(m\)-semisimple semirings. Proof for \(s\)-semisimple semirings is similar.
First, assume that \(S\) is a \(m\)-semisimple semiring. Then \(rad_{m}(S)=\cap_{M\in{\cal M}(S)}ann_{S}(M)=\Delta_{S}\). Hence \(S\) is a subdirect product of the family \(\{S/ann_{S}(M)\mid M\in{\cal M}(S)\}\) of semirings. Lemma 4.2 implies that \(ann_{S}(M)\) is an \(m\)-primitive congruence on \(S\) for every minimal \(S\)-semimodule \(M\). Therefore every semiring in the family \(\{S/ann_{S}(M)\mid M\in{\cal M}(S)\}\) is an \(m\)-primitive semiring, and so \(S\) is a subdirect product of m-primitive semirings.
Conversely, let \(S\) be a subdirect product of a family of \(m\)-primitive semirings \(\{S_{i}\mid\) for all \(i\in\Lambda\}\). Then there exists a one-to-one homomorphism \(\phi:S\to\Pi_{i\in\Lambda}S_{i}\) such that the mapping \(\pi_{i}\circ\phi:S\longrightarrow S_{i}\) is onto for all \(i\in\Lambda\). Thus \(S/ker(\pi_{i}\circ\phi)\cong S_{i}\) for all \(i\in\Lambda\). Let \(M_{i}\) be a faithful minimal \(S_{i}\)-semimodule for each \(i\in\Lambda\). Then, by the Lemma 2.3, \(M_{i}\) is a minimal \(S\)-semimodule where \(ms=m\pi_{i}\circ\phi(s)\) for all \(s\in S\) and \(m\in M_{i}\). Hence \(\cap_{M\in{\cal M}(S)}ann_{S}(M)\subseteq\cap_{i\in\Lambda}ann_{S}(M_{i})\). Now \((a,b)\in ann_{S}(M_{i})\) implies that \(m\pi_{i}\circ\phi(a)=m\pi_{i}\circ\phi(b)\) for all \(m\in M_{i}\); and so \((\pi_{i}\circ\phi(a),\pi_{i}\circ\phi(b))\in ann_{S_{i}}(M)\). Since \(M_{i}\) is faithful over \(S_{i}\), it follows that \(\pi_{i}\circ\phi(a)=\pi_{i}\circ\phi(b)\). Hence \(\cap_{i\in\Lambda}ann_{S}(M_{i})=\Delta_{S}\) which implies that \(rad_{m}(S)=\cap_{M\in{\cal M}(S)}ann_{S}(M)=\Delta_{S}\). Thus \(S\) is a \(m\)-semisimple semiring.
Combining the structure of \(s\)-semisimple semirings established in Theorem 4.7 with the characterization of commutative \(s\)-primitive semirings in Corollary 4.6, we obtain the following characterization of the commutative \(s\)-semisimple semirings.
**Corollary 4.8**.: _Let \(S\) be a commutative semiring. Then \(S\) is an \(s\)-semisimple semiring if and only if it is a subdirect product of a family of semirings that are either the 2-element Boolean algebra \(\mathbb{B}\) or fields._
Mitchell and Fenoglio [33] and Basir et al. [2] independently proved that a commutative semiring \(S\) with \(|S|\geqslant 2\) is congruence-simple if and only if it is either a field or the 2-element Boolean algebra \(\mathbb{B}\). Hence it follows that a commutative semiring is s-semisimple if and only if it is a subdirect product of congruence-simple commutative semirings. A semiring homomorphism \(f:S_{1}\longrightarrow S_{2}\) is said to be a _semi-isomorphism_ if, for every \(a\in S_{1}\), we have \(f(a)=0\) only for \(a=0\). Katsov and Nam [28] proved that a commutative semiring \(S\) is Brown-McCoy semisimple if and only if \(S\) is semi-isomorphic to a subdirect product of a family of semirings that are either the 2-element Boolean algebra \(\mathbb{B}\) or fields. Hence every commutative s-semisimple semiring is Brown-McCoy semisimple in the sense of Katsov and Nam.
**Example 4.9**.: _Consider the semiring \(\mathbb{N}\) of all nonnegative integers. Then for every prime \(p\), the Bourne congruence \(\sigma_{p\mathbb{N}}\) is a maximal regular congruence on \(\mathbb{N}\) with \([0]_{\sigma_{p\mathbb{N}}}=p\mathbb{N}\). If \(J\) is a \(\sigma_{p\mathbb{N}}\)-saturated ideal in \(\mathbb{N}\) with \(p\mathbb{N}\subsetneq J\), then there exists \(a\in J\) such that \(0<a<p\). By Fermat's little theorem, we have \(a^{p-1}\equiv 1\pmod{p}\), which implies that \(1\in J\) and so \(J=\mathbb{N}\). Thus \(p\mathbb{N}=[0]_{\sigma_{p\mathbb{N}}}\) is a maximal \(\sigma_{p\mathbb{N}}\)-saturated ideal in \(\mathbb{N}\), and it follows that \(\sigma_{p\mathbb{N}}\in{\cal RC}_{s}(\mathbb{N})\). Hence \(rad_{s}(\mathbb{N})\subseteq\cap\sigma_{p\mathbb{N}}=\Delta_{\mathbb{N}}\), and so \(\mathbb{N}\) is an \(s\)-semisimple semiring._
_Also \(\cap\sigma_{p\mathbb{N}}=\Delta_{\mathbb{N}}\) implies that \(\mathbb{N}\) is a subdirect product of the family of fields \(\mathbb{N}_{p}=\mathbb{N}/\sigma_{p\mathbb{N}}\) where \(p\) is a prime._
We conclude this section with a representation of s-primitive semirings as a semiring of endomorphisms on a semimodule over a division semiring.
The opposite semiring \(S^{op}\) of a semiring \((S,+,\cdot)\) is defined by \((S,+,\circ)\) where \(a\circ b=b\cdot a\) for all \(a,b\in S\). Hence a semiring \(S\) is a division semiring if and only if the opposite semiring \(S^{op}\) is so.
Let \(M\) be a semimodule over a division semiring \(D\). Then a subsemiring \(T\) of the endomorphism semiring \(End_{D}(M)\) is called _1-fold transitive_ if for every non-zero \(m\in M\) and \(n\in M\) there exists \(\alpha\in T\) such that \(\alpha(m)=n\).
In the context of semirings, Schur's lemma was proved in [25], which states that if \(M\) is a simple \(S\)-semimodule, then the endomorphism semiring \(End_{S}(M)\) is a division semiring.
Let \(M\) be a right \(S\)-semimodule and \(E=End_{S}(M)\). Then for \(D=E^{op}\), \(M\) is a right semimodule over \(D\) where the scalar multiplication is defined by \(m\cdot\alpha=\alpha(m)\) for all \(m\in M\) and \(\alpha\in D\).
**Theorem 4.10**.: _If \(S\) is a right \(s\)-primitive semiring, then \(S^{op}\) is isomorphic to a 1-fold transitive subsemiring of the semiring \(End_{D}(M)\) of all endomorphisms on a semimodule \(M\) over a division semiring \(D\)._
Proof.: Let \(M\) be a faithful simple right \(S\)-semimodule. By Schur's Lemma for semimodules [25], the semiring \(E=End_{S}(M)\) is a division semiring. Hence \(D=E^{op}\) is a division semiring, and so \(M\) as a right \(D\)-semimodule where \(m\cdot\alpha\mapsto\alpha(m)\).
For every \(a\in S\), define a mapping \(\psi_{a}:M\to M\) by \(\psi_{a}(m)=ma\). Then for every \(\alpha\in D\), we have \(\psi_{a}(m.\alpha)=\psi_{a}(\alpha(m))=\alpha(m)a=\alpha(ma)=(ma).\alpha=\psi_{a}(m).\alpha\). Hence \(\psi_{a}\) is an endomorphism of \(M\) considered as a \(D\)-semimodule.
Also the mapping \(\psi:S^{op}\to End_{D}(M)\) defined by \(\psi(a)=\psi_{a}\) is a semiring homomorphism. Moreover \(ker\ \psi=ann_{S}(M)=\Delta_{S}\) implies that \(\psi\) is an injective homomorphism; and so \(S^{op}\) is isomorphic to the subsemiring \(T=\{\psi_{a}\mid a\in S\}\) of \(End_{D}(M)\).
Since \(M\) is a simple right \(S\)-semimodule, by Lemma 2.2, for every \(m(\neq 0)\in M\), \(mS=M\). Then for every \(n\in M\) there exists \(a\in S\) such that \(ma=n\) and so \(\psi_{a}(m)=n\). Thus \(T\) is a 1-fold transitive subsemiring of \(End_{D}(M)\).
It follows from Corollary 4.6 that the semifield \(F=\mathbb{R}_{max}\) is not an \(s\)-primitive semiring. Since \(F\) contains 1, every \(F\)-endomorphism on \(F\) is of the form \(\psi_{a}:F\to F\) given by \(\psi_{a}(m)=am\). Hence \(F\simeq End_{F}(F)\), which implies that \(End_{F}(F)\) is not \(s\)-primitive, whereas \(End_{F}(F)\) is a 1-fold transitive subsemiring of itself. Thus the converse of Theorem 4.10 does not hold. However, the converse holds in the following weaker form.
**Theorem 4.11**.: _Let \(D\) be a division semiring and \(M\) be a right \(D\)-semimodule. If \(T\) is a 1-fold transitive subsemiring of \(End_{D}(M)\), then \(T^{op}\) is a right \(m\)-primitive semiring._
Proof.: Define \(M\times T^{op}\to M\) by \(m.\alpha\mapsto\alpha(m)\). Then \(M\) is a right \(T^{op}\)-semimodule. Let \(m\) be a non-zero element in \(M\). Then for every \(n\in M\), there exists \(\alpha\in T\) such that \(m.\alpha=n\). Therefore
\(mT^{op}=M\) which implies that \(M\) is minimal, by Lemma 2.2. Now
\[ann_{T^{op}}(M) =\{(\alpha,\beta)\in T\times T\mid m.\alpha=m.\beta\text{ for all }m\in M\}\] \[=\{(\alpha,\beta)\in T\times T\mid\alpha(m)=\beta(m)\text{ for all }m\in M\}\] \[=\{(\alpha,\beta)\in T\times T\mid\alpha=\beta\}\] \[=\Delta_{S}\]
and so \(M\) is a faithful minimal \(T^{op}\)-semimodule. Therefore \(T^{op}\) is a \(m\)-primitive semiring.
## 5 Remarks and open questions
In Section 3, we introduced the m-radical and s-radical of a semiring based on two notions of 'simplicity' of a semimodule, namely minimal semimodules and simple semimodules. Similarly, the e-radical of a semiring can also be defined based on the class of congruence-simple semimodules, which are known as elementary semimodules [8], [25]. We conjecture that an \(S\)-semimodule \(M\) is elementary if and only if there exists a maximal regular right congruence \(\mu\) on \(S\) such that \(S/\mu\) is isomorphic to \(M\). Once this conjecture is proved, it would be possible to characterize the e-radical of a semiring internally. As in the present work, every e-semisimple semiring could then be expressed as a subdirect product of e-primitive semirings. Thus the present work can be extended to characterize the e-radical of a semiring and e-semisimple semirings.
|
2309.06081 | Information Flow in Graph Neural Networks: A Clinical Triage Use Case | Graph Neural Networks (GNNs) have gained popularity in healthcare and other
domains due to their ability to process multi-modal and multi-relational
graphs. However, efficient training of GNNs remains challenging, with several
open research questions. In this paper, we investigate how the flow of
embedding information within GNNs affects the prediction of links in Knowledge
Graphs (KGs). Specifically, we propose a mathematical model that decouples the
GNN connectivity from the connectivity of the graph data and evaluate the
performance of GNNs in a clinical triage use case. Our results demonstrate that
incorporating domain knowledge into the GNN connectivity leads to better
performance than using the same connectivity as the KG or allowing
unconstrained embedding propagation. Moreover, we show that negative edges play
a crucial role in achieving good predictions, and that using too many GNN
layers can degrade performance. | Víctor Valls, Mykhaylo Zayats, Alessandra Pascale | 2023-09-12T09:18:12Z | http://arxiv.org/abs/2309.06081v1 | # Information Flow in Graph Neural Networks: A Clinical Triage Use Case
###### Abstract
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs. However, efficient training of GNNs remains challenging, with several open research questions. In this paper, we investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs). Specifically, we propose a mathematical model that decouples the GNN connectivity from the connectivity of the graph data and evaluate the performance of GNNs in a clinical triage use case. Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation. Moreover, we show that negative edges play a crucial role in achieving good predictions, and that using too many GNN layers can degrade performance.
## I Introduction
Machine learning algorithms were originally designed to work with data that can be represented as a sequence (e.g., text) or grid (e.g., images). However, these data structures are inadequate for modeling the data of modern applications. For instance, in digital healthcare, a patient's electronic health record (EHR) can include numerous elements, such as demographic information, medical and medication history, laboratory results, etc. One way to model data with arbitrary structure is to use a Knowledge Graph (KG): a graph where nodes represent pieces of information and the edges indicate how the information pieces relate to one another.
Many learning problems on KGs can be cast as predicting links between nodes. Fig. 1 shows an example of a chronic disease prediction problem on a KG. The patient (IDXA98) is connected to its EHR (with the patient's information such as name, dob, medical conditions, etc.), and the goal is to predict to which chronic disease nodes the patient is connected (colored arrow in Fig. 1). Making such a prediction is possible by analyzing the EHR of other patients with _known_ chronic diseases. Another example of a link prediction problem on a KG is when a patient is already diagnosed with a disease (e.g., SARS-CoV-2), and the goal is to find the most effective drug/treatment to help the patient recover [1].
While there exist several methods for predicting edges on graphs [2, 3, 4], Graph Neural Networks (GNNs) have emerged as one of the most widely used techniques. In brief, GNNs were developed in parallel by two communities: _geometric deep learning_ and _graph representation learning_. The first community focused on applying neural networks for prediction tasks on graph data, while the latter community concentrated on learning low-dimensional vector representations of the nodes and edges in the graph [5]. Current GNNs approaches combine the efforts of both communities and include important extensions such as the ability to handle multi-modal and multi-relational data [6, 7].
GNNs' ability to process multi-modal and multi-relational graphs boosted their popularity in various domains, including healthcare. Some applications include the prediction of hospital readmission [8, 9], chronic diseases [10], and ICU mortality [11, 12]. However, despite their popularity, efficient training of GNNs remains challenging. Previous work has primarily focused on designing new architectures for embedding aggregation [6], with little emphasis on how embedding information should be exchanged in the network. For instance, the works in [6, 13] suggest that the GNNs' connectivity--which determines how nodes receive information from their neighbors--should align with the connectivity of the KG. However, there are cases where it could be advantageous to explore more complex GNN connectivities that are tailored to the specific task at hand. For example, exchanging embeddings based on the KG connectivity depicted in Fig. 1 precludes medical conditions from influencing patient embeddings. Yet, incorporating such interactions can be beneficial in tasks such as chronic disease prediction, where it is essential to capture the patients' existing medical conditions (e.g., hyperglycemia for predicting diabetes) in their embeddings.
In this paper, we investigate how the flow of embeddings within a GNN affects the prediction of links in a clinical triage use case. The paper makes the following contributions:
1. We present a mathematical model for predicting links on KGs with GNNs, where we cast the prediction task as an optimization problem and leverage GNNs as an algorithmic tool to solve it (Sec. II). This model emphasizes that the GNN design parameters, such as the GNN connectivity, can be decoupled from the underlying structure of the graph data (i.e., the KG).
Fig. 1: Example of a Knowledge Graph (KG) representing the medical record of a patient (Orla). The gray boxes represent the nodes, and the arrows the edges. The dashed and colored arrow with the question mark is the link we would like to predict.
2. We show how to map the link prediction optimization to a program in PyG (Sec. III) and study how the GNN parameters affect the link prediction accuracy in a clinical triage use case (Sec. IV). Our findings suggest that a GNN connectivity that considers domain knowledge is more effective than just using the connectivity of the graph data, and that allowing embeddings to flow in any direction may result in poor performance (Sec. IV-C1). Additionally, we demonstrate that negative edges play a crucial role in achieving good predictions (Sec. IV-C3), and that using too many GNN layers can degrade performance (Sec. IV-C2).
## II Link Prediction Model
### _Multi-relational Knowledge Graph (KG)_
A multi-relational Knowledge Graph (KG) is a graph with \(n\) nodes and \(m\)_directed_ links, where each link is associated with a relation \(r\) that represents the type of connection between the nodes. For instance, in the semantic triple _patient_ (node) _suffers from_ (relation) _anemia_ (node), the relation _suffers from_ indicates the type of connection between the node _patient_ and the node _anemia_. The nodes in the graph are also associated with a type or class, e.g., the node _anemia_ can be of the type _medical condition_.
Besides the links' relation, a link can be _positive_, _negative_, or _unknown_. A positive link indicates the two nodes are connected, while a negative link implies no connection. For example, if a patient has tested positive for diabetes, there will be a (positive) link between the patient's node and the diabetes node in the KG. Conversely, if the patient has tested negative, a (negative) link will indicate that such a connection does not exist. Unknown links, as the name suggests, are links whose existence is unknown from the data. This is the type of link that we would like to predict.
### _Link prediction as an optimization problem_
We can model a link prediction problem in a KG as follows. Every node \(i\in\{1,\ldots,n\}\) is associated with a feature vector \(e_{i}\in\mathbf{R}^{d}\), which is also known as the _node's embedding_, or _embedding_ for short. Similarly, every link is associated with a relation matrix \(W_{r}\in\mathbf{R}^{d\times d}\), \(r\in\{1,\ldots,R\}\). Next, for every pair of nodes \((i,j)\) and relation \(r\), we define the links' "score" as
\[x_{ij}^{(r)}:=f(e_{i},W_{r},e_{j}) \tag{1}\]
where \(f\) is a function that takes \(e_{i}\), \(W_{r}\), and \(e_{j}\) as inputs and returns a real number in the interval \([0,1]\). Similarly, for every link connecting nodes \((i,j)\) with a relation \(r\), we define the labels
\[y_{ij}^{(r)}=\begin{cases}1&\text{there is a relation $r$ from node $i$ to $j$},\\ 0&\text{there is not a relation $r$ from node $i$ to $j$}.\end{cases}\]
With the above model, we can formulate the optimization problem
\[\underset{e_{i},W_{r}}{\text{minimize}}\quad\mathcal{L}(\mathbf{x},\mathbf{y}) \tag{2}\]
where \(\mathcal{L}:\mathbf{R}^{m}\times\mathbf{R}^{m}\rightarrow\mathbf{R}\), \(\mathbf{x}=(x_{ij}^{(r)})\in\mathbf{R}^{m}\), \(\mathbf{y}\in\{0,1\}^{m}\). The role of the loss function \(\mathcal{L}\) is to penalize vector \(\mathbf{x}\) being different from vector \(\mathbf{y}\) component-wise.2 Namely, by minimizing \(\mathcal{L}\) in (2), we are finding the nodes' embeddings \(e_{i}\) and the matrices \(W_{r}\) such that the score \(x_{ij}^{(r)}\) is equal to (or, close to) the label \(y_{ij}^{(r)}\), which indicates the presence of a positive/negative link.
Footnote 2: For example, \(\mathcal{L}\) can be \(\|\mathbf{x}-\mathbf{y}\|_{2}\), i.e., the \(\ell_{2}\)-norm.
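As a purely illustrative instantiation of (1) and (2) (the text does not commit to a specific \(f\)), one can take \(f\) to be the sigmoid of a bilinear form and \(\mathcal{L}\) the \(\ell_{2}\)-loss of footnote 2:
\[x_{ij}^{(r)}=\sigma\big(e_{i}^{\top}W_{r}e_{j}\big)\in[0,1],\qquad\mathcal{L}(\mathbf{x},\mathbf{y})=\|\mathbf{x}-\mathbf{y}\|_{2},\]
where \(\sigma(z)=1/(1+e^{-z})\). This bilinear (DistMult-style) scoring is a common decoder choice in KG link prediction and makes explicit that the optimization variables are the embeddings \(e_{i}\) and the relation matrices \(W_{r}\).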
### _Solving the link prediction problem with a GNN_
GNNs tackle the optimization problem (2) by computing nodes' embeddings based on their connectivity patterns with neighboring nodes. To illustrate the concept, we show in Figure 2 a toy example where a "patient" feature vector is built with the patient's medical conditions embeddings. In particular, the feature vectors of nodes \(i\) and \(j\) (medical conditions) are combined _linearly_ to obtain the embedding of node \(k\) (a patient).
Fig. 2: Toy example of how the embedding of a patient is the linear combination of two medical conditions embeddings.
Fig. 3: Example of a GNN with three nodes and two NN layers per node. Vectors \(e_{i}^{(0)}\), \(e_{j}^{(0)}\), \(e_{k}^{(0)}\) are the initial embeddings of nodes \(i\), \(j\), \(k\), i.e., the input in the first NN layer. Vectors \(e_{i}^{(2)}=e_{i}\), \(e_{j}^{(2)}=e_{j}\), \(e_{k}^{(2)}=e_{k}\) are the embedding outputs of layer 2.
GNNs combine the neighbors' embeddings by using multiple _non-linear_ functions. Fig. 3 shows an example of how a GNN uses multiple layers (i.e., functions) to combine the embeddings. Each layer \(l\in\{1,\ldots,L\}\) fuses the feature vector in the \((l-1)\)-th layer of the node with embeddings of its neighbors, and the output is passed to the next layer where the process is repeated.
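To make the layer computation concrete, one widely used instantiation of such a layer is the relational graph convolution of [6] (the RGCNConv layer used in Sec. III); with mean normalization and in our notation (which does not appear in the original text), it updates the embedding of node \(i\) as
\[e_{i}^{(l)}=\sigma\Big(W_{0}^{(l)}e_{i}^{(l-1)}+\sum_{r=1}^{R}\sum_{j\in\mathcal{N}_{i}^{r}}\tfrac{1}{|\mathcal{N}_{i}^{r}|}\,W_{r}^{(l)}e_{j}^{(l-1)}\Big),\]
where \(\mathcal{N}_{i}^{r}\) is the set of neighbors of node \(i\) under relation \(r\) in the GNN connectivity (not necessarily the KG connectivity), \(W_{0}^{(l)}\) and \(W_{r}^{(l)}\) are learnable layer weights (distinct from the relation matrices \(W_{r}\) in (1)), and \(\sigma\) is a non-linearity.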
The GNN connectivity depends on how nodes are connected in the KG (but is not necessarily the same), and it determines how the embedding information propagates. Designing a GNN connectivity that enables efficient learning is use-case dependent as it requires knowing how nodes should interact. In Sec. IV-C1, we will show how different GNN connectivities affect the link-prediction performance for a clinical triage use case.
### _Link prediction_
Predicting an (unknown) link/relation in a KG consists of evaluating (1) with the embeddings and relation weights learned during the training. For example, suppose we have a patient and want to predict whether the patient may be _infected_ by COVID-19. Then, we use \(e_{\text{patient}}\), \(W_{\text{infected}}\), and \(e_{\text{COVID-19}}\) in (1), and if the score is larger than a confidence threshold (e.g., \(0.9\)), we can determine a positive link exists.
Often, we need to predict links that connect nodes not seen during the training. In that case, we need to first compute the embeddings of the new nodes by combining their initial embeddings3 with the embeddings of their neighbors seen in the training. For example, a new patient may be connected to nodes that appeared during the training (e.g., fever, headache, cough). Then, the embedding of the unseen node (i.e., the patient) is calculated with the embeddings of the neighboring nodes.
Footnote 3: The input in the first NN layer. See Fig. 3.
## III Link Prediction in PyG
This section presents how to implement the link prediction optimization in Sec. II with PyG [14]--a python library for GNNs built upon PyTorch. We follow a similar approach as in the PyG tutorial for node classification [15].
### _Creating the KG and tensors_
The first step is constructing a KG with the format in Table I. Each row in the table corresponds to a subject-relation-object "triple" indicating how nodes are connected. Recall the edges in the KG are directed, where the _subject_ and _object_ are the _source_ and _target_ nodes, respectively.4 The column _link type_ indicates whether such a link is positive (True) or negative (False), and the columns _sub. type_ and _obj. type_ indicate--as the names suggest--the types of nodes. Having different node types is useful, for example, to control how the embedding information flows in the GNN. In Sec. IV-C1, we will show how different GNN connectivities affect the link prediction performance.
Footnote 4: The names in the subject and object columns identify a single node in the KG. That is, there cannot be multiple nodes called _London_ referring to different cities, e.g., England (UK), Ontario (Canada), Texas (USA), etc.
The second step is to map the KG to a PyG data object that contains the nodes' initial embeddings (data.x), the network connectivity (data.edge_index), the types of relations (data.edge_type), and the labels (data.y) that indicate whether the links are positive or negative. Fig. 4 shows how to map the KG in Table I to a data object, where the nodes and relations have unique IDs (integers).
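Since the code of Fig. 4 is not reproduced in this version of the text, the short sketch below shows one way such a data object could be built in PyG. The node and relation IDs follow the caption of Fig. 4; the specific edges, the embedding dimension, and the positive/negative labels are illustrative placeholders rather than the paper's actual Table I.

```python
import torch
from torch_geometric.data import Data

# Node IDs: Orla(0), Paul(1), London(2), cholesterol(3), New York(4), diabetes(5)
# Relation IDs: born(0), has(1)
num_nodes, emb_dim = 6, 16

data = Data()
data.x = torch.randn(num_nodes, emb_dim)        # initial node embeddings (data.x)
data.edge_index = torch.tensor([[0, 1, 0, 1],   # source (subject) node of each link
                                [2, 4, 3, 5]])  # target (object) node of each link
data.edge_type = torch.tensor([0, 0, 1, 1])     # relation ID of each link
data.y = torch.tensor([1., 1., 1., 0.])         # label: positive (1) or negative (0) link
```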
### _GNN model_
The GNN model consists of two core parts: (i) the generation of the embeddings (i.e., encoder) and (ii) the scoring function (i.e., decoder). Fig. 5 shows an example of a GNN model with two RGCNConv layers [6].5 The first part is to initialize the scoring function6 and the NN functions that will generate the embeddings. The initial embeddings (data.x) in the encoder can be set manually--e.g., using a pre-trained natural language model that maps the node's name (i.e., a string) to a vector of a fixed size (e.g., as in [16])--or they can be variables in the optimization.
Footnote 5: RGCNConv is a type of layer/function to compute the embeddings.
Footnote 6: The weights \(W_{r}\) are defined as part of the scoring function.
The forward function computes the nodes' embeddings and scores for every link in the KG. The embedding information is obtained with a communication mask7 that controls how the nodes propagate their embeddings to their neighbors in the GNN, i.e., the GNN connectivity.
Footnote 7: A mask is a vector of booleans that selects which edges (i.e., the tensor’s rows) to use.
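The code of Fig. 5 is likewise not included here, so the following sketch illustrates one possible encoder-decoder of the kind described above. The class name, the diagonal (DistMult-style) relation weights in the decoder, and the explicit msg_mask argument are our own illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import RGCNConv

class LinkPredictor(nn.Module):
    def __init__(self, num_relations, emb_dim=16):
        super().__init__()
        # Encoder: two RGCNConv layers, as in Fig. 5.
        self.conv1 = RGCNConv(emb_dim, emb_dim, num_relations)
        self.conv2 = RGCNConv(emb_dim, emb_dim, num_relations)
        # Decoder: one learnable (here diagonal) relation matrix per relation type.
        self.rel = nn.Parameter(torch.randn(num_relations, emb_dim))

    def encode(self, x, edge_index, edge_type, msg_mask):
        # msg_mask (one boolean per edge) is the communication mask: it selects
        # which KG edges the GNN may use to propagate embeddings.
        ei, et = edge_index[:, msg_mask], edge_type[msg_mask]
        h = self.conv1(x, ei, et).relu()
        return self.conv2(h, ei, et)

    def decode(self, z, edge_index, edge_type):
        # Score of Eq. (1) for every (subject, relation, object) triple.
        src, dst = edge_index
        return torch.sigmoid((z[src] * self.rel[edge_type] * z[dst]).sum(dim=-1))

    def forward(self, data, msg_mask):
        z = self.encode(data.x, data.edge_index, data.edge_type, msg_mask)
        return self.decode(z, data.edge_index, data.edge_type)
```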
### _Training the model_
The training of the GNN is shown in Fig. 6. The process follows the steps in the tutorial for node classification in [15] with two differences. The first one is that we can control the embedding communication in the GNN, which affects how the initial embeddings and the embedding in intermediate layers are combined. The second difference is that we can filter the edges for which we want to evaluate the loss function, which
Fig. 4: Example of a torch tensor that maps the KG in Table I with unique IDs. Each row corresponds to a row in Table I where nodes’ IDs are assigned sequentially: Orla (0), Paul (1), London (2), cholesterol (3), New York (4), diabetes (5). The relations’ IDs are also assigned sequentially: born (0), has (1).
is useful to specialize the model to the types of links we would like to predict.8
Footnote 8: This is similar to the mask used in [15] to filter the training data.
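A sketch of such a training loop, assuming the data object and model from the sketches above; comm_mask (the GNN connectivity) and loss_mask (the link types used in the loss) are illustrative boolean masks set to all-true here:

```
import torch

model = LinkPredictor(emb_size=5, num_relations=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1, weight_decay=5e-4)
loss_fn = torch.nn.BCELoss()   # binary cross-entropy, i.e., logistic regression

comm_mask = torch.ones(data.edge_index.size(1), dtype=torch.bool)  # placeholder
loss_mask = torch.ones(data.edge_index.size(1), dtype=torch.bool)  # placeholder

for epoch in range(1000):
    optimizer.zero_grad()
    scores = model(data, comm_mask)
    loss = loss_fn(scores[loss_mask], data.y[loss_mask])
    loss.backward()
    optimizer.step()
```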
### _Link prediction_
The link prediction task consists of calling the function model(data) where data includes the links we would like to predict. For instance, if we want to predict if the node _Orla_ is connected to the node _diabetes_ with the relation type _has_ (with the mapping used in the example in Fig. 4), we need to add the tensor([0, 5]) to data.edge_index and tensor([1]) to data.edge_type.
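Continuing the toy example above, the query can be appended and scored as follows (a sketch; the communication mask is extended with False so that the query edge itself does not propagate embeddings):

```
import torch

# Append the query "Orla (0) --has (1)--> diabetes (5)" to the data object.
data.edge_index = torch.cat([data.edge_index, torch.tensor([[0], [5]])], dim=1)
data.edge_type = torch.cat([data.edge_type, torch.tensor([1])])
comm_mask = torch.cat([comm_mask, torch.tensor([False])])

scores = model(data, comm_mask)
has_diabetes = scores[-1] > 0.9   # the query is the last edge that was appended
```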
To predict links connecting nodes not seen in the training, we must first assign the new nodes unique IDs and generate their embeddings using the same function used in the training. However, the GNN communication mask employed to create the embeddings must prevent the unseen/new nodes from affecting the embeddings of the nodes seen in the training.
## IV Use case: Clinical triage with Synthea
This section presents a numerical evaluation of the GNN for clinical triage with the Synthea dataset generator [17]. The experiments' goal is to illustrate how (i) the GNN parameters and (ii) the domain knowledge affect the link prediction accuracy. In particular, we study different GNN connectivities (Sec. IV-C1), embedding sizes, and number of GNN layers (Sec. IV-C2), and the importance of negative edges in the construction of the KG (Sec. IV-C3).
### _Use case and dataset overview_
#### Iv-A1 Use case
The _clinical triage_ problem involves determining the appropriate course of care when a patient presents with symptoms or medical conditions at the first point of contact. This includes deciding whether the patient requires immediate attention from a healthcare professional, such as in an emergency situation (e.g., a heart attack).
#### Iv-A2 Dataset
Synthea is a _synthetic_ healthcare dataset generator that simulates realistic patient medical records. The generated patient records include a variety of information such as demographics, medical history, medications, allergies, and encounters with healthcare providers. The resulting data is designed to be representative of the United States population in terms of age, gender, and ethnicity, and it includes data on over 10 million synthetic patients.
For the clinical triage problem, we access the patient's medical records generated by Synthea and extract the patient's medical conditions and encounters. Each encounter is associated with conditions (e.g., diabetes) and observations (e.g., fever), and belongs to a class that corresponds to one of the following care actions: _wellness_, _inpatient_, _outpatient_, _ambulatory_, and _emergency_. The goal of the clinical triage problem is: Given a patient's medical encounter with some medical conditions and observations, determine the type of care action the patient should receive, i.e., to which care action node should the encounter be connected.
### _Experiment setup_
#### Iv-B1 Kg
We generate a KG for the clinical triage problem with Synthea as shown in Fig. 7. There are five types of nodes (_encounter_, _observation_, _condition_, _patient_, and _care action_) connected by four different types of relations (_encounter-careaction_, _encounter-observation_, _encounter-condition_, and _patient-encounter_). As a remark, Synthea does not provide information about negative edges. Still, since an encounter can only be connected to one care action, we add negative links9 between an encounter and the other care actions. For example, suppose an encounter has a positive link with the
Fig. 6: Example of the optimization procedure using the steps in [15].
Fig. 7: (a) KG schema generated with Synthea for the clinical triage problem. (b) Example of a graph with 5 patients. The edges in (b) are shown as undirected due to the figure size.
Fig. 5: Example of a torch module to implement a GNN with two RGCNConv layers. The layer architecture was proposed in [6] and is available directly in PyG.
care action _inpatient_. In that case, we add negative links in the KG between the encounter and the care actions _wellness_, _outpatient_, _ambulatory_, and _emergency_.
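A sketch of this construction (the triple layout mirrors the columns of Table I; the node names are illustrative):

```
CARE_ACTIONS = ["wellness", "inpatient", "outpatient", "ambulatory", "emergency"]

def negative_care_action_links(encounter, positive_action):
    # For an encounter with one positive care-action link, add negative links
    # (link type False) to the four remaining care actions.
    return [(encounter, "encounter-careaction", action, False)
            for action in CARE_ACTIONS if action != positive_action]

# Example: an encounter whose positive link is to the care action "inpatient".
negative_links = negative_care_action_links("encounter_42", "inpatient")
```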
#### V-B2 Gnn
We will introduce the GNN connectivities in the experiment in Sec. IV-C1. The initial embeddings of the node types _observation_, _condition_, and _care action_ are variables in the optimization problem, while the initial embeddings of the node types _encounter_ and _patient_ are set to zero. This choice of initial embeddings is because, in the testing, we do not want to infer the embeddings of observations, conditions, or care actions not seen in the training. However, we allow new patients and new encounters.
#### V-B3 Training parameters
We conducted the numerical experiments with 50/10 patients in the training/testing sets--sampled uniformly at random. All the experiments are run for 50 realizations, where each realization involves a random sample of 50/10 patients from the Synthea generated data.10 The scoring function is DistMult [6] with weights initialized uniformly at random, the loss function is logistic regression, and the GNN architecture uses RGCNConv layers [6]. The training is carried out with Adam [18] for 1000 epochs with variable learning rate11 and weight decay 0.0005.
Footnote 10: A training sample has, on average, 35k edges and 1.9k nodes (50 patients, 1603 encounters, 153 observations, 107 conditions, and 5 care actions).
Footnote 11: The learning rate is equal to 0.1 for the first 100 epochs, 0.01 for the following 600 epochs, and 0.001 for the last 300 epochs.
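The schedule in footnote 11 can be reproduced, for instance, with a piecewise-constant learning-rate scheduler (a sketch; the model and training step are the ones from Sec. III):

```
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.1, weight_decay=0.0005)
# 0.1 for the first 100 epochs, 0.01 for the following 600, 0.001 for the last 300.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[100, 700], gamma=0.1)

for epoch in range(1000):
    ...   # one optimization step as in Sec. III
    scheduler.step()
```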
### _Experiments_
#### V-C1 GNN connectivity
This experiment studies how the GNN connectivity for the propagation of the embedding information affects the link prediction performance. We consider the four GNN connectivities shown in Figure 8. In short, the C1 connectivity corresponds to the connectivity of the positive links in the KG. The C2 connectivity is obtained by adding "reverse" links in C1.12 Thus, the embedding information flows in any direction. The C3 connectivity is as C2 but without the edge from the node care action to encounter. Finally, the C4 connectivity allows only embedding information to flow from observation and condition nodes to encounter nodes. The rationale behind C4 is that the node types observation and condition can be regarded as the "attributes" or "properties" of an encounter, and therefore the embedding of an encounter should not affect the embeddings of the observation and condition nodes.13
Footnote 12: The reverse links have relation type {_sub. type_}-{_obj. type_}.
Fig. 9 shows the number of correct care action predictions for the four GNN connectivities, where _total_ (blue bar) indicates the number of care actions of that type in the testing data (ground truth). Observe from the figure that the frequency of care actions is skewed, with _wellness_ being the most common and _emergency_ being the least common. Regarding correct predictions, the C4 connectivity has the best overall performance, closely followed by the C3 despite C3 having more than twice as many edges as C4 (see also Table II). The C1 and C2 connectivities did not perform well, but for two different reasons. First, the C1 connectivity does not allow the encounter nodes to receive embeddings from the observation and condition nodes, which are the "characteristics that define an encounter." The C2 connectivity fails because we allow the encounter nodes to access information that is not available in the testing. Specifically, the links between care action and encounter nodes do not exist when creating the embeddings in the testing--since they are the links we would like to predict.14
Footnote 13: i.e., there should be no edge from an encounter node to an observation/condition node.
Footnote 14: We just want to predict the link from encounter to care action, but the reverse edge from care action to encounter will not exist either in the testing.
**Conclusions:** GNN connectivities that may appear intuitive (C1 and C2) do not perform well because (i) the associated KG connectivity does not capture how the nodes' embeddings should interact, and (ii) the training uses links for computing
Fig. 8: The four GNN connectivities used in the experiments in Sec. IV-C1. Acronyms: condition (C), observation (O), encounter (E), patient (P), and care action (CA).
Fig. 9: Illustrating how the GNN connectivities in Fig. 8 affect the prediction of care actions. The results are the average of 50 random samples from the Synthea generated dataset. Each sample consists of 50/10 patients in the training/testing sets. The nodes' embeddings have size 5 and the GNN has 2 layers. The total number of predictions per care action is indicated in blue.
the nodes' embeddings that are not present in the testing. Connecting nodes in every direction (C3) obtains a good performance, but it is slightly outperformed by a bespoke GNN connectivity (C4) that considers only essential connections.
#### Iv-B2 Embedding size and number of GNN layers
This experiment investigates the impact of two basic GNN design parameters: The number of layers and the size of the nodes' embedding. The GNN connectivity used here corresponds to the C4 connectivity described in Sec. IV-C1.
Fig. 10(a) shows the average prediction accuracy of care action as a function of the embedding size for GNNs with 2 and 3 layers. Observe from the figure that, in both cases, the accuracy improves rapidly for embedding sizes ranging from 1 to 3, but beyond that point, the increase in accuracy becomes more gradual.
Fig. 10(b) shows the prediction accuracy as a function of the number of GNN layers when the embedding sizes are fixed to 5 and 10. Observe from the figure that adding more layers decreases the GNN performance, which is in stark contrast to _deep_ CNNs, which use many layers. We reckon this behavior is because of the "over-smoothing" phenomenon also noted in the literature [19, 20], where adding more layers makes the GNN "too well connected," and therefore, the nodes' embeddings become "too similar."
**Conclusions:** The link prediction performance improves with the embeddings' size, but the improvement gains diminish once the embeddings are large enough. Using a large number of GNN layers has a negative impact on performance.
#### Iv-B3 Negative edges
This experiment studies the impact of removing negative edges in the KG. Recall that the negative edges in the KG are from the encounter to the care actions with no positive edge (see also Sec. IV-B1). We use the C4 GNN connectivity introduced in Sec. IV-C1 and obtain the results shown in Figure 11. The figure shows that not using negative edges considerably drops the link prediction performance. This behavior is because the link prediction can be thought of as a binary classification of an edge, and negative edges are a source of negative samples of such "classification." Notably, negative edges are not readily available in the Synthea generated data, and we had to use domain knowledge (i.e., understanding of the data) to add those.
**Conclusions:** Negative edges are crucial for making good link predictions in this use case. The negative links were not available in Synthea directly, and we had to use domain knowledge (i.e., understanding of the graph data) to include them in the KG.
## V Conclusions
This paper studied the flow of embedding information within GNNs and its impact on performance, specifically in a clinical triage use case. We proposed a mathematical model that decouples the GNN connectivity from the connectivity of graph data and found that incorporating domain knowledge in the GNN connectivity is more effective than relying solely on graph data connectivity. Our results also show that negative edges play a crucial role in achieving good performance, while using many GNN layers can lead to performance degradation.
A future research direction is to evaluate how the approach performs on other datasets, and how to automate the learning of the "domain knowledge." Specifically, the identification of key GNN links for transporting embedding information, and the identification of negative edges in the KG that may not be explicitly present in the data.
|
2309.04696 | pun: Fun with Properties; Towards a Programming Language With Built-in
Facilities for Program Validation | Property-based testing is a powerful method to validate program correctness.
It is, however, not widely used in industry as the barrier of entry can be very
high. One of the hindrances is to write the generators that are needed to
generate randomised input data. Program properties often take complicated data
structures as inputs, and it requires a significant amount of effort to write
generators for such structures in an invariant-preserving way.
In this paper, we suggest and formalise a new programming language
\textsf{pun}; a simple functional programming language with properties as a built-in
mechanism for program validation. We show how to generate input for
\textsf{pun} properties automatically, thus, providing the programmer with a
low barrier of entry for using property-based testing. We evaluate our work on a
library for binary search trees and compare the test results to a similar
library in Haskell. | Triera Gashi, Sophie Adeline Solheim Bosio, Joachim Tilsted Kristensen, Michael Kirkedal Thomsen | 2023-09-09T06:28:28Z | http://arxiv.org/abs/2309.04696v2 | # pun: Fun with Properties;
###### Abstract
Property-based testing is a powerful method to validate program correctness. It is, however, not widely used in industry as the barrier of entry can be very high. One of the hindrances is to write the generators that are needed to generate randomised input data. Program properties often take complicated data structures as inputs, and it requires a significant amount of effort to write generators for such structures in an invariant-preserving way.
In this paper, we suggest and formalise a new programming language pun; a simple functional programming language with properties as a built-in mechanism for program validation. We show how to generate input for pun properties automatically, thus providing the programmer with a low barrier of entry for using property-based testing. We evaluate our work on a library for binary search trees and compare the test results to a similar library in Haskell.
## 1 Introduction
To reduce the risk of defects in modern software, it is common to perform a form of validation that ensures the software is built according to its specification [3]. As an example, software developers may rely on testing for providing evidence that their software behaves as intended, and thereby increase the likelihood that their work is correct. However, software testing can be hard and cumbersome and, thus, is not a very popular activity among developers [4, 11].
Property-based testing is a testing methodology that supports comprehensive software testing. When compared to unit and integration testing, property-based testing may justify greater confidence in program correctness; Instead of a handful of inputs and their expected output, the programmer provides a set of properties/assertions, that should hold for a specific program fragment. Idealised, a tool then automatically generates random test inputs and checks that these properties hold. This way, the programmer is relieved of coming up with inputs and predicting the outcome. Each newly generated test input increases confidence, rather than the same hundred test cases being run over again. As
the so-called _pesticide paradox_ states, running the same test cases repeatedly will not find new bugs [3].
A popular tool for performing property-based testing is called QuickCheck [1]. QuickCheck is a potent combinator library capable of generating test cases based on _assertions_ and _test input generators_. Although QuickCheck is a well-known testing tool (especially in the academic environment), a significant hindrance to its adoption in industry settings is the need for handwritten generators [7]. In practice the programmer not only needs to write the properties, but also a set of generators that can generate random inputs for the tests. QuickCheck can generate test inputs out of the box for standard types, but for more complex user-defined data types and properties, the programmer is required to write a generator for the data type by hand.
In this work we want to alleviate the burden of writing generators. For this, we propose pun, a functional programming language with a built-in construct for defining and checking properties. This can in turn be used to automatically generate test input generators, to facilitate the use of property-based testing. Automatic program generation is a hard problem (and in general undecidable), so it is expected to come at the cost of quality compared to hand-written generators. However, with the conjecture that any testing is better than none, this can still give a significant improvement.
To give a better feeling of the problem of implementing generators, consider the following example. We will look at the built-in addition operator, \(|\)+\(|\), where a property we may want to test is commutativity. Here the pun program might be as simple as a single line:
```
property add-is-commutative m n. m + n == n + m.
```
The property states that any two pun terms \(|\)m\(|\) and \(|\)n\(|\) will satisfy the above equation. The addition operation in pun is only defined for terms of type integer. So, in order to check it, we must be able to generate two arbitrary terms of type \(|\)integer\(|\) and substitute these terms into the equation. The resulting term has type \(|\)boolean\(|\), and the property holds if it evaluates to \(|\)true\(|\) for any choice of \(|\)m\(|\) and \(|\)n\(|\).
In this case, since QuickCheck comes with a reasonable integer generator out of the box, it is relatively easy to generate input data for the property - but for more complex properties and types, this is not the case. Take for instance the pun property
```
property plus-zero-identity f x. f (x + 0) == (f (x)) + 0.
```
where there are several choices to be made. How would you write generators for the terms \(|\)f\(|\) and \(|\)x\(|\)? In both of the above examples, pun automatically generates the appropriate closed terms and substitutes them into the term in question. pun trusts that the property holds if it holds for several (currently 50) choices of subterms (\(|\)m\(|\) and \(|\)n\(|\), or \(|\)f\(|\) and \(|\)x\(|\)), and it communicates this fact by outputting a dot per test passed.
```
testing plus-commutes: ... ok
testing plus-zero-identity: ... ok
```
Suppose now that we also want to check if subtraction is commutative and write the program
```
property sub-is-commutative m n. m - n == n - m.
```
Again, our property checker will generate the appropriate terms and do the substitution. The only difference in this case is that pun will output the test case for which the property did not hold. In this example, that would be the first test case in which the two generated integer terms are not equal, for instance
```
testing subtraction-commutes: .. "failed with counter example :"
(((x -> x + 2) 3) - 7 == 7 - ((x -> x + 2) 3))
"after 2 tests"
```
In order to save the programmer the work of writing generators by hand, we need to write two functions. One function that, given information about the program bindings, can generate a generator of the appropriate type. A second function that uses those generators and substitutes the generated terms into the property. The problem of generating these two functions will be the focus of this paper.
A re-implementation of QuickCheck is Luck [7], a domain-specific language that in principle has the same goal as our work. However, the way that writing generators is made easier in Luck is by decorating predicates with lightweight annotations, while we take a more general approach.
Structure: Section 2 will give a larger example that details our approach. Section 3 formalises the syntax and type system of pun. Section 4 describes how to generate generators from pun programs. Section 5 discusses how to limit the generators for cases where inputs quickly diverge, while Section 6 evaluates our approach. Finally, in Section 7 we conclude the work.
The Haskell implementation of pun, and the benchmarks can be found at [https://github.com/jtkristensen/pun-lang](https://github.com/jtkristensen/pun-lang)
## 2 Approach to the Work
While illustrative, the examples above are too small to sufficiently demonstrate the problems with generators. Therefore, we will in the following detail pun on a larger example. In [5], Hughes describes different techniques for writing good properties for pure functions in Haskell. As an example, he uses binary search trees (BSTs) and five common operations on these, and uses QuickCheck to test different properties for each operation. In the Haskell code he needs a hand-written generator to test on the BST.
Below, we have implemented the example in pun and Haskell for comparison.
```
insert : integer -> integer
       -> (bst integer integer -> bst integer integer).
insert k1 v1 t =
  case t of
    leaf -> [node leaf k1 v1 leaf]
  ; [node l k2 v2 r] ->
      if equal k1 k2
      then [node l k2 v1 r]
      else if k1 <= k2
      then [node (insert k1 v1 l) k2 v2 r]
      else if k1 > k2
      then [node l k2 v2 (insert k1 v1 r)]
      else [node (leaf) k1 v1 (leaf)].
property insert-valid k v t. if valid t then valid (insert k v t) else true.
property find-post-present k v t. find_equal (find k (insert k v t)) ([node leaf k v leaf]).
```
Given that we have a function that checks whether a tree is a valid binary search tree, we can formulate a property that inserting a key-value pair should result in a valid BST. This is what Hughes describes as a _validity_ property. Another property to formulate is that you should be able to find a key after inserting it, which is a post-condition property. The way we have implemented the \(|\)find\(|\) function in pun is to return a leaf if it cannot find the key, otherwise it returns the key stored in a node.
The equivalent Haskell implementation looks like so:
```
insert :: Ord k => k -> v -> BST k v -> BST k v
insert k v Leaf = Branch (Leaf) k v (Leaf)
insert k v (Branch left k' v' right)
  | k == k' = Branch left k' v right
  | k <  k' = Branch (insert k v left) k' v' right
  | k >  k' = Branch left k' v' (insert k v right)
insert k v _ = Branch (Leaf) k v (Leaf)

instance (Ord k, Arbitrary k, Arbitrary v) => Arbitrary (BST k v) where
  arbitrary = do
    kvs <- arbitrary
    return $ foldr (uncurry insert) nil (kvs :: [(k, v)])
  shrink = filter valid . genericShrink

prop_InsertValid :: Key -> Val -> Tree -> Bool
prop_InsertValid k v t = valid (insert k v t)

prop_FindPostPresent :: Key -> Val -> Tree -> Property
prop_FindPostPresent k v t = find k (insert k v t) === Just v
```
When comparing the pun and Haskell implementations, one can see that the properties themselves are similar (aside from differences in syntactic sugar). The big difference is providing the \(|\)Arbitrary\(|\) instance, which tells QuickCheck how to generate an arbitrary BST. One is also presented with the option of writing a shrinker for trees in order to report the smallest possible BST that makes a property fail. In pun, the user does not need to provide either, thus trading off the potential quality of writing a good generator by hand for the ergonomics of getting one for free.
There are several things to consider when writing a generator, to ensure that the generator generates intended terms. Hughes uses a \(|\)valid\(|\) function in his arbitrary instance to make sure that only valid BSTs are generated. When writing generators you will often have to write similar functions for the arbitrary instance to ensure that what you generate has the desired properties. These functions must then also be correct and tested themselves before being used in the arbitrary instance. It is also difficult to make sure that the generator generates terms of useful sizes. Often you have to use combinators such as
\(|\)frequency\(|\) to increase the probability of something being generated. You need to investigate what your generator generates, and see if it produces the desired results. In all, there is much work required to write a generator, and none of these tasks are easy.
There is, however, an issue with not having the option of modifying, constraining, or defining predicates about what is being generated. In Hughes' arbitrary instance, he uses \(|\)insert\(|\) in order to generate trees that are ordered correctly, with smaller keys in the left subtree and greater keys in the right subtree. He also specifies that shrinking a BST should result in a valid BST. Our generator will generate arbitrary pairs of keys and values for the tree, so there is a high probability that the generated tree does not satisfy this BST property. There are two options in this case: one could define weaker properties, such as the one above that only checks that the insert property holds for the generated trees that are valid, but then the question is how often we randomly generate valid trees. Another option is to transform the output of the generator so that it has this property, by for instance writing a function \(|\)validify\(|\) that takes the randomly generated tree and gives it the correct structure:
```
property insert-valid k v t. valid (insert k v (validify t)).

validify : bst integer integer -> bst integer integer.
validify t =
  case t of
  ; leaf -> leaf
  ; [node l k v r] -> insert k v (union (validify l) (validify r)).
```
It is difficult to generate algebraic data types that must have a certain structure, unless you are able to modify the generated terms to have the desired properties. Even though the programmer does not need to provide a generator, they might have to provide a function such as \(|\)validify\(|\), and it is worth noting that this can be as difficult to write as the generator itself.
## 3 The pun Explained in Detail
The language pun is a higher-order functional language that uses call-by-value parameter passing. pun is furthermore extended with a built-in facility for property-based testing.
pun is a garden variety functional programming language, inspired by simple functional programming languages such as FUN1 from Pierce [9], and REC from Winskel [12]. As such, this section omits the details with the operational semantics in the interest of saving space for discussing term generation.
Footnote 1: Our version of pun is actually closer related to FUN from Andrzej Filinski’s unpublished lecture notes on formal semantics and types.
### Syntax of pun
The syntax of pun is given in Figure 1. For terms \(\overline{n}\) denotes an integer literal and \(x\) ranges over an infinite set of immutable _variables_. You can work with terms using addition, tuples, and the product decomposition **fst** and **snd**. Furthermore, it contains the standard conditional, functions definition (lambda abstractions), function application, let-expressions and recursion.
A lambda abstraction is an anonymous function defined by its input \(x\) and output \(t_{0}\). Function application is written without parentheses surrounding the argument, unless they are needed for syntactic disambiguation, and is evaluated successfully on validly typed input. The let-expression corresponds to binding the term \(t_{1}\) to the name \(x\), and evaluating the term \(t_{2}\) with \(t1\) instead of \(x\). The recursion rule **rec**\(x.t_{0}\) substitutes into function body \(t_{0}\) the whole term. I.e., evaluating **rec**\(x.t_{0}\) results in \(t_{0}[\)**rec**\(x.t_{0}/x]\). Thus, this term may be non-terminating.
Every canonical form \(c\) is a well-formed _closed_ term: i.e., it has no free variables and can eventually be evaluated (or normalised) to a unique normal form. Note in particular that the body \(t_{0}\) of a \(\lambda\)-expression only can contain \(x\) as a free variable. This ensures that the application of the abstraction to a term, eventually will result in a closed term.
Additionally, for the binary search trees, the language has been extended with the corresponding terms and with case-statements. Binary search trees are implemented in the expected way: A binary search tree is either a simple **leaf** or a **node** which consists of a left subtree (\(t_{0}\)), a key (\(t_{1}\)), a value (\(t_{2}\)), and a right subtree (\(t_{3}\)).
Since it has been added for the sake of a benchmark test, the implementation of case-statements is specific to binary search trees. Its semantics corresponds to pattern-matching the term \(t_{0}\) to either a simple **leaf** or to a pattern \(p\). Then, the appropriate term, either \(t_{1}\) or \(t_{2}\), is evaluated. A pattern \(p\) is a subset of the canonical terms.
### Typing of pun
The grammar of pun types and typing rules for pun are given in Figure 2. A type is either simple or compound. The simple types are integer and Boolean. Compound types are product types (tuples), lambda abstractions (functions), and binary search trees. In a binary search tree, two types \(\tau_{1}\) and \(\tau_{2}\) are the key and value types, respectively. Finally, there is a special type **unit**, which is interpreted in the usual way: It has only one valid term, \(|()|\), which can hold no further information.
The typing rules for pun follow the conventional formalisation of a simply typed functional language (e.g. [9]), with the exception of case-statements, which are only defined for binary search trees. Branching on booleans and integers can be performed using the conditional-term. Thus, to simplify and avoid overlap
Figure 1: The syntax of pun terms \(t\), canonical pun terms \(c\), and pun patterns \(p\).
in the typing judgements, we restrict cases to binary search trees, as that is the only term without a branching term. The typing rules are a needed foundation, as our approach is to generate well-formed, well-typed, terminating terms.
## 4 The Generator Generator
The goal of this work is to be able to check properties without having to provide a generator used types, and rather have them generated from pun programs. A property consists of a name, arguments and a term which is the property that must hold. This property is a term that must have the type boolean. We want to substitute the arguments in the property term with generated terms of the appropriate types. The first step is writing a generator that can generate generators, which in turn generate terms that will be the arguments for the tests.
### Generating Generators
In the addition example, we need a generator for integer terms, because the addition operator in pun only is defined for integers. In the case of the binary search tree functions, we can look to the key and value types to know what types are valid input types. For a well-defined function, it is in general possible to infer the types of the required generators directly from the program. The more interesting task, then, is to write a function \(|\)generateGenerator\(|\).
Figure 2: The grammar of pun types \(\tau\), and typing rules for pun.
The function should be able to generate a generator for arbitrary well-formed and well-typed terms of the requested type that can be evaluated when substituted into a property. To do so, it needs to know the requested type, but also information about existing name bindings and types in the program. \(|\)generateGenerator\(|\) needs to know the global name bindings so it can generate arbitrary names for function abstractions, let-statements and recursive terms without binding a name that already exists on the top-level of the program. These would cause conflict upon evaluation.
It also needs to know what names have been bound to which types in the scope of the function, for two reasons. First, so that it may avoid conflicts when using names inside function abstractions, let-statements and recursive terms, since these names are not relevant outside the scope of each term. I.e., in the term \((\lambda x.x+5)\), the name \(x\) can be used freely outside the function abstraction and other bindings to that name are not relevant within the scope of the function. Second, so that it may with some probability generate a term of a type that is already inhabited and bound to a name elsewhere in the program.
When writing \(|\)generateGenerator\(|\), a significant challenge was how to generate _complex_ terms of a given type. I.e., we wish to conceive a strategy for generating terms of various complexities and types that can be _evaluated_ to a term of the correct type. For example, a valid term of type integer might be \((\lambda x.x+5)\) 3, since it normalises to 8. However, in attempting to generate more interesting terms, we may also generate some terms that are ill-typed, ill-formed, or non-terminating.
Our approach to this problem is influenced by Fetscher et al. [2] in their paper on generating well-typed terms from type systems. The requested type to be generated, dictates which typing rules could have resulted in such a term. We therefore start with a goal term on the form \(\Gamma\vdash t_{0}:\tau_{0}\), meaning that from the program environment environment \(\Gamma\), we wish to generate a term \(t_{0}\) of the requested type \(\tau_{0}\). We then work our way upwards and generate arbitrary derivations that result in this goal.
To generate the derivations, we must look to our typing rules to see which of them could have resulted in a term of type \(\tau_{0}\). For the sake of argument, let us say we wish to generate a generator for pairs of type \((\mathbf{bool}\times\mathbf{int})\). Then there are multiple appropriate rules. However, we may also notice that some of the applicable rules, require us to generate another, _larger_ term. For instance, we could have generated another pair term \(t_{1}:((\mathbf{bool}\times\mathbf{int})\times\mathbf{int})\), and then applied the \(\mathbf{fst}\)-rule to get a valid result \(\mathbf{fst}(t_{1}):((\mathbf{bool}\times\mathbf{int})\times\mathbf{int})\). But this way, we might end up generating bigger and bigger terms indefinitely, e.g., a term of type \((\tau_{0}\times(\tau_{1}\times(\tau_{2}\times...)))\). Therefore, we have to be careful about how we select rules. Some rules, such as the axiomatic rules, result in a well-typed, but smaller term.
To bound the recursion of \(|\)generateGenerator\(|\), we use the QuickCheck combinator \(|\)sized\(|\). It constructs a generator that depends on the \(|\)size\(|\) parameter, which is passed from QuickCheck to the generator. Initially, QuickCheck generates small test cases and increases the size to generate more complex test cases as testing progresses. In our case, the parameter controls the depth of the recursion and therefore of the derivations. When we have also disqualified some rules to begin with, this is a solution to bounding recursion of the term _generators_. We discuss the issue of recursive _terms_ in Section 5.
By selecting amongst the applicable rules randomly, while being careful to
For arbitrary types \(\tau\), \(m<n\),
Figure
inferring a type for each variable, and then generating a term of that type. The rest is finicky book keeping, that keeps track of the number of tests that have been performed etc (for error messages).
However, while writing type checkers and interpreters is a well explored area of research, writing generator generators remains a novel and challenging endeavour. On one hand, when generating primitively typed terms we run the risk of generating terms that are too "boring". For instance, we may generate a simple numeric pun term for the type **int**, even though **int** can also be the result of a term that was typeable by the Application rule from Figure 2. To help programmers who want to test their program thoroughly, the generator should come up with terms of different complexity, because those are more interesting to test the program with. We have made some decisions for recursive, lambda and let terms with regard to what the user would do if writing them by hand.
On the other hand, we also face the challenge of bounding the generated terms. For instance, for terms of type **int**, we could generate terms such as
\[\mbox{\bf rec}\ x.\ x+1\]
that do not terminate.
The current implementation of pun limits such recursion in a very conservative way to ensure that terms are terminating. For instance, for rec we do not allow the bound variable to appear in its body at all, and since there is no other recursion possible in the language, never generating recursive terms ensures that all generated terms terminate. This strictness can be relaxed to recursion schemes that are known to terminate, but this has been left as future work, together with an approach that generates terms that terminate by the size-change termination principle [8].
Defining what "smaller" means with regard to terms is non-trivial in the general case. Instead, we have made the conservative choice to only allow the use of the **rec** rule when the argument to the rule \(f\) is a function \(f:a\to b\) such that the function argument \(a\) approaches a terminating base case for each call. Thus, the program below is a valid pun term containing a recursive function definition and call, since the argument to the **rec** rule is a function, whose argument \(n\) gets smaller, and thus closer to the base case, for each call.
```
let fib = rec f. (\n. if n <= 1 then 1 else f (n - 1) + f (n - 2))

in fib 5
```
The above restriction certainly disallows some interesting recursive terms, but it still allows for a class of interesting recursive functions that we can be sure will terminate.
### Function Abstractions and Let-Statements
When generating a function abstraction or let-statement, we have taken steps to increase the probability that the program will use the generated variable name in the body of the term. This is because it seems pointless for the program to generate names that are not used. If a programmer were to write a test program, they would likely define one of those terms with the intention of using
the names. For a programming language with side-effects it would make sense to have a let-statement that is only used for computing something with side-effects. This could for instance be an assertion or a print for debugging. Once the programmer is done debugging, these terms can easily be commented out since they are now not used. In pun there are no side effects, so there is no reason to have a let-statement or abstraction, where the variable name is not used in the term body.
The types of the terms generated inside the body of a let-statement or function abstraction, are randomly generated. Therefore, it is not sufficient to only increase the probability of choosing a variable. If the let-term is of type integer and the body never has to generate something of type integer, then the name will not be used. We therefore altered the program further for the type generator to have a higher probability of generating a type that can be found in the bindings, which then allows for a variable to be generated.
## 6 Evaluation
We will now evaluate pun by benchmarking it against a Haskell implementation of binary search trees as Hughes describes in [5]. The Haskell implementation is used as a benchmark for pun. The properties tested when providing a generator, like Hughes does for QuickCheck, should yield the same result when written in pun. This means that when we introduce an error in the Haskell implementation and the equivalent error in pun, then pun should provide an error message as well. The benchmark suite therefore includes several faulty properties, as Hughes describes in the paper, to check that pun does in fact find the same errors as QuickCheck does for the Haskell implementation. Using Hughes' examples exposes pun to several kinds of properties: validity, metamorphic, inductive, model-based, post-condition and preservation of equivalence. This way we can check that pun finds the same errors as QuickCheck regardless of the kind of property that is being checked.
Part of the result is checking whether pun finds a bug that has been planted, another part is measuring how much work is saved by not having to provide a generator. The amount of work saved is measured in lines of code. We discuss comparative results at the end of the paper.
For metamorphic properties, Hughes checks whether two trees have the same content by first converting them to lists and then comparing them. Because pun does not have built-in lists, we use a particular binary search tree as a way of representing lists when defining our properties in pun. We have a function written in pun that takes a binary search tree and modifies it to a binary search tree that is isomorphic to lists. The regular trees contain a key and a value as separate information about the tree, whereas the modified tree contains both the key and value as a tuple where only the key would originally have been stored. The value is replaced by unit, which is used as a way of representing that there is no extra information there. All the left subtrees are replaced with a _leaf_ which does not add any new information. The tree is built from left to right, so any new node appears as a right subtree. Below is an example of a tree with the regular binary search tree structure.
After using the function \(|\)model\(|\), we end up with the same tree but with the structure of the model that we have chosen for lists.
Hughes also uses lists as a model for the model-based properties, in which case we also use the same function that turns a regular binary search tree to one that is isomorphic with lists. The reason this structure is isomorphic is because there are equally many elements in the tree as there would have been in a list. Each key that contains a tuple _(key, value)_ counts as an element. There are also mappings in both directions, such that you can have a list and represent it as this specific binary tree structure, and also be able to go from that tree back to its original list form.
Initial benchmarking shows that at least blatant errors are easily detected by pun, though it may run more tests before finding a counter example. For instance, when changing the implementation of \(|\)insert\(|\) from Section 2 to
respectively. The property \(|\,\)find-post-present\(|\) usually fails within 50 tests in both implementations.
## 7 Conclusion
In this paper we have proposed the programming language pun, a higher-order functional language extended with properties. Based on the formalisation of the language, we have shown how we can generate QuickCheck generators from well-typed programs. We have added limitations to the generation of terms, to ensure that the tests will terminate. To show that pun can find the same bugs in a program as a Haskell implementation with QuickCheck, we extended the language with binary search trees. The evaluation showed that we could find the
same errors in many cases, but that one sometimes needs a transformation function (such as \(|\)validify\(|\) in Section 2), or that the tests may need to run more times.
In the future we would like to extend pun with more data types like lists, which would make it easier to define certain properties, and even generalise this to algebraic data types that the programmer can define themselves. This will make pun programs easier to implement and improve readability, which in turn makes the code easier to maintain. We would also need to iterate on the heuristics for interesting terms to generate a larger class of terms that will result in terminating tests. And finally, to investigate if reverse interpretation of the property generation can improve the accuracy [6, 10].
|
2309.15943 | Scalable Multi-Robot Collaboration with Large Language Models:
Centralized or Decentralized Systems? | A flurry of recent work has demonstrated that pre-trained large language
models (LLMs) can be effective task planners for a variety of single-robot
tasks. The planning performance of LLMs is significantly improved via prompting
techniques, such as in-context learning or re-prompting with state feedback,
placing new importance on the token budget for the context window. An
under-explored but natural next direction is to investigate LLMs as multi-robot
task planners. However, long-horizon, heterogeneous multi-robot planning
introduces new challenges of coordination while also pushing up against the
limits of context window length. It is therefore critical to find
token-efficient LLM planning frameworks that are also able to reason about the
complexities of multi-robot coordination. In this work, we compare the task
success rate and token efficiency of four multi-agent communication frameworks
(centralized, decentralized, and two hybrid) as applied to four
coordination-dependent multi-agent 2D task scenarios for increasing numbers of
agents. We find that a hybrid framework achieves better task success rates
across all four tasks and scales better to more agents. We further demonstrate
the hybrid frameworks in 3D simulations where the vision-to-text problem and
dynamical errors are considered. See our project website
https://yongchao98.github.io/MIT-REALM-Multi-Robot/ for prompts, videos, and
code. | Yongchao Chen, Jacob Arkin, Yang Zhang, Nicholas Roy, Chuchu Fan | 2023-09-27T18:40:36Z | http://arxiv.org/abs/2309.15943v2 | Scalable Multi-Robot Collaboration with Large Language Models: Centralized or Decentralized Systems?
###### Abstract
A flurry of recent work has demonstrated that pre-trained large language models (LLMs) can be effective task planners for a variety of single-robot tasks. The planning performance of LLMs is significantly improved via prompting techniques, such as in-context learning or re-prompting with state feedback, placing new importance on the token budget for the context window. An under-explored but natural next direction is to investigate LLMs as multi-robot task planners. However, long-horizon, heterogeneous multi-robot planning introduces new challenges of coordination while also pushing up against the limits of context window length. It is therefore critical to find token-efficient LLM planning frameworks that are also able to reason about the complexities of multi-robot coordination. In this work, we compare the task success rate and token efficiency of four multi-agent communication frameworks (centralized, decentralized, and two hybrid) as applied to four coordination-dependent multi-agent 2D task scenarios for increasing numbers of agents. We find that a hybrid framework achieves better task success rates across all four tasks and scales better to more agents. We further demonstrate the hybrid frameworks in 3D simulations where the vision-to-text problem and dynamical errors are considered. See our project website4 for prompts, videos, and code.
Footnote 4: [https://yongchao98.github.io/MIT-REALM-Multi-Robot/](https://yongchao98.github.io/MIT-REALM-Multi-Robot/)
## I Introduction
Multi-robot systems have great potential as a tool for operations that require the completion of many tasks, such as warehouse management. Planning for these systems is often challenging due to heterogeneous robot capabilities, coordination during tasks requiring multiple robots, inter-dependencies of separate tasks, and general safety considerations (e.g. collision avoidance). Further, the difficulty scales with the number of robots. Previous work has used algorithm-based [1, 2, 3] or learning-based [4, 5] methods to control multi-robot systems. Such approaches are typically tuned for a specific scenario, requiring significant engineering effort that limits generalization into novel tasks or scenarios.
Motivated by the ability of pre-trained large language models (LLMs) to generalize to new task domains [6, 7], there have been many recent efforts to use them for single-agent task planning [8, 9]. Planning performance is significantly improved through clever use of the context provided to the LLM, whether via techniques for initial prompts (e.g., in-context learning, chain-of-thought) or iterative re-prompting with feedback (e.g., environment state changes, detected errors). Given this success, there is new interest in investigating LLMs as task planners for multi-robot systems [10, 11]. These recent efforts address systems consisting of two or three robots; they assign an LLM to each robot and have the models engage in collaborative dialogue rounds to try to find good plans.
Scaling to systems of many robots and tasks with longer horizons is an issue for approaches that assign each robot its own LLM agent. First, both the number of possible coordinating actions and the possible action inter-dependencies grow exponentially with the number of agents, making the reasoning more difficult for the language models. Second, the context provided to each LLM contains the responses of each other LLM for the current round of dialogue in addition to the history of dialogue, actions, and states from prior rounds; so, scaling the number of agents also scales the context token length requirements toward their modern limits and increases the runtime of LLM inference (and API costs). Moreover, the immediately relevant information in the context can become diluted in longer prompts. These limitations are beyond the scope of prior work [10, 11].
Our goal is to preserve the generalizability of LLMs as task planners for multi-robot settings while addressing the challenges of scaling to many agents. We argue that different frameworks for integrating LLM planners into multi-robot task planning can improve both scalability and task planning success rates. In this work, we compare four different frameworks (Figure 3) of cooperative dialogue for task planning among multiple LLMs for increasing numbers of robots. For each, planning is performed incrementally in which the LLMs collaborate to find the next action to take for each agent in the system. The first approach (DMAS) uses a decentralized communication framework in which each robot is provided its own LLM agent and dialogue proceeds in rounds of turn-taking. The second approach (CMAS) uses a centralized framework in which a single LLM produces the next action for all robots in the system. We also propose two hybrid versions of these two approaches: (1) a variant of DMAS that adds a central LLM responsible for providing an initial plan to prime the dialogue (HMAS-1) and (2) a variant of CMAS that gives each robot an LLM with which to provide robot-local feedback to the central LLM planner. To further address issues of token length due to historical dialogue and planning context, we also propose a truncated prompt that only includes state-action information from prior dialogue rounds. We evaluate the performance of each approach in four different task planning environments inspired by warehouse settings. To further demonstrate LLMs as multi-robot planners, we apply these approaches to a simulated 3D manipulation task that requires coordination among the manipulators.
## II Problem Description
This work focuses on task planning for multi-robot systems. We consider a cooperative multi-robot task scenario with \(N\) robots and \(M\) LLM agents. We assume that each LLM agent has full knowledge of the environment and each robot's capabilities. The robot capabilities can be heterogeneous, requiring the planners to assign tasks to robots accordingly. In order to provide each LLM with the task goals and observations, we manually define functions to translate them into text prompts. We also define functions to map the output of the LLM planners into pre-defined robot actions. Planning is performed iteratively, choosing the next action for each robot to take. At each iteration, the \(M\) LLM agents engage in collaborative dialogue to find a consensus for the next set of robot actions. Given the next action, the robots act in the environment, and the resulting new state is provided as context to the LLMs for the next planning iteration.
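To make this iterative procedure concrete, the following minimal Python sketch outlines one possible organization of the planning loop; all callables passed in (state/prompt translation, dialogue, and execution) are illustrative placeholders rather than an implementation from this work.

```python
from typing import Callable, Dict

def plan_and_execute(
    get_state: Callable[[], dict],
    goal_reached: Callable[[dict], bool],
    state_to_prompt: Callable[[dict], str],
    run_dialogue: Callable[[str], Dict[str, str]],
    execute: Callable[[Dict[str, str]], dict],
    max_iterations: int = 25,
) -> bool:
    """Iteratively query the LLM agents for the next joint action until the goal is met."""
    state = get_state()
    for _ in range(max_iterations):
        if goal_reached(state):
            return True                          # task completed
        prompt = state_to_prompt(state)          # goal and observations rendered as text
        actions = run_dialogue(prompt)           # agents converse until consensus, e.g. {"robot_1": "move(...)"}
        state = execute(actions)                 # robots act; the new state is observed
    return False                                 # iteration limit exceeded, counted as a failure
```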
## III Methods
Given the goal and the current environment state as text, the LLM agents engage in dialogue per the communication framework (Section III-B) in order to generate an initial set of actions for the robots to take. Before execution, this action set is checked by an external rules-based verifier for syntax errors; any errors are provided as feedback to re-prompt for correction. Given a syntactically correct set of actions, the robots then execute those actions in the environment, resulting in a new environment state. We show examples of the prompt structure for the HMAS-1 and HMAS-2 approaches in Figure 1 and Figure 2 respectively. We describe the main components of the initial prompt in the next subsection.
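As an illustration of the syntactic checking and re-prompting step described above, the sketch below assumes a generic LLM call and a rules-based verifier supplied as callables; it is not the exact procedure used in our experiments.

```python
from typing import Callable, List, Tuple

def verify_and_correct(
    propose: Callable[[str], str],            # LLM call: prompt -> proposed action set
    check_syntax: Callable[[str], List[str]], # rules-based verifier: plan -> list of error messages
    base_prompt: str,
    max_attempts: int = 3,
) -> Tuple[str, bool]:
    """Re-prompt the planner with verifier feedback until the plan parses or attempts run out."""
    plan = ""
    prompt = base_prompt
    for _ in range(max_attempts):
        plan = propose(prompt)
        errors = check_syntax(plan)           # e.g. unknown action names or malformed output
        if not errors:
            return plan, True
        # Append the feedback so the LLM can correct its previous output on the next attempt.
        prompt = base_prompt + "\nYour previous plan had these errors:\n" + "\n".join(errors)
    return plan, False                        # exceeding the limit is treated as a failure
```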
### _Main Components of LLM Prompt_
We use the same basic structure to prompt each LLM agent, but the specifics of the prompt depend on the individual agent's role. The prompt structure consists of the following main components:
* **Task Description**: the requirements and constraints of the task for the multi-robot system to accomplish.
* **Step History**: the history of dialogue, environment states, and actions from previous steps in the iterative planning process. We describe this in more detail in Section III-C.
* **Current State**: the objects in the environment (boxes) and their properties (position and volume).
Fig. 1: Simplified prompt example of the HMAS-1 local agent. The acquired ‘Response1’ acts as the initial plan or is otherwise sent to the next local agent for further discussion.
Fig. 3: Four LLM-based multi-agent communication frameworks compared in this work. The circles represent robots that may have actions in the current step and the ‘LLM’ text represents each LLM agent. The overlap between one circle and one ‘LLM’ text means that the robot is delegated with one LLM agent to express its special opinions to other agents. The ‘LLM’ text without the overlapped circle represents a central planning agent.
Fig. 2: Simplified prompt example of the HMAS-2 central agent. The generated ‘Response1’ is sent to local agents for feedback. Once the central-local iteration terminates, the output plan is checked for syntactic correctness.
* **Robot State & Capability**: the capabilities (available actions) of each robot and their current location. This prompt component is synthesized by our pre-defined functions. Note that the available actions include possible collisions; it is the responsibility of the planner to find safe plans.
* **Agent Specialized Prompt**: the prompt for each local agent emphasizes its own state and indicates the responses and initial plans of the other agents. For frameworks with a central agent, the prompt for that agent includes feedback from the local agents. Further, each agent is provided a persona.
* **Communication Instruction**: the instruction for how to respond to other agents & how to format the output.
* **Plan Syntactic Checking Feedback**: (optional) explanation of syntax errors in the generated output. The syntactic checking ensures that the output is formatted correctly and uses available actions.
### _Communication Frameworks for Sub-task Plan_
We compare the four LLM-based multi-robot planning frameworks shown in Figure 3. The Decentralized Multi-agent System framework (DMAS) is shown in Figure 3(a) and is the framework used in previous works on LLMs as multi-robot planners [10, 11]. Each robot is assigned an LLM planner and another agent to whom it should send its comments. The agents use a turn-taking approach for dialogue, as illustrated. The comments from prior agents in the dialogue are concatenated and included as part of the prompt for the next agent; thus, the prompt length increases over the duration of the dialogue for the current planning iteration. The dialogue ends once the current agent outputs "EXECUTE" followed by the action for each agent.
The Centralized Multi-agent System framework (CMAS) is shown in Figure 3(c). This approach incorporates only a single LLM as a central planner that is responsible for assigning the actions for each robot at each planning iteration.
We propose two Hybrid Multi-agent System frameworks, HMAS-1 (Figure 3(b)) & HMAS-2 (Figure 3(d)), that are variants of DMAS and CMAS respectively. In HMAS-1, a central LLM planner proposes an initial set of actions for the current planning iteration that is provided to each of the robots' LLM planners; the robots' LLMs then proceed as done in DMAS. In HMAS-2, a central LLM planner generates an initial set of actions for each robot, as done in CMAS; however, each robot has an LLM agent that checks its assigned action and provides feedback to the central planner. In the case of a local agent disagreeing with its assigned action, the central agent will re-plan. This process repeats until each robot's LLM agrees with its assigned action. Note that in both HMAS-1 and HMAS-2, only the agents that will take an action participate in the dialogue, thus reducing the duration of dialogue and the corresponding number of tokens in the prompts.
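The central-local feedback loop of HMAS-2 can be sketched as follows; the per-robot feedback interface and the "agree" convention are simplifying assumptions for illustration only.

```python
from typing import Callable, Dict, Tuple

def hmas2_iteration(
    central_plan: Callable[[str], Dict[str, str]],    # central LLM: context -> {robot: action}
    local_agents: Dict[str, Callable[[str], str]],    # per-robot LLM: assigned action -> "agree" or a comment
    context: str,
    max_rounds: int = 5,
) -> Tuple[Dict[str, str], bool]:
    """One HMAS-2 planning iteration: a central proposal refined by local-agent feedback."""
    actions = central_plan(context)
    for _ in range(max_rounds):
        # Only robots that act in the current step keep an LLM in the loop, shortening the dialogue.
        comments = {r: local_agents[r](a) for r, a in actions.items() if r in local_agents}
        objections = {r: c for r, c in comments.items() if not c.lower().startswith("agree")}
        if not objections:
            return actions, True                       # every local agent accepts its assigned action
        feedback = "\n".join(f"{r}: {c}" for r, c in objections.items())
        actions = central_plan(context + "\nLocal agent feedback:\n" + feedback)
    return actions, False                              # feedback-round limit exceeded
```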
### _Step History_
Including the full history of the dialogue, environment states, and actions rapidly exhausts the context token budget for the LLM planners, constraining the performance of these frameworks. We therefore compare three approaches in an ablation study of the historical information included in the context: (1) no historical information, (2) only state-action pair history (no dialogue), and (3) the full history. We report results for all three approaches only in the ablation study; since we found that (2) offers the best trade-off between task performance and token efficiency (see Section IV-C), all other experiments are performed with only the state-action pair history.
### _Token Length Constraint_
We use gpt-4-0613 and gpt-3.5-turbo-0613 in this work, which have context token limits of 8192 and 4097, respectively. To make sure the total token length (prompt + response) does not surpass these limits, we employ a sliding context window over the step history part of the prompt; the step history will include as many of the most recent steps as permissible without surpassing a total prompt length of 3500 tokens.
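A possible realization of this sliding context window is sketched below; `count_tokens` stands in for any tokenizer-based counter, and the 3500-token budget mirrors the limit stated above.

```python
from typing import Callable, List

def truncate_history(steps: List[str], count_tokens: Callable[[str], int],
                     budget: int = 3500, fixed_tokens: int = 0) -> List[str]:
    """Keep as many of the most recent state-action steps as fit within the prompt budget."""
    kept: List[str] = []
    used = fixed_tokens                    # tokens already taken by the non-history prompt parts
    for step in reversed(steps):           # walk from the newest step backwards
        cost = count_tokens(step)
        if used + cost > budget:
            break
        kept.append(step)
        used += cost
    return list(reversed(kept))            # restore chronological order
```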
## IV Experiments
### _Testing Environments_
To compare the four different LLM-based planning frameworks, we design four multi-robot task planning environments inspired by a warehouse setting. In order to evaluate how these frameworks scale to many robots, we instantiate each environment with increasing numbers of robots. For BoxNet1 and BoxNet2, we run trials of 4, 8, 16, and 32 robots. For Warehouse and BoxLift, we run trials with 4, 6, 8, and 10 robots. For each number of robots in each environment, we perform 10 trials with varied initial conditions, resulting in 40 total trials per environment.
We track whether each trial resulted in successful task completion. A task is considered a failure in the following conditions: (1) the dialogue among agents results in a context length beyond the token limit, (2) the agents do not reach consensus before a pre-specified limit of dialogue rounds, (3) the syntactic checking iterates beyond a pre-specified limit, (4) the number of planning iterations exceeds a limit before reaching the goal, and (5) the plan results in a collision. We choose the limits for (1), (2), (3), and (4) such that a failure is very likely the result of endless dialogue or actions. Note that only BoxNet2 and Warehouse can have collisions.

Fig. 4: Four multi-robot task planning environments.
**BoxNet1** Figure 4(a) shows the BoxNet1 environment. The environment consists of cell regions, robot arms, colored boxes, and colored goal locations (circles) for each box. The goal is to move each box into its associated goal location in the fewest time steps. The robot arms are confined to the cell they occupy. Each arm has three possible actions: (1) move a box within its cell to a neighboring cell, (2) move a box within its cell to a goal location within its cell, and (3) do nothing. We assume no collisions.
**BoxNet2** Figure 4(b) shows the BoxNet2 environment, which is similar to BoxNet1. In this environment, each box can only be moved between cells by being placed at a corner (red circles), and a given corner can only hold one box at a time; we treat placing two or more boxes on the same corner as a collision (and thus task failure). Each arm in this environment has three possible actions: (1) move a box from a corner to a different corner of the cell, (2) move a box from a corner to a goal location within its cell, and (3) do nothing. The constraint on box movement and the possibility of collision make this scenario more challenging than BoxNet1.
**Warehouse** Figure 4(c) shows the Warehouse environment. In this environment, mobile manipulators are tasked with moving all of the boxes (green) to the target region (blue) in the fewest time steps. Each robot can only move between permissible locations (red) by traveling along the gray paths; in a single time step, a robot cannot move beyond an adjacent permissible location. We treat two robots occupying the same location as a collision, resulting in task failure. A robot can pick up a box only when at a permissible location that is immediately adjacent to it. Each robot has six possible actions: (1) & (2) move left or right (if a permissible location exists), (3) pick up an adjacent box, (4) place a box in the target region, (5) move from the target region to any of the adjacent permissible locations, and (6) do nothing.
**BoxLift** Figure 4(d) shows the BoxLift environment. In this environment, robots are tasked to lift each box (green) in the fewest time steps. The robots are able to lift different amounts of weight, and the boxes have different sizes and weights. In a single time step, multiple robots can be assigned to lift the same box. A box is lifted if the total lifting capability of the assigned robots is greater than the box's weight. As an additional challenge, the LLM agents are only able to observe the size of each box, not its weight. This is meant to simulate real situations in which box size roughly correlates with weight. The size and weight of each box are roughly proportional, but we introduce some variability. The LLM agents are provided feedback about whether or not the box was successfully lifted. This environment attempts to test the LLM planner's ability to efficiently assign heterogeneous robots to collaborative tasks and also incorporate prior experience when planning.
### _Metrics_
To measure how well each framework is able to plan, we report the average task success rate and average number of steps per plan. To measure the token efficiency and API usage, we also report the average number of tokens used per plan and the average number of API calls per plan. The average number of steps per plan, the average number of tokens per plan, and the average number of API calls per plan only include plans that were successful. We therefore report normalized values for those three metrics. Let \(\mathcal{M}\) be the set of values for a given metric \(M\), e.g. average API calls, such that \(m_{i}\in\mathcal{M}\) is the value of metric \(M\) for the \(i^{th}\) framework. Let \(\hat{\mathcal{M}}\) be the set of values for the normalized metric \(\hat{M}\) such that the normalized metric value \(\hat{m}_{i}\in\hat{\mathcal{M}}\) for the \(i^{th}\) framework is:
\[\hat{m}_{i}=\frac{m_{i}}{min(\mathcal{M})} \tag{1}\]
The best value for each normalized metric is 1.0, which is attained by the framework that performs best on that metric.
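For clarity, the normalization can be computed as in the short sketch below; the metric values shown are made up purely for illustration.

```python
def normalize_metric(values: dict) -> dict:
    """Divide each framework's metric value by the best (minimum) value, following Eq. (1)."""
    best = min(values.values())
    return {framework: v / best for framework, v in values.items()}

# Made-up example values for the average number of API calls per plan.
api_calls = {"DMAS": 42.0, "HMAS-1": 30.0, "CMAS": 6.0, "HMAS-2": 12.0}
print(normalize_metric(api_calls))  # CMAS maps to 1.0; the others are reported relative to it
```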
### _Results_
Table I shows the experimental results for the four LLM-based multi-robot planning frameworks.
**Communication Frameworks** We note a few key results. The HMAS-2 framework outputs plans with the highest quality since it achieves the highest success rates and the fewest actions per plan. The CMAS framework has the fewest API calls and uses the fewest tokens; this is expected as it uses a single LLM and only requires one API call to generate the plan (assuming no syntax errors). The DMAS framework uses the most API calls and tokens, and also has the lowest task success rate. We observe that the LLM agents in DMAS often take many rounds of dialogue per planning step to decide to act, resulting in long dialogues. During the dialogue, the agents often repeat what previous agents have said without contributing anything new; or, agents will repeatedly propose the same action, diluting the context of important information [12]. HMAS-1, our hybrid variant of DMAS, primes the dialogue with an initial plan from a central LLM planner. This modification significantly improves the performance of the dialogue that follows. We hypothesize that the initial plan serves as a better starting point than in DMAS, thus leading to better performance metrics. However, HMAS-2 outperforms HMAS-1 in all metrics. We show one example of HMAS-1 dialogue in Figure 5. It shows that the LLM agents can get stuck on their proposed action, leading to inefficient dialogue.
We also report the trend of task success rates as a function of increasing numbers of agents, as shown in Figure 6. For low numbers of agents, CMAS is competitive with HMAS-2; however, CMAS scales significantly worse to more agents. For more challenging tasks like Warehouse, CMAS performs worse than HMAS-2 for all numbers of agents, indicating that a single central LLM planner tends to generate unreasonable plans for more complex multi-robot task scenarios. Unlike CMAS, HMAS-2 is able to check and correct for errors
in plans, such as identifying actions that would result in a collision. Figure 7 shows one example of dialogue correcting the flawed plan via feedback from local LLM agents.
**Step History Method** In Table I, we report the results of HMAS-2 for different step history prompts, as described in Section III-C. The framework performs much worse when provided no history of prior actions or dialogue rounds than when provided with the state-action pair history; this is consistent with our intuition that the past actions provide useful information for future decisions. The framework performs a bit worse when provided the full history than when provided with the state-action pair history. We hypothesize that this is a result of context dilution [12] from long dialogue histories.
**GPT-3 Performance** A common trend among pre-trained LLM evaluations is that some capabilities do not emerge until a model reaches sufficient size or is trained on sufficient amounts of quality data [9]. We report the results of CMAS and HMAS-2 when using GPT-3 as the LLM and find that it performs significantly worse than GPT-4. It is a useful reminder that the quality of LLM-based planners depends on the quality and capability of the underlying LLM.
### _3D Simulation_
In addition to the 2D scenarios for our experiments, we also perform experiments in a 3D environment simulated using Pybullet [13], as illustrated in Figure 8. The task and environment are similar to BoxNet1 and BoxNet2. The environment consists of colored boxes, colored bowls, and robot arms. The goal is to move each colored box into its associated bowl of the same color in the fewest actions. Each arm is immobile and confined to actions within its workspace (indicated by the dotted blue lines). Each arm can only pick and place boxes that are within its workspace or on the border.
Fig. 5: Simplified communication example of HMAS-1. The local agents hesitate on possible actions, making the dialogue endless.
Fig. 6: Success rate vs. robot number for CMAS and HMAS-2 methods in four testing environments.
Pick and place actions are executed via pre-defined motion primitives. Unlike BoxNet2, boxes can be placed anywhere that is reachable along the boundary, so collisions are possible but unlikely. We again test the scalability to more agents and instantiate the environment with either three or six arms (each has its own workspace). The 3D environment has an additional complexity of using an image-to-text model (ViLD [14]) to provide bounding boxes and text descriptions for each object. Further, the 3D simulation has a richer environment model that permits action execution errors due to dynamical factors (e.g., a box slips out of a gripper) that require re-planning. The iterative nature of the LLM-based planning frameworks in this work naturally handles such instances of replanning.
We perform ten runs for each scenario. Table II shows the results of the experiments with three and six agents, respectively. Both CMAS and HMAS-2 achieve 100% success rates. CMAS used more action steps than HMAS-2 in the six-robot scenario, consistent with our results from the 2D environments showing that CMAS performs worse than HMAS-2 on more complex tasks.
## V Related Work
**LLMs for Robotics** A representative set of prior work [15, 6, 16, 7, 17] uses LLMs to select actions from pre-defined skill primitives and complete tasks step by step with text or code as the intermediate representation, such as SayCan [8], Inner Monologue [18], Code-As-Policy [19], and ProgGPT [20]. To connect task planning and motion planning, prior work such as Text2Motion [21] and AutoTAMP [2] studies integrating LLMs with traditional Task and Motion Planners. Other work explores querying LLMs to output rewards for robot actions so that they can be coupled with independent reward-based planners. The reward formats can be real values [22, 23], temporal logics [24, 2, 25], or patterns [26]. Two recent works [10, 11] first extend LLMs to multi-robot settings, but the number of robots is limited to two or three, and the scalability of the frameworks and step-history approaches is not considered. A recent work [27] considers the scalability of LLM-based single-robot planning in broader environments with more objects.
**Dialogues and Debates of LLMs** Outside the robotics domain, LLM-based multi-agent discussion has shown impressive capability in promoting research on social behaviors [28, 23], dialogue-based games [29], and software development [30]. Recent work shows that discussion among multiple LLM agents can improve factuality and accuracy [31, 32]. Prior work focuses more on understanding LLM behaviors or improving the solution to a single question.
**Multi-Robot Collaboration** Multi-robot collaboration has been extensively studied for many decades, especially for multi-arm and multi-drone motion planning [33, 34, 35]. Traditional methods rely on sampling-based approaches for trajectory generation [36] or on formal methods to optimize Task and Motion Planning [37, 3, 35]. Recent work has also explored learning-based methods as alternatives [38, 39].
## VI Conclusion
Our work considers the scalability of LLM-based multi-robot task planning for long-horizon tasks to systems with many robots with heterogeneous capabilities. We propose several new frameworks for collaborative LLM dialogue and find that hybrid approaches with both central and local LLM planners produce the most successful plans and scale best to large numbers of agents. Future work can explore more complex tasks with more hierarchical frameworks of robot groups, e.g., one agent for each specialized robot sub-group.
Fig. 8: 3D simulation environments: robot arms collaborate to move all the boxes into the same colored bowls. Each robot arm has a limited workspace and can only move within its assigned region (divided by the blue lines).
Fig. 7: Simplified communication example of HMAS-2. The local agents detect the collision risk and report it to the central agent. |
2309.12792 | DurIAN-E: Duration Informed Attention Network For Expressive
Text-to-Speech Synthesis | This paper introduces an improved duration informed attention neural network
(DurIAN-E) for expressive and high-fidelity text-to-speech (TTS) synthesis.
Inherited from the original DurIAN model, an auto-regressive model structure in
which the alignments between the input linguistic information and the output
acoustic features are inferred from a duration model is adopted. Meanwhile the
proposed DurIAN-E utilizes multiple stacked SwishRNN-based Transformer blocks
as linguistic encoders. Style-Adaptive Instance Normalization (SAIN) layers are
exploited into frame-level encoders to improve the modeling ability of
expressiveness. A denoiser incorporating both denoising diffusion probabilistic
model (DDPM) for mel-spectrograms and SAIN modules is conducted to further
improve the synthetic speech quality and expressiveness. Experimental results
prove that the proposed expressive TTS model in this paper can achieve better
performance than the state-of-the-art approaches in both subjective mean
opinion score (MOS) and preference tests. | Yu Gu, Yianrao Bian, Guangzhi Lei, Chao Weng, Dan Su | 2023-09-22T11:06:04Z | http://arxiv.org/abs/2309.12792v1 | # DurIAN-E: Duration Informed Attention Network for Expressive Text-to-Speech Synthesis
###### Abstract
This paper introduces an improved duration informed attention neural network (DurIAN-E) for expressive and high-fidelity text-to-speech (TTS) synthesis. Inherited from the original DurIAN model, an auto-regressive model structure in which the alignments between the input linguistic information and the output acoustic features are inferred from a duration model is adopted. Meanwhile the proposed DurIAN-E utilizes multiple stacked SwishRNN-based Transformer blocks as linguistic encoders. Style-Adaptive Instance Normalization (SAIN) layers are exploited into frame-level encoders to improve the modeling ability of expressiveness. A denoiser incorporating both denoising diffusion probabilistic model (DDPM) for mel-spectrograms and SAIN modules is conducted to further improve the synthetic speech quality and expressiveness. Experimental results prove that the proposed expressive TTS model in this paper can achieve better performance than the state-of-the-art approaches in both subjective mean opinion score (MOS) and preference tests.
Yu Gu, Yianrao Bian, Guangzhi Lei, Chao Weng, Dan Su
Tencent AI Lab
Expressive TTS, DurIAN, SwishRNN, Transformer, Style-Adaptive Instance Normalization, DDPM
## 1 Introduction
Text-to-speech (TTS) synthesis is the task of generating intelligible and natural-sounding synthetic speech waveforms given input text. TTS is an indispensable component in various applications with a speech interface, such as car navigation systems, voice assistants and screen readers. Due to the advances of deep learning, many state-of-the-art TTS systems based on deep neural networks are able to synthesize more natural and higher-quality speech compared with traditional unit-selection concatenative and statistical parametric speech synthesis approaches. These acoustic models and neural vocoders can be divided into autoregressive (AR) and non-autoregressive methods. Some sequence-to-sequence acoustic models rely on content-based attention mechanisms to address the one-to-many alignment problem and generate mel-spectrograms from linguistic features frame by frame using AR decoders [1]. Explicit phoneme duration models rather than attention modules are also employed in some non-autoregressive TTS acoustic models, in which the acoustic feature sequence of each utterance can be generated in parallel [2, 3]. The DurIAN model [4] also involves an additional duration model to avoid typical attention errors such as skipping and repeating, and meanwhile an AR decoder is retained to improve speech quality by combining both the linguistic information and the acoustic information from previously predicted acoustic features.
Although those TTS systems have synthesized speech qualitatively similar to real human speech, there still exists a huge gap between TTS-synthetic speech and human speech in terms of expressiveness. Many researchers have also focused on expressive TTS technology for decades, which aims to model and control the speaking style and can further broaden TTS application prospects. At present, there are two mainstream approaches to model the speaking style information: one uses pre-defined categorical style labels as the global control condition of TTS systems to denote different speaking styles [5, 6] and the other imitates the speaking style given a reference speech [7, 8]. For the first kind of approach, the style control strategy is more intuitive and interpretable, which is more suitable for practical applications. For the second one, the global style tokens or style embeddings extracted from the training datasets can enrich the diversity of expressiveness, and additional style labels are not required.
Style-adaptive and transfer methods have also been applied in expressive TTS systems. Style-Adaptive Layer Normalization (SALN) layers, which receive the style representation vector and predict the gain and bias of the input feature vector, were applied in expressive TTS systems [9, 10]. Style-Adaptive Instance Normalization (SAIN) layers [11] were also employed to learn a distribution with a style-specific mean and variance for each channel of the mel-spectrogram, where each channel represents a single frequency range. Therefore, compared with SALN, where a single mean and variance is learned for the entire feature map, decoders with SAIN blocks achieve better expressiveness in terms of style reflection [11]. SwishRNN [12], an extremely simple recurrent module, was built into a Transformer-based masked language model to increase model stability and accuracy. Recently, denoising diffusion models [13] were also applied as acoustic models of TTS systems, which convert noise into mel-spectrograms conditioned on the linguistic features and can achieve better speech quality [14, 15]. Motivated by the success of the SwishRNN-based Transformer, a linguistic encoder is proposed in the DurIAN-E model, in which SwishRNNs substitute for the feed-forward blocks of the Transformer [16]. SAIN-based modules are employed in both the frame-level encoder and the DDPM-based denoiser to better model expressiveness with the pre-defined rich categorical style labels. An AR decoder similar to that of the original DurIAN model is retained to take full advantage of both style-specific linguistic features and acoustic features.
This paper is organized as follows. Section 2 gives a brief review of the modules and models related with our work. Section 3 introduces the proposed DurIAN-E model in this paper and the constructed expressive TTS system in detail. The experimental conditions and results are described in Section 4 and finally Section 5 concludes this paper.
## 2 Related Work
### DurIAN
DurIAN [4] is an AR model in which the alignments between the input text and the output acoustic features are inferred from a duration model, which is different from the conventional end-to-end attention mechanism used in speech synthesis systems such as Tacotrons [1]. The architecture of DurIAN mainly contains a _skip-encoder_ to encode the phoneme and prosody sequences, a _duration-model_ which aligns the input phoneme sequence and the target acoustic frames at the frame level, an AR decoder network that generates target acoustic features frame by frame, and a _post-net_ [1] to further enhance the quality of the predicted mel-spectrograms.
The _skip-encoder_ utilizes a sequence of symbols \(\{\mathbf{x}[i]\}_{i=1}^{N}\) as input which contains both the phoneme sequence and the prosodic boundaries among different phonemes. The output hidden state sequences are encoded as \(\{\mathbf{h}[i]\}_{i=1}^{N^{\prime}}\), where \(N\) is the length of the input sequence, and \(N^{\prime}\) is the length of the input phoneme sequence without the prosodic boundaries. It's worth noting that the length \(N^{\prime}\) is smaller than the length \(N\) of the input sequence because the hidden states associated with the prosodic boundaries are excluded by _skip-encoder_. Then according to the frame numbers predicted from the _duration-model_, the sequence \(\{\mathbf{h}[i]\}_{i=1}^{N^{\prime}}\) is expanded by replication as frame aligned hidden states \(\{\mathbf{e}[i]\}_{i=1}^{T}\) where \(T\) is the sum number of acoustic frames. During the training stage, the ground truth duration of each phoneme is obtained through forced alignment using GMM-HMMs. The duration model is jointly trained conditioned on \(\{\mathbf{h}[i]\}_{i=1}^{N^{\prime}}\) to minimize the \(\ell 2\) loss between the predicted and target duration obtained from forced alignment.
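The frame-level expansion of the hidden states by phoneme durations can be sketched in PyTorch as follows; this is a minimal illustration of the replication step, not the authors' implementation.

```python
import torch

def expand_by_duration(hidden: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Replicate phoneme-level hidden states according to their predicted frame counts.

    hidden:    (N', D) phoneme-level states (prosodic-boundary states already skipped)
    durations: (N',)   integer number of frames per phoneme
    returns:   (T, D)  frame-aligned states, where T = durations.sum()
    """
    return torch.repeat_interleave(hidden, durations, dim=0)

# Example: three phonemes lasting 2, 1 and 3 frames, respectively.
h = torch.randn(3, 256)
d = torch.tensor([2, 1, 3])
e = expand_by_duration(h, d)   # shape (6, 256)
```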
Similar with other end-to-end models such as Tacotrons, the expanded hidden states \(\{\mathbf{e}[i]\}_{i=1}^{T}\) which are exactly paired with the target acoustic frames are employed as the AR decoder to predict each mel-spectrogram frame autoregressively. Then the output acoustic features from the decoder network is passed through a _post-net_[1] with convolutional layers and residual connections to further improve the quality of the predicted mel-spectrograms. The entire acoustic models are trained to minimize the \(\ell 1\) loss.
### SwishRNN
SwishRNN [12] consists of a multiplicative gating recurrent cell which uses only two matrix multiplications and an extremely simple sequential pooling operation. Therefore SwishRNN is much faster than other heavier RNNs such as LSTM and GRU. As illustrated in Fig.1, SwishRNN first conducts two linear transformations of input sequence \(\mathbf{X}\):
\[\{\mathbf{x}_{1}[i]\}_{i=1}^{l}=\mathbf{X}\mathbf{W}_{1},\quad\{\mathbf{x}_{2}[i]\}_{i=1}^{l} =\mathbf{X}\mathbf{W}_{2} \tag{1}\]
where \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) are parameter matrices optimized during training and \(l\) is the sequence length. The hidden vectors \(\{\mathbf{c}[i]\}_{i=1}^{l}\) are calculated as follows:
\[\mathbf{c}[i]=\texttt{Swish}\left(\mathbf{c}[\text{i-1}]-\mathbf{x}_{1}[i]\right)+\mathbf{x}_ {1}[i], \tag{2}\]
where \(\texttt{Swish}()\) represents the element-wise Swish activation function [17].1 Eq.(2) can be interpreted as a pooling operator in which the greater value between \(\mathbf{c}[\text{i-1}]\) and \(\mathbf{x}_{1}[i]\) is selected.2 Finally, the output sequence is calculated through a linear layer with weight \(\mathbf{W}_{3}\):

\[\mathbf{H}=\mathbf{W}_{3}\left((\mathbf{C}+\mathbf{b}_{c})\odot\sigma(\mathbf{X}_{2}+\mathbf{b}_{\sigma})\right)+\mathbf{b}_{3}, \tag{3}\]

Footnote 1: Swish\((\mathbf{x})=\texttt{sigmoid}(\mathbf{\alpha}\cdot\mathbf{x}+\beta)\cdot\mathbf{x}\).

Footnote 2: Note \(\mathbf{c}[i]=\mathbf{x}_{1}[i]\) if \(\mathbf{x}_{1}[i]\gg\mathbf{c}[\text{i-1}]\), and \(\mathbf{c}[i]=\mathbf{c}[\text{i-1}]\) if \(\mathbf{x}_{1}[i]\ll\mathbf{c}[\text{i-1}]\).
where \(\sigma()\) is a sigmoid gating activation function, \(\odot\) is the element-wise product, \(\mathbf{C}\) and \(\mathbf{X}_{2}\) represent the concatenated matrices of \(\{\mathbf{c}[i]\}_{i=1}^{l}\) and \(\{\mathbf{x}_{2}[i]\}_{i=1}^{l}\) respectively.
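A minimal PyTorch sketch of the SwishRNN cell defined by Eqs. (1)-(3) is given below; it uses the SiLU activation (Swish with fixed parameters) as a simplification and is not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwishRNN(nn.Module):
    """Minimal sketch of the SwishRNN cell in Eqs. (1)-(3); not the reference implementation."""

    def __init__(self, d_in: int, d_hidden: int):
        super().__init__()
        self.w1 = nn.Linear(d_in, d_hidden, bias=False)   # produces x1, Eq. (1)
        self.w2 = nn.Linear(d_in, d_hidden, bias=False)   # produces x2, Eq. (1)
        self.w3 = nn.Linear(d_hidden, d_hidden)           # output projection with b3, Eq. (3)
        self.b_c = nn.Parameter(torch.zeros(d_hidden))
        self.b_sigma = nn.Parameter(torch.zeros(d_hidden))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T, d_in); the loop below is the sequential pooling of Eq. (2).
        x1, x2 = self.w1(x), self.w2(x)
        c = torch.zeros_like(x1[0])
        states = []
        for t in range(x1.size(0)):
            c = F.silu(c - x1[t]) + x1[t]                 # SiLU used in place of learnable Swish
            states.append(c)
        C = torch.stack(states, dim=0)
        return self.w3((C + self.b_c) * torch.sigmoid(x2 + self.b_sigma))  # Eq. (3)
```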
### Denoising diffusion probabilistic model
DDPMs take inspiration from non-equilibrium statistical physics in which the main idea is to iteratively destroy the
Figure 1: The SwishRNN cell.
structure in data through a _diffusion process_, and afterward, to learn a _reverse process_ to restore the data structure. The _diffusion process_ is modeled as a Gaussian transformation chain from data \(\mathbf{x}_{0}\) to the latent variable \(\mathbf{x}_{T}\) with pre-defined variance schedule \(\beta_{1},\cdots,\beta_{T}\):
\[q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=\prod_{t\geq 1}q(\mathbf{x}_{t}|\mathbf{x}_{t\!-\!1}), \tag{4}\]
where \(q(\mathbf{x}_{t}|\mathbf{x}_{t\!-\!1})\sim\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\bm {x}_{t\!-\!1},\beta_{t}\mathbf{I})\), \(T\) is the total iteration step and \(q(\mathbf{x}_{0})\) is the original data distribution. The _reverse process_ parameterized with \(\theta\) is a denoising function to remove the added noise to restore the data structure, which is defined by:
\[p_{\theta}(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t\geq 1}p_{\theta}(\mathbf{x}_{t\!-\!1 }|\mathbf{x}_{t}). \tag{5}\]
The denoising distribution \(p_{\theta}(\mathbf{x}_{t\!-\!1}|\mathbf{x}_{t})\) is often modeled by a conditional Gaussian distribution as \(p_{\theta}(\mathbf{x}_{t\!-\!1}|\mathbf{x}_{t})\sim\mathcal{N}(\mathbf{x}_{t\!-\!1};\mu_{ \theta}(\mathbf{x}_{t}),\sigma_{t}^{2}\mathbf{I})\), where \(\mu_{\theta}(\mathbf{x}_{t},t)\) and \(\sigma_{t}^{2}\mathbf{I}\) are the corresponding mean and variance. Through the parameterized reverse process with the well-trained parameter \(\theta\), the target data \(\mathbf{x}_{0}\) can be sampled from a Gaussian noise \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) iteratively for \(t=T,T-1,\cdots,1\), in which \(\mathbf{x}_{t\!-\!1}\) is sampled following distribution \(p_{\theta}(\mathbf{x}_{t\!-\!1}|\mathbf{x}_{t})\). The training goal is to maximize the evidence lower bound (ELBO\(\leq\log p_{\theta}(\mathbf{x}_{0})\)), which can be optimized to match the true denoising distribution \(q(\mathbf{x}_{t\!-\!1}|\mathbf{x}_{t})\) with the parameterized denoising model \(p_{\theta}(\mathbf{x}_{t\!-\!1}|\mathbf{x}_{t})\) with:
\[\text{ELBO}=\sum_{t\geq 1}\mathbb{E}_{q(\mathbf{x}_{t})}[D_{KL}(q(\mathbf{x}_{t\!-\!1}|\mathbf{x}_{t})||p_{\theta}(\mathbf{x}_{t\!-\!1}|\mathbf{x}_{t}))], \tag{6}\]
where \(D_{KL}\) denotes the Kullback-Leibler (KL) divergence. Then Eq.6 can be further transformed and reparameterized as a simple regression problem to optimize the MSE loss between the sampled and predicted noise terms [13].
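The resulting noise-prediction objective can be sketched generically as below; the code assumes a denoising network `model(x_t, t)` and a pre-computed \(\bar{\alpha}\) schedule, and is illustrative rather than specific to DurIAN-E.

```python
import torch
import torch.nn.functional as F

def ddpm_training_loss(model, x0: torch.Tensor, alpha_bar: torch.Tensor) -> torch.Tensor:
    """Simplified epsilon-prediction objective obtained from Eq. (6) after reparameterization."""
    B = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (B,), device=x0.device)   # one random step per sample
    a = alpha_bar[t].view(B, *([1] * (x0.dim() - 1)))                  # broadcast to x0's shape
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps                       # sample from q(x_t | x_0)
    return F.mse_loss(model(x_t, t), eps)                              # match predicted vs. true noise
```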
## 3 Durian-E
### Architecture
The model structure of the proposed DurIAN-E is depicted in Fig. 2. DurIAN-E basically adopts the original model architecture of DurIAN. The _skip-encoder_ mechanism and AR decoder networks are reserved and auxiliary duration information is also involved to reduce word skipping/repeating errors. The variance predictors [2] which include phoneme-level duration, pitch and pitch range predictors are employed to add enough variance information to the hidden sequence and improve the prosody modeling ability. The ground-truth values of duration, pitch and pitch range extracted from the recordings are used as inputs into the hidden sequence in the training stage and the predicted values are utilized in the inference stage. At the same time, those ground-truth values are also used as targets to train the variance predictor. Style and speaker embeddings are also added to the hidden input sequence to better distinguish different utterances from multiple styles and speakers.
Different from the original DurIAN, we split the encoder into a phoneme-level linguistic encoder using SwishRNN-based Transformers and a frame-level encoder combining SAIN to improve expressiveness. Meanwhile, motivated by diffusion-based TTS models [15], the _post-net_ is replaced by a style-controllable DDPM-based denoiser to achieve better speech quality. Sections 3.2 and 3.3 introduce these modules in detail.
### Encoders
#### 3.2.1 Linguistic encoder
The linguistic encoder is a phoneme-level model and only employs linguistic features as input. As described in Section 2.1, the linguistic information sequence includes the phoneme sequence and the prosodic boundaries among different phonemes. The linguistic encoder is composed of 4 Transformer blocks with a hidden size of 256, and each block interleaves a two-headed attention module, a SwishRNN layer, a residual connection and layer normalization, as shown in the left side of Fig. 3. Compared with the FFN blocks in the standard Transformer [16], the recurrent architecture in SwishRNN is more capable of modeling the temporal relationship and order between different phonemes in the linguistic sequence, and combining recurrence and attention can also improve model stability.

Figure 3: Blocks in the encoders of DurIAN-E.

Figure 2: Model structure of DurIAN-E.
#### 3.2.2 Frame-level encoder
For the frame-level encoder, the input sequence is expanded using the given phoneme durations, so its length is equal to that of the target mel-spectrogram and much larger than that of the phoneme-level input. Due to the longer input sequence, the frame-level encoder does not follow the recurrent backbone of the linguistic encoder, for greater efficiency. Similar to FastSpeech, 4 Transformer blocks with a hidden size of 256, using 2 convolutional layers with a kernel size of 9 rather than SwishRNNs, are employed to encode the expanded intermediate hidden states. As displayed in the right side of Fig. 3, the layer normalization is substituted by a SAIN layer to improve expressiveness and achieve style control. SAIN is defined as follows:
\[\text{Style-AdaIN}(\mathbf{x},\mathbf{s})=G(\mathbf{s})\frac{\mathbf{x}-\mu(\mathbf{x})}{\sigma( \mathbf{x})}+B(\mathbf{s}), \tag{7}\]
where \(\mathbf{x}\) is a single channel of the feature maps, \(\mathbf{s}\) is the style embedding, \(\mu(\cdot)\) and \(\sigma(\cdot)\) denote the channel mean and standard deviation, and \(G\) and \(B\) are learned linear projections for computing the adaptive gain and bias according to the style vector \(\mathbf{s}\). The normalization is applied along the sequence, and the style-specific statistics (mean and variance) of different channels are independent of each other in every utterance. The style vector \(\mathbf{s}\), generated from the categorical style labels, shares the identical embedding with the variance predictor and the DDPM-based denoiser.
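A simplified PyTorch sketch of a SAIN layer following Eq. 7 is shown below; the tensor shapes and the small numerical-stability constant are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SAIN(nn.Module):
    """Simplified Style-Adaptive Instance Normalization following Eq. 7."""

    def __init__(self, channels: int, style_dim: int):
        super().__init__()
        self.gain = nn.Linear(style_dim, channels)   # G(s)
        self.bias = nn.Linear(style_dim, channels)   # B(s)

    def forward(self, x: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T) feature maps; s: (B, style_dim) style embedding.
        mu = x.mean(dim=-1, keepdim=True)                    # per-channel mean along the sequence
        sigma = x.std(dim=-1, keepdim=True) + 1e-5           # per-channel standard deviation
        x_norm = (x - mu) / sigma
        return self.gain(s).unsqueeze(-1) * x_norm + self.bias(s).unsqueeze(-1)
```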
### DDPM-based denoiser
As illustrated in Fig. 2, the output mel-spectrograms generated frame by frame by the AR decoder are passed to a denoiser to further improve the speech quality. The _post-net_ used in DurIAN is replaced by a DDPM. Similar to other TTS applications using diffusion models [15, 18], the non-causal WaveNet [19] architecture is also adopted. As exhibited in Fig. 4, the denoiser is composed of a stack of 20 residual blocks, each including a convolutional layer with a kernel size of 3, a gated activation unit and element-wise adding operations. The outputs of each block are added together through skip-connections to predict the noise term \(\mathbf{\epsilon}_{\theta}(\cdot)\) at the \(t\)-th step. A step encoder is employed to distinguish between different steps. The denoiser is conditioned on the output sequence of the frame-level encoder. The SAIN layer in Eq. 7 is also deployed in each residual block to enhance the effect of different style labels.
The shallow diffusion mechanism proposed in DiffSinger [15] is also adopted in DurIAN-E. For the training stage, the loss for the denoiser is calculated as
\[\mathbb{E}_{\mathbf{m}_{0},\mathbf{\epsilon}}\Big{[}\lambda_{t}\left\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathbf{m}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon},\mathbf{s},\mathbf{c},t)\right\|^{2}\Big{]}\,, \tag{8}\]
where \(\mathbf{m}_{0}\) is the ground truth mel-spectrogram, \(\mathbf{\epsilon}\) is the Gaussian noise, \(\mathbf{s}\) is the style embedding, \(\mathbf{c}\) denotes the output of the frame-level encoder, and \(\lambda_{t}\) and \(\bar{\alpha}_{t}\) are the pre-defined coefficients corresponding to the step index \(t\). \(t\) is randomly sampled from 1 to the total step \(T\) for every training step. We scale down the gradients back-propagated from the denoiser to other parts of the model by a factor of 10 so that the influence of the auxiliary denoiser on other modules is reduced. For the inference process, the denoised mel-spectrogram \(\tilde{\mathbf{m}}_{0}\) is generated step by step as follows:
\[\tilde{\mathbf{m}}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\tilde{\mathbf{m}}_{t}-\frac {1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\mathbf{\epsilon}_{\theta}(\tilde{\mathbf{m} }_{t},\mathbf{s},\mathbf{c},t)\right)+\sigma_{t}\mathbf{z},\]
where \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(t\) decreases gradually from \(S\) to 1. Step \(S\) can be much smaller than the total step \(T\) under the shallow diffusion strategy. In DurIAN-E, \(\tilde{\mathbf{m}}_{S}\) is actually the output mel-spectrogram from the AR decoder, and we set \(T=70\) and \(S=30\) empirically3.
Footnote 3: Too large an \(S\) may lead to the distortion of spectral details, while too small an \(S\) can degrade the denoiser performance.
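The shallow reverse process starting from the AR decoder output \(\tilde{\mathbf{m}}_{S}\) can be sketched as follows; the noise-prediction interface, the indexing of the pre-computed schedules, and the convention of omitting noise at the final step are assumptions of this illustration.

```python
import torch

@torch.no_grad()
def shallow_diffusion_sample(eps_model, m_S, style, cond, alphas, alpha_bars, sigmas, S: int):
    """Sketch of the shallow reverse process that starts from the AR decoder output at step S.

    alphas, alpha_bars and sigmas are assumed to be pre-computed tensors indexed by the step t.
    """
    m_t = m_S                                       # coarse mel-spectrogram from the AR decoder
    for t in range(S, 0, -1):
        eps = eps_model(m_t, style, cond, torch.tensor([t]))
        coef = (1.0 - alphas[t]) / (1.0 - alpha_bars[t]).sqrt()
        m_t = (m_t - coef * eps) / alphas[t].sqrt()
        if t > 1:                                   # a common convention: no noise at the final step
            m_t = m_t + sigmas[t] * torch.randn_like(m_t)
    return m_t
```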
## 4 Experiments
### Experiment setup
To evaluate the performance of the proposed DurIAN-E model, a multi-style Chinese corpus containing 11.8 hours of speech data pronounced by 7 different speakers was used as the training dataset. We defined a set of rich and fine-grained style tags including "neutral, happy, sad, angry, exciting, annoying, amazing, doubtful, cunning, solemn, enchanting and taunting". 64 sentences that were not present in the training set were used as the test set. We conducted MOS and ablation preference tests to measure the performance of different systems and different modules by comprehensively assessing expressiveness and speech quality. For each test, 20 test utterances randomly selected from the test set were synthesized by different systems and evaluated in random order by 10 listeners.4 All these systems, as well as the proposed DurIAN-E, shared a unified BigVGAN vocoder [20] which was trained conditioned on ground truth mel-spectrograms to better compare the performances among different acoustic models.

Figure 4: Network structure of the denoiser in DurIAN-E.
Footnote 4: Examples of synthesized speech by different systems are available at [https://sounddemos.github.io/durian-e](https://sounddemos.github.io/durian-e).
### MOS test
Several state-of-the-art speech synthesis systems including _DurIAN_ [4], _FastSpeech 2_ [2] and _DiffSpeech_ [15] were established for comparison and the MOS results are listed in Table 1. The auto-regressive model _DurIAN_ outperforms the parallel model _FastSpeech 2_, which shows that the AR decoder in DurIAN, which can make full use of previously generated acoustic features, synthesizes better-quality speech. Due to the employment of diffusion modules, _DiffSpeech_ achieves a better MOS result than _DurIAN_ and _FastSpeech 2_. _DurIAN-E_ combines the advantages of those models, incorporating both the AR mechanism in _DurIAN_ and the DDPM module in _DiffSpeech_, together with other sophisticated structures such as SwishRNNs and SAIN layers. Therefore _DurIAN-E_ can generate more expressive and better-quality speech and achieves the best MOS score among all systems, which demonstrates that the model capacity of the proposed system is sufficient.
### Ablation test
To demonstrate the effectiveness of the different proposed modules in DurIAN-E, two additional systems were built for ablation studies as follows:
* _DurIAN-E_: The proposed system as described in Fig. 2;
* _DurIAN-E-postnet_: The model using _post-net_[1] as the denoiser instead of DDPM;
* _DurIAN-E-ffn_: Using standard Transformers as the linguistic encoder instead of SwishRNN-based ones.
The results of the two preference tests are shown in Fig. 5. _DurIAN-E_ produces better results than the two ablation systems, which verifies the effectiveness of adopting the DDPM denoiser and using SwishRNNs in the linguistic encoder. Compared with _DurIAN-E_, the _post-net_-based denoiser rather than the DDPM-based one causes an obvious drop in speech quality, and replacing the SwishRNN blocks with FFN blocks results in drops in model stability and pronunciation accuracy. Meanwhile, the difference between _DurIAN-E_ and _DurIAN-E-postnet_ is bigger than that between _DurIAN-E_ and _DurIAN-E-ffn_, as depicted in Fig. 5, which indicates that the effectiveness of the DDPM-based denoiser with SAIN layers is much more significant.
## 5 Conclusion
In this paper, we propose DurIAN-E, an improved model for expressive and high-fidelity TTS, which utilizes SwishRNN-based Transformers as the phoneme-level encoder and a SAIN-based frame-level encoder to achieve more natural prosody and more expressive speech. A DDPM-based denoiser for mel-spectrograms using SAIN layers is constructed to further improve speech quality and expressiveness. Experimental results of subjective tests prove that DurIAN-E can achieve better performance than the state-of-the-art approaches. We will increase the DDPM inference efficiency by reducing sampling steps and further improve speech quality and inference speed by incorporating DurIAN-E with other modules such as conditional variational autoencoders and adversarial learning. We will also deploy the proposed algorithms in online products.
\begin{table}
\begin{tabular}{c|c c|c c c|c} \hline \hline
**System** & _GT_ & _GT (Mel + vocoder)_ & _DurIAN_ & _FastSpeech 2_ & _DiffSpeech_ & _DurIAN-E_ \\ \hline
**MOS** & 4.45 \(\pm\) 0.16 & 4.23 \(\pm\) 0.17 & 3.73 \(\pm\) 0.15 & 3.62 \(\pm\) 0.17 & 3.78 \(\pm\) 0.19 & **3.86 \(\pm\) 0.14** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The MOS values of different systems with 95% confidence intervals.
Figure 5: Preference test scores between _DurIAN-E_ and ablation systems. The \(p\)-values of \(t\)-test are 0.311 and 0.378. |
2310.20125 | Zephyr : Stitching Heterogeneous Training Data with Normalizing Flows
for Photometric Redshift Inference | We present zephyr, a novel method that integrates cutting-edge normalizing
flow techniques into a mixture density estimation framework, enabling the
effective use of heterogeneous training data for photometric redshift
inference. Compared to previous methods, zephyr demonstrates enhanced
robustness for both point estimation and distribution reconstruction by
leveraging normalizing flows for density estimation and incorporating careful
uncertainty quantification. Moreover, zephyr offers unique interpretability by
explicitly disentangling contributions from multi-source training data, which
can facilitate future weak lensing analysis by providing an additional quality
assessment. As probabilistic generative deep learning techniques gain
increasing prominence in astronomy, zephyr should become an inspiration for
handling heterogeneous training data while remaining interpretable and robustly
accounting for observational uncertainties. | Zechang Sun, Joshua S. Speagle, Song Huang, Yuan-Sen Ting, Zheng Cai | 2023-10-31T01:58:39Z | http://arxiv.org/abs/2310.20125v1 | zephyr : Stitching Heterogeneous Training Data with Normalizing Flows for Photometric Redshift Inference
###### Abstract
We present zephyr, a novel method that integrates cutting-edge normalizing flow techniques into a mixture density estimation framework, enabling the effective use of heterogeneous training data for photometric redshift inference. Compared to previous methods, zephyr demonstrates enhanced robustness for both point estimation and distribution reconstruction by leveraging normalizing flows for density estimation and incorporating careful uncertainty quantification. Moreover, zephyr offers unique interpretability by explicitly disentangling contributions from multi-source training data, which can facilitate future weak lensing analysis by providing an additional quality assessment. As probabilistic generative deep learning techniques gain increasing prominence in astronomy, zephyr should become an inspiration for handling heterogeneous training data while remaining interpretable and robustly accounting for observational uncertainties.
## 1 Introduction
Redshift measures cosmic distances and Universe expansion, making it fundamental in astrophysics [1; 2; 3]. Cosmology [4; 5; 6] and extragalactic science [7] both count on accurate redshifts with a wide range of sensitivity requirement. For example, weak lensing studies, which probe cosmological
structure growth and expansion history by tracing dark matter cosmic webs through the distortion of galaxy shapes [8], are currently limited by redshift errors [9; 10; 11].
High-precision redshifts require expensive, biased spectroscopic observations (spec-\(z\)/grism-\(z\)/prism-\(z\)). In contrast, photometric redshifts (photo-\(z\)) cover wider luminosity ranges but lack spectroscopic precision. In ongoing and future surveys like LSST [12], most galaxies will have only photometric data. Integrating spectroscopic and photometric data is therefore crucial for photo-\(z\) inference.
Photo-\(z\) estimation techniques fall into two categories: template-fitting [13; 14; 15], which matches photometry to spectroscopic or physical model templates, and machine learning [16; 17; 18], which trains supervised models to map photometry to reference redshifts. To maximize information gain for science study, photo-\(z\) estimation requires synthesizing all available data to achieve wide redshift coverage, high precision, and minimal selection bias.
Stitching high-quality spectroscopic redshifts [19], medium-quality grism/prism redshifts [20; 21], and lower-quality photometric redshifts [22] enables full exploitation of deep galaxy surveys like HSC-SSP [23] for photo-\(z\) estimation. To help accomplish this, we propose zephyr, an integrative framework that stitches heterogeneous training samples for photo-\(z\) inference using normalizing flows [24]. As a generalized extension of frankenz [25] for future large-scale sky surveys, zephyr: (1) improves photo-\(z\) inference, refining both point estimates and redshift probability density estimation; (2) interprets heterogeneous datasets and exerts uncertainty control; (3) scales more efficiently to high-dimensional feature spaces and large datasets.
## 2 Method
Our zephyr framework combines heterogeneous training data for photo-\(z\) inference via mixture density estimation as shown in Figure 1. The photo-\(z\) posterior probability density function (PDF) \(\mathrm{P}(z|\mathbf{g},\boldsymbol{\sigma})\) for redshift \(z\) given photometry \(\mathbf{g}\) and uncertainty \(\boldsymbol{\sigma}\) is expressed as a weighted sum of posteriors from distinct categories \(\mathrm{P}(z|\mathbf{g},c_{i})\), with each category \(c_{i}\), \(i=1,2,\ldots,\mathrm{N}\) weighted by \(\mathrm{P}(\mathbf{g}|c_{i})\mathrm{P}(c_{i})\) as shown in Equation 1.
\[\mathrm{P}(z|\mathbf{g},\boldsymbol{\sigma})=\sum_{i=1}^{\mathrm{N}}\mathrm{P} (z|\mathbf{g},\boldsymbol{\sigma},c_{i})\mathrm{P}(c_{i}|\mathbf{g}, \boldsymbol{\sigma})\propto\sum_{i=1}^{\mathrm{N}}\mathrm{P}(z|\mathbf{g}, \boldsymbol{\sigma},c_{i})\mathrm{P}(\mathbf{g}|\boldsymbol{\sigma},c_{i}) \mathrm{P}(c_{i}) \tag{1}\]
We treat the category prior weights \(\mathrm{P}(c_{i})\) as latent variables in our mixture density model. \(\mathrm{P}(z|\mathbf{g},\boldsymbol{\sigma},c_{i})\) and \(\mathrm{P}(\mathbf{g}|\boldsymbol{\sigma},c_{i})\) are estimated using normalizing flows, which are designed to transform simple distributions into complex ones via invertible, differentiable functions, enabling efficient sampling and density estimation ([26]). zephyr's use of normalizing flows over nearest neighbor methods for density estimation improves performance and high-dimensional scaling over frankenz.
Denoting \(\mathbf{g}^{*}\) as the true underlying photometric data, \(\mathrm{P}(z|\mathbf{g},\boldsymbol{\sigma},c_{i})\) can be formulated as:
\[\mathrm{P}(z|\mathbf{g},\boldsymbol{\sigma},c_{i})=\int\mathrm{P}(z|\mathbf{ g}^{*},c_{i})\mathrm{P}(\mathbf{g}^{*}|\mathbf{g},\boldsymbol{\sigma})\, \mathrm{d}\Omega_{\mathbf{g}}^{*}=\int\mathrm{P}(z|\mathbf{g}^{*},c_{i}) \mathrm{P}(\mathbf{g}|\mathbf{g}^{*},\boldsymbol{\sigma})\frac{\mathrm{P}( \mathbf{g}^{*})}{\mathrm{P}(\mathbf{g})}\,\mathrm{d}\Omega_{\mathbf{g}}^{*} \tag{2}\]
where \(\Omega_{\mathbf{g}}^{*}\) denotes the photometric space and \(\mathrm{P}(\mathbf{g}|\mathbf{g}^{*},\boldsymbol{\sigma})\) follows \(\mathcal{N}(\mathbf{g}^{*};\boldsymbol{\sigma})\). Although the unknown prior distribution over the true photometry \(\mathrm{P}(\mathbf{g}^{*})\) may be complex, in this work we approximate it as
Figure 1: Model architecture. zephyr integrates normalizing flows and mixture density estimation to give both accurate and interpretable photo-\(z\) inference with noisy, heterogeneous training data.
uniform. We justify this approximation for two reasons: (1) for high signal-to-noise cases, \(\mathrm{P}(\mathbf{g}|\mathbf{g}^{*},\boldsymbol{\sigma})\) is concentrated around \(\mathbf{g}^{*}\) and so \(\mathrm{P}(\mathbf{g}^{*})\) would be roughly constant; (2) for low signal-to-noise cases, a uniform \(\mathrm{P}(\mathbf{g}^{*})\) would generally lead to slightly broader PDFs.
Approximating Equation 2 through Monte Carlo integration [27; 28; 29; 30] with \(\mathrm{M}\) samples drawn from this distribution, we have:
\[\mathrm{P}(z|\mathbf{g},\boldsymbol{\sigma},c_{i})\approx\frac{1}{\mathrm{M}} \sum_{j=1}^{\mathrm{M}}\mathrm{P}(z|\mathbf{g}^{*},c_{i}) \tag{3}\]
Similarly, \(\mathrm{P}(\mathbf{g}|\boldsymbol{\sigma},c_{i})\) can be approximated the same way to get:
\[\mathrm{P}(\mathbf{g}|\boldsymbol{\sigma},c_{i})=\int\mathrm{P}(\mathbf{g}| \mathbf{g}^{*},\boldsymbol{\sigma})\mathrm{P}(\mathbf{g}^{*}|c_{i})\, \mathrm{d}\Omega_{\mathbf{g}}^{*}\approx\frac{1}{\mathrm{M}}\sum_{j=1}^{ \mathrm{M}}\mathrm{P}(\mathbf{g}_{j}^{*}|c_{i}), \tag{4}\]
where \(\mathbf{g}_{j}^{*}\), \(j=1,2,3,\ldots,\mathrm{M}\) follow the same distribution as in Equation 2. This approach allows us to learn the intrinsic distributions for \(\mathrm{P}(z|\mathbf{g},\boldsymbol{\sigma},c_{i})\) and \(\mathrm{P}(\mathbf{g}|\boldsymbol{\sigma},c_{i})\), providing a robust method to handle uncertainty. Equation 2 and 4 facilitate finer uncertainty control with the normalizing flow and are used during both model training and inference.
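A schematic NumPy implementation of this Monte Carlo mixture estimate (Equations 1, 3 and 4) is sketched below; the `logp_z`/`logp_g` methods stand in for conditional and unconditional normalizing-flow densities and are an assumed interface, not a specific library API.

```python
import numpy as np

def photoz_posterior(z_grid, g_obs, sigma, flows, priors, n_mc=64, rng=None):
    """Monte Carlo estimate of the mixture posterior in Eq. (1) using Eqs. (3) and (4).

    Each element of `flows` is assumed to expose
        logp_z(z, g) ~ log P(z | g*, c_i)   and   logp_g(g) ~ log P(g* | c_i);
    this interface is illustrative rather than a specific library API.
    """
    rng = rng or np.random.default_rng()
    g_obs, sigma = np.asarray(g_obs, float), np.asarray(sigma, float)
    # Draw perturbed photometry g* ~ N(g_obs, sigma) to marginalize over observational noise.
    g_samples = rng.normal(g_obs, sigma, size=(n_mc, g_obs.size))
    posterior = np.zeros_like(z_grid, dtype=float)
    for flow, prior in zip(flows, priors):
        pz = np.mean([np.exp(flow.logp_z(z_grid, g)) for g in g_samples], axis=0)  # Eq. (3)
        pg = np.mean([np.exp(flow.logp_g(g)) for g in g_samples])                  # Eq. (4)
        posterior += pz * pg * prior
    return posterior / np.trapz(posterior, z_grid)   # normalize over the redshift grid
```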
We showcase the quality of our uniform \(\mathrm{P}(\mathbf{g}^{*})\) assumption for low signal-to-noise cases on two toy datasets. In Figure 2, the intrinsic distributions (blue triangles) are a circle 2(a) and a double moon 2(b). Gaussian noise \(\mathcal{N}(0,1)\) is added to simulate low signal-to-noise observations (grey dots). The experiment shows that while the uniform prior may smear the true distribution slightly, it remains highly effective for recovering the underlying true density (red squares).
## 3 Data
We analyze a dataset with two types of reference redshifts: a collection of high-quality spec-\(z\)'s/grism-\(z\)'s/prism-\(z\)'s (high-confidence, biased to brighter objects) and lower-quality photo-\(z\)'s (lower-confidence, fainter objects). The photometry is taken from HSC-SSP survey PDR3 [31] in the _grizy_ filters. We cross-match HSC PDR3 against other surveys (see Table A) to obtain our collection of reference redshifts. COSMOS2015 [22] provides 30-band photo-\(z\). Our final dataset includes \(129,449\) photo-\(z\) sources from COSMOS2015 and \(21,591\) spec-\(z\) sources from spectroscopic surveys. We split the data into 90/5/5 percent portions for training/validation/testing.
Figure 2: Recovering distributions from noisy data. The proposed method effectively estimates densities on circle (left) and moon (right) datasets despite low signal-to-noise ratios.
## 4 Experiment and Result
### Experiment Settings
We use neural spline flows [26] for density estimation. We set the spec-\(z\) and photo-\(z\) prior probabilities to their relative portions in the training data. We investigate: (a) zephyr's photo-\(z\) inference capability; and (b) its interpretability in quantifying contributions from different training samples. We compare to (1) frankenz4[25], a Bayesian nearest neighbor method conceptually similar to zephyr, and (2) EAzY5[14], a widely-used template-fitting method, with the sfhz6 template.
### Assessing the Quality of Photo-\(z\) Inference
To assess photo-\(z\) quality, we define the scaled residual \(\Delta z=(z_{pred}-z_{ref})/(1+z_{ref})\), bias \(b=\mathrm{Median}(\Delta z)\), scatter \(\sigma=1.4826\times\mathrm{Median}(|\Delta z-b|)\), and outlier rate \(\eta\) (outliers have \(|\Delta z|>0.15\)) [32; 33]. As shown in Table 1, zephyr exhibits lower scatter and outlier rate than frankenz and EAzY, attributable to the high expressiveness of normalizing flows used. For distribution reconstruction, zephyr shows comparable probability integral transform (PIT) performance to frankenz and outperforms EAzY (Figure 3(a)), demonstrating viability for cosmology [9; 34].
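For reference, the point-estimate metrics defined above can be computed as in the following short sketch.

```python
import numpy as np

def photoz_metrics(z_pred, z_ref):
    """Bias, scatter (normalized MAD) and outlier rate, as defined above."""
    z_pred, z_ref = np.asarray(z_pred, float), np.asarray(z_ref, float)
    dz = (z_pred - z_ref) / (1.0 + z_ref)
    bias = np.median(dz)
    scatter = 1.4826 * np.median(np.abs(dz - bias))
    outlier_rate = np.mean(np.abs(dz) > 0.15)
    return bias, scatter, outlier_rate
```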
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Bias (\(b\)) & Scatter (\(\sigma\)) & Outlier Rate (\(\eta\)) \\ \hline zephyr & -0.003 & **0.053** & **0.198** \\ frankenz & **0.001** & 0.080 & 0.264 \\ EAzY & -0.011 & 0.170 & 0.458 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison on performance over the test data. The normalizing flow-based zephyr model achieves a much lower scatter and outlier rate compared to frankenz and EAzY.
Figure 3: (a) PIT distributions for different models, demonstrating zephyr's efficacy in reconstructing the redshift distribution, as its PIT values closely follow a uniform distribution \(\mathcal{U}(0,1)\); (b) zephyr point estimates versus reference redshifts, color-coded by spec-\(z\) training sample contributions. The results clearly show that the high-redshift end of the distribution and most redshift outliers are predominantly made up of predictions with little spec-\(z\) contribution.
### Interpreting Heterogeneous Datasets
Assessing impacts of heterogeneous data is crucial in cosmology. We show the zephyr model has strong interpretability, facilitating disambiguation of contributions from disparate datasets. We define \(\mathrm{P}_{spec}\) as the proportion of spec-\(z\) samples contributing to photo-\(z\) estimation:
\[\mathrm{P}_{\mathrm{spec}}=\frac{\mathrm{P}(\mathbf{g}|c_{1})\mathrm{P}(c_{1})} {\mathrm{P}(\mathbf{g}|c_{1})\mathrm{P}(c_{1})+\mathrm{P}(\mathbf{g}|c_{2}) \mathrm{P}(c_{2})}. \tag{5}\]
Figure 3(b) shows the spec-\(z\) training samples contribute less to high-redshift sources due to limited depth of spectroscopic surveys. In addition, we see predictions dominated by photo-\(z\) training samples also have larger scatter and outlier rates due to imprecise reference redshifts and noisier input photometry. zephyr exhibits superior interpretability in disentangling contributions from heterogeneous datasets, enabling unique quality assessment opportunities for cosmology analysis.
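Equation 5 is a standard two-component mixture responsibility and can be evaluated directly from the per-class densities and priors; a sketch (computed in log space for numerical stability) is:

```python
import numpy as np

def spec_contribution(logp_g_spec, logp_g_phot, prior_spec, prior_phot):
    """P_spec from Equation 5: fraction of the density attributable to spec-z training samples."""
    log_num = logp_g_spec + np.log(prior_spec)
    log_den = np.logaddexp(log_num, logp_g_phot + np.log(prior_phot))
    return np.exp(log_num - log_den)
```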
## 5 Broader Impact
As we enter an era of ultra-precise cosmology, accurate photo-\(z\) inference is critical. To achieve this, we need advanced algorithms to maximize photo-\(z\) precision and understand training data impacts. zephyr enables both - its flexible framework handles high-dimensional data while maintaining interpretability. Moreover, normalizing flows enable data-driven inferences for astrophysics. We will expand zephyr into a versatile framework for broader applications like stellar mass inference. Such capabilities will prove valuable for upcoming surveys.
## Acknowledgement
The authors thank Diana Blanco, Alexie Leauthaud, and Yifei Luo (\(\mathcal{Y}\)) from the Department of Astronomy and Astrophysics at the University of California, Santa Cruz for fruitful discussions regarding potential applications of zephyr in weak lensing analysis. The authors also thank the anonymous reviewers for their valuable comments.
|
2309.10185 | QoS-Aware Service Prediction and Orchestration in Cloud-Network
Integrated Beyond 5G | Novel applications such as the Metaverse have highlighted the potential of
beyond 5G networks, which necessitate ultra-low latency communications and
massive broadband connections. Moreover, the burgeoning demand for such
services with ever-fluctuating users has engendered a need for heightened
service continuity consideration in B5G. To enable these services, the
edge-cloud paradigm is a potential solution to harness cloud capacity and
effectively manage users in real time as they move across the network. However,
edge-cloud networks confront a multitude of limitations, including networking
and computing resources that must be collectively managed to unlock their full
potential. This paper addresses the joint problem of service placement and
resource allocation in a network-cloud integrated environment while considering
capacity constraints, dynamic users, and end-to-end delays. We present a
non-linear programming model that formulates the optimization problem with the
aiming objective of minimizing overall cost while enhancing latency. Next, to
address the problem, we introduce a DDQL-based technique using RNNs to predict
user behavior, empowered by a water-filling-based algorithm for service
placement. The proposed framework adeptly accommodates the dynamic nature of
users, the placement of services that mandate ultra-low latency in B5G, and
service continuity when users migrate from one location to another. Simulation
results show that our solution provides timely responses that optimize the
network's potential, offering a scalable and efficient placement. | Mohammad Farhoudi, Masoud Shokrnezhad, Tarik Taleb | 2023-09-18T22:24:42Z | http://arxiv.org/abs/2309.10185v1 | # QoS-Aware Service Prediction and Orchestration in Cloud-Network Integrated Beyond 5G
###### Abstract
Novel applications such as the Metaverse have highlighted the potential of beyond 5G networks, which necessitate ultra-low latency communications and massive broadband connections. Moreover, the burgeoning demand for such services with ever-fluctuating users has engendered a need for heightened service continuity consideration in B5G. To enable these services, the edge-cloud paradigm is a potential solution to harness cloud capacity and effectively manage users in real time as they move across the network. However, edge-cloud networks confront a multitude of limitations, including networking and computing resources that must be collectively managed to unlock their full potential. This paper addresses the joint problem of service placement and resource allocation in a network-cloud integrated environment while considering capacity constraints, dynamic users, and end-to-end delays. We present a non-linear programming model that formulates the optimization problem with the aiming objective of minimizing overall cost while enhancing latency. Next, to address the problem, we introduce a DDQL-based technique using RNNs to predict user behavior, empowered by a water-filling-based algorithm for service placement. The proposed framework adeptly accommodates the dynamic nature of users, the placement of services that mandate ultra-low latency in B5G, and service continuity when users migrate from one location to another. Simulation results show that our solution provides timely responses that optimize the network's potential, offering a scalable and efficient placement.
Edge-Cloud Computing, Cloud-Network Integration, Resource Allocation, Service Orchestration, Service Placement, Path Selection, Service Continuity, Optimization Theory, Beyond 5G, and 6G.
## I Introduction
In this fast-paced world, networking environments have evolved, leading to an increase in data flow [1]. The shift in paradigm has given birth to a range of entirely new services that require rigorous Quality of Service (QoS) requirements [2]. Some of these services include the Metaverse, Unmanned Aerial Vehicles (UAVs), and Augmented Reality/Virtual Reality (AR/VR) [3]. With the rise of these services, ensuring reliable and efficient data flow is now essential. The QoS requirements for these services are stringent, and any delay or interruption in data flow can have severe consequences. As the evolution of networks continues to accelerate, we anticipate more novel services demanding strict QoS. Therefore, the need for robust and reliable networks that can handle increased data flow is more critical than ever. To meet these demands, technological advancements in network infrastructure are continuously being made. The evolution of networks has paved the way for the development of innovative solutions that cater to the QoS requirements of these novel services [4].
Distributed edge-cloud architecture is one of the potential substrates to answer this need, which has become an indispensable part of today's computing landscape. This continuum is based on Service Oriented Architecture (SOA) and is gaining popularity due to its scalability, reliability, and availability of computing functionalities/facilities that can be used as resources. The purpose of edge computing is to bring data intelligence, processing, and storage closer to the network's edge, while cloud computing provides more capacity and a more reliable environment [5]. Through the edge-cloud continuum environment, computer-related service requests will be answered more promptly, the quality of services will be improved, and the location of users will be tracked more accurately. In Beyond 5G (B5G) networks [6], edge-cloud infrastructure is integrated into distinct domains, and Network Function Virtualization (NFV) virtualizes these resources, creating isolated virtual entities on top of physical infrastructure [7]. Hence, Virtual Network Functions (VNF) and service instances are available through Software-Defined Networks (SDNs) and NFV, offering users a range of services and computing resources.
Effective service orchestration is crucial to ensure optimal service delivery, which meets both network constraints and user requirements [8]. Considering it, the QoS and Quality of Experience (QoE) can be improved, and the continuity of services can be provided in an efficient manner [9]. One of the greatest challenges of effective service orchestration in edge-cloud computing is resource management, where the best suitable service instances should be selected for user requests, and computing and networking resources should be allocated and scheduled jointly, promoting resource sharing and maintaining a deterministic system to ensure that services and user requests are satisfied in terms of their QoS and QoE requirements, resulting in various system-level predefined objective functions, such as provider-level cost minimization (for example, through energy savings) or profit maximization (by, for instance, increasing resource utilization) [10].
As of now, different concepts, architectures, and paradigms have been considered in the approaches proposed for service orchestration. Zhang _et al._[11] have developed an adaptive interference-aware heuristic approach to optimize VNF placement, which has been shown to effectively handle traffic variation and improve the total throughput of accepted requests. Li _et al._[12] have presented a resource management and replica allocation strategy for edge-cloud computing systems, which
aims to reduce financial costs while maintaining performance and data consistency. Additionally, a heuristic near-optimal solution to the joint problem of networking and computing resource allocation for 5G networks was presented [13]. This work proposed an optimal approach to find the optimal solution to the joint problem. Dant _et al._[14] have presented the architecture of SDNized Information-Centric Networking (ICN) technologies which incorporate service placement.
Although the proposed methods in these studies are effective in addressing the resource allocation problem, their applicability to real-world scenarios remains a challenge, as they fail to cater to the dynamic nature of users and their requests, making service continuity difficult to guarantee. These approaches provide static allocations, which are of little use in applications like the Metaverse, where users and requests change on a millisecond basis. Moreover, in most of the previous works, resource allocation has been isolated to the cloud domain, and the network is viewed solely as a pipeline with no cognitive ability to adapt to changes in the system. Clearly, such solitary approaches are ineffective because they disregard the interdependencies among domains and resources. A failure in one domain can have far-reaching effects on the others, so orchestrating services from a siloed perspective may not be adequate to achieve the desired system performance.
This study aims to address the existing gap in the literature by examining the joint problem of service instance placement and assignment, as well as path selection in the context of an edge-cloud continuum environment wherein users are moving and requests are changing their Point of Attachment (PoA) over time. To address this problem, the first step is to predict which requests will arrive at each PoA in the near future, followed by a joint assignment of networking and computing resources to meet their requirements. In particular, we consider capacity limitations of the resources, and End-to-End (E2E) delays, with the goal of minimizing total cost. Our main contributions to this paper are:
* Formulating the joint problem of service placement and resource allocation in the edge-network-cloud integrated infrastructure as a Mixed Integer Non-Linear Programming (MINLP) problem.
* Proposing a deep reinforcement learning method for predicting the arrival point of requests utilizing historical data for smoother handling of user dynamicity and improving service continuity.
* Devising a novel heuristic approach based on the water-filling algorithm to identify near-optimal solutions for the placement of service instances on edge-cloud nodes and allocating networking resources regarding the QoS requirements of requests, utilizing the output of the learning method to minimize delay and cost, resulting in the more efficient placement of resources.
The remainder of this paper is organized as follows. Section II outlines the system model, followed by a detailed formulation of the resource allocation problem in Section III. The proposed heuristic approach for service prediction and orchestration is presented in Section IV. Section V illustrates the numerical results, and Section VI offers concluding remarks and future directions.
## II System Model
In the following, the system model is provided. This paper examines three main components of the system: edge-cloud infrastructure, service providers, and user requests.
### _Edge-cloud Infrastructure_
The edge-cloud infrastructure consists of a network that connects computing resources available for deploying instances of services. The network, denoted by \(\mathcal{G}(\mathbf{\mathcal{N}},\mathbf{\mathcal{L}},\mathbf{\mathcal{P}})\), consists of two domains (i.e., access and core), where \(\mathbf{\mathcal{N}}\) is the set of edge-cloud nodes with size \(\mathcal{N}\), \(\mathbf{\mathcal{L}}\subset\{l:(n,n^{\prime})|n,n^{\prime}\in\mathcal{N}\}\) is the set of links with size \(\mathcal{L}\), and \(\mathbf{\mathcal{P}}=\{p:(\mathcal{H}_{p},\mathcal{T}_{p})|p\subset\mathbf{\mathcal{L}}\}\) represents the set of directional paths with size \(\mathcal{P}\). Each path \(p\) is determined by its head node (\(\mathcal{H}_{p}\)) and tail node (\(\mathcal{T}_{p}\)), and \(\mathcal{J}_{p,l}\) is a binary parameter equal to \(1\) if path \(p\) contains link \(l\). As each edge-cloud node is equipped with computing resources, it can be considered a host for deploying service instances. The computing resources available on each node are limited by a predefined capacity threshold \(\widehat{\mathcal{C}}_{n}\), and the bandwidth available on each link is limited by a corresponding capacity \(\widehat{\mathcal{L}}_{l}\). Using each node or link incurs a corresponding cost, denoted by \(\overline{\mathcal{C}}_{n}\) and \(\overline{\mathcal{L}}_{l}\), respectively. Note that the network is structured at different levels, and the nodes are distributed so that the closer a node is to the cloud, the higher its capacity and the lower its cost. Thus, nodes near end-users or entry points have expensive but limited computing resources, while central nodes have cheaper and higher-capacity computing resources [1].
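For illustration only, the notation above maps naturally onto a few plain records; the field names below are ours and not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Node:          # n in N: an edge-cloud node
    cap: float       # computing capacity (C_hat_n)
    cost: float      # usage cost (C_bar_n)

@dataclass
class Link:          # l = (n, n') in L
    head: int
    tail: int
    cap: float       # bandwidth capacity (L_hat_l)
    cost: float      # usage cost (L_bar_l)

@dataclass
class Path:          # p in P: a directional path, a subset of L
    head: int                                   # H_p
    tail: int                                   # T_p
    links: list = field(default_factory=list)   # link indices l with J_{p,l} = 1
```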
### _Service Providers_
Participating in the system are \(\mathcal{S}\) service providers, each of which offers a set of service instances \(\mathbf{\mathcal{I}}_{s}=\{1,2,...,\mathcal{I}_{s}\}\) with size \(\mathcal{I}_{s}\). Consequently, the set of services is represented by \(\mathbf{\mathcal{S}}=\{\mathbf{\mathcal{I}}_{1},\mathbf{\mathcal{I}}_{2},...,\mathbf{ \mathcal{I}}_{S}\}\). Although each instance is capable of handling multiple requests, its capacity is limited by a predetermined threshold \(\widehat{\mathcal{I}}_{s,i}\), and the cost of using each service instance is denoted by \(\overline{\mathcal{I}}_{s,i}\).
### _User Requests_
The system contains a set of \(\mathcal{R}\) active requests, denoted by \(\mathbf{\mathcal{R}}\), where each request \(r\) arrives in the system at time \(\mathcal{T}_{r}\) and continuously demands a service \(\mathcal{S}_{r}\) to send its inquiry traffic to one of its instances for a particular operation and then receives the response. Users exhibit dynamic behavior in the system by changing their locations over time. For this reason, \(\mathcal{E}_{r}^{t}\) identifies the node (PoA) from which each request originates at
each time slot. Upon reaching the PoA, the most appropriate service instance should be selected for request \(r\) based on its requirements such as the minimum service capacity \(\tilde{\mathcal{I}}_{r}^{t}\), minimum network bandwidth \(\tilde{\mathcal{L}}_{r}^{t}\), maximum acceptable E2E delay \(\tilde{\mathcal{D}}_{r}^{t}\), traffic burstiness \(\tilde{\mathcal{B}}_{r}^{t}\), maximum packet size \(\tilde{\mathcal{Z}}_{r}^{t}\), and \(\tilde{\mathcal{O}}_{r}\) indicating the upper limit of overall E2E delay that can be tolerated by request \(r\) over \(\mathcal{T}\) time slots, also known as the Service-Level Agreement (SLA) requirement.
## III Problem Definition
In this section, the joint problem of resource allocation is defined as an MINLP formulation, taking into account instance placement and assignment (III-B), path selection (III-C), and delay constraints (III-D) with the aim of minimizing the overall cost (III-A) to ensure that QoS requirements of requests are continuously met at the lowest possible cost, given that they are changing their PoA over time.
### _Objective Function_
This objective function (OB) seeks to minimize the total cost of allocated resources over the time interval \(\boldsymbol{\mathcal{T}}\) (beginning at time \(1\) and ending at time \(\mathcal{T}\)). Specifically, this equation captures the cost associated with the assignment of requests to instances, the placement of instances on edge-cloud nodes, and the selection of inquiry and response paths for requests. \(\tilde{\mathcal{A}}_{r,i}^{t}\) and \(\tilde{\mathcal{E}}_{i,n}^{t}\) are binary variables that indicate the instance of request \(r\) (considering that the instance of request \(r\) must be chosen from among the instances of \(\mathcal{S}_{r}\)) and the host node of instance \(i\), respectively, at time \(t\), and \(\tilde{\mathcal{L}}_{r}^{t}\) is a continuous variable that shows the total cost of the paths allocated to request \(r\) at time \(t\). In this equation and what follows, \(i\) iterates over the instances of \(\mathcal{S}\).
\[\sum_{\boldsymbol{\mathcal{T}},\boldsymbol{\mathcal{N}},\boldsymbol{\mathcal{S}}}\tilde{\mathcal{E}}_{i,n}^{t}\overline{\mathcal{C}}_{n}+\sum_{\boldsymbol{\mathcal{T}},\boldsymbol{\mathcal{S}},\boldsymbol{\mathcal{R}}}\tilde{\mathcal{A}}_{r,i}^{t}\overline{\mathcal{I}}_{s,i}+\sum_{\boldsymbol{\mathcal{T}},\boldsymbol{\mathcal{R}}}\tilde{\mathcal{L}}_{r}^{t}\] (OB)
### _Instance Placement and Assignment Constraints_
The first step is to ensure that each request is always assigned to a single instance of the service (C1). C2 ensures that each service instance selected by at least one request is placed on at least one available edge-cloud node at the time requested. To avoid congestion and ensure the framework's reliability, the total number of requests assigned to each service instance cannot exceed the capacity of the instance in each time slot (C3). Nodes are only able to handle a limited capacity as well (C4).
\[\sum_{\boldsymbol{\mathcal{S}}}\tilde{\mathcal{A}}_{r,i}^{t}=1 \quad\forall r\in\boldsymbol{\mathcal{R}},t\in[\mathcal{T}_{r},\mathcal{T}]\] (C1) \[\sum_{\boldsymbol{\mathcal{N}}}\tilde{\mathcal{E}}_{i,n}^{t}> \left(\sum_{\boldsymbol{\mathcal{R}}}\tilde{\mathcal{A}}_{r,i}^{t}\right)/ \mathcal{R}\quad\forall i\in\boldsymbol{\mathcal{S}},t\in[\mathcal{T}_{r}, \mathcal{T}]\] (C2) \[\sum_{\boldsymbol{\mathcal{R}}}\tilde{\mathcal{A}}_{r,i}^{t} \tilde{\mathcal{I}}_{r}^{t}\leq\widehat{\mathcal{I}}_{s,i}\quad\forall i,t\in \boldsymbol{\mathcal{S}},\boldsymbol{\mathcal{T}}\] (C3) \[\sum_{\boldsymbol{\mathcal{S}},\boldsymbol{\mathcal{R}}}\tilde{ \mathcal{E}}_{i,n}^{t}\tilde{\mathcal{A}}_{r,i}^{t}\tilde{\mathcal{I}}_{r}^{t} \leq\widehat{\mathcal{C}}_{n}\quad\forall n,t\in\boldsymbol{\mathcal{N}}, \boldsymbol{\mathcal{T}}\] (C4)
### _Path Selection Constraints_
In order to deliver inquiry traffic of a request to its assigned instance and return the response, it is necessary to assign a feasible E2E route for each request within the specified time slot (C5 and C6). To do so, a unique inquiry path is selected for each request, originating from its entry node (PoA) and concluding at the chosen service instance, denoted by \(\widetilde{\mathcal{R}}_{r,p}^{t}\). For each request, the corresponding response path, or \(\widetilde{\mathcal{R}}_{r,p}^{t}\), is also determined using a similar approach, but with the order of the nodes reversed. In other words, the response path starts at the selected service instance and ends at its PoA. Additionally, a capacity limitation applies to the number of requests assigned to each path at any given time (C7), and C8 computes the total path allocation cost for each request.
\[\sum_{\boldsymbol{\mathcal{S}}}\sum_{\boldsymbol{\mathcal{P}}|\mathcal{H}_{p}=\mathcal{E}_{r}^{t}}\widetilde{\mathcal{R}}_{r,p}^{t}\tilde{\mathcal{A}}_{r,i}^{t}\tilde{\mathcal{E}}_{i,\mathcal{T}_{p}}^{t}=1\quad\forall r\in\boldsymbol{\mathcal{R}},t\in[\mathcal{T}_{r},\mathcal{T}]\] (C5)
\[\sum_{\boldsymbol{\mathcal{S}}}\sum_{\boldsymbol{\mathcal{P}}|\mathcal{T}_{p}=\mathcal{E}_{r}^{t}}\widetilde{\mathcal{R}}_{r,p}^{t}\tilde{\mathcal{A}}_{r,i}^{t}\tilde{\mathcal{E}}_{i,\mathcal{H}_{p}}^{t}=1\quad\forall r\in\boldsymbol{\mathcal{R}},t\in[\mathcal{T}_{r},\mathcal{T}]\] (C6)
\[\sum_{\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{P}}}\mathcal{J}_{p,l}\,\widetilde{\mathcal{R}}_{r,p}^{t}\leq\widehat{\mathcal{L}}_{l}\quad\forall l,t\in\boldsymbol{\mathcal{L}},\boldsymbol{\mathcal{T}}\] (C7)
\[\tilde{\mathcal{L}}_{r}^{t}=\sum_{\boldsymbol{\mathcal{P}},\boldsymbol{\mathcal{L}}}\mathcal{J}_{p,l}\,\widetilde{\mathcal{R}}_{r,p}^{t}\,\overline{\mathcal{L}}_{l}\quad\forall r\in\boldsymbol{\mathcal{R}},t\in[\mathcal{T}_{r},\mathcal{T}]\] (C8)
## IV Proposed Solution

In the worst-case scenario, the complexity of solving this problem could increase to the extent of the solution space size [17]. To determine the optimal allocation for a given request, each node, instance, and path must be evaluated at least once. Since allocating resources for any request at any time slot affects and is affected by allocations for others, all possible sequences of requests and time slots must be considered, yielding a solution space of size \(\mathcal{R}!\mathcal{T}\mathcal{N}\mathcal{S}\mathcal{P}^{2}\). As a result, identifying the optimal solution for large-scale instances in a timely manner is impractical, even with all the necessary information available and a fully-aware environment. Further complicating the situation is the fact that in a continuously evolving network, not all the required information (such as the requests for future time slots and their PoAs) is accessible. To cope with this imperfect knowledge and still obtain a quality result in a timely manner, an approach named Water-fIlling of Service placEment (WISE) is proposed in Algorithm 1, consisting of two parts: prediction (steps 2-15) and orchestration (steps 16-38). These mechanisms iterate for each time slot.
The prediction mechanism focuses on determining the next PoA of each request to adjust the allocated resources apriori to maintain the continuity of service provision. To do so, each PoA (through steps 2 to 15) employs a Double Deep Q-Learning (DDQL) agent at each point in time wherein Recurrent Neural Networks (RNNs) are used to approximate the likelihood that each request \(r\) will be requested at the next time (Q values). The agent's state (\(\theta\)) is the vector of arrived requests to this PoA during the last \(m\) time slots, the action (\(\alpha\)) returns the list of \(z\) requests with the highest likelihood, and the reward (\(\rho\)) is the number of requests predicted correctly. Note that the action in each iteration is chosen by the \(\epsilon\)-greedy policy that follows the evaluation function of the corresponding agent with probability \((1-\epsilon)\) and chooses a random action with probability \(\epsilon\). During the training process, the probability decreases linearly from \(\epsilon\) to \(\widehat{\epsilon}\). Besides, to improve the efficiency, the observed transitions are stored in a memory bank (\(mem\)), and the neural network is updated by randomly sampling from this pool [18].
```
Input:\(\mathcal{T}\), \(\epsilon\), \(\epsilon^{\prime}\), \(\widehat{\epsilon}\), \(\theta_{0}\leftarrow\{\}\), and \(\alpha_{0}\leftarrow\{\}\)
1foreach\(\tau\) in \([1:\mathcal{T}]\)do
2 update \(\theta_{\tau}\) using the arrived requests at the PoA
3if\(\tau<m\)then
4\(\alpha_{\tau+1}\leftarrow\) select a set of \(z\) random services
5
6else
7\(\zeta\leftarrow\) generate a random number from \([0:1]\)
8if\(\zeta>\epsilon\)then
9\(\alpha_{\tau+1}\leftarrow\) select \(z\) services with top Q values
10else
11\(\alpha_{\tau+1}\leftarrow\) select a set of \(z\) random services
12 calculate \(\rho_{\tau}\)
13\(mem\gets mem\cup\{(\theta_{\tau-1},a_{\tau-1},\rho_{\tau},\theta_{\tau})\}\)
14choose a sample form \(mem\) and train the agent
15if\(\epsilon>\widehat{\epsilon}\)then
16\(\epsilon\leftarrow\epsilon-\epsilon^{\prime}\)
17\(\boldsymbol{\alpha}\leftarrow\) collect \(\alpha_{\tau+1}\) of all PoAs
18 convert \(\boldsymbol{\alpha}\) to a (Requests, PoAs) table
19while\(\mathcal{R}\) is not emptydo
20\(r\leftarrow\) the tightest E2E delay requirement request \(\boldsymbol{\eta}\leftarrow\) the set of PoAs requesting \(\mathcal{S}_{r}\)
21\(\mathcal{D}\leftarrow\infty\)foreach\(n_{1}\in\mathcal{N}\)do
22\(\mathcal{D}_{n_{1}}\gets 0\)foreach\(n_{2}\in\boldsymbol{\eta}\)do
23\(\mathcal{P}_{1}\leftarrow\) the set of paths from \(n_{1}\) to \(n_{2}\)
24\(p_{1}\gets p\in\mathcal{P}_{1}\) with the lowest delay
25\(\mathcal{P}_{2}\leftarrow\) the set of paths from \(n_{2}\) to \(n_{1}\)
26\(p_{2}\gets p\in\mathcal{P}_{2}\) with the lowest delay
27\(\mathcal{D}_{p}\leftarrow\) calculate delay + cost for \(p_{1}+p_{2}\)
28\(\mathcal{D}_{n_{1}}\leftarrow\mathcal{D}_{n_{1}}+\mathcal{D}_{p}+\overline{ \mathcal{C}}_{n_{1}}\)
29if\(\mathcal{D}_{n_{1}}<\mathcal{D}\)then
30\(n\gets n_{1}\), \(\overrightarrow{p}\gets p_{1}\), \(\overleftarrow\)\(p_{2}\)
31while\(n\) is feasibledo
32 Place a new instance \(i\) of \(\mathcal{S}_{r}\) on \(n\)
33while\(i\) is feasibledo
34for\(r^{\prime}\in\mathcal{R}\)do
35\(\mathcal{\tilde{A}}^{t}_{r^{\prime},i}\gets 1,\mathcal{\widetilde{R}}^{t}_{r^{ \prime},\overrightarrow{p}}\gets 1,\mathcal{\widetilde{R}}^{t}_{r^{\prime}, \overleftarrow}\gets 1\)
36 remove \(r^{\prime}\) from \(\mathcal{R}\)
```
**Algorithm 1**WISE
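A simplified sketch of one prediction step (steps 2-15) is given below; the RNN-based Q-network and its DDQL update are abstracted into the `q_values` argument, and all names are ours.

```python
import random
from collections import deque

class PoAPredictor:
    """One DDQL agent per PoA: predicts the z most likely services for the next slot."""

    def __init__(self, num_services, z, eps=1.0, eps_min=0.05, eps_step=1e-3, mem_size=10_000):
        self.num_services, self.z = num_services, z
        self.eps, self.eps_min, self.eps_step = eps, eps_min, eps_step
        self.memory = deque(maxlen=mem_size)  # replay buffer of (state, action, reward, next_state)

    def act(self, q_values):
        """Epsilon-greedy: top-z Q-values with probability 1 - eps, random z services otherwise."""
        if random.random() > self.eps:
            return sorted(range(self.num_services), key=lambda s: q_values[s], reverse=True)[: self.z]
        return random.sample(range(self.num_services), self.z)

    def observe(self, state, action, arrived_requests, next_state):
        reward = len(set(action) & set(arrived_requests))        # number of correct predictions
        self.memory.append((state, action, reward, next_state))  # store transition for replay
        self.eps = max(self.eps_min, self.eps - self.eps_step)   # linear epsilon decay
```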
The orchestration mechanism is dedicated to determining the most appropriate allocations for the predicted requests on available nodes, paths, and instances. After collecting the expected requests from all PoAs in a central controller, it is first necessary to transform them into a PoA requests table. The algorithm then proceeds to iterate through each request \(r\), beginning with the request with the most demanding time requirement. Then, a node is selected with the minimum overall delay and cost to all PoAs predicting to have requests demanding the same service as request \(r\). If the selected node is feasible in terms of E2E delay and computing capacity requirements, new instances will be located on it, and then requests with the same target service will be assigned to these instances. This operation will be continued till no more instances can be added to this node, so a new node will be selected based on the arrival point of the remaining requests, and the algorithm will be continued till all requests are investigated. Note that WISE has a worst-case complexity of \(O(\mathcal{TRN}^{2}\mathcal{P}^{2})\), since at each time, it investigates \(\mathcal{R}\) requests, and on each iteration, it checks inquiry and response paths between all nodes and the list of PoAs.
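The node-selection core of the orchestration part (steps 19-31) can be summarized by the sketch below, where `best_paths(n1, n2)` is assumed to return the lowest-delay inquiry and response paths between a candidate node and a PoA together with their combined delay-plus-cost score.

```python
def select_host_node(nodes, poas, best_paths, node_cost):
    """Pick the node with minimum accumulated delay + cost towards all requesting PoAs."""
    best_node, best_score, best_routes = None, float("inf"), {}
    for n1 in nodes:
        score, routes = node_cost[n1], {}
        for n2 in poas:
            p_in, p_out, delay_cost = best_paths(n1, n2)  # inquiry path, response path, delay + cost
            score += delay_cost
            routes[n2] = (p_in, p_out)
        if score < best_score:
            best_node, best_score, best_routes = n1, score, routes
    return best_node, best_routes
```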
## V Simulation Results
The purpose of this section is to examine the efficiency of the WISE method numerically by considering the cost of consumed nodes, instances, and links, as well as the E2E delay and the number of supported requests (as a metric for assessing service continuity). WISE is compared to various approaches: finding the optimal solution to the formulated problem using CPLEX, selecting instances and nodes randomly to meet requests, and implementing the service placement and discovery method described in [14] for Connected and Cooperative Autonomous Mobility (CCAM). The simulation parameters are enumerated in Table I. As long as the problem remains feasible, the remaining parameters can be selected flexibly. Due to the inherent variations in parameters such as \(\overline{\mathcal{L}}_{l}\) and \(\overline{\mathcal{C}}_{n}\), it is reasonable to expect fluctuations across the costs of all methods.
As part of our evaluation procedure, we alter the number of edge-cloud nodes and requests in the system to determine the effect of these changes on the provided solutions. Considering future applications (such as Internet of Things (IoT) use cases for building smart cities [19], the Metaverse multiverses [3], and UAV-based surveillance and delivery scenarios [20]) where a massive amount of real-time data with stringent QoS requirements must be collected and processed, the B5G infrastructure size is expected to increase, including a large number and vast variety of networking and computing resources integrated from edge to cloud. In addition, the system may experience sudden spikes in the number of active requests when these applications are fully realized and implemented. Therefore, it is beneficial to validate the algorithm with varying numbers of nodes and requests to ensure that the system is scalable and able to provide a satisfying user experience.
Figure 1 presents the results, depicting the variations in the cost and E2E delay of allocated resources, as well as the total number of unsupported requests with increasing numbers of nodes in (a) and requests in (b). Notably, even in the optimal solution, the cost and delay are subject to change due to multiple factors, including changes in PoAs over time; variation in nodes, instances, and link capacities; and shifts in the minimum required capacity and bandwidth for requests.
Furthermore, the reported numbers of unsupported requests are averaged over multiple runs of the system.
The sub-figures in Figure 1 illustrate the superior performance of the WISE approach compared to the CCAM method. Serving instances and requests for a single service with a single node is the primary drawback of the CCAM method. When the network contains a small number of nodes, i.e., when computing resources are closer to PoAs, the CCAM method performs adequately in terms of minimizing delay. However, the E2E delay increases when the number of nodes increases and high-capacity nodes are located far from entry points. In addition, it lacks in several areas, including the cost of path selection to reach particular nodes. Isolating the provisioning of each service to a single node in CCAM also results in an inability to handle all requests as the number of user requests (PoAs) grows.
Similarly, the random method is less efficient than WISE, regardless of the number of requests and nodes. This method involves randomly placing each instance on network nodes without taking into account the ever-changing nature of users; as a result, the number of supported requests is insufficient and service continuity deteriorates. Besides, this approach incurs high costs each time it is employed, and despite having a low delay with a small number of nodes, it frequently fails to fulfill requests. Note that only the delay of supported requests is considered in the delay sub-figures; thus, it is reasonable to observe samples where the WISE method, which supports all requests, exhibits longer delays than the random method, which does not.
In terms of service placement and resource allocation, WISE exhibits an average total cost exceeding 91% of the optimal, regardless of the size of the network, and a delay within the desired range for users' SLA. This indicates that the WISE algorithm can place services and allocate resources in a near-optimal manner, even in large networks. It is noteworthy that the average cost and delay remain significantly low regardless of the number of requests or the number of nodes. In spite of this, the delay increases slightly as the number of requests grows and the problem becomes more complicated due to the large number of nodes and links. Meanwhile, WISE is capable of timely responses and can place services and instances appropriately, while it is impossible to find the optimal solution to the ASCETIC problem in a reasonable amount of time. Accordingly, WISE demonstrates that it is efficient in placing services and allocating resources for large numbers of requests, is low in latency and cost, provides timely responses, and ensures service continuity compared to other approaches.
## VI Conclusion
In this study, we addressed the challenges of providing reliable and efficient service continuity in dynamic and ever-changing systems, particularly in the context of edge-cloud infrastructures for B5G networks. An MINLP problem of service placement and resource allocation in a network-cloud continuum environment, while accounting for capacity constraints, changing user behavior, and link and E2E delays, was first formulated with the objective of minimizing overall costs. Next, we proposed a water-filling-based algorithm empowered by a DDQL-based technique leveraging RNNs to solve the NP-hard problem. Simulation results demonstrated that our proposed approach is scalable, efficient, and reliable enough to be used in real-world use cases because it accommodates continuity of services when users move from one location to another and the placement of services that require extremely low latency. As a potential future direction, we plan to consider users with dynamic QoS requirements and resources with dynamic capacities and energy consumption over time.
## Acknowledgment
This research work is partially supported by the Business Finland 6Bridge 6Core project under Grant No. 8410/31/2022, the Research Council of Finland (former Academy of Finland) IDEA-MILL project under Grant No. 352428, the European Union's Horizon Europe research and innovation programme under the 6GSandbox project with Grant Agreement No. 101096328, and the Research Council of Finland 6G Flagship Programme under Grant No. 346208.
|
2305.19512 | Fine-grained Text Style Transfer with Diffusion-Based Language Models | Diffusion probabilistic models have shown great success in generating
high-quality images controllably, and researchers have tried to utilize this
controllability into text generation domain. Previous works on diffusion-based
language models have shown that they can be trained without external knowledge
(such as pre-trained weights) and still achieve stable performance and
controllability. In this paper, we trained a diffusion-based model on StylePTB
dataset, the standard benchmark for fine-grained text style transfers. The
tasks in StylePTB requires much more refined control over the output text
compared to tasks evaluated in previous works, and our model was able to
achieve state-of-the-art performance on StylePTB on both individual and
compositional transfers. Moreover, our model, trained on limited data from
StylePTB without external knowledge, outperforms previous works that utilized
pretrained weights, embeddings, and external grammar parsers, and this may
indicate that diffusion-based language models have great potential under
low-resource settings. | Yiwei Lyu, Tiange Luo, Jiacheng Shi, Todd C. Hollon, Honglak Lee | 2023-05-31T02:51:26Z | http://arxiv.org/abs/2305.19512v2 | # Fine-grained Text Style Transfer with Diffusion-Based Language Models
###### Abstract
Diffusion probabilistic models have shown great success in generating high-quality images controllably, and researchers have tried to utilize this controllability into text generation domain. Previous works on diffusion-based language models have shown that they can be trained without external knowledge (such as pre-trained weights) and still achieve stable performance and controllability. In this paper, we trained a diffusion-based model on StylePTB dataset, the standard benchmark for fine-grained text style transfers. The tasks in StylePTB require much more refined control over the output text compared to tasks evaluated in previous works, and our model was able to achieve state-of-the-art performance on StylePTB on both individual and compositional transfers. Moreover, our model, trained on limited data from StylePTB without external knowledge, outperforms previous works that utilized pretrained weights, embeddings, and external grammar parsers, and this may indicate that diffusion-based language models have great potential under low-resource settings. Our code is available at [https://github.com/lvyiwei1/DiffuSeq_StylePTB](https://github.com/lvyiwei1/DiffuSeq_StylePTB)
## 1 Introduction
Diffusion probabilistic models (Ho et al., 2020) have become the state-of-the-art technique in visual generative tasks. By starting from random Gaussian noise and gradually denoising, they are able to generate images that look realistic in detail. Moreover, conditional diffusion models such as stable diffusion (Rombach et al., 2022) are able to achieve detailed control over the generated output by conditioning on text, layouts, etc. The generated images are faithful to the text description or layouts, often to the finest details.
Analogically, researchers have tried to utilize the controllability of diffusion models to achieve more controllable language generation. For example, DiffuSeq (Gong et al., 2022) applies diffusion models to sequence-sequence text generation tasks such as paraphrasing, question generation and text simplification; Diffusion-LM (Li et al., 2022) combined diffusion models with language models to control language generation by specifying generation length, syntax tree, semantic context, etc. What made these diffusion-based language models impressive is that they are trained from scratch with zero external knowledge (i.e. no pre-trained word embeddings or model weights, no external grammar parsers, etc) and on very few data (on the order of \(10^{5}\) tokens) compared to any large language models (for example, GPT-3's (Brown et al., 2020) training data is on the order \(10^{11}\) tokens), so they have to learn representations at all levels (word embeddings, sentence structures, etc) from scratch with very limited data.
However, while the earlier tasks assessed on Diffusion-LM and DiffuSeq require a degree of control over the generated output, they are incapable of modifying the existing text to exhibit specific stylistic characteristics. In this paper, we would like to further examine the capabilities of diffusion-based language models on **fine-grained text style transfer**, an important task that requires more fine-grained control than the tasks from previous works on diffusion-based language modeling because it only allows changing the specified fine-grained stylistic properties of the input while leaving the rest unchanged. For example, "verb emphasis" is a fine-grained style transfer that requires the model to rewrite the sentence emphasizing a certain verb, without changing any other information that the original sentence conveys. In comparison, previous evaluation tasks such as controlling sequence length, semantic context, etc essentially control one aspect at a time and require no control over any other properties of generated text.
We use 13 non-lexical transfers from StylePTB (Lyu et al., 2021) dataset, where
there are at most a few thousand sentence pairs available for each transfer, as shown in Table 1. Since identifying the grammatical structure of the sentence can be very helpful for most of these transfers (such as active-to-passive), some previous methods (such as Neural QCFG (Kim, 2021)) utilizes external grammar parsers to gain such information. We trained a diffusion-based model on StylePTB data without any pre-trained weights or external grammar parsers. Therefore, our model has to start from zero grammar/linguistic knowledge and learn all of them from very limited training data (StylePTB only has 7719 sentences from Penn Tree Bank (Marcus et al., 1993) plus their transferred outputs). Even under these hard conditions, our model still managed to outperform previous works that do utilize external weights or grammar parsers. Moreover, we also evaluate the capabilities of diffusion-based language models on performing multiple transfers using one single model and composing multiple learned transfers on a single sentence. We list our contributions as follows:
* We trained a diffusion-based language model (adapted from DiffuSeq (Gong et al., 2022)) that can perform fine-grained text style transfer from scratch with very limited training data and no external weights or tools. The model also supports multitasking and composing multiple fine-grained transfers.
* Our model achieves state-of-the-art performance on fine-grained text style transfers in StylePTB. Our multitask model (i.e. one single model that can perform all 13 transfers) achieves **best performance** compared to previous works on the same tasks on **88 out of 91** metrics (7 metrics per transfer), and gets very close to human performance on tasks with easy and medium difficulties. We also evaluated our model on composition of multiple fine-grained transfers, and we achieved best performance on these tasks as well.
* Through the evaluations, we demonstrated the extraordinary capabilities of diffusion-based language models in asserting extremely fine-grained control over generated text, and that this type of language model has great potential in controllable natural language generation under low-resource settings, as it is able to achieve state-of-the-art performance with
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Aspect & Transfer & Original Sentence & Additional Information & Transformed Sentence & Pairs in StylePTB \\ \hline \multirow{7}{*}{Syntax} & To Future Tense & She travels to Paris every summer to visit her family. & & She **will** travel to Paris next summer to visit her family. & 7272 \\ & To Present Tense & She had been studying architecture for five years. & & She **has** been studying architecture for five years. & 4365 \\ & To Past Tense & He walks to the store every day. & & He **walked** to the store every day. & 4422 \\ & Active to Passive & The cat chased the mouse. & & The mouse **was chased** by the cat. & 2808 \\ & Passive to Active & The proposal was approved by the committee yesterday. & & The committee **approved** the proposal yesterday. & 2808 \\ & PP Front to Back & Having watched the movie, they left the theater. & & They left the theater after **having watched the movie**. & 467 \\ & PP Back to Front & They have been planning their vacation for months. & & **For months**, they have been planning their vacation. & 467 \\ \hline \multirow{4}{*}{Semantic} & ADJ/ADV Removal & The **extremely talented** musician played a **beautiful** melody on the piano. & & The musician played a melody on the piano. & 4863 \\ & PP Removal & She had been studying **for hours** before taking the test. & & She had been studying before taking the test. & 4767 \\ & Substatement Removal & He was unhappy **that he had failed the exam**. & & He was unhappy. & 1345 \\ & Information Addition & The stock was up three percent according to the man. & "man", "lazy" & The stock was up three percent according to the **lazy** man. & 2114 \\ \hline \multirow{2}{*}{Thematics} & Verb/Action Emphasis & She reads books as a pastime. & "read" & **Reading** books is her favorite pastime. & 1201 \\ & Adjective Emphasis & The **scenic** forest is Michele's favorite. & "scenic" & Michele's favorite forest is **scenic**. & 696 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The 13 non-lexical fine-grained text style transfers from the StylePTB dataset (Lyu et al., 2021). We present one example sentence pair before/after each transfer, as well as the total number of sentence pairs available for each transfer in StylePTB. As we can see, the transfers require changing one specific stylistic aspect of the sentence while leaving all other aspects unchanged, and the amount of data available for training is limited (compared to the typical amount of data required to train large language models nowadays).
limited training data and no external knowledge.
## 2 Backgrounds
### Fine-grained Text Style Transfer and StylePTB
An important challenge for AI is to convey intentions using different stylistic attributes, and automated text style transfer is an essential step towards this goal. Text style transfer aims to controllably convert source text with targeted stylistic properties, with important applications in human-AI interactions including dialog systems Celikyilmaz et al. (2018) and intelligent agents Kim et al. (2013); Liang et al. (2020); Pittermann et al. (2010) that can communicate with specific text styles for different situations, target audiences, and environments Lample et al. (2019); Li et al. (2018).
There has been extensive research on high-level style transfers such as sentiment transfers Shen et al. (2017) and formality transfers Rao and Tetreault (2018). However, high-level style transfers lack the ability to fully control the style of the output. For example, there are many ways to convert a positive comment about a restaurant into a negative one, and high-level text style transfers do not allow control over which of the possible outputs (that may have different styles in non-sentiment aspects) can be generated. Fine-grained text style transfer is important because they allow fine-grained control over the generated output. Lyu et al. (2021) defined a set of fine-grained text style transfer along four linguistic axis:
* **Lexical Transfers:** Word changes
* **Syntax Transfers:** Grammar and sentence structure changes
* **Semantic Transfers:** Meaning changes
* **Thematic Transfers:** Situational changes or word emphasis
Along these 4 axes, it defined 21 individual fine-grained transfers, 13 of which are non-lexical. Examples of the non-lexical transfers are shown in Table 1. Compared to other forms of controllable text generation, fine-grained text style transfer has the advantage of being able to assert control over text generated by uncontrollable models. For example, we can use fine-grained text style transfers to add specific stylistic properties to free-form text generated by large language models while keeping the content of the generated text unchanged. Fine-grained text style transfers can be composed to achieve higher-level style transfers, and they even have the potential to mitigate social bias in large text generation models Lyu et al. (2021). Therefore, it is important to develop techniques to achieve automated fine-grained text style transfer. Existing works are still quite far from perfect on a lot of the fine-grained style transfers compared to human performance Lyu et al. (2021); Kim (2021), and composing multiple fine-grained style transfers remains challenging.
### Diffusion Probabilistic Models
Recently, diffusion models Ho et al. (2020) have been widely used to generate high-quality and diverse images. Their methodology consists of two phases: the first is the forward diffusion phase, which adds Gaussian noise to the input image \(x_{0}\) as the time stamp increases, until after enough steps the image is reduced to pure Gaussian noise \(x_{t}\). The second is the recovery phase, in which a model is trained to gradually remove noise from \(x_{t}\) until it recovers the original image \(x_{0}\). During inference, we start from randomly sampled Gaussian noise \(x_{t}\) and use the denoising model to gradually infer an image \(x_{0}\).
Diffusion-based language generation models follow a similar approach, where we perform the diffusion and denoising process in the token embedding space. We will explain the model we use, which is built upon DiffuSeq Gong et al. (2022), in detail in the next section.
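For reference, the forward phase admits the usual closed form from Ho et al. (2020): with a noise schedule \(\beta_{1},\ldots,\beta_{T}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}(1-\beta_{s})\), a noisy sample at step \(t\) can be drawn directly from \(x_{0}\):

\[q(x_{t}\mid x_{0})=\mathcal{N}\!\left(x_{t};\,\sqrt{\bar{\alpha}_{t}}\,x_{0},\,(1-\bar{\alpha}_{t})\mathbf{I}\right),\qquad x_{t}=\sqrt{\bar{\alpha}_{t}}\,x_{0}+\sqrt{1-\bar{\alpha}_{t}}\,\epsilon,\;\;\epsilon\sim\mathcal{N}(0,\mathbf{I}).\]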
## 3 Methodology
We adapt DiffuSeq Gong et al. (2022) to be able to perform fine-grained text style transfer given a source sentence and specified transfer operation(s), as illustrated in Figure 1. We model the transfer as a conditional generation process, where the condition includes the source sentence and the specified transfer operation(s). We first define a set of special style tokens, one for each possible individual fine-grained transfer. If we wish to perform one or more transfers on the source sentence, we will prepend the corresponding special token(s) to the beginning of the source sentence to form the condition \(S\).
We use the BERT tokenizer to tokenize the input into discrete token ids, and adopt a token embedding layer to encode both the source (including prepended style tokens) and the ground truth target sentence (during training) to obtain the embedded source \(Z^{S}\) and target \(Z^{TRG}_{0}\). For the diffusion process, we use a transformer model to recover the target embedding. Both the diffusion transformer and the token embeddings are initialized randomly and jointly optimized. In other words, our model does not rely on any prior knowledge about our task or the English language in general.
We use the simplified diffusion objective during training: for each input \((S,TRG)\) where \(S\) is the source sentence (with style tokens) and \(TRG\) is the ground truth target sentence, we randomly sample a step number \(t\) from \(1,2,...T\), where \(T\) is the maximum number of steps, and add \(t\) steps of random Gaussian noise to \(Z_{0}^{TRG}\) following a linear diffusion schedule to obtain \(Z_{t}^{TRG}\). We then concatenate \(Z^{S}\) and \(Z_{t}^{TRG}\) and input the concatenated sequence into our diffusion transformer, where we only take the output embeddings at the locations corresponding to \(Z_{t}^{TRG}\) as the prediction \(\hat{Z}_{0}^{TRG}\). Our training objective is simply the MSE loss between \(\hat{Z}_{0}^{TRG}\) and \(Z_{0}^{TRG}\).
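A minimal PyTorch-style sketch of this training step is shown below; the transformer call signature and the schedule endpoints are illustrative assumptions, not the exact DiffuSeq hyperparameters.

```python
import torch
import torch.nn.functional as F

def training_step(model, embed, src_ids, trg_ids, T=2000, betas=None):
    """One simplified diffusion training step: noise the target embeddings and regress Z_0."""
    if betas is None:
        betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule (illustrative values)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    z_src = embed(src_ids)                              # Z^S: style tokens + source sentence
    z0 = embed(trg_ids)                                 # Z_0^TRG: ground-truth target embeddings

    t = torch.randint(0, T, (1,)).item()                # sample a diffusion step
    noise = torch.randn_like(z0)
    zt = alpha_bar[t].sqrt() * z0 + (1 - alpha_bar[t]).sqrt() * noise   # Z_t^TRG

    x = torch.cat([z_src, zt], dim=1)                   # concatenate along the sequence axis
    z0_hat = model(x, t)[:, z_src.size(1):, :]          # keep outputs at the target positions only
    return F.mse_loss(z0_hat, z0)                       # simplified diffusion objective
```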
During inference, we randomly initialize \(Z_{T}^{TRG}\sim N(0,1)\), and encode the condition (source sentence and style tokens) into \(Z^{S}\). Then we concatenate them and use our transformer to predict a temporary \(Z_{0temp}^{TRG}\), add \(T-1\) steps of noise back to the temporary \(Z_{0temp}^{TRG}\) to obtain \(Z_{T-1}^{TRG}\). We repeat this process until we get \(Z_{0}^{TRG}\). For each embedding in \(Z_{0}^{TRG}\), we find the closest embedding in our token embedding layer by cosine distance, and decode the embedding to that token. Then we combine the tokens to form the output sentence in natural language.
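Correspondingly, inference alternates prediction and re-noising before decoding by nearest token embedding; a sketch with the same assumed helpers is:

```python
import torch

@torch.no_grad()
def generate(model, embed, token_embeddings, src_ids, trg_len, T=2000, betas=None):
    """Reverse diffusion: recover Z_0^TRG from noise, then decode by nearest token embedding."""
    if betas is None:
        betas = torch.linspace(1e-4, 0.02, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    z_src = embed(src_ids)
    zt = torch.randn(z_src.size(0), trg_len, z_src.size(-1))   # Z_T^TRG ~ N(0, I)
    for t in reversed(range(T)):                               # 0-indexed noise levels
        x = torch.cat([z_src, zt], dim=1)
        z0_tmp = model(x, t)[:, z_src.size(1):, :]             # temporary Z_0 prediction
        if t > 0:                                              # re-noise to the previous level
            noise = torch.randn_like(z0_tmp)
            zt = alpha_bar[t - 1].sqrt() * z0_tmp + (1 - alpha_bar[t - 1]).sqrt() * noise
        else:
            zt = z0_tmp

    # Decode each position to the token whose embedding is closest in cosine distance.
    z = torch.nn.functional.normalize(zt, dim=-1)
    vocab = torch.nn.functional.normalize(token_embeddings, dim=-1)
    return (z @ vocab.T).argmax(dim=-1)
```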
## 4 Experiments
### Dataset
StylePTB (Lyu et al., 2021) contains paired sentences before/after each transfer for 21 fine-grained transfers, as well as paired data for compositions of multiple fine-grained transfers. For single transfers, we will focus on the 13 non-lexical fine-grained style transfers following (Lyu et al., 2021). The number of sentence pairs available from StylePTB for each transfer and examples of sentences before/after each transfer are shown in Table 1. For compositional transfers, we will use the Tense + Voice and Tense + PP Removal transfers from the compositional part of StylePTB dataset (same as the ones used for evaluation in (Lyu et al., 2021)). Each compositional dataset contains all combinations of valid transfers (for example, Tense + Voice dataset contains all valid combinations of 0/1/2 transfers regarding tense and voice, such as To-Future + Active-To-Passive or To-Past + No-Voice-Change).
StylePTB was built with only 7719 different sentences from the Penn Treebank (Marcus et al., 1993) plus their stylistic variations, so both the amount and the diversity of training data are very limited, thus making this task even more challenging for DiffuSeq since it does not have access to external knowledge or pre-trained weights and has to extract all linguistic knowledge from limited data.
Figure 1: An illustration of the training and inference process of our diffusion-based language model. The diffusion process is performed over the sequence of token embeddings of the target sentence \(Z_{0}^{TRG}\), and the source sentence’s token embeddings (\(Z^{S}\)) are concatenated before \(Z^{TRG}\). During the backward diffusion process, the combined sequence is fed into the transformer model to gradually recover/generate \(Z_{0}^{TRG}\).
For fair comparison, we preprocess the data following the same criteria as (Lyu et al., 2021): we replace numbers with the NUM token, and we replace each word that occurs fewer than 3 times in the training set with the UNK token. We also split the data into train/valid/test splits with proportions of 0.9/0.05/0.05, using the same splits as all previous works.
### Evaluation Metrics
We use the same evaluation methods as (Lyu et al., 2021) and report 7 metrics from the nlg-eval package (Sharma et al., 2017) (BLEU 1-4, METEOR, ROUGE-L, CIDEr) between the generated transferred sentence and the ground truth target sentence from the dataset.
### Single style transfer experiment
#### 4.3.1 Baselines
We report performance of the following baselines for single style transfer:
1. **GPT-2**: Directly finetuning GPT-2 medium model (Radford et al., 2019) with paired data. Performance reported from (Lyu et al., 2021).
2. **Seq2Seq**: GRU sequence-to-sequence language model (Sutskever et al., 2014) with attention. Performance reported from (Lyu et al., 2021).
3. **RetrieveEdit (Hashimoto et al., 2018)**: For an input \(x\), a retriever model searches the training set for a similar sentence pair \((x^{\prime},y^{\prime})\), and a trained editor edits \(y^{\prime}\) into the desired output \(y\). Performance reported from (Lyu et al., 2021).
4. **Steering Vector (Subramani et al., 2022)**: Extracts steering vectors directly from pre-trained LMs to guide generation.
5. **TAILOR (Ross et al., 2021)**: Outputs sentences conditioned on control codes using a pre-trained seq2seq model.
6. **Neural QCFG (Kim, 2021)**: Sequence-to-sequence learning that explicitly models the alignment between target trees and the source.
7. **Neural QCFG + copy (Kim, 2021)**: Neural QCFG with an option to copy certain tokens from the source sentence.
Among these baselines, **GPT-2**, **Steering Vector** and **TAILOR** use pre-trained language models, **Neural QCFG** and **Neural QCFG + copy** require external grammar parsers, and **RetrieveEdit** uses GloVe word embeddings.
We also included **Human** performance on these tasks (reported in (Lyu et al., 2021) by asking human annotators to manually perform the style transfer tasks) for comparison.
#### 4.3.2 Results and Analysis
For single style transfers, we tried two different diffusion-based approaches: (1) we train a separate diffusion model for each individual style transfer, and (2) we train one diffusion model for all 13 transfers evaluated. For approach (2), we add a style token at the beginning of the input sentence to indicate which of the 13 transfers needs to be performed. We call approach (2) DiffuSeq Multitask.
The original StylePTB paper (Lyu et al., 2021) puts the non-lexical transfers into 3 difficulty categories (easy, medium, hard) by the average Hamming distance between the input and output of the transfer. We report the results of our experiment using the same categorization, where we show results on easy and medium transfers in Table 2 and hard transfers in Table 3.
Surprisingly, DiffuSeq Multitask outperforms DiffuSeq on all transfers, even though DiffuSeq Multitask has to handle 13 different transfers in one model while each DiffuSeq model only needs to handle a single transfer. This is possibly because, with the additional training data from all of the tasks, the multitask model learns better representations for words and sentences and gains more accurate knowledge of English grammatical patterns, which are shared across all tasks.
Moreover, DiffuSeq Multitask significantly outperforms all baselines on all easy and medium transfers, and also achieves state-of-the-art results on most metrics on hard transfers, only falling slightly behind Neural QCFG + copy on some metrics. This is notable considering that our approach leverages no external knowledge, while all baselines except Seq2Seq utilize either pretrained language models, pretrained word embeddings, or an external grammar tree parser. Neural-QCFG-based methods are especially dependent on external linguistic knowledge and existing grammar parsers. DiffuSeq Multitask's performance is also on par with human performance on easy and medium transfers, indicating that DiffuSeq Multitask is close to fully solving the easy and medium difficulty transfers.
Table 2: Evaluation results on easy and medium transfers.
### Compositional style transfer experiment
#### 4.4.1 Baselines
We will report performance of the following baselines for compositional fine-grained style transfers:
1. **SeqGPT**: Sequentially applying fine-tuned GPT-2 for each single style transfer. Performance reported from (Lyu et al., 2021).
2. **CS-GPT**: A modified GPT-2 model that takes in style tokens as indication of which style transfers to apply. Performance reported from (Lyu et al., 2021).
#### 4.4.2 Results and Analysis
For compositions of multiple fine-grained style transfers, we train one single DiffuSeq model to handle all compositions and use style tokens to indicate which transfers to compose for the input sentence, similar to CS-GPT (Lyu et al., 2021). The results are shown in Table 4. DiffuSeq significantly outperforms baselines in all tasks and all metrics. Therefore, not only does our diffusion model work well for single fine-grained style transfers, it also works well for compositions of multiple fine-grained style transfers.
## 5 Related Works
### Automated Text Style Transfer
The goal of the text style transfer (TST) task is to change the style of a sentence while retaining its style-independent content. Previous works in TST include the following approaches: statistical NLP methods (Hovy, 1987; Xu et al., 2012), neural generative models (Prabhumoye et al., 2018; Lample et al., 2019; He et al., 2020), Retrieve-and-Edit approaches (Li et al., 2018; Hashimoto et al., 2018; Guu et al., 2018; Sudhakar et al., 2019; Madaan et al., 2020), and Transformer-based approaches (Lyu et al., 2021). Some of these methods can already achieve high performance on certain high-level transfers (such as sentiment transfers (Shen et al., 2017) and formality transfers (Rao and Tetreault, 2018)), but fine-grained text style transfer remains challenging for the above approaches (Lyu et al., 2021).
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline
**Hard** Transfers & Baseline Model & BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 & METERO & ROUGE\_L & CIDER \\ \hline \multirow{8}{*}{Active To Passive} & GPT2 & 0.476 & 0.329 & 0.238 & 0.189 & 0.216 & 0.464 & 1.820 \\ & Seq2seq & 0.373 & 0.220 & 0.141 & 0.103 & 0.131 & 0.345 & 0.845 \\ & RetrieveEdit & 0.681 & 0.598 & 0.503 & 0.427 & 0.383 & 0.663 & 4.535 \\ & Steering Vector & 0.666 & - & - & - & - & - & - \\ & TAILOR & 0.556 & - & - & - & - & - & - \\ & Neural QCFG & 0.431 & 0.637 & 0.548 & 0.472 & 0.415 & 0.695 & 4.294 \\ & Neural QCFG + copy & 0.836 & 0.771 & 0.713 & 0.662 & 0.499 & 0.803 & 6.410 \\ & DiffuSeq & 0.839 & 0.580 & 0.302 & 0.196 & 0.225 & 0.512 & 2.344 \\ & DiffuSeq MultiTask & **0.918** & **0.835** & **0.752** & **0.681** & **0.521** & **0.844** & **6.913** \\ \cline{2-8} & Human & 0.931 & 0.881 & 0.835 & 0.795 & 0.587 & 0.905 & 8.603 \\ \hline \multirow{8}{*}{Passive To Active} & GPT2 & 0.433 & 0.271 & 0.167 & 0.120 & 0.191 & 0.434 & 1.329 \\ & Seq2seq & 0.339 & 0.214 & 0.160 & 0.132 & 0.126 & 0.331 & 1.062 \\ & RetrieveEdit & 0.714 & 0.659 & 0.559 & 0.474 & 0.397 & 0.732 & 5.024 \\ & Steering Vector & 0.574 & - & - & - & - & - & - \\ & DiffuSeq & 0.829 & 0.550 & 0.282 & 0.192 & 0.205 & 0.502 & 2.224 \\ & DiffuSeq MultiTask & **0.955** & **0.896** & **0.834** & **0.777** & **0.555** & **0.913** & **8.028** \\ \cline{2-8} & Human & 0.977 & 0.962 & 0.942 & 0.919 & 0.685 & 0.973 & 9.409 \\ \hline \multirow{8}{*}{Adjective Emphasis} & GPT2 & 0.263 & 0.079 & 0.028 & 0.000 & 0.112 & 0.188 & 0.386 \\ & Seq2seq & 0.187 & 0.058 & 0.018 & 0.000 & 0.059 & 0.179 & 0.141 \\ & RetrieveEdit & 0.387 & 0.276 & 0.211 & 0.164 & 0.193 & 0.369 & 1.679 \\ & Steering Vector & 0.774 & - & - & - & - & - & - \\ & Neural QCFG & 0.348 & 0.178 & 0.062 & 0.000 & 0.162 & 0.317 & 0.667 \\ & Neural QCFG + copy & 0.676 & 0.506 & 0.393 & 0.316 & 0.373 & 0.683 & 3.424 \\ & DiffuSeq & 0.620 & 0.382 & 0.215 & 0.152 & 0.243 & 0.335 & 2.231 \\ & DiffuSeq MultiTask & **0.775** & **0.600** & **0.477** & **0.386** & **0.423** & **0.673** & **4.007** \\ \cline{2-8} & Human & 0.834 & 0.753 & 0.679 & 0.611 & 0.522 & 0.811 & 6.796 \\ \hline \multirow{8}{*}{Verb/Action Emphasis} & GPT2 & 0.309 & 0.170 & 0.095 & 0.041 & 0.140 & 0.292 & 0.593 \\ & Seq2seq & 0.289 & 0.127 & 0.066 & 0.038 & 0.098 & 0.275 & 0.300 \\ & RetrieveEdit & 0.416 & 0.284 & 0.209 & 0.148 & 0.223 & 0.423 & 1.778 \\ \cline{1-1} & Steering Vector & 0.548 & - & - & - & - & - & - \\ \cline{1-1} & Neural QCFG & 0.431 & 0.250 & 0.14 & 0.073 & 0.219 & 0.408 & 1.097 \\ \cline{1-1} & Neural QCFG + copy & 0.664 & 0.512 & **0.407** & **0.319** & 0.370 & 0.589 & **3.227** \\ \cline{1-1} & DiffuSeq & 0.453 & 0.210 & 0.101 & 0.054 & 0.205 & 0.379 & 0.785 \\ \cline{1-1} & DiffuSeq MultiTask & **0.693** & **0.516** & 0.370 & 0.261 & **0.373** & **0.596** & 2.950 \\ \cline{1-1} & Human & 0.649 & 0.569 & 0.493 & 0.421 & 0.433 & 0.693 & 5.668 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation results on hard transfers. DiffuSeq Multitask achieves State-of-the-art performance on most metrics, and is only slightly behind Neural QCFG + copy on some metrics.
In this paper, we explored a new approach for fine-grained TST utilizing diffusion models.
### Natural language processing with diffusion models
There have been two approaches for applying diffusion models to text data: the first approach uses the diffusion model in the continuous domain, like Diffusion-LM Li et al. (2022) and DiffuSeq Gong et al. (2022), where we start from a Gaussian noise vector and gradually denoise it into the desired sentence; the second approach applies diffusion models in discrete state spaces, like Multinomial Diffusion Hoogeboom et al. (2021), D3PMs Austin et al. (2021), and DiffusionBERT Austin et al. (2021). In this paper, we chose to build upon the first type of model, because such models are closer to the original diffusion models for images (where diffusion happens in continuous space) and they have shown success on tasks that require control over generation.
## 6 Limitations and Future works
One significant limitation of our work is that we only explored the capabilities of diffusion-based language models under a challenging circumstance where the model is not allowed to use pre-trained weights or grammar parsers, which means we did not utilize this kind of model to its full potential. A future research direction could therefore be to explore ways of further improving the model's performance by leveraging pre-trained weights or word embeddings, and to train with enough data to realize the full potential of these models.
Another limitation of our work is that we only explored one typical diffusion-based language model, so our conclusions may not generalize to other types of diffusion-based language models (such as ones that use discrete state spaces). We also conducted all experiments using the exact same model architecture design. In the future, we plan to experiment with different architectures for the diffusion model, such as more sophisticated conditioning methods (currently we simply concatenate the source to the target, but we would like to try other ways of conditioning on the source, such as cross attention, as these conditioning methods for diffusion models have shown promising performance in the image generation domain).
Lastly, we found that diffusion-based language models work well with limited data and no external knowledge or pre-trained weights, so these models may have great potential under low-resource settings. However, we did not apply them to any real low-resource settings (such as low-resource languages or rare domains) in this paper, and we would like to do so in the future to explore the full potential of diffusion-based language models.
## 7 Conclusions
In this paper, we explored the capabilities of diffusion-based models on fine-grained text style transfer, a task that requires a high level of control over generated text, with no external knowledge or pre-trained weights and with very limited training data. Our diffusion-based language model, which builds upon DiffuSeq Gong et al. (2022), achieves state-of-the-art performance on all transfers as well as compositions of transfers, outperforming all previous works on this dataset, including ones that use pre-trained weights, word embeddings, and external grammar parsers. It is even on par with human performance on many transfers. Therefore, our model is a great step towards solving automated fine-grained text style transfer.
Moreover, our work, together with previous works such as Diffusion-LM Li et al. (2022), demonstrates that diffusion-based language models could have great potential for controllable text generation under low-resource settings. Under such settings (such as rarely spoken languages or uncommon tasks), it is difficult to find existing large language models or pre-trained weights, and the available training data will likely be very limited, so most approaches based on finetuning existing models or training on large amounts of data will not work well; diffusion-based language models could be an alternative to consider.
## Acknowledgement
This work is supported in part by grants from NSF IIS 1453651, NIH K12 NS080223, Cook Family Brain Tumor Research Fund, Mark Trauner Brain Research Fund: Zenkel Family Foundation, Ian's Friends Foundation, and the Investigators Awards grant program of Precision Health at the University of Michigan. Any opinions, findings, conclusions, or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of the NSF, NIH, Cook Family Brain Tumor Research Fund, Mark Trauner Brain Research Fund: Zenkel Family Foundation, Ian's Friends Foundation, or Precision Health at the University of Michigan. We are grateful to the reviewers for their helpful review and feedback.
|
2309.07566 | Speech-to-Speech Translation with Discrete-Unit-Based Style Transfer | Direct speech-to-speech translation (S2ST) with discrete self-supervised
representations has achieved remarkable accuracy, but is unable to preserve the
speaker timbre of the source speech. Meanwhile, the scarcity of high-quality
speaker-parallel data poses a challenge for learning style transfer during
translation. We design an S2ST pipeline with style-transfer capability on the
basis of discrete self-supervised speech representations and codec units. The
acoustic language model we introduce for style transfer leverages
self-supervised in-context learning, acquiring style transfer ability without
relying on any speaker-parallel data, thereby overcoming data scarcity. By
using extensive training data, our model achieves zero-shot cross-lingual style
transfer on previously unseen source languages. Experiments show that our model
generates translated speeches with high fidelity and speaker similarity. Audio
samples are available at http://stylelm.github.io/ . | Yongqi Wang, Jionghao Bai, Rongjie Huang, Ruiqi Li, Zhiqing Hong, Zhou Zhao | 2023-09-14T09:52:08Z | http://arxiv.org/abs/2309.07566v2 | # Speech-to-speech translation with discrete-unit-based style transfer
###### Abstract
Direct speech-to-speech translation (S2ST) with discrete self-supervised representations has achieved remarkable accuracy, but is unable to preserve the speaker timbre of the source speech during translation. Meanwhile, the scarcity of high-quality speaker-parallel data poses a challenge for learning style transfer between source and target speech. We propose an S2ST framework with an acoustic language model based on discrete units from a self-supervised model and a neural codec for style transfer. The acoustic language model leverages self-supervised in-context learning, acquiring the ability for style transfer without relying on any speaker-parallel data, thereby overcoming the issue of data scarcity. By using extensive training data, our model achieves zero-shot cross-lingual style transfer on previously unseen source languages. Experiments show that our model generates translated speeches with high fidelity and style similarity. Audio samples are available at [http://stylelm.github.io/](http://stylelm.github.io/).
Yongqi Wang, Jionghao Bai, Rongjie Huang, Ruiqi Li, Zhiqing Hong, Zhou Zhao (Zhejiang University, China)

Index terms: direct speech-to-speech translation, style transfer, spoken language model
## 1 Introduction
Speech-to-speech translation (S2ST) aims to translate spoken utterances in one language into corresponding ones in another language, which can bring immense convenience to communication between speakers of different languages. Conventional S2ST systems employ pipelines comprising automatic speech recognition (ASR), machine translation (MT), or speech-to-text translation (S2T), followed by text-to-speech synthesis (TTS). More recent research has been exploring direct S2ST without intermediate text generation, which has a more concise pipeline, reducing computation cost and error propagation, and facilitating application to unwritten languages.
Recent mainstream approaches to direct S2ST [1, 2, 3, 4] utilize discrete representations of speech from self-supervised models (such as HuBERT [5]) as the prediction target, and then use them to reconstruct the waveform. Such representations eliminate the speaker identity and prosody of the speech and retain only semantic content, which simplifies the target distribution and makes the translation less challenging. However, this also leads to the drawback of losing the style information of the source speech. Extra voice conversion systems are needed if users want to keep the source speaker timbre, which may bring in additional quality degradation and raise the cost and complexity of the application.
Some works propose direct S2ST with style transfer [6, 7]. These methods depend on paired data in which the source and target speech come from the same speaker. This can be a shortcoming since such data from the real world is extremely scarce, as it requires a large number of multilingual speakers. Simulated data generated by multilingual TTS systems, which is adopted by these works, also brings in the extra cost of data collection, and its style diversity does not match that of real-world speech due to limitations of TTS systems.
Inspired by recent progress in spoken language models [8, 9, 10], we propose a novel approach for direct S2ST that has the ability of cross-lingual style transfer and does not rely on any speaker-parallel data. We utilize two types of discrete representations, namely semantic and acoustic units, from a self-supervised speech model and a neural codec, separately. Our method encompasses the following stages: 1) speech-to-semantic-unit translation, which translates source speech to target semantic units; 2) acoustic unit modeling, which generates target acoustic units from translated semantic units using style information in the source speech; and 3) unit-to-wave generation, which reconstructs a high-fidelity waveform of the target speech from the acoustic units. Our design decomposes the translation of linguistic content and the transfer of style characteristics, and finally generates target speeches with accurate content and rich acoustic properties.
For the acoustic unit modeling stage, we introduce an acoustic language model based on discrete units. It employs a self-supervised training approach and learns style transfer through in-context learning from the sequence formed by semantic and acoustic units, without relying on any speaker-parallel data, and thus addresses the issue of data scarcity. By utilizing extensive self-supervised training data, our model achieves zero-shot cross-lingual style transfer with source languages that are not included in training. Experiments show that our model generates results with superior audio quality and style similarity while maintaining accurate content.
Our contributions can be summarized as follows:
* We propose a novel approach for speech-to-speech translation with style transfer. With the utilization of extensive training data, our model possesses cross-lingual style transfer capability even on previously unseen source languages.
* By employing self-supervised training, our model can obtain style transfer ability without the need for any speaker-parallel data, thus addressing the issue of data scarcity.
* Experiments show that our method generates high-quality translated speeches while maintaining high style similarity to the source speech.
## 2 Method
The overall inference pipeline of our method is illustrated in Fig.1 (a). Our method comprises three consecutive stages, utilizing two distinct types of discrete units: 1) speech-to-semantic-unit translation stage \(S_{1}\), which converts source audio into semantic units of the target speech; 2) acoustic unit modeling stage \(S_{2}\), generating target acoustic units conditioned on the semantic output from the preceding stage and the acoustic units of the source speech as style prompt; 3) unit-to-wave generation stage \(S_{3}\), producing target audio that maintains consistent style with the source. We provide details about these two types of units and the three stages in the following subsections.
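The following compact sketch summarizes this inference pipeline; all function names are placeholders for the stage models described below, not an actual API.

```python
# Schematic three-stage inference pipeline (placeholder function names).
def translate_with_style(source_wave):
    target_semantic = s2ut_model(source_wave)                     # S1: speech-to-semantic-unit translation
    style_prompt = codec_encode(source_wave)                      # acoustic units of the source as style prompt
    target_acoustic = acoustic_lm(style_prompt, target_semantic)  # S2: acoustic unit modeling
    return unit_vocoder(target_acoustic)                          # S3: unit-to-wave generation
```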
### Semantic and Acoustic Units
Discrete HuBERT [5] units obtained from the clustering of self-supervised speech representations have been shown [2, 3] to be effective in providing semantic content information and are adopted in S2ST as the prediction target [1, 2, 3, 4]. HuBERT encodes the target speech into continuous representations with a frame length of 20 ms, and these representations are then discretized with the k-means algorithm. Through this method, given a spoken utterance \(y\), we can obtain its semantic unit sequence \(\mathbf{s}=[s_{1},s_{2},...,s_{T}],s_{i}\in\{0,1,...,K_{s}-1\},\forall 1\leq i\leq T\), with \(T\) and \(K_{s}\) being the number of frames and clusters.
On the other hand, audio codec models with encoder-decoder architecture such as SoundStream [11] have recently shown outstanding performance in learning acoustic information. Such a codec model can produce discrete representations of audio by employing a convolutional encoder followed by a residual vector quantizer, and these representations can be used to reconstruct waveforms with the corresponding decoder. Given an audio \(y\), we can get its acoustic unit sequence \(\mathbf{a}=[a_{1}^{1},a_{1}^{2},...,a_{1}^{C},a_{2}^{1},...,a_{T}^{C}],a_{i}^{j}\in\{0,1,...,K_{a}-1\},\forall 1\leq i\leq T,1\leq j\leq C\), with \(T,C,K_{a}\) being the number of frames, number of residual codebooks and codebook size. We rely on these two types of units as the intermediate representations for translation.
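As a schematic illustration of how these two unit types are obtained, the sketch below discretizes pre-extracted HuBERT features with k-means and flattens the codec codes; the feature extraction and codec encoding are assumed to be done by the respective pre-trained models and are not shown.

```python
# Schematic unit extraction (illustrative only; upstream models not shown).
import numpy as np
from sklearn.cluster import KMeans

def fit_semantic_quantizer(feature_list, n_clusters=1000):
    """Fit the k-means quantizer on pooled 20 ms frame features (K_s = 1000 here)."""
    return KMeans(n_clusters=n_clusters).fit(np.concatenate(feature_list, axis=0))

def semantic_units(features, kmeans):
    """features: (T, feat_dim) array for one utterance -> s_1..s_T in {0,...,K_s-1}."""
    return kmeans.predict(features)

def acoustic_units(codec_codes):
    """codec_codes: (T, C) integer array from the residual quantizer -> flattened
    sequence a_1^1 ... a_T^C used by the acoustic language model."""
    return codec_codes.reshape(-1)
```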
### Speech-to-Semantic-Unit Translation
The speech-to-semantic-unit translation stage generates target semantic units conditioned on source speech input, achieving translation of linguistic content. This procedure is commonly referred to as S2UT, and several models [1, 3, 4] have been proposed for it. These models combine a convolutional speech encoder with an encoder-decoder architecture based on transformer or conformer, taking source audios to produce discrete HuBERT units.
Due to the decoupling nature of our method between different stages, we have the flexibility to adopt various S2UT models in this stage. We conduct experiments using different S2UT models in different scenarios, and the details are given in the experimental section.
### Acoustic Unit Modeling
The acoustic unit modeling stage \(S_{2}\) generates target acoustic units from semantic tokens and style prompts. The core component of \(S_{2}\) is an acoustic language model, which is basically a decoder-only transformer.
Figure 1: We propose an S2ST approach for style transfer based on discrete representations from a self-supervised speech model and a neural codec. Figure (a) shows the inference pipeline of our method; figure (b) illustrates the self-supervised training process of the acoustic language model of \(S_{2}\).
The model takes a prefix sequence formed by concatenating the acoustic unit sequence \(\mathbf{a_{p}}\), which serves as a style prompt, and the target semantic sequence \(\mathbf{s}\), and generates the target acoustic sequence \(\mathbf{a}\) autoregressively. This procedure can be formulated as
\[p\left(\mathbf{a}\mid\mathbf{a_{p}},\mathbf{s};\theta_{AR}\right)=\prod_{t=1}^{T }\prod_{c=1}^{C}p\left(\mathbf{a}_{t}^{c}\mid\mathbf{a}_{<t},\mathbf{a}_{t}^{< c},\mathbf{a_{p}},\mathbf{s};\theta_{AR}\right) \tag{1}\]
The entire sequence is in the format of \([\mathbf{a_{p}}|\mathbf{s}|\mathbf{a}]\), with a separator token between each pair of adjacent parts. Three codebooks are used for both the prompt \(\mathbf{a_{p}}\) and the target \(\mathbf{a}\).
The training procedure of \(S_{2}\) is illustrated in Figure 1(b). During training, we extract semantic and acoustic units from the training data. We divide each training sample into two separate parts, using the acoustic units from one part as the prompt and those from the other as prediction targets, and train the model to generate the corresponding acoustic units from the semantic units and prompt with a cross-entropy loss. This in-context learning approach enables the model to grasp the correspondence in acoustic characteristics between the two parts, thus acquiring the ability for style transfer. Such a self-supervised training approach needs no speaker-parallel data and can be scaled to massive training data. During inference, we use semantic tokens from the previous stage and the acoustic units of the source speech as the style prompt to realize cross-lingual style transfer.
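A minimal sketch of how such a self-supervised training example could be assembled is given below; the separator token, the assumption of equal frame rates for the two unit streams, and all names are illustrative rather than taken from the actual implementation.

```python
# Illustrative construction of one in-context training example for the acoustic LM.
import random

def build_training_example(semantic, acoustic, sep_id):
    """semantic: list of semantic units for one utterance; acoustic: flattened list of
    acoustic units (C codebook entries per frame, same frame rate assumed)."""
    cut = random.randint(1, len(semantic) - 1)        # frame index of the prompt/target split
    C = len(acoustic) // len(semantic)                # codebooks per frame
    prompt_acoustic = acoustic[:cut * C]              # a_p: style prompt from the first part
    target_semantic = semantic[cut:]                  # s:   content of the second part
    target_acoustic = acoustic[cut * C:]              # a:   prediction target
    sequence = prompt_acoustic + [sep_id] + target_semantic + [sep_id] + target_acoustic
    # cross-entropy is applied only to positions belonging to target_acoustic
    loss_mask = [0] * (len(prompt_acoustic) + 1 + len(target_semantic) + 1) \
              + [1] * len(target_acoustic)
    return sequence, loss_mask
```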
### Unit-to-Wave Generation
In the waveform generation stage \(S_{3}\), the target acoustic units are mapped to high-fidelity waveforms. Instead of directly using the codec decoder, we adopt a GAN-based unit vocoder, which has been shown to give higher perceptual quality when the number of codebooks is limited [10]. Specifically, our vocoder is derived from BigVGAN [12], with a generator built from a set of look-up tables (LUT) that embed the discrete units, and a series of blocks composed of transposed convolution and a residual block with dilated layers. A multi-period discriminator (MPD) and a multi-resolution discriminator (MRD) are used for adversarial training.
## 3 Experiments
### Dataset
We use CVSS-C dataset [13] as the translation benchmark. We conduct experiments on two language pairs: French-English (Fr-En) and Spanish-English (Es-En). We also use CVSS-T [13] as a baseline for speaker similarity, whose target possesses speaker timbre transferred from the source.
For the acoustic part, we use the _unlab-60k_ subset of Libri-Light [14], which is a large-scale corpus containing about 57.7k hours of unlabeled English speech, to train the \(S_{2}\) stage model. We use LibriTTS [15] to train the SoundStream model for acoustic unit extraction and the vocoder. All audio is processed at a 16 kHz sampling rate.
### Model Configurations
For semantic representation, we apply the publicly available pre-trained multilingual HuBERT (mHuBERT) model and a k-means model with 1000 clusters to discretize the 11th-layer features. For acoustic representation, we train a SoundStream model with 12 quantization levels, a size of 1024 for each codebook, and an overall downsampling rate of 320.
For the \(S_{1}\) stage, we train an S2UT-conformer for Fr-En following [1], and an xm-transformer for Es-En following [4] but without mBART decoder initialization. The decoder-only transformer of \(S_{2}\) has about 760M parameters, with 26 layers and a hidden dimension of 1536. Models are implemented with fairseq [16] and trained using 6 Tesla V100 GPUs.
### Metrics
We measure the performance of our model in terms of translation quality, speech quality, and style similarity with the source speech. For translation quality, we transcribe the generated speeches using a wav2vec2 ASR model and calculate BLEU scores between the generated and reference text. For speech quality, we employ a subjective evaluation with Mean Opinion Score (MOS), where testers on the Amazon MTurk platform rate audio naturalness using a 1-5 Likert scale. For style similarity, we calculate Speaker Cosine Similarity (Cos) and Similarity MOS (SMOS) between the source and results.
\begin{table}
\begin{tabular}{l l r r r} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Lang}} & \multicolumn{1}{c}{Model} & BLEU (\(\uparrow\)) & MOS (\(\uparrow\)) & Cos (\(\uparrow\)) \\ \hline \multirow{6}{*}{Fr-En} & S2UT & 18.08 & 3.73 \(\pm\) 0.06 & / \\ & + ms-vocoder & 17.59 & 3.35 \(\pm\) 0.08 & 0.48 \\ & Ours & 17.64 & 3.84 \(\pm\) 0.07 & 0.73 \\ & GT (CVSS-C) & 84.52 & 3.93 \(\pm\) 0.06 & / \\ & GT (CVSS-T) & 81.48 & 3.97 \(\pm\) 0.07 & 0.68 \\ \hline \multirow{6}{*}{Es-En} & S2UT & 23.78 & 3.74 \(\pm\) 0.07 & / \\ & + ms-vocoder & 23.26 & 3.31 \(\pm\) 0.09 & 0.50 \\ \cline{1-1} & Ours & 23.41 & 3.87 \(\pm\) 0.07 & 0.75 \\ \cline{1-1} & GT (CVSS-C) & 88.54 & 3.91 \(\pm\) 0.06 & / \\ \cline{1-1} & GT (CVSS-T) & 84.81 & 3.93 \(\pm\) 0.07 & 0.70 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Translation Quality on CVSS Dataset.
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Model}} & \multicolumn{1}{c}{MOS (\(\uparrow\))} & \multicolumn{1}{c}{SMOS (\(\uparrow\))} & \multicolumn{1}{c}{Cos (\(\uparrow\))} \\ \hline PPG-VC & 3.37 \(\pm\) 0.07 & 3.30 \(\pm\) 0.06 & 0.65 \\ NANSY & 3.56 \(\pm\) 0.06 & 3.47 \(\pm\) 0.05 & 0.68 \\ YourTTS & 3.74 \(\pm\) 0.05 & 3.60 \(\pm\) 0.06 & 0.69 \\ Ours & 3.86 \(\pm\) 0.05 & **3.69 \(\pm\) 0.05** & **0.74** \\ \hline CVSS-T target & 3.95 \(\pm\) 0.05 & 3.56 \(\pm\) 0.06 & 0.69 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quality and Style Similarity of Voice Conversion
We calculate BLEU scores over the entire test split and randomly sample 500 items from each language pair for other metrics, which represents approximately 3% of the test set.
### Results and Analysis
Table 1 summarizes the translated speech quality, where we compare our results with direct S2UT, together with ground truth target speech in CVSS-C and CVSS-T for reference. We can see that compared to S2UT, our model achieves a noticeable improvement of 0.11 and 0.13 on MOS, while having only slight declines of 0.44 and 0.37 in BLEU scores on the two language pairs. This indicates that the \(S_{2}\) stage in our method causes only minimal error in semantic content but enhances the quality of the resulting speech a lot. Meanwhile, the direct S2UT pipeline with a unit-based HiFi-GAN vocoder lacks a mechanism for modeling various acoustic conditions and can only generate speeches of a single speaker. We also train a multi-speaker semantic vocoder (ms-vocoder in Table 1) for the S2UT semantic output to verify if semantic units contain timbre information. However, it produces speeches with a mixed timbre and low quality, along with low Cos scores. This indicates that semantic units lack sufficient acoustic information for style maintenance. On the other hand, our model achieves scores of 0.73 and 0.75 for speaker similarity, which even surpasses the CVSS-T target, demonstrating that our model's mechanism achieves outstanding performance in zero-shot cross-lingual style transfer.
To further evaluate our model's style transfer capability, we conduct comparisons with three voice conversion models, which are PPG-VC[17], NANSY[18] and YourTTS[19]. We use them to conduct voice conversion on the vocoder output of direct S2UT and source speech. We also provide the results of target speech in CVSS-T for reference. Table 2 summarizes the results. It can be seen that our model achieves a MOS of 3.86, which significantly outperforms the three baselines. This indicates that in comparison to cascaded voice conversion, the style transfer based on discrete intermediate representations used by our model can mitigate quality losses during the transfer process and produce higher-quality audio. Our model also achieves superior values on style similarity metrics, with SMOS of 3.69 and Cos of 0.74, which surpasses all other results. This can be attributed to the larger model size and use of large training data. Through a language model with larger parameters and extensive training data, our model acquires strong zero-shot style transfer capability and can generalize effectively to unseen source languages.
### Ablation Studies
To further investigate the impact of training data volume and model size on the model's performance, we conduct ablation experiments. Table 3 summarizes the model's performance under two different training data volumes. LibriTTS comprises 585.5 hours of audio with 2,456 speakers, while Libri-Light unlab-60k consists of 57.7k hours of audio with 7,439 speakers. We observe that when using the small-scale LibriTTS dataset, there is a significant decrease in SMOS and Cos scores with reductions of 0.14 and 0.07, indicating a deterioration in style transfer performance. This suggests that the duration and speaker number of the training data are essential for the model's style transfer capability. On the other hand, the small-scale dataset has a relatively minor impact on MOS, resulting in only a 0.02 decrease. This suggests that the acoustic language model can achieve the ability to generate highly natural speech without requiring an extensive amount of data. Meanwhile, we try to add part of French and Spanish speeches from CVSS source to the training data, obtaining a marginal improvement of 0.02 on both Cos and SMOS. This indicates that our model's voice conversion performance on unseen source languages is close to that on seen languages, which is attributed to the use of extensive training data.
Table 4 summarizes the audio quality and style similarity of the model under different model sizes. All three metrics indicate that as the model's parameter size decreases, both audio quality and similarity decline. This illustrates that the superior performance of our acoustic language model is closely linked to its large parameter size.
## 4 Conclusions
In this work, we propose a direct S2ST approach with style transfer based on two distinct types of discrete representations. We adopt an acoustic language model that acquires the ability of style transfer through in-context learning. By adopting self-supervised training, our method addresses the scarcity of speaker-parallel data. And by leveraging large-scale training data, our model achieves cross-lingual style transfer with unseen source languages. Experimental results indicate that our approach achieves good results in terms of speech quality and style similarity, with only a minimal loss in translation quality. In future work, we plan to introduce more diverse prompt information into the acoustic language model to achieve a greater variety of translated speech results.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model Size & MOS (\(\uparrow\)) & SMOS (\(\uparrow\)) & Cos (\(\uparrow\)) \\ \hline Small (160M) & 3.73 \(\pm\) 0.06 & 3.58 \(\pm\) 0.05 & 0.70 \\ Base (430M) & 3.81 \(\pm\) 0.05 & 3.64 \(\pm\) 0.05 & 0.73 \\ Large (760M) & 3.86 \(\pm\) 0.05 & 3.69 \(\pm\) 0.05 & 0.74 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation on Model Size of \(S_{2}\)
\begin{table}
\begin{tabular}{l c c c} \hline \hline Training Data & MOS (\(\uparrow\)) & SMOS (\(\uparrow\)) & Cos (\(\uparrow\)) \\ \hline LibriTTS & 3.84 \(\pm\) 0.05 & 3.55 \(\pm\) 0.05 & 0.67 \\ Libri-Light unlab-60k & 3.86 \(\pm\) 0.05 & 3.69 \(\pm\) 0.05 & 0.74 \\ + CVSS source & 3.85 \(\pm\) 0.05 & 3.71 \(\pm\) 0.05 & 0.76 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation on Training Data of \(S_{2}\) |
2306.17696 | Anomalous Scaling for Hydrodynamic Lubrication of Conformal Surfaces | The hydrodynamic regime of the Stribeck curve giving the friction coefficient
$\mu$ as a function of the dimensionless relative sliding speed (the Sommerfeld
number, $S$) of two contacting non-conformal surfaces is usually considered
trivial, with $\mu \sim S$. We predict that for conformal surfaces contacting
over large areas, a combination of independent length scales gives rise to a
universal power-law with a non-trivial exponent, $\mu\sim S^{2/3}$, for a thick
lubrication film. Deviations as the film thins (decreasing $S$) may
superficially resemble the onset of elastohydrodynamic lubrication, but are due
to a crossover between hydrodynamic regimes. Our experiments as well as recent
measurements of chocolate lubrication confirm these predictions. | James A. Richards, Patrick B. Warren, Daniel J. M. Hodgson, Alex Lips, Wilson C. K. Poon | 2023-06-30T14:20:25Z | http://arxiv.org/abs/2306.17696v1 | # Anomalous Scaling for Hydrodynamic Lubrication of Conformal Surfaces
###### Abstract
The hydrodynamic regime of the Stribeck curve giving the friction coefficient \(\mu\) as a function of the dimensionless relative sliding speed (the Sommerfeld number, \(S\)) of two contacting non-conformal surfaces is usually considered trivial, with \(\mu\sim S\). We predict that for conformal surfaces contacting over large areas, a combination of independent length scales gives rise to a universal power-law with a non-trivial exponent, \(\mu\sim S^{2/3}\), for a thick lubrication film. Deviations as the film thins (decreasing \(S\)) may superficially resemble the onset of elastohydrodynamic lubrication, but are due to a crossover between hydrodynamic regimes. Our experiments as well as recent measurements of chocolate lubrication confirm these predictions.
Controlling friction between sliding surfaces is important across multiple fields [1]. For example, friction losses in bearings account for a third of a car's fuel use and 23% of global energy production [2]. Tribological properties determine the sensory feel of topical products such as skin creams [3; 4]. Inter-particle friction is implicated in suspension rheology [5]. Lubricants, from mineral oils to the synovial fluid in our joints, reduce wear and frictional losses. The lubricant viscosity, \(\eta\), is one determinant of the friction coefficient, \(\mu=F/N\), the drag force (\(F\)) to load (\(N\)) ratio; but it also depends on the relative sliding speed, \(U\), of the surfaces, their geometry, and the load. Following Stribeck, Gumbel and Hersey independently proposed a dimensionless parameter to rationalise these dependencies [6; 7; 8]. For a bearing of width \(L\), this parameter is \(S=\eta UL/N\) (often called the Sommerfeld number).
By the 1920s a canonical view had emerged [9] that the 'Stribeck curve', \(\mu(S)\), generally displays a minimum, which is usually taken to correspond to the transition from hydrodynamic lubrication (HL) under low loads (high \(S\)), through elastohydrodynamic lubrication (EHL), to boundary lubrication (BL) at high loads (low \(S\)), with the transition determined by the microscopic asperity length scale. Deemed understood after the early twentieth century [10; 11; 12], the HL regime is usually dismissed: pedagogical discussions often claim casually that \(\mu\sim S\) in this regime (_e.g._, Fig. 2 in Ref. [13]), with no supporting data. Rather, in engineering tribology the focus shifted to the small-\(S\), heavily-loaded EHL \(\to\) BL transition, where the minimum in \(\mu(S)\) gives least wear and dissipation [1; 14]. The physics in these regimes is complex, and involves coupling solid and fluid mechanics [15] as well as lubricant molecular properties [14].
Recently, though, the high-\(S\), lightly-loaded (or high lubricant viscosity) regime has received renewed attention because of its relevance for human sensory perception, such as oral 'mouth-feel' [16]. In particular, a recent study of the lubrication behaviour of molten chocolate [17] shows data (reproduced in Fig. 1) for a ball-on-flat contact where \(\mu(S)\) does indeed appear to tend towards \(\mu\sim S\) at high \(S\). Significantly, however, for a textured bio-mimetic tongue surface, a different high-\(S\) behaviour is found, with a clearly weaker dependence on \(S\).
Here, we show by experiment and theory that in bearing geometries characterised by _two_ length scales, a macroscopic bearing dimension and a mesoscopic surface profile length scale, we generally expect \(\mu\sim S^{2/3}\) in the high \(S\), large-gap, HL limit. The two length scales set a cross-over \(S^{*}\), as the lubrication film thins, below which deviations from \(S^{2/3}\) scaling can mimic the well-known EHL upturn but are entirely due to hydrodynamics. We argue, _inter alia_, that this explains the bio-mimetic tongue data in Fig. 1.
To introduce our theoretical framework, consider first a canonical _non-conformal_ contact comprising a cylinder of radius \(R\) sliding against a flat (Fig. 2a). Here, the gap is \(h\approx h_{0}+x^{2}/2R\) with \(h_{0}\ll R\) the minimum gap height and \(x\) the distance from this point. There is a region \(x_{0}\sim\sqrt{Rh_{0}}\) (Fig. 2b) in which the gap is \(O(h_{0})\), outside of which pressures and stresses are negligibly small. With a characteristic shear rate \(\dot{\gamma}\sim U/h_{0}\), the frictional drag force on a cylinder of length \(L\) due to Couette flow (Fig. 2b, dashed lines) is \(F\sim\eta\dot{\gamma}Lx_{0}\sim\eta UL\sqrt{R/h_{0}}\). To conserve volume for incompressible fluids, an additional, compensating Poiseuille flow is needed (Fig. 2b, solid lines). The associated 'Reynolds lubrication pressure' (Fig. 2c) generates the load-bearing normal force. In the 'long-bearing' limit (\(L\gg x_{0}\)), this compensating Poiseuille flow develops parallel to the Couette flow and, in the cylinder-on-flat case, of a similar magnitude [19]. The corresponding pressure \(p\) emerges from the Hagen-Poiseuille expression, \(U\sim(h_{0}^{2}/\eta)\times p/x_{0}\) ; together with the area \(\sim Lx_{0}\) this sets the
Figure 1: Stribeck curve, friction coefficient (\(\mu\)) as a function of non-dimensionalised sliding speed (Sommerfeld number, \(S\)) for various molten chocolate samples in different geometries. Symbols: (orange) squares, ball-on-flat (\(R=6.3\,\mathrm{mm}\) and \(N=0.01\,\mathrm{N}\)) with dashed line, \(\mu\sim S\); (blue) circles, textured-surface-on-flat with dotted line, \(\mu\sim S^{2/3}\). Replotted from Ref. [17]; see Ref. [18] for details.
normal force \(N\sim p\,Lx_{0}\sim\eta ULx_{0}^{2}/h_{0}^{2}\sim\eta ULR/h_{0}\).
This problem is symmetric about \(x=0\), so that equal but opposite pressures should be created in the converging and diverging regions (Fig. 2c). We appeal to a widely used 'half-Sommerfeld' boundary condition [1] and set the negative pressure in the diverging region to zero. This can be justified when, _e.g._, the maximum pressure is greater than the difference between the (atmospheric) inlet pressure and the lubricant vapour pressure and cavitation occurs.
Since \(h_{0}\) adjusts to support the load, the friction coefficient \(\mu=F/N\sim\sqrt{h_{0}/R}\) depends on \(S=\eta UL/N\). For the cylinder on flat, one finds \(\mu\sim S^{1/2}\) with an \(\mathcal{O}(1)\) numerical prefactor and no further dependence on \(R\) or lubricant properties. A similar analysis for a sphere gives \(\mu\sim S\) [19]. These scaling laws apply for non-conformal contacts for all \(h_{0}\) beyond contacting asperities. They occasion no surprise, and reflect the spatial dimension. This simplicity is traceable to the fact that the extent of the narrow-gap region is \(x_{0}\sim\sqrt{Rh_{0}}\). Thus, the problem is specified by one length scale, \(R\), and the magnitude of the induced Poiseuille flow is always \(\Delta U\sim U\).
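Written out explicitly, the elimination of \(h_{0}\) behind this statement is simply a restatement of the scaling estimates above:

\[F\sim\eta UL\sqrt{R/h_{0}},\qquad N\sim\frac{\eta ULR}{h_{0}}\;\Longrightarrow\;S=\frac{\eta UL}{N}\sim\frac{h_{0}}{R},\qquad\mu=\frac{F}{N}\sim\sqrt{\frac{h_{0}}{R}}\sim S^{1/2}.\]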
In contrast, and forming part of the historic foundation of tribology, conformal surfaces allow close contact over a wide area [20]. For soft surfaces, such as skin or ceramic green bodies, bulk deformation brings surfaces into broad close approach. At first sight, there are no obvious length scales in a flat-on-flat contact corresponding to \(R\) for the sphere or cylinder. However, large-area surfaces typically show both microscale roughness and mesoscale non-flatness. Studies of artificial 'textured' conformal contacts [21] suggest that a general (macroscopically) flat surface can be modelled as the sum of many elementary 'texture cells', each of which is a form of slider bearing. Common slider bearing geometries include the pedagogical examples of a (Rayleigh) step [11] and a wedge, to which we add an inlet-half-cylinder, Fig. 2d. The HL problem in each case can be reduced to quadratures, as detailed in a companion paper [18]. Here, we extend our scaling analysis to identify the key generic features.
The key idea is that a textured surface is characterised by _two_ length scales: a 'step height' \(d\) and 'step length' \(D\). To conserve volume and balance the changing Couette flow as the gap narrows from \(h_{0}+d\) to \(h_{0}\), a Poiseuille flow of order \(\Delta U\sim Ud/[d+\mathcal{O}(h_{0})]\) is required (assuming a 'long bearing' limit \(L\gg D\); see below for short bearings). At modest gaps (\(h_{0}\lesssim d\)) one has \(\Delta U\sim U\), as in the case of non-conformal contacts. However, in the large-gap limit (\(h_{0}\gg d\)), \(\Delta U\sim Ud/h_{0}\ll U\). Hagen-Poiseuille, with \(D\) replacing \(x_{0}\), now gives \(\Delta U\sim(h_{0}^{2}/\eta)\times p/D\), and a lift force \(N\sim pLD\sim\eta LD^{2}\Delta U/h_{0}^{2}\), hence the Sommerfeld number \(S=\eta UL/N\sim h_{0}^{3}/D^{2}d\) for \(h_{0}\gg d\). The Couette flow generates a drag force \(F\sim\eta ULD/h_{0}\) for \(h_{0}\gg d\), so that \(\mu=F/N\sim h_{0}^{2}/Dd\) for \(h_{0}\gg d\). Eliminating \(h_{0}\) parametrically between \(\mu\) and \(S\) yields \(\mu\sim S^{2/3}\) for \(S\gg S^{*}\), where \(S^{*}\) corresponds to \(h_{0}\approx d\).
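The corresponding parametric elimination in the large-gap, long-bearing conformal case reads (keeping only the scaling estimates above, so the \(\mathcal{O}(1)\) prefactors are not meaningful):

\[\mu\sim\frac{h_{0}^{2}}{Dd},\qquad S\sim\frac{h_{0}^{3}}{D^{2}d}\;\Longrightarrow\;h_{0}\sim\left(SD^{2}d\right)^{1/3},\qquad\mu\sim\Big{(}\frac{D}{d}\Big{)}^{1/3}S^{2/3}\qquad(h_{0}\gg d).\]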
The replacement of the power-law \(\mu\sim S^{1/2}\) (expected on dimensional grounds) by \(\mu\sim S^{2/3}\) is an example of the failure of a'regularity assumption' [22], which in this problem amounts to the 'naive' assertion that \(\Delta U\sim U\) for the compensating Poiseuille flow for all gap sizes. Whilst this is correct for the non-conformal cases of the cylinder and the sphere, and for moderate gaps (\(h_{0}\sim d\)) in conformal contacts, it misses a dimensionless factor of \(d/h_{0}\) in the wide-gap (\(h_{0}\gg d\)) limit.
These results are confirmed by an analysis of the rigorously-derived expressions in lubrication theory [18], and also hold for the so-called DuBois-Ocvirk 'short-bearing' approximation [23], which will be relevant for our experiments. In this latter limit, volume conservation of the lubricant in the gap occurs through side leakage and the induced Poiseuille flow is _perpendicular_ to the Couette flow. This appears quite different to the 'long bearing' case: indeed the lift force is reduced, with side leakage occurring over a wider width, \(D\gg L\), and travelling a shorter length, \(L\ll D\). However, the same anomalous exponent still arises in the large-gap HL regime [18], with pre-factors modified by \(L^{2}/D^{2}\):
\[\mu\sim\frac{Dd}{L^{2}}\Big{(}\frac{h_{0}}{d}\Big{)}^{2},\ S\sim\frac{d^{2}}{ L^{2}}\Big{(}\frac{h_{0}}{d}\Big{)}^{3}\Rightarrow\mu\sim\Big{(}\frac{D^{3}}{L^{2}d} \Big{)}^{1/3}S^{2/3}\,. \tag{1}\]
Thus, the prediction of anomalous scaling at large \(S\) is robust, and should hold for both the long- and short-bearing limits in conformal contacts with an \(h_{0}\)-independent step height \(d\).
For \(S\lesssim S^{*}\) the gap shrinks to \(d\lesssim h_{0}\) and the above scaling arguments no longer apply; rather, the actual gap profile, \(h(x)\), must be used to calculate \(\mu(S)\) parametrically for different profiles (Fig. 2d) and bearing types, Fig. 3 (inset) [18]. Deviations can be highlighted by reporting the 'running exponent' \(\alpha=\mathrm{d}\ln\mu/\mathrm{d}\ln S\) as a function of \(h_{0}/d\)[18], Fig. 3. Asymptotically, all profiles collapse to \(\alpha=2/3+\mathcal{O}(d/h_{0})\) verifying \(S^{2/3}\) scaling for short and long slider bearings. For \(h_{0}\lesssim d\), \(\alpha\) deviates from this large-gap scaling, with the leading order correction depending on moments of the height profile [18]. Typically, the Stribeck curve deviates positively (\(\alpha<2/3\)) as \(h_{0}\to d\) ; this is the case for most long-limit bearings, Fig. 3 (dashed lines), and for surface profiles that are 'blunt' in the sense that \(\langle\delta h\rangle/d<1/2\), where \(\delta h=h-h_{0}\). Thus, \(\mu(S)\) resembles the onset of EHL, but the physics arises entirely from HL with two independent length scales. In the limit where the gap shrinks to zero the behaviour is set
Figure 2: Lubrication geometries. (a) Cylinder-on-flat: gap, \(h(x)\); radius, \(R\); and \(h_{0}=\min(h)\). Conditions: load, \(N\); sliding velocity, \(U\); and, drag, \(F\). Long-bearing into plane, \(L\gg x_{0}\). (b) Narrow-gap, with Couette (dashed line) and Poiseuille flow (arrows). (c) Resulting pressure, \(p(x)\). Hatching, \(p<0\) neglected with half-Sommerfeld approximation. (d) Conformal contacts with step, \(d\); and length, \(D\): upper, wedge; lower, inlet–half-cylinder (solid) and step (dashed).
by the type of profile. For example the inlet-half-cylinder notably segues into a cylinder-on-flat geometry as \(h_{0}\to 0\), with \(\alpha=1/2\). For other details, see Ref. [18].
Experimental verification of these predictions requires bespoke measurements, as the overwhelming majority of literature data pertains to non-conformal geometries in the EHL-BL regime. We modified a commercial rheometer (Kinexus Ultra+, Malvern Instruments) to incorporate a ring-plate geometry (Fig. 4, lower inset), with inner and outer radii (\(R_{i}\), \(R_{o}\)) of respectively 17.5 and 22.5 mm [24], giving \(L=5\) mm. The ring can be considered a narrow slider bearing [\(L\ll 2\pi R=\pi(R_{o}+R_{i})\)] wrapped around upon itself. A \(\mu\sim S^{2/3}\) regime has previously been observed for a ring-plate geometry and interpreted in terms of geometry misalignment, where non-parallelism creates an effective wedge angle [25]. However, such misalignment creates an ill-defined, rotation-dependent gap profile. For a consistent gap profile, we use a self-aligning mechanism adapted from Ref. [26]. A flexible foam mounting allows the plate to tilt about a central ball bearing, but not freely rotate, and the applied load dynamically pushes the shearing surfaces parallel. Surfaces are used as machined.
To measure gap profiles we rigidly mount the plate or ring as the lower geometry and attach a 10 mm polytetrafluoroethylene-coated sphere centred at 20 mm from the upper geometry rotated at \(\Omega=0.1\) rad s\({}^{-1}\) while imposing a 0.02 N normal force through a feedback loop. The change in gap needed to maintain contact over a cycle to first order gives the tilt of the rigid mounting, which is compensated for in the self-aligning geometry. Subtracting the tilt leaves the bearing surface profile, \(\delta h\) (Fig. 4, upper inset). The plate is flat on the \(\sim 1\) µm level, but the ring has undulations of \(d\approx 22\) µm and \(D\sim\pi R/2\approx 30\) mm acting as two symmetric wedge bearings in series in which we set \(p=0\) in the two divergent halves by appealing to the half-Sommerfeld boundary condition [21] (_cf._ Fig. 3(a) in Ref. [18]). In such a short bearing, for which Eq. 1 applies, lubricant leaks from the sides; to prevent bearing starvation, excess fluid from loading is left as a reservoir [27].
Three poly(dimethyl siloxane) silicone oils (Merck, UK) were used (\(\eta=5\), 50 and 500 mPa s) [28]. We controlled the initial temperature of the sample using a Peltier plate at \(T=20\,^{\circ}\)C. The maximum temperature rise during measurements due to viscous heating is \(\lesssim 2\,^{\circ}\)C [28]; \(\mathrm{d}\eta/\mathrm{d}T\) data on silicone oils [29] suggest that this has negligible effect for our work. The load was varied (\(N=0.1\) to 1.0 N) for logarithmic sweeps of the rotation rate, \(\Omega\), from 0.1 rad s\({}^{-1}\) upwards at 5 pts/decade, until reaching a maximum torque (0.05 N m) or sample ejection (at \(\Omega_{\mathrm{max}}\approx 150\) rad s\({}^{-1}\)). To average over multiple rotations, the step time was 100 s for \(\Omega<1.0\) rad s\({}^{-1}\), and 20 s above, leaving 10 s to reach a steady state. From the torque, \(\mathcal{T}\), \(\mu=2\mathcal{T}R/[N(R_{o}^{2}+R_{i}^{2})]\) [\(\approx\mathcal{T}/RN\) for \(L\ll R\)]. In this context \(S\) (or Gumbel number) = \(2\eta\Omega RL/N\), featuring the linear speed of the bearing (\(\Omega R\)) and a factor of two from the ring undulations forming _two_ slider bearings.
At \(S\geq 6\times 10^{-5}\), Fig. 4 (bold), the majority of our data collapse with \(N\) (increasing, dark to light) and \(\eta\) (symbols). Power law fits of \(\mu\sim S^{\alpha}\) for \(\eta=50\) and 500 mPa s give \(\alpha=0.72\pm 0.05\), close to the predicted \(2/3\) scaling for the large-gap lubrication regime (bold dashed line). Further, using the measured \(d\) the predicted \(\mu(S)\) is within a factor of \(\approx 7\) (fine dot-dashed line), consistent with a scaling argument neglecting \(\mathcal{O}(1)\) prefactors. Selected runs were also performed with \(\Omega\to-\Omega\) and gave similar results [28], consistent with the near-symmetrical surface profile, Fig. 4 (upper inset).
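As an illustration of how such fits are obtained, a minimal sketch is given below that converts a torque sweep into \((S,\mu)\) points, using the narrow-ring approximation \(\mu\approx\mathcal{T}/RN\) and the definition of \(S\) above, and extracts the exponent \(\alpha\) by log-log regression; the variable values and function names are ours and not part of the experimental protocol.

```python
# Illustrative conversion of (rotation rate, torque) data into a Stribeck curve and
# extraction of the power-law exponent alpha (sketch only, values assumed).
import numpy as np

R, L = 20e-3, 5e-3                        # mean ring radius and ring width (m), as above

def stribeck_point(omega, torque, eta, load):
    """Convert one (Omega, torque) measurement into (S, mu)."""
    mu = torque / (R * load)              # narrow-ring approximation, mu ~ T/(R N)
    S = 2 * eta * omega * R * L / load    # Sommerfeld (Gumbel) number used here
    return S, mu

def fit_exponent(S, mu):
    """Power-law exponent alpha = d ln(mu) / d ln(S) from a straight-line fit."""
    slope, _ = np.polyfit(np.log(S), np.log(mu), 1)
    return slope
```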
At \(S\gtrsim 10^{-3}\) and the lowest viscosity (\(\eta=5\) mPa s), the curve steepens (greyed symbols). In this regime, fluid inertia becomes important: the predicted \(h_{0}\gtrsim 80\) µm [Eq. (1)] with \(\Omega\gtrsim 100\) rad s\({}^{-1}\) give a Reynolds number \(\mathrm{Re}=\rho\Omega Rh_{0}/\eta\gtrsim 30\)
Figure 3: Running exponent of Stribeck curve, \(\alpha=\mathrm{d}\ln\mu/\mathrm{d}\ln S\), against gap, \(h_{0}/d\), from Reynolds lubrication theory, for various conformal profiles (legend) [18]. Lines: solid, short-bearing; dashed, long bearings (relative inlet length = 0.5). Short-bearing profiles: wedge, \(\langle\delta h\rangle/d=0.5\); ‘blunt’ inlet-half-cylinder with relative inlet length such that \(\langle\delta h\rangle/d=0.4\). Inset: corresponding \(\mu(S)\).
Figure 4: Self-aligning ring-plate tribo-rheology. Stribeck curves, \(\mu(S)\) at different loads, \(N\) (see legend), and viscosities [\(\eta=5\) (triangles), 50 (squares) or 500 mPa s (circles)] with \(\mu=\mathcal{T}/RN\) and \(S=2\Omega\eta RL/N\). Lines, \(\mu\sim S^{2/3}\): bold dashed, data fit; fine dot-dashed, scaling with unity pre-factor. Upper inset: profile, \(\delta h(x)\) for plate (fine) and ring (bold) with bearing dimensions \(D\) and \(d\). Lower inset: geometry cross-section with light grey, aluminium; dark grey, steel; and, yellow, foam. Ring width, \(L\); radius, \(R\); rotation rate, \(\Omega\); and torque \(\mathcal{T}\).
where secondary flows [30] and other complications arise.
There are also deviations from \(2/3\) scaling at \(S\lesssim 6\times 10^{-5}\), Fig. 4 (open symbols). The data no longer collapse when plotted against \(S(N,\eta)\), but depend on \(N\) and \(\eta\) separately. As \(S\to 0\), \(\mu\) converges to \(\approx 0.3\), consistent with BL for aluminium-aluminium contact [31]. Between \(\mu\approx 0.3\) and \(\mu\sim S^{2/3}\), the behaviour appears similar to EHL [14]. However, for \(N=0.1\,\mathrm{N}\) [dark (purple)] the deviation point, \(S^{*}=6\times 10^{-5}\sim(d/L)^{2}(h_{0}/d)^{3}\), corresponds to \(h_{0}\approx d\sim 20\,\mathrm{\SIUnitSymbolMicro m}\), far above the scale of asperities whose deformation triggers the onset of EHL. On the other hand, \(h_{0}\approx d\) is where we predict the onset of _hydrodynamic_ deviation from \(\mu\sim S^{2/3}\) scaling, Fig. 3. Our geometry has a calculated average surface profile of \(\langle\delta h\rangle/d=0.41<1/2\), _i.e._ just 'blunt' enough for us to expect weak positive deviations in the Stribeck curve (\(\alpha<2/3\) as \(h_{0}\to d\)). (Compare the solid dark orange curve in Fig. 3 (inset) calculated for an inlet-half-cylinder with \(\langle\delta h\rangle/d=0.4\)[18]). This is not the form of deviations we observe. One possible reason is that deformations in our geometry lead to a load-dependent \(d(N)\), although measurements of the axial compliance of our rheometer [28] reveal no such deformations.
More interestingly, if our interpretation of the \(S^{2/3}\) scaling in terms of an \(h_{0}\)-independent step height \(d\) is correct, then as \(h_{0}\to d\), multiple other length scales should become relevant and change the functional form of \(\mu(S)\). Likely candidates include mesoscale roughness in the plate (Fig. 4, upper inset, black line; see also Ref. [28]) or the ring. It is only if highly polished components are used that such 'extra length scales' will disappear and allow short-wedge-like deviations from \(S^{2/3}\) scaling to show through. Instead, the form of \(\mu(S)\) we obtain using 'as-machined' components, Fig. 4, probably represents the most likely encountered generic case.
There are few modern Stribeck curve data for the HL regime extensive enough to test for scaling. Recently, Classen has twice reported \(\mu\sim S^{2/3}\) at high \(S\), but interpreted this, and earlier data [24], in terms of an effective geometry misalignment [25; 26]. Studies in which there has also been independent measurement of the surface profile are even rarer. One exception is the work already mentioned on molten chocolate in conditions corresponding to oral processing [17]. This compares a single lightly loaded smooth elastomeric ball on flat glass to a comparably loaded bio-mimetic surface with multiple _rough_ contact points (predominantly flat-topped cylinders). At large gaps and sliding speeds the molten chocolate can be considered a Newtonian fluid with \(\eta\approx 1\,\mathrm{Pa\,s}\).
The high-\(S\) scaling behaviour of the two surfaces is notably different (Fig. 1). The data for a ball-on-flat geometry increases from a plateau (\(S\lesssim 10^{-3}\)) with a steepening gradient. At \(S\gtrsim 10^{-2}\) the trend reaches \(\mu\sim S^{\alpha}\) with \(\alpha=0.8\pm 0.1\) from linear regression, and the data plausibly tends towards a linear dependence (bold dashed line). In contrast, for a conformal textured surface in contact over a large area (\(2\times 2\,\mathrm{cm}^{2}\)), for \(S>4\times 10^{-3}\) we find a similar power law but with an exponent \(\alpha=0.6\pm 0.1\) and no sign of tending to linear scaling (light dotted line) over \(1.5\) decades. Instead, the data appear consistent with our predicted \(\mu\sim S^{2/3}\) for two competing length scales (bold dotted line). In Ref. [18], we analyse this data in detail as HL between a smooth steel plate and individual 'papillae' on the biomimetic tongue that are step-textured on the \(d\sim 50\,\mathrm{\SIUnitSymbolMicro m}\) scale, which we can deduce from the point at which deviations from \(S^{2/3}\) scaling is first observed.
To summarise, revisiting the classic HL regime for conformal contacts reveals a Stribeck curve distinct from that expected for non-conformal contacts. In particular, our analysis and experiments support \(\mu\sim S^{2/3}\) in the high-\(S\) limit, wherein the exponent is not set by dimensionality, but signals the presence of two independent length scales, the bearing length and a step height. This anomalous scaling applies in the large-gap limit, where the gap is greater than the step height. When these become comparable at low enough \(S\), deviations from \(\mu\sim S^{2/3}\) are expected. We tested these predictions using tribo-rheology, with a novel combination of surface profile characterisation and a bespoke self-aligning geometry. The results were consistent with our HL scaling analysis at large \(S\). At small \(S\) the results indicate the presence of additional length scales for surfaces 'as machined'. Comparison with literature data under lightly-loaded conditions relevant to oral processing [17] provides further experimental support for our contention that the high-\(S\) HL behaviour of non-conformal contacts with a single length scale, \(\mu\sim S\), differs fundamentally from that of conformal contacts with competing length scales, \(\mu\sim S^{2/3}\).
Beyond intrinsic interest, the subtleties of the HL regime in flat-flat contacts that we have uncovered may have particular relevance for sensory physics. The application of topical cosmetics and medicines involves traversing the entire Stribeck curve from high to low \(S\) with the product as the lubricant, starting with a low load and large gap [3]. The same considerations also apply to the oral perception of many foods [16]. In all these cases, the two length scales traceable to machining in our experimental geometry are also likely present, but as surface texturing or roughness. The generic features of the Stribeck curve in Fig. 4 should therefore recur in these and other areas of applications involving human texture perception [32; 33].
P. B. W., W. C. K. P. and J. A. R. conceptualised the work and drafted the manuscript. Experiments were carried out by J. A. R., and calculations by P. B. W. and W. C. K. P.; all authors interpreted data and revised the manuscript. We thank Rory O'Neill for technical assistance and Andreia Silva and John Royer for useful discussions.
|
2307.16879 | Image Synthesis under Limited Data: A Survey and Taxonomy | Deep generative models, which target reproducing the given data distribution
to produce novel samples, have made unprecedented advancements in recent years.
Their technical breakthroughs have enabled unparalleled quality in the
synthesis of visual content. However, one critical prerequisite for their
tremendous success is the availability of a sufficient number of training
samples, which requires massive computation resources. When trained on limited
data, generative models tend to suffer from severe performance deterioration
due to overfitting and memorization. Accordingly, researchers have devoted
considerable attention to developing novel models that are capable of generating
plausible and diverse images from limited training data recently. Despite
numerous efforts to enhance training stability and synthesis quality in the
limited data scenarios, there is a lack of a systematic survey that provides 1)
a clear problem definition, critical challenges, and taxonomy of various tasks;
an in-depth analysis of the pros, cons, and remaining limitations of existing
literature; as well as 3) a thorough discussion on the potential applications
and future directions in the field of image synthesis under limited data. In
order to fill this gap and provide an informative introduction to researchers
who are new to this topic, this survey offers a comprehensive review and a
novel taxonomy on the development of image synthesis under limited data. In
particular, it covers the problem definition, requirements, main solutions,
popular benchmarks, and remaining challenges in a comprehensive and all-around
manner. | Mengping Yang, Zhe Wang | 2023-07-31T17:45:16Z | http://arxiv.org/abs/2307.16879v1 | # Image Synthesis under Limited Data: A Survey and Taxonomy
###### Abstract
Deep generative models, which target reproducing the given data distribution to produce novel samples, have made unprecedented advancements in recent years. Their technical breakthroughs have enabled unparalleled quality in the synthesis of visual content. However, one critical prerequisite for their tremendous success is the availability of a sufficient number of training samples, which requires massive computation resources. When trained on limited data, generative models tend to suffer from severe performance deterioration due to overfitting and memorization. Accordingly, researchers have devoted considerable attention to developing novel models that are capable of generating plausible and diverse images from limited training data recently. Despite numerous efforts to enhance training stability and synthesis quality in the limited data scenarios, there is a lack of a systematic survey that provides 1) a clear problem definition, critical challenges, and taxonomy of various tasks; 2) an in-depth analysis of the pros, cons, and remaining limitations of existing literature; as well as 3) a thorough discussion on the potential applications and future directions in the field of image synthesis under limited data. In order to fill this gap and provide an informative introduction to researchers who are new to this topic, this survey offers a comprehensive review and a novel taxonomy on the development of image synthesis under limited data. In particular, it covers the problem definition, requirements, main solutions, popular benchmarks, and remaining challenges in a comprehensive and all-around manner. We hope this survey can provide an informative overview and a valuable resource for researchers and practitioners, and promote further progress and innovation in this important topic. Apart from the relevant references, we aim to constantly maintain a timely up-to-date repository to track the latest advances in this topic at GitHub/awesome-few-shot-generation.
Footnote \({}^{\star}\): _corresponding author._
Generative modeling, limited data, few-shot image generation, data-efficiency, generative domain adaptation.
## 1 Introduction
Deep generative models have made tremendous progress and have been applied to a wide range of intelligent creation tasks, particularly in image and video composition [1, 2, 3, 4, 5, 6, 7, 8, 9], audio and speech synthesis [10, 11, 12, 13, 14, 15], multi-modal generation [16, 17, 18], _etc._ Their technical breakthroughs have also directly facilitated our daily life in many aspects, including content creation of various representations (_e.g._, 3D/2D representations) [19, 20, 21, 22], customized generation and editing [23, 24, 25, 26, 27], and artistic synthesis/manipulation [28, 29, 30, 31]. Despite these remarkable advances, most existing generative models require massive amounts of data and computational resources for training. For instance, the most commonly used datasets, the human face FFHQ [2, 32] (\(70K\)), the outdoor/indoor scene LSUN [33] (\(1M\)), and the object ImageNet [34] (\(1M\)), all contain sufficient training samples. Such a prerequisite poses a significant challenge for practitioners and researchers who only have limited training samples, like paintings from famous artists and medical images of scarce diseases. Accordingly, there is an increasing need to learn a generative model under limited training data, which has drawn extensive attention in recent years.
The main challenge of image synthesis under limited data is the risk of model overfitting and memorization, which can significantly affect the fidelity and diversity of the generated samples [35, 36, 37, 38, 39]. Namely, the model might simply duplicate training images instead of generating novel ones due to overfitting, leading to degraded synthesis quality. For instance, when generative adversarial networks (GANs) [40] are trained under limited data, the discriminator is prone to memorize the training images and thus provides meaningless guidance to the generator, resulting in unfavorable synthesis. In order to address these limitations, many research works have been developed to ameliorate the synthesis quality in the few-shot scenarios [35, 36, 37, 41, 42]. These works propose various strategies to mitigate the risk of overfitting and memorization from different perspectives, such as data augmentation, regularization, and novel architectures.
Despite the conspicuous progress made in the field of image synthesis under limited data, there is a lack of a unified problem definition and taxonomy for this field. Few-shot image generation, for instance, is defined as producing diverse and realistic images for an unseen category given a few images from this category in [43, 44, 45], whereas in [46, 47, 48, 49, 50], few-shot image generation refers to adapting the prior knowledge of a large-scale and diverse source domain to a small target domain. However, these settings are significantly different in problem requirements, model training, and testing setups. This inconsistent definition might lead to ambiguity and misunderstandings among readers who are not familiar with these works. Therefore, a comprehensive problem
definition and taxonomy are vital to facilitate a clearer understanding of this field. Moreover, considering the lack of a systematic survey and the increasing interest in limited data generation, we believe that it is necessary to organize one to help the community track its development. To this end, this paper first presents a clear problem definition for various tasks in the few-shot regimes and categorizes them into four categories: data-efficient generative models (Sec. 4), few-shot generative adaptation (Sec. 5), few-shot image generation (Sec. 6), and one-shot image synthesis (Sec. 7). Then, this paper presents an all-around overview of prior studies in this field. In particular, the technical evolution, advantages, and disadvantages of existing alternatives are presented. Additionally, we present several related applications and highlight open problems that require further investigation for future works (Sec. 8).
Overall, this survey aims to provide a comprehensive and systematic understanding of image synthesis under limited data for scholars who are new to the field. Hopefully, our work could serve as a guideline for researchers who are willing to develop their own generative models with only dozens of training images. The contributions of this survey are summarized in the following:
* **A clear problem definition and taxonomy.** This survey presents a clear and unified problem definition for the various tasks of image synthesis under limited data. Moreover, this survey proposes a systematic taxonomy that divides these tasks into four categories: data-efficient image generation, few-shot generative adaptation, few-shot image generation, and one-shot image synthesis.
* **Comprehensiveness.** This survey provides a comprehensive overview of existing state-of-the-art generative models in the few-shot regimes. We compare and analyze the main technical motivations, contributions, and limitations of existing approaches, which can inspire potential solutions for further improvement.
* **Applications and open research directions.** In addition to the technical investigation, this survey also discusses potential applications and highlights open research problems that require further investigation for the improvement of image synthesis under limited data.
* **A timely up-to-date repository.** In order to continuously track the rapid development of this field, we provide a curated list of the latest relevant papers, code, and datasets at GitHub/awesome-few-shot-generation.
The remainder of this paper is organized as follows. Sec. 2 presents the scope of this survey and discusses the differences from other surveys. Sec. 3 introduces the fundamentals of image synthesis under limited data, namely deep generative models, few-shot learning, and transfer learning. Sec. 4, Sec. 5, Sec. 6, and Sec. 7 respectively provide detailed comparisons and discussions of the existing approaches for the aforementioned tasks. Sec. 8 discusses the downstream applications and highlights several future research directions. Finally, Sec. 9 concludes this survey.
## 2 Scope and Overview
**Scope.** This survey focuses on methods that train deep generative models to produce diverse and plausible images under limited training data. The main objective of these approaches is to mitigate the overfitting problem by fully leveraging the internal information of limited training data and producing novel samples within the data distribution. However, these methods differ in the model input, training paradigms, and evaluation. Thus, in this survey, we aim to 1) give readers a clear understanding of various problem settings in the field of image synthesis under limited data, 2) provide in-depth analysis and insightful discussion about the model concepts, method characteristics, and applications of prior art, and 3) pose some research directions for future investigation and inspire more interesting works for further improvement. In particular, based on the problem definition and experimental settings, we categorize existing approaches into four groups: data-efficient generative models, few-shot generative adaptation, few-shot image generation, and one-shot image generation. It is important to note that all these categories aim to synthesize photorealistic and diverse images corresponding to the data distribution. This is in contrast to generative modeling in few-shot learning, which explicitly estimates the probability distribution to compute the class label of given samples [51, 52]. Regarding the progress of few-shot learning, we refer readers to [53, 54] for a more comprehensive review.
**The differences between our survey and others.** Although there are already some other surveys that discuss the developments, main challenges, potential applications, and future opportunities of deep generative models [55, 56, 57, 58, 59], very few have focused on the development of deep generative models in limited data scenarios. The most relevant work to ours is [60], which analyzes the degradation of data-efficient GANs and provides a novel taxonomy and opportunities. However, our survey has several advantages over [60]: 1) _Comprehensive investigation_. In [60], only the traditional noise-to-image scheme of data-efficient GANs is discussed. In contrast, conditional generative models that use few conditional images [41, 43, 61] as input are also taken into account in this survey. Besides, this survey covers one-shot image synthesis tasks that are solely trained on a single image [62, 63, 64] and few-shot image generation tasks that produce novel samples for a category given a few images from the same category, which [60] does not cover. 2) _Up-to-date investigation_. Since the publication of [60], the limited-data synthesis field has seen significant progress, such as the integration of diffusion models [65, 66, 67] and the consideration of inversion techniques [68, 69, 70, 71]. Our survey provides a timely and up-to-date review of these recent advances. 3) _Thorough discussion and analysis_. Our survey provides a more detailed comparison of the design concepts, model details, and method characteristics of existing approaches. Additionally, we highlight the potential applications in practical domains and the technical limitations that require further investigation. In brief, this survey covers all related works presented in [60] while containing the most recent advances and more comprehensive investigations. Notably, our survey complements existing overviews of generative
models by providing a comprehensive and systematic understanding of image synthesis under limited data.
**Overview.** In this survey, we aim to provide a lucid comprehension of various tasks concerning image synthesis under limited data. To achieve this goal, we present the definition and formulation of each task, taking into account the training paradigms and task-specific requirements that underlie each problem. The four independent problems that we have formulated are data-efficient generative models, few-shot generative adaptation, few-shot image generation, and one-shot image generation. In order to better illustrate these problems, we consider one representative category in the family of deep generative models, namely Generative Adversarial Networks (GANs), to depict the training pipelines of these problems in Fig. 1. It is important to note that the presented pipeline is not intended to represent all approaches utilized in each task, but rather serves as an exemplar. Moreover, we summarize the definitions, model requirements, and primary challenges of each task in Tab. I. The detailed methodology design and taxonomy are presented respectively in the corresponding sections.
## 3 Fundamentals
This section briefly introduces the fundamentals of image synthesis under limited data, encompassing deep generative models, few-shot learning, and transfer learning. The general concepts and overall pipelines of these fundamentals are presented, enabling readers to gain a preliminary understanding of our survey. Notably, the detailed taxonomy and model designs of these fundamentals are beyond the scope of this section. For a more comprehensive survey of few-shot learning and transfer learning, we refer readers to [53, 54] and [72, 73].
### _Deep Generative Models_
Generative models aim to capture the actual underlying distribution of a given set of training data, with the goal of generating novel samples that closely follow this distribution. To achieve this, generative models must capture fine-grained details as much as possible. Typically, existing generative models can be broadly categorized into two categories: likelihood-based models and likelihood-free models. Likelihood-based models explicitly maximize the likelihood probability of the given data distribution, while likelihood-free models implicitly capture the data distribution through a mini-max game between two sub-networks. In particular, likelihood-based generative models could be further categorized into Variational AutoEncoders (VAEs) [74], Diffusion Probabilistic Models (DPMs) [75, 7], and Normalizing Flows (NFs) [76], whereas the likelihood-free one usually refers to Generative Adversarial Networks (GANs) [40]. Fig. 2 presents the overall pipeline of each generative model. In the following, we briefly introduce the formulations and concepts underlying each generative model.
**Generative Adversarial Networks (GANs).** Generative Adversarial Networks (GANs) typically consist of two sub-networks, a generator and a discriminator. The two sub-networks are trained in an adversarial manner, where the generator tries to fool the discriminator by producing images that are difficult to distinguish from real ones, and the discriminator tries to correctly identify whether an image is real or generated. This process is optimized with a two-player mini-max game between the generator and the discriminator in an adversarial manner. Formally, they are optimized by
\[\begin{split}\mathcal{L}_{D}&=-\mathbb{E}_{\mathbf{x}\sim p_{\mathrm{data}}}[\log(D(\mathbf{x},\mathbf{c}))]-\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}}[\log(1-D(G(\mathbf{z},\mathbf{c})))],\\ \mathcal{L}_{G}&=-\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}}[\log(D(G(\mathbf{z},\mathbf{c})))].\end{split} \tag{1}\]
where \(D(\cdot)\) and \(G(\cdot)\) denote the discriminator and the generator, respectively. \(\mathbf{x}\) represents the real samples and \(G(\mathbf{z})\) denotes the generated ones. \(\mathbf{c}\) is the additional condition for conditional generation, which is optional during training. Because training seeks a Nash equilibrium between the generator and the discriminator, GANs are notoriously difficult to train. As a result, several issues like gradient vanishing, mode collapse, and training divergence are prone to happen. In order to improve the training stability and model performances of GANs, enormous research efforts have been devoted, mostly focusing on designing loss-variant and architecture-variant models. For instance, PG-GAN [77], BigGAN [1], and the StyleGAN series [78, 2, 3] have developed dedicated architectures that enable the injection of fine details into the generating process, resulting in photo-realistic image synthesis. In contrast, WGAN [79, 80] and f-GAN [81] employ different optimization objectives to ameliorate the generation quality. Conditional GANs have been formulated to enable more controllable generation by injecting additional conditions (such as class labels [82, 83, 84], segmentation masks [85, 86, 87], or training images [88, 89]) into both the generator and the discriminator. However, many of the previous GANs are trained on large-scale datasets. When the training data is limited, problems like overfitting [35, 90], memorization [36, 37] and non-convergent training [91] might arise.
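As a minimal illustration of Eq. (1), the unconditional adversarial losses can be written in a few lines of PyTorch; here \(D\) is assumed to output raw logits, and the generator uses the non-saturating form \(-\log D(G(\mathbf{z}))\) given above. This sketch omits the conditioning variable \(\mathbf{c}\) and all of the stabilisation techniques discussed in this survey.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, real, z):
    """L_D of Eq. (1): -log D(x) - log(1 - D(G(z))), with D producing logits."""
    fake = G(z).detach()                              # do not backprop into G here
    logits_real, logits_fake = D(real), D(fake)
    loss_real = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
    loss_fake = F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    return loss_real + loss_fake

def generator_loss(D, G, z):
    """L_G of Eq. (1): -log D(G(z)) (non-saturating logistic loss)."""
    logits_fake = D(G(z))
    return F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
```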
**Variational Autoencoders (VAEs).** Variational AutoEncoders (VAEs) aim to learn a latent variable model that captures the underlying distribution of the data, thereby enabling the learning of a compressed representation of the data that captures its essential features. In other words, VAEs seek to learn a probabilistic mapping from the input data to the latent space. VAEs are trained to maximize the likelihood of the data while simultaneously minimizing the distance (_i.e._, KL divergence) between the learned latent variable distribution and a prior distribution, _e.g._, a standard normal distribution. This is achieved by optimizing a variational lower bound on the log-likelihood of the data, which consists of two terms: the reconstruction loss and the KL divergence loss. Formally, VAEs maximize a variational lower bound (the evidence lower bound, ELBO) on the log-likelihood of the training data:
\[\mathcal{L}_{VAE}=-D_{KL}\left(q_{\phi}(\mathbf{z}\mid\mathbf{x})\,\|\,p_{\theta}(\mathbf{z})\right)+\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}\mid\mathbf{x})}\log\left(p_{\theta}(\mathbf{x}\mid\mathbf{z})\right). \tag{2}\]
where \(q_{\phi}(\mathbf{z}\mid\mathbf{x})\) denotes the approximate posterior, and \(p_{\theta}(\mathbf{x}\mid\mathbf{z})\) represents the likelihood of the training data given the latent code. Variational AutoEncoders (VAEs) are trained to maximize Eq. 2 with respect to the parameters of the encoder and decoder neural networks. Once trained, the decoder of VAEs can be used to generate new samples by sampling from the prior distribution and decoding the
resulting latent variables. Compared to GANs, VAEs are more stable to optimize due to the use of log likelihood estimation. However, the synthesized samples of VAEs are often blurry and noisy due to the injection of noise distribution and imperfect pixel-level reconstruction. Moreover, the imbalance between the prior distribution (_i.e._, a Gaussian) and the limited training data may make VAEs difficult to optimize, leading to unsatisfactory synthesis performance and unstable training, particularly in few-shot scenarios. Therefore, VAEs are not suitable for image synthesis under limited data.
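A compact PyTorch sketch of the negated objective in Eq. (2), i.e., the loss that is minimised in practice, is given below. It assumes a diagonal-Gaussian posterior, a standard-normal prior, and a Bernoulli decoder, and is an illustration rather than any specific published implementation.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Sample z ~ q_phi(z|x) = N(mu, diag(exp(logvar))) with the reparameterization trick."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def vae_loss(x, x_recon, mu, logvar):
    """Negative of Eq. (2): reconstruction term plus KL to the standard-normal prior."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")       # -E log p_theta(x|z)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())      # D_KL(q_phi || N(0, I))
    return recon + kl
```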
**Diffusion Probabilistic Models (DPMs).** The basic idea of DPMs is to learn a stochastic process that can transform a pure distribution of noise, _i.e._, Gaussian distribution, into a complex distribution that approximates the given data distribution. In particular, the process of DPMs involves iteratively adding noise to the clean image \(\mathbf{x}_{0}\) by a noise schedule \(\beta_{1:T}\). Here, \(T\) denotes the total time step, and when \(T\) is large enough, \(\mathbf{x}_{T}\) is pure Gaussian noise. Parameterized by a Markov chain, the process of adding noise to the clean images is referred to as the diffusion
TABLE I: Problem categories, formulation, key challenges, and primary solutions of various tasks for image synthesis under limited data.

| Problem Categories | Problem Formulation | Challenges | Key Solutions |
| --- | --- | --- | --- |
| Data-efficient Generative Models | Directly trained on \(\mathbf{D}\) | Model overfitting; model memorization; unstable training | Data augmentation; architecture variants; loss regularization |
| Few-shot Generative Adaptation | Transfer from \(\mathbf{D_{s}}\) to \(\mathbf{D_{t}}\) | Model overfitting; domain gaps; incompatible knowledge | Fine-tuning; introducing extra branches; loss regularization |
| Few-shot Image Generation | Learn to generate novel samples for unseen categories from \(\mathbf{D_{s}}\) | Catastrophic forgetting; knowledge transfer; failure to generalize | Optimization-based; transformation-based; fusion-based |
| One-shot Image Generation | Learn the internal distribution of a single image | Collapse to replicating the input image; synthesis variance; training time | Multi-stage training; patch-based training; distribution matching |
Fig. 1: Various problem settings of image synthesis models under limited data. In particular, (a) represents the data-efficient GAN training pipeline that learns to capture the observed distribution from scratch with limited data; (b) denotes the pipeline of few-shot generative adaptation, which transfers prior knowledge from pre-trained large-scale source generative models to target domains with very few images, _e.g._, 10-shot images; (c) shows the learning scheme of few-shot image generation, the model is expected to produce novel samples given several input conditional images; (d) presents the training process of one-shot image generation, the model is trained solely on one single image in a coarse-to-fine manner to capture the underlying internal distribution of the reference image. It is imperative to note that the sub-figures presented herein serve solely as illustrative aids to convey the problem settings of diverse image synthesis tasks. As such, it should be understood that not all approaches utilized in these tasks are consistent with the pipeline depicted in the diagrams.
process.
\[\begin{split} q\left(\mathbf{x}_{t}\mid\mathbf{x}_{t-1}\right)& =\mathcal{N}\left(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}, \beta_{t}\mathbf{I}\right),\\ q\left(\mathbf{x}_{1:T}\mid\mathbf{x}_{0}\right)&= \prod_{t=1}^{T}q\left(\mathbf{x}_{t}\mid\mathbf{x}_{t-1}\right).\end{split} \tag{3}\]
Diffusion Probabilistic Models (DPMs) are trained to recover the original image \(\mathbf{x}_{0}\) from Gaussian noise \(\mathbf{x}_{T}\) by gradually modeling the reverse of the transition distribution \(q(\mathbf{x}_{t-1}\mid\mathbf{x}_{t})\). However, calculating the posterior \(q(\mathbf{x}_{t-1}\mid\mathbf{x}_{t})\) directly from \(\mathbf{x}_{t}\) is a non-trivial task, and thus, DPMs are optimized in a maximum likelihood manner, akin to Variational AutoEncoders (VAEs).
\[\mathcal{L}_{DPM}=\mathbb{E}_{t\sim[1,T]}\mathbb{E}_{x_{0}\sim p(x_{0})} \mathbb{E}_{z_{t}\sim\mathcal{N}(0,\mathbf{I})}\left\|z_{t}-z_{\theta}\left( x_{t},t\right)\right\|^{2}, \tag{4}\]
where \(z_{\theta}\left(x_{t},t\right)\) is the network trained to predict the noise in \(\mathbf{x}_{t}\). Benefiting from the tractable Markov chain and the simple loss function, DPMs provide satisfactory coverage of data distribution and can produce high-quality samples with sharp details and textures. Furthermore, large-scale text-to-image diffusion models, such as Stable Diffusion [5] and DALLE-2 [17], have empowered various downstream applications, including image-editing [27, 92], image-inpainting [93, 94, 95, 96], and so on. Meanwhile, the development of data-efficient DPMs for limited-data generation has also garnered significant attention from the community [97, 66, 98].
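The training objective of Eqs. (3)-(4) reduces to a one-line noise-prediction loss once the forward process is written in closed form. In the sketch below, `model` is a placeholder noise predictor \(z_{\theta}(\mathbf{x}_{t},t)\) and `alphas_cumprod` holds \(\bar{\alpha}_{t}=\prod_{s\leq t}(1-\beta_{s})\) precomputed from the noise schedule; both are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    """Sample a timestep, corrupt x0 with the closed-form forward process of Eq. (3),
    and regress the injected noise as in Eq. (4)."""
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise    # q(x_t | x_0)
    return F.mse_loss(model(x_t, t), noise)
```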
**Normalizing Flows (NFs).** Normalizing Flows (NFs) learn a sequence of invertible transformations capable of mapping samples from a simple distribution to samples from a complex distribution. Once trained, these transformations can be composed to form a complex function that captures the underlying structure of the original data. To be more specific, NFs begin with a simple distribution, _e.g._, Normal distribution, and a series of invertible functions \(f_{1:N}(\cdot)\) transform the simple distribution to the complex data distribution:
\[\mathbf{z}_{i}=f_{i}\left(\mathbf{z}_{i-1}\right). \tag{5}\]
Note that \(f_{i}\) is invertible; thus, the probability distribution of \(\mathbf{z}_{i}\) can be estimated by:
\[\begin{split} p\left(\mathbf{z}_{i}\right)&=p \left(\mathbf{z}_{i-1}\right)\left|\frac{df_{i}}{d\mathbf{z}_{i-1}}\right|^{-1 },\\ \log p\left(\mathbf{z}_{i}\right)&=\log p\left( \mathbf{z}_{i-1}\right)-\log\left|\frac{df_{i}}{d\mathbf{z}_{i-1}}\right|.\end{split} \tag{6}\]
In this way, the final probability distribution is calculated by:
\[\log p\left(\mathbf{z}_{N}\right)=\log p\left(\mathbf{z}_{0}\right)-\sum_{i=1}^{N}\log\left|\frac{df_{i}}{d\mathbf{z}_{i-1}}\right|. \tag{7}\]
As evident from the learning scheme, NFs can generate samples from the target distribution exactly, rather than approximating it through sampling. However, their computational cost can be prohibitively high for large datasets and complex distributions. Additionally, NFs struggle with high-dimensional data because of the curse of dimensionality. The invertibility requirement further limits the synthesis performance, as the complexity of the transformations grows exponentially with the dimensionality of the data.
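The change-of-variables computation of Eqs. (5)-(7) maps directly onto code: to evaluate \(\log p(\mathbf{x})\), the data is pulled back through the inverse flows while the log-determinants are accumulated. The `inverse(y) -> (z, log_det)` interface used below is an assumption for illustration; actual flow libraries expose the same quantities under different names.

```python
import torch

def flow_log_prob(base_dist, flows, x):
    """log p(x) = log p(z_0) - sum_i log|det df_i/dz_{i-1}|, cf. Eq. (7).
    Each flow is assumed to provide inverse(y) -> (z, log_det), where log_det is
    log|det df_i/dz| evaluated at the inverted point."""
    log_dets = torch.zeros(x.shape[0], device=x.device)
    z = x
    for f in reversed(flows):            # x = z_N -> z_0 through the inverse maps
        z, log_det = f.inverse(z)
        log_dets = log_dets + log_det
    return base_dist.log_prob(z).sum(dim=-1) - log_dets
```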
### _Few-shot Learning_
Inspired by humans' ability to learn new concepts from a few observations, few-shot learning (FSL), which seeks to learn novel classes from a _few_ samples, has gained significant attention. Concretely, as demonstrated in Fig. 3, a dataset is divided into two sets of classes: _base classes_ \(\mathbf{C}_{b}\) and _novel classes_ \(\mathbf{C}_{n}\), where \(\mathbf{C}_{b}\cap\mathbf{C}_{n}=\emptyset\). Then, the model is trained on the _base classes_ \(\mathbf{C}_{b}\) in an episodic task-by-task manner, where each episode consists of a _support set_ \(\mathcal{S}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{|\mathcal{S}|}\) and a _query set_ \(\mathcal{Q}=\{(\mathbf{x}_{j})\}_{j=1}^{|\mathcal{Q}|}\). Finally,
Fig. 3: The general concepts of few-shot learning and transfer learning.
Fig. 2: An overview of various deep generative models.
the model is expected to adapt the learned knowledge from the _base classes_ to the _novel classes_ \(\mathbf{C}_{n}\), _i.e._, capturing samples of new categories efficiently from only a few samples. This learning paradigm has made remarkable progress in various tasks of the few-shot learning field, such as classification [51, 52, 99, 100, 101], object detection [102, 103] and segmentation [104, 105, 106, 107, 108], as well as image generation [41, 43, 44, 109]. Existing surveys on few-shot learning [53, 54, 110] mainly discuss classification and perception tasks, while generation remains largely unexplored. Therefore, a comprehensive review of few-shot generation is essential and complementary to prior work.
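The episodic, task-by-task sampling described above can be emulated with a simple helper. The function below builds one \(N\)-way \(K\)-shot episode from the base classes; it is only a schematic illustration of the protocol, not the loader of any specific benchmark.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=5, n_query=15):
    """Return one N-way K-shot episode (support set S, query set Q) from an iterable
    of (image, label) pairs drawn from the base classes C_b."""
    by_class = defaultdict(list)
    for image, label in dataset:
        by_class[label].append(image)
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):     # classes are relabelled 0..N-1 per episode
        images = random.sample(by_class[cls], k_shot + n_query)
        support += [(img, episode_label) for img in images[:k_shot]]
        query += [(img, episode_label) for img in images[k_shot:]]
    return support, query
```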
### _Transfer Learning_
The generalization ability of traditional models is often impeded by the discrepancies between different domains. To address this challenge, transfer learning has been developed to explicitly reduce the shift across data distributions. Transfer learning aims to transfer learned prior knowledge from a _source domain_\(\mathcal{D}_{S}\), where sufficient training data is available, to a _target domain_\(\mathcal{D}_{T}\) with limited data, as illustrated in Fig. 3 (b). As a mature and well-defined topic, transfer learning has been successfully applied in many practical domains, ranging from natural language processing (NLP) [111], speech recognition [112, 113, 114], to various computer vision tasks [114, 115, 116, 117, 118, 119]. Based on the relationship between the source and the target domains, transfer learning can be further classified into several different types [120], such as multi-task learning, domain adaptation, unsupervised transfer learning, and so on. In the field of image synthesis under limited data, transfer learning techniques are leveraged to reuse the pre-trained source-domain models and transfer relevant knowledge to improve the synthesis quality of the target domain, which will be elaborated in the following context.
## 4 Data Efficient Generative Models
In this section, we provide a detailed discussion and analysis of data efficient generative models. The problem of data-efficient generative modeling is defined in Sec. 4.1, followed by a summary of existing models categorized into four distinct types in Sec. 4.2. Finally, we discuss popular benchmarks and the performance of existing models in Sec. 4.3.
### _Problem Definition_
Data efficient generative models refer to the scenario where a generative model is trained on a limited amount of training data, such as 100 images from 100-shot datasets [36, 91] or 1336 images from the MetFace dataset [35], to produce diverse and plausible images that follow the given distribution. However, several issues such as model overfitting and memorization are prone to occur when training a generative model on limited samples \(D\) from scratch. Additionally, the imbalance between the discrete limited training samples and continuous latent distribution might lead to unstable training [60, 37, 121]. Therefore, data efficient generative models are expected to have satisfactory data efficiency to capture the given distribution. Notably, in contrast to the few-shot generative adaptation in Sec. 5, data efficient generative models do not require pre-trained models for further fine-tuning. Considerable effort has been devoted to improving the synthesis quality of data efficient generative models, and these advancements are discussed below.
### _Model Taxonomy_
According to the different techniques and intuitions behind data-efficient generative models, existing approaches can be categorized into four distinct groups, namely augmentation-based approaches, regularization-based approaches, architecture variants, and approaches built on off-the-shelf models. We will provide technical details of each approach below.
**Augmentation-based approaches.** One straightforward solution for training generative models under limited data is to enlarge the training sets with data augmentation. Various data augmentation techniques have been successfully applied to increase data diversity, thereby mitigating the overfitting of generative models under limited data [36, 35, 90]. For instance, Karras _et al._ developed an adaptive discriminator augmentation (ADA) strategy to adaptively control the strength of data augmentation [35]. Similarly, Zhao _et al._ proposed differentiable augmentation (DiffAug) to impose various types of differentiable augmentations on both real and fake samples [36]. Furthermore, Jiang _et al._ designed adaptive pseudo augmentation (APA) to alleviate overfitting by adaptively augmenting the real data with the generator itself, enabling healthier competition between the generator and the discriminator [90]. Formally, the objective of augmentation-based models is given as:
\[\begin{split}\mathcal{L}_{D}&=-\mathbb{E}_{\mathbf{x}\sim p_{\mathrm{data}}}[\log(D(\mathbf{T}(\mathbf{x})))]-\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}}[\log(1-D(\mathbf{T}(G(\mathbf{z}))))],\\ \mathcal{L}_{G}&=-\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}}[\log(D(\mathbf{T}(G(\mathbf{z}))))],\end{split} \tag{8}\]
where \(\mathbf{T}(\cdot)\) denotes various data augmentations. In order to enable the generative model to work with more robust augmentations, Jeong proposed ContraD [122], which leveraged contrastive learning [123, 124, 125] to incorporate a wide range of data augmentations in GAN training. Specifically, ContraD employed one network to extract a contrastive representation from a given set of data augmentations and samples (both real and generated images). Then, the actual discriminator is defined upon the contrastive representation to minimize the training loss. Notably, classical data augmentations, such as rotation and translation, have been identified as potentially manipulating the real distribution and misleading the generator to learn the distribution of the augmented data [39]. Consequently, prior studies employ either differentiable [35, 36] or invertible [39] transformations to augment the training sets. Moreover, Huang _et al._ randomly masked out spatial and spectral information of input images to encourage more challenging holistic learning from limited data [126]. The masked images can be viewed as augmenting images with random masks, enlarging the training set, and simultaneously building a robust discriminator. Furthermore, besides applying augmentations at the image level, Dai _et al._ developed an implicit augmentation method to accomplish sample interpolation at the feature level [127]. In this way, the
interpolated features could be viewed as new samples on the real data manifold, thus facilitating model training in low-data regimes. Data augmentation is an intuitive and promising solution to improve data efficiency, and it is often complementary to other alternatives since no modification of the model or loss functions is required.
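The essential point of Eq. (8) is that the augmentation \(\mathbf{T}(\cdot)\) is applied to both real and generated images and must remain differentiable so that generator gradients can pass through it. The sketch below uses a deliberately toy policy (brightness jitter plus an integer translation) with the equivalent logistic form of the losses; DiffAug [36] and ADA [35] use richer, carefully designed policies.

```python
import torch
import torch.nn.functional as F

def T(x):
    """Toy differentiable augmentation: random brightness shift + random horizontal roll."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * 0.4
    shift = int(torch.randint(-2, 3, (1,)))
    return torch.roll(x, shifts=shift, dims=3)        # index permutation, gradients pass through

def d_loss_aug(D, G, real, z):
    """Discriminator side of Eq. (8): both branches see augmented images."""
    logits_real, logits_fake = D(T(real)), D(T(G(z).detach()))
    return F.softplus(-logits_real).mean() + F.softplus(logits_fake).mean()

def g_loss_aug(D, G, z):
    """Generator side of Eq. (8): gradients flow through T into G."""
    return F.softplus(-D(T(G(z)))).mean()
```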
**Regularization-based approaches.** Regularization is a popular technique to stabilize the training of deep models by penalizing the training process with additional constraints [128]. In particular, various regularization approaches have been proposed [129, 130, 131, 132] to mitigate the overfitting of data efficient generative models. For instance, Tseng _et al._ first tracked the discriminator predictions with exponential moving average variables (_i.e._, anchors) and then calculated their proposed regularization term to push the discriminator to mix the predictions of real and generated images, enabling a more robust training objective [133]. Similarly, Fang _et al._ regularized the discriminator by narrowing the gap between the norm of the gradients of the discriminator's prediction _w.r.t_ real images and _w.r.t_ generated images, thus avoiding bad attractors within the loss landscape [129]. Furthermore, Yang developed a prototype-based regularization to improve fidelity and a variance-based regularization to facilitate diversity [134]. In general, the regularization technique is orthogonal to other solutions since the network architecture remains unchanged. For example, Zhao _et al._ augmented the real data with various augmentations and penalized the sensitivity of the discriminator to these augmentations with consistency regularization (CR) [130]. However, CR might introduce artifacts into the GAN samples since the augmentations are only applied to the real images, leading to an imbalanced training process. To address this, Zhao _et al._ proposed balanced consistency regularization (bCR) to apply the augmentations on both real and generated samples. They also introduced latent consistency regularization (zCR) to modulate the sensitivity of the generator and discriminator changes in the latent space [131]. Following the same motivation of improving the discriminator, Kim _et al._ devised feature statistics mixing regularization (FSMR) to encourage the discriminator to be invariant to different styles of images [132]. This is accomplished by mixing features of an original image and a reference image in the feature space, generating images with novel styles in the semantic feature space. However, the aforementioned methods still learned the discriminator by encouraging it to distinguish real and generated samples, which might provide insufficient feedback to the generator. To combat this, Yang _et al._ assigned an additional instance discrimination task to the discriminator, which required the discriminator to distinguish every individual instance [37]. Considering that the synthesized images can be infinitely sampled, this approach provided the discriminator with sufficient samples to improve its representation ability. In turn, the generator received meaningful feedback from the discriminator, enabling it to produce diverse images.
In addition to overfitting, generative models under limited data might display another undesirable property named latent discontinuity [121], which refers to the model that yields discontinuous transitions in the output space when smoothly interpolated in the latent space. To address this, Kong _et al._ proposed a two-side mixup-based distance regularization [135]. They first sampled a random interpolation coefficient \(\mathbf{c}\) from a Dirichlet distribution to enforce relative semantic distances between synthesized samples to follow the mixup ratio. Simultaneously, controlled interpolation on the discriminator's feature space enables the semantic mixups of scarce data points to be obtained and exploited to guide the feature space to align with semantic distances. By doing so, both the latent space and feature space become smoother, and the latent space further enjoys mode-preserved properties. In contrast, Yang _et al._ introduced a noise perturbation strategy to enforce the discriminator's invariance to small perturbations in the latent space, thus improving the discriminative power [37]. Following this philosophy, Li _et al._ revisited the noise perturbation scheme and devised three additional techniques, namely noise-related latent augmentation, diversity-aware queue, and forgetting factor of the queue, to integrate contrastive learning into data efficient GAN training [121]. In particular, noise-related latent augmentation adopts different latent sampling to provide stronger similarity priors in low-density regions. Diversity-aware queue defines a dynamic queue size of negative samples based on the estimated sample diversity. Moreover, the forgetting factor of the queue assigns higher importance to current synthesized samples and lower attention to previous synthesized samples in the negative queue.
Typically, regularization is a simple yet effective approach that introduces additional constraints or extra priors to improve model stability. The technique is easy to implement since it requires no modifications to the network architecture or original loss functions. Hence, regularization is often complementary to other solutions. However, in limited-data regimes, stronger regularization is often required due to challenges such as model overfitting and memorization caused by data scarcity. Accordingly, finding more representative priors and additional supervision signals is crucial in developing more effective regularization techniques.
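As one concrete example of this family, the anchor-based regulariser of Tseng _et al._ [133] can be sketched as follows: exponential moving averages of the discriminator outputs on real and fake images serve as anchors, and the discriminator is penalised for drifting too far from the opposite anchor. The decay value and the way the penalty is weighted into the total loss are illustrative choices here; the exact settings follow the original paper.

```python
import torch

class LeCamAnchors:
    """EMA anchors of discriminator outputs for a LeCam-style regulariser [133] (sketch)."""
    def __init__(self, decay=0.99):
        self.decay, self.ema_real, self.ema_fake = decay, 0.0, 0.0

    def update(self, d_real, d_fake):
        self.ema_real = self.decay * self.ema_real + (1 - self.decay) * d_real.mean().item()
        self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * d_fake.mean().item()

    def penalty(self, d_real, d_fake):
        # Pull real predictions towards the fake anchor and vice versa, which bounds the
        # discriminator and mixes its predictions of real and generated images.
        return (torch.mean((d_real - self.ema_fake) ** 2)
                + torch.mean((d_fake - self.ema_real) ** 2))
```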
**Architecture variants.** Another popular technique for improving the data efficiency of generative models is to design suitable network architectures. While style-based generative models [78, 2, 3] have achieved impressive performance, they still struggle when given limited data due to their massive parameters, which lead to severe overfitting. To address this issue, Liu _et al._ proposed a light-weight GAN, _i.e._, FastGAN [91], which consists of a skip-layer channel-wise excitation module and a self-supervised discriminator to accomplish high-quality synthesis with minimal computing cost. Building on the success of FastGAN, Li _et al._ proposed a memory concept attention (MoCA) [136] to dynamically update and cache the prototype memories with a momentum encoder. This attention mechanism can also modulate the continuous response of intermediate layers, allowing for a hierarchical and flexible composition of novel concepts to produce new images. Yang _et al._ further enhanced the performance of FastGAN and MoCA by introducing a frequency-aware discriminator and a high-frequency alignment module, which aims to mitigate the unhealthy competition between
the generator and the discriminator [42]. In order to discover the optimal model designs of FastGAN, Shi _et al._ proposed AutoInfoGAN [137], which leveraged a reinforcement learning neural architecture search (NAS) approach to identify the best network architectures. Additionally, a contrastive loss was assigned to the discriminator to improve its representative ability. Alternatively, Cui _et al._ proposed a generative co-training framework named GenCo [138], which incorporated multiple complementary discriminators into the model. This framework enabled the generator to obtain diverse supervision from various distinct views of multiple discriminators. However, the computational cost of GenCo was substantially higher than that of FastGAN since multiple independent discriminators were jointly trained.
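To give a flavour of these light-weight designs, a skip-layer channel-wise excitation block in the spirit of FastGAN [91] can be written compactly: a low-resolution feature map is pooled and turned into per-channel gates for a high-resolution feature map, providing long-range skip connections at negligible cost. The kernel sizes and activations below are our own illustrative choices, not the exact published configuration.

```python
import torch
import torch.nn as nn

class SkipLayerExcitation(nn.Module):
    """Sketch of a skip-layer excitation block: low-res features gate high-res channels."""
    def __init__(self, ch_low, ch_high):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),
            nn.Conv2d(ch_low, ch_high, kernel_size=4, bias=False),   # 4x4 -> 1x1
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(ch_high, ch_high, kernel_size=1, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, feat_low, feat_high):
        # (B, ch_high, 1, 1) gates broadcast over the spatial dimensions of feat_high.
        return feat_high * self.gate(feat_low)
```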
Besides designing novel architectures, reducing the parameter complexity of existing large-scale models is also a promising solution for improving data efficiency. For instance, Chen _et al._ discovered independently trainable and highly sparse subnetworks (_i.e._, lottery tickets [139, 140]) from the original model and focused on learning the sparse subnetworks to enable a data-efficient generative model [141]. However, finding GAN tickets required an additional resource-consuming process of train-prune-retrain, which is expensive in practice. To address this, Saxena _et al._ proposed Re-GAN [142], which dynamically reconfigured the model architecture during training. Specifically, Re-GAN repeatedly pruned unimportant connections of the model and grew the connections during training to reduce the risk of pruning important connections. Considering that the discriminator overfits easily to limited data in the early stage of training, Yang _et al._ proposed DynamicD [143] to gradually decrease the capacity of the discriminator. Concretely, DynamicD randomly shrank the layer width with a shrinking coefficient throughout the training process. This scheme enabled decreased model capacity and introduced multiple discriminators benefiting from the random sampling. Compressing the model capacity relaxes the data requirement since the pruned model is more lightweight, and this solution is orthogonal to approaches that keep the model unchanged for a further performance boost.
**Off-the-shelf models.** In contrast to generative modeling that is trained from scratch in an unsupervised manner, visual recognition tasks typically utilize off-the-shelf large-scale pre-trained models for downstream applications. These models have proven effective at capturing useful representations, thus one would naturally wonder whether these models can be employed in training generative models. Sauer _et al._ made the first attempt to introduce pre-trained models into GAN training via projecting generated and real samples into a pre-defined feature space [38]. The projected features were then mixed across channels and resolutions to exploit the full potential of pre-trained perceptual feature spaces. This method significantly improved sample efficiency and convergence speed, reducing training time, since only a small number of parameters were learned. Mangla _et al._ transferred informative prior knowledge derived from self-supervised/supervised pre-trained networks to facilitate the GAN training [144]. In particular, they used representations of each instance of training data obtained from the pre-trained models as a prior instead of a data distribution itself. However, with so many off-the-shelf models available in the community, it was still unclear which one(s) should be selected and in what manner they could be most effective. Accordingly, Kumari _et al._ proposed to integrate the most accurate model by probing the linear separability between real and synthesized samples in the pre-trained model embedding space, enabling ensembled discriminators [145]. It turned out that ensembling off-the-shelf models can improve GAN training in both limited data and large-scale settings. In particular, by leveraging powerful pre-trained neural networks and a progressive growing strategy, StyleGAN-XL [146] achieved a new state-of-the-art performance on large-scale image synthesis tasks, _e.g._, obtaining the best \(2.30\) FID score on \(256\times 256\) ImageNet [34].
Unlike the above approaches that directly employed off-the-shelf models, Cui _et al._ proposed a knowledge distillation approach to leverage the prior information of pre-trained vision-language models [147]. Through distilling general knowledge from text-image paired information of the vision-language model (_i.e._, CLIP [148]), the data efficiency was effectively improved. Discriminative and generative models share similar objectives in learning meaningful representations of observed data. Therefore, stronger pre-trained visual perceptual models might bring further improvements to visual synthesis tasks, especially when training samples are limited. However, the selection of pre-trained models should be carefully considered since some models (e.g., Inception-V3 [149]) might have a large perceptual null space [150], leading to no actual performance gains [151].
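The core mechanism shared by these methods, discriminating in a frozen pre-trained feature space, is easy to sketch. The snippet below freezes a torchvision backbone and trains only a small head on top of its features, in the spirit of Projected GAN [38] and vision-aided training [145]; the actual methods additionally mix several feature scales and use random projections, which are omitted here. The specific backbone and weight identifier are illustrative and assume a recent torchvision release.

```python
import torch
import torch.nn as nn
from torchvision import models

class ProjectedDiscriminator(nn.Module):
    """Sketch: a frozen pre-trained feature extractor followed by a small trainable head."""
    def __init__(self):
        super().__init__()
        backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
        self.features = backbone.features.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)            # frozen, but gradients still reach the generator
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1280, 1))

    def forward(self, x):
        feats = self.features(x)               # fixed perceptual space
        return self.head(feats)                # only this head is updated
```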
### _Benchmarks and Performances_
In this part, popular benchmarks for evaluating data efficient generative models are introduced, and the performances of prior approaches on these benchmarks are summarized for a more comprehensive presentation.
**FFHQ.** The full set of FFHQ contains 70\(K\) human-face images [32], and it is the most commonly used dataset in the community. In order to evaluate the data efficiency of various generative models, several subsets are randomly sampled from the full set, such as \(1K\) or \(2K\) images, and these images are usually resized to 256 \(\times\) 256 \(\times\) 3 resolution. Notably, although trained on a subset of all available training images, the quantitative metrics (_e.g._, FID [152], KID [153], LPIPS [154]) are evaluated between 50\(K\) synthesized images and the full 70\(K\) images. Tab. II presents the FID scores of existing models under various data volumes of the FFHQ dataset. We can tell from these results that 1) augmentation-based approaches such as ADA [35] exhibit satisfactory compatibility with regularization-based methods (_e.g._, InsGen [37], FakeCLR [121]); 2) various techniques present different data efficiency under different data volumes; for example, InsGen surpasses other alternatives on 10\(K\) images, whereas FakeCLR is the best under 2\(K\) and 5\(K\) training images. Additionally, the potential of combining architecture-variant methods with other types of approaches is under-explored. More research is needed to investigate their compatibility for a further performance boost.
**AFHQ and CIFAR-10 datasets.** AFHQ [155] consists of three sub-categories, including cat, dog, and wild, each containing about 5\(K\) training images with the resolution
of 512 \(\times\) 512 \(\times\) 3. The three subsets are usually trained and tested individually in an unsupervised manner to evaluate the data efficiency. Moreover, the full set of CIFAR-10 [156] contains 60\(K\) training images with the resolution of 32 \(\times\) 32 \(\times\) 3. Common choices are randomly sampling 10%, 20%, and 100% of the samples from the full set for evaluation. Tab. III and Tab. IV respectively show the FID quantitative results of prior methods on the AFHQ and CIFAR-10 datasets. From these results, we can see that different types of methods are complementary to each other. Consequently, combining these techniques could lead to further performance improvements.
**Low-shot datasets.** In addition to sampling subsets from large-scale datasets, there are also single-category low-shot datasets [36] used for evaluating data-efficient generative models, such as Animal-Faces-Cat and Animal-Faces-Dog, which respectively contain 160 and 389 images for training, and 100-shot-Obama, Grumpy_Cat (GCat), and Panda with 100 images. These datasets are used to evaluate whether generative models can capture the data distribution of a low-shot dataset and produce diverse and novel images. As for evaluation, 50\(K\) generated images represent the synthesized distribution and the training set is employed as the referenced distribution. The quantitative results of existing models on these datasets are presented in Tab. V. Despite existing models showing acceptable data efficiency and training stability on these low-shot datasets, there is a risk of memorizing the training images since these datasets contain only simple objects (_e.g.,_ clear cat and dog faces), leading to favorable synthesis diversity. Thus, it is recommended to consider evaluating data efficiency on datasets that contain more diverse objects. Moreover, evaluation metrics should consider aspects of synthesized images in terms of fidelity, diversity, and distributional discrepancy. Additionally, there are also some other benchmarks used for evaluating data-efficient generative models, such as higher resolution datasets like MetFace [35] (1336 images with 1024 \(\times\) 1024 \(\times\) 3) and BrecaHAD [157] (136 images with 512 \(\times\) 512 \(\times\) 3) used in StyleGAN-ADA [35], as well as small subsets of CIFAR-100 and ImageNet evaluated in [133, 142]. Interested readers can refer to the original papers for more experimental details.
Although the approaches mentioned above achieve impressive data efficiency when trained from scratch with limited data, these models still tend to replicate the training images and produce less diverse outputs due to memorization. Accordingly, stronger and more effective
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & \multicolumn{3}{c}{CIFAR-10} \\ \cline{3-5} & & 10\% & 20\% & 100\% \\ \hline StyleGAN2 [2] & - & 5.13 & 19.37 & 3.48 \\ ADA [35] & Augmentation & 3.55 & 7.40 & 3.05 \\ APA [90] & Augmentation & 4.88 & - & - \\ FastGAN [91] & Architecture & 10.17 & 25.36 & 7.30 \\ FreGAN [42] & Architecture & 6.62 & 20.75 & 6.37 \\ DynamicD [143] & Architecture & 5.41 & 16.00 & 3.34 \\ ContraD [122] & Regularization & 7.16 & 3.82 & 2.54 \\ InsGen [37] & Regularization & 2.60 & 5.44 & 1.77 \\ FSMR + ADA [132] & Regularization & 11.76 & 5.71 & 3.24 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: FID (\(\downarrow\)) scores of previous data-efficient generative models on the CIFAR-10 dataset [156]. The results are quoted from the published papers.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & \multicolumn{3}{c}{AFHQ} \\ \cline{3-5} & & Cat & Dog & Wild \\ \hline StyleGAN2 [2] & - & 36.02 & 23.08 & 11.07 \\ StyleGAN-ADA [35] & Augmentation & 23.34 & 14.53 & 8.75 \\ DiffAug [36] & Augmentation & 14.50 & 12.15 & 9.89 \\ FSMR + ADA [132] & Regularization & 5.71 & 11.76 & 3.24 \\ FSMR + DiffAug [132] & Regularization & 6.29 & 14.55 & 4.28 \\ GenCo [138] & Architecture & 28.08 & 16.57 & 8.83 \\ GenCo + ADA [138] & Architecture & 18.10 & 12.61 & 7.98 \\ LeCam + ADA [133] & Regularization & 6.56 & - & 2.47 \\ Vision-Aided + ADA [145] & Off-the-shelf & 2.69 & 4.81 & 2.36 \\ ProjectedGAN [38] & Off-the-shelf & 2.16 & 4.52 & 2.17 \\ \hline \hline \end{tabular}
\end{table} TABLE III: FID (\(\downarrow\)) scores of previous data-efficient generative models on the AFHQ dataset [155]. The results are quoted from the published papers.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & \multicolumn{5}{c}{FFHQ} \\ \cline{3-7} & & 0.1\(K\) & \(1K\) & \(2K\) & \(5K\) & \(10K\) \\ \hline StyleGAN2 [2] & - & 179.21 & 100.16 & 54.30 & 49.68 & 30.73 \\ ADA [35] & Augmentation & 82.17 & 21.29 & 15.39 & 10.96 & 7.29 \\ CR [130] & Regularization & 179.66 & - & 71.61 & - & 23.02 \\ CR + ADA [131] & Regularization & - & 22.61 & - & 10.58 & 7.53 \\ DiffAug [36] & Augmentation & 61.91 & 25.66 & 24.32 & 10.45 & 7.86 \\ DISP [144] & Off-the-shelf & - & - & 21.06 & - & - \\ GenCo [138] & Architecture & 148 & 65.31 & 47.32 & 27.96 & - \\ APA + ADA [90] & Augmentation & 65.31 & 18.89 & 16.91 & 8.38 & - \\ LeCam [133] & Regularization & - & 63.16 & - & 23.83 & 14.58 \\ InsGen [37] & Regularization & 53.93 & 19.95 & 18.19 & - & 4.9 \\ FakeCLR + ADA [121] & Regularization & 42.56 & 15.92 & 9.9 & 7.25 & - \\ Re-GAN [142] & Architecture & - & 36.30 & - & 19.13 & - \\ DynamicD [143] & Architecture & 50.37 & - & 23.47 & - & - \\ \hline \hline \end{tabular}
\end{table} TABLE II: FID (\(\downarrow\)) scores of previous methods on different amounts of training images of the FFHQ dataset [32]. The results are quoted from the published papers.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & \multicolumn{2}{c|}{Animal Faces} & \multicolumn{3}{c}{100-Shot} \\ \cline{3-7} & & Cat & Dog & Obama & GCat & Panda \\ \hline StyleGAN2 [2] & - & 69.84 & 129.90 & 80.45 & 486.83 & 34.07 \\ ADA [35] & Augmentation & 42.50 & 58.74 & 47.09 & - & 22.12 \\ DiffAug [36] & Augmentation & 42.44 & 58.85 & 46.78 & 27.08 & 12.06 \\ LeCam + ADA [133] & Regularization & 34.18 & 54.88 & 33.16 & 24.93 & 11.66 \\ APA + ADA [90] & Augmentation & 42.60 & 81.16 & 42.97 & 18.10 & 19.21 \\ GenCo [138] & Architecture & 30.89 & 49.63 & 32.21 & 17.79 & 9.49 \\ Lottery Ticket + DiffAug [130] & Architecture & 47.40 & 68.28 & 52.86 & 40.10 & 14.75 \\ InsGen [57] & Regularization & 33.01 & 44.93 & 32.42 & 22.01 & 9.85 \\ FakeCLR [121] & Regularization & 26.34 & 42.02 & 26.95 & 19.68 & 8.22 \\ FragScan + DiffAug [130] & Architecture & 35.11 & 50.26 & 43.05 & 10.03 \\ MoCA + DiffAug [134] & Architecture & 38.00 & 54.04 & 34.13 & 24.87 & 11.24 \\ ProtoCA + DiffAug [134] & Architecture & 33.31 & 50.89 & 33.46 & 24.92 & 9.52 \\ FeGAN + DiffAug [62] & Architecture & 31.05 & 47.85 & 33.39 & 24.93 & 8.97 \\ Re-GAN [142] & Architecture & 42.11 & 57.20 & 45.70 & 32.76 & 12.60 \\ AutoIndGAN [137] & Architecture & 33.33 & 49.72 & 35.54 & 24.83 & 9.36 \\ KD-GAN + ADA [147] & Off-the-shelf & 32.81 & 51.12 & 31.78 & 19.76 & 8.85 \\ KD-GAN + LeCam [147] & Off-the-shelf & 31.89 & 50.22 & 9.38 & 19.65 \\ \hline \hline \end{tabular}
\end{table} TABLE V: FID (\(\downarrow\)) scores of existing data-efficient generative models on the low-shot datasets (Animal-Faces-Cat/Dog and the 100-shot Obama, Grumpy Cat, and Panda sets). The results are quoted from the published papers.
techniques, such as regularization with extra supervision signals and novel network architectures, are critical for further improvements in data efficiency.
## 5 Few-shot Generative Adaptation
This section discusses the task of few-shot generative adaptation, which aims to transfer knowledge from a generative model pre-trained on a large-scale source domain to a target domain with limited data. In particular, the problem definition, a taxonomy of existing approaches, and the commonly used benchmarks together with performance comparisons are provided in Sec. 5.1, Sec. 5.2, and Sec. 5.3, respectively.
### _Problem Definition_
Fig. 1 illustrates the overall pipeline of few-shot generative adaptation. Akin to transfer learning, the goal of few-shot generative adaptation is to transfer the knowledge of generative models pre-trained on large-scale source domains (_e.g._, FFHQ) to target domains with limited data (_e.g._, 10-shot images of baby faces) in a fast and efficient manner. Ideally, the adapted generative model should 1) inherit the attributes of the source generative model that are invariant to the distribution shift, such as the overall structure, synthesis diversity, and semantic variations of generated images, and 2) capture the internal distribution of the target domain to synthesize novel samples following the target distribution. However, the limited amount of training data available for adaptation may cause the model to overfit, leading to model degradation. Additionally, when the gap between the source and target domains is significant, negative transfer may occur, resulting in unrealistic generation. Furthermore, inappropriate knowledge transfer [160] may also deteriorate synthesis performance. Below we present the primary solutions and discuss their characteristics regarding few-shot generative adaptation.
### _Model Taxonomy_
The central idea of existing few-shot generative adaptation approaches is to preserve the useful knowledge of the source domain and adapt it to the target domain with limited data. According to their techniques and design philosophy, we categorize prior studies of this field into four categories, namely 1) fine-tuning the model parameters to fit the target domain, 2) introducing extra branches to capture the target distribution, 3) regularizing the learning process via additional criteria, and 4) modulating the kernel of the network to transfer adequate knowledge. In the following, we will introduce these methods in more detail and discuss their advantages and limitations.
**Fine-tuning.** One typical solution for knowledge transfer in few-shot generative adaptation is to fine-tune the pre-trained generative model with the limited data of the target domain. TransferGAN [159] was the initial attempt to transfer a pre-trained GAN by simply optimizing all parameters of the source model with the original GAN loss (see Eq. (1)). However, this straightforward strategy may lead to overfitting, especially when the target data are extremely limited (_e.g._, 10-shot samples). To address the overfitting issue, FreezeD [48] fixed the low-level layers of the discriminator during the adaptation process. Moreover, Zhao _et al._ revealed that low-level filters of both the generator and the discriminator can be transferred to facilitate more diverse generation in the target domain [161], and further developed an adaptive filter modulation to better adapt the transferred filters, boosting diversity. Another approach is to update only a subset of the parameters of the pre-trained model to reduce overfitting. For instance, Noguchi _et al._ proposed batch statistics adaptation (BSA) [158], which only updates the batch-statistics parameters (scale and shift) of the generator's hidden layers, reducing the number of trainable parameters while maintaining synthesis quality. Similarly, elastic weight consolidation (EWC) [162] estimated the importance of model parameters and penalized changes to the important ones. Furthermore, Robb _et al._ repurposed component analysis techniques for generative adaptation [163] by learning to adapt the singular values of the pre-trained weights while keeping the corresponding singular vectors frozen. These methods constrain parameter changes and thus reduce overfitting. However, they focus solely on fitting the target distribution and may lose source-domain prior knowledge that cannot be derived from the limited target data. Accordingly, they may not fully exploit the potential of the pre-trained model and may fail to generate diverse and high-quality outputs.
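The sketch below illustrates the FreezeD-style recipe of keeping the low-level discriminator layers fixed during adaptation. It assumes a generic discriminator that exposes an ordered `blocks` module list, which is an assumption for illustration rather than the layout of any particular implementation.

```python
import torch
from torch import nn

def freeze_lowlevel_layers(discriminator: nn.Module, num_frozen: int):
    """FreezeD-style adaptation: keep the first `num_frozen` feature blocks fixed.

    Assumes the discriminator exposes an ordered `blocks` ModuleList running from
    low-level (input side) to high-level layers; real architectures differ.
    """
    for block in discriminator.blocks[:num_frozen]:
        for p in block.parameters():
            p.requires_grad = False
    # Only the remaining trainable parameters are handed to the optimizer.
    return torch.optim.Adam(
        (p for p in discriminator.parameters() if p.requires_grad),
        lr=2e-4, betas=(0.0, 0.99))
```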
**Extra branches.** In order to identify the most beneficial knowledge to transfer for a specific target domain, Wang _et al._ proposed MineGAN [47], which employed an additional mining network to find the regions of the pre-trained GAN's distribution that produce samples closest to the target images. This mining network shifted the input distribution towards the regions most relevant to the target distribution, enabling more effective and efficient knowledge transfer. Similarly, Yang _et al._ introduced two extra lightweight modules for generative adaptation [46]: an attribute adaptor applied to the latent code to transfer the most distinguishable characteristics, and an attribute classifier attached to the discriminator to encourage the generator to capture appropriate characteristics of the target domain. These two modules are fast to optimize and bring appealing results under various settings, especially when only one reference image is available. Along this line, Wu _et al._ proposed a domain re-modulation (DoRM) structure [164], which incorporated new mapping and affine modules to capture the characteristics of the target domain. DoRM also enabled multi-domain and hybrid-domain adaptation by integrating multiple mapping and affine modules.
With the remarkable development of diffusion models, it is interesting to investigate their performance when transferring knowledge to limited target domains. Moon _et al._ demonstrated that fine-tuning only a small subset of a pre-trained diffusion model's parameters can efficiently capture the target distribution [165]. They also proposed a time-aware adapter module, inserted inside the attention blocks of the pre-trained diffusion model and conditioned on the timestep, to improve synthesis quality. Zhu _et al._ designed a pairwise adaptation model that preserves useful information of the source domain by keeping the relative pairwise distances between synthesized samples [166]. In this way, the diversity and synthesis details of the original model were well preserved, enabling diverse generation in the target domain. Although introducing additional modules for generative adaptation is effective and efficient compared with fine-tuning-based approaches, the output images might resemble the source domain since the original generator remains unchanged. Therefore, it is important to strike a balance between utilizing the prior knowledge of the pre-trained model and adapting to the characteristics of the target domain so as to generate diverse and high-quality samples.
**Regularization.** Another solution for knowledge transfer is to explicitly introduce additional supervision or constraints into the adaptation process. For instance, Ojha _et al._ proposed two novel strategies to transfer the diversity information of a large-scale source domain to the target domain [49]. In particular, a cross-domain consistency (CDC) regularization term was integrated to preserve the relative pairwise distances between source and target generated images, and an anchor-based strategy was further designed to encourage different levels of synthesis fidelity in different regions of the latent space, mitigating overfitting. In order to align the spatial structural information between synthesized image pairs of the source and target domains, Xiao _et al._ developed relaxed spatial structural alignment (RSSA) [167], which preserved and transferred the structural information and spatial variation tendency of the source domain by compressing the latent space to a subspace close to the target domain and regularizing self-correlation consistency and disturbance-correlation consistency. Following the same idea of preserving the diversity of the source domain, Hou _et al._ proposed dynamic weighted semantic correspondence (DWSC) to explicitly preserve the perceptual semantic consistency between generated images of the source and target domains [168]. Zhao _et al._ discovered that generative adaptation models achieve similar synthesis quality after convergence, and thus proposed a dual contrastive learning (DCL) framework that slows down diversity degradation by preserving the multi-level diversity of the source domain throughout the adaptation process [169]. For global-level knowledge transfer, Zhang _et al._ leveraged the difference between the CLIP features of the source and target domains to constrain the target generator [170]; an attentive style loss that aligns intermediate tokens of the adapted source image with those of the referenced target image was further integrated for local-level adaptation. Moreover, Zhang _et al._ proposed a generalized generative adaptation framework that realizes both style and entity transfer [171]. The core idea was to employ the sliced Wasserstein distance to regularize the internal distributions of the referenced target images and the synthesized samples; an auxiliary network was developed to explicitly disentangle the entity and style of referenced images, a style fixation module was employed to obtain the exemplary style, and a variational Laplacian regularization was further devised to improve the smoothness of the adapted generator. Differently, Mondal _et al._ observed that target images can be embedded into the latent space of a model pre-trained on source images [172]; they therefore optimized a latent learner network at inference time to find latent codes corresponding to the target domain on the manifold of the source domain, so that the source-domain generator can produce novel target-like images from these embeddings. Following this, the most recent WeditGAN [173] achieved knowledge transfer by learning a constant latent offset that relocates the source latent distribution towards the target latent space.
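As an illustration of this family of constraints, the snippet below sketches a simplified cross-domain consistency loss in the spirit of CDC [49]: the pairwise similarity structure of a batch generated by the frozen source generator is used as a teaching signal for the adapted generator. The feature extraction step and the exact layers used are left abstract and differ from the original implementation.

```python
import torch
import torch.nn.functional as F

def cross_domain_consistency_loss(feat_src, feat_tgt):
    """Simplified CDC-style loss.

    feat_src, feat_tgt: (B, D) features of the *same* batch of latent codes,
    extracted from the frozen source generator and the adapted generator.
    The adapted generator is encouraged to preserve the relative pairwise
    similarity structure of the source generator.
    """
    def pairwise_softmax(feat):
        feat = F.normalize(feat, dim=1)
        sim = feat @ feat.t()                                   # (B, B) cosine similarities
        mask = ~torch.eye(len(feat), dtype=torch.bool, device=feat.device)
        sim = sim[mask].view(len(feat), -1)                     # drop self-similarity
        return F.softmax(sim, dim=1)

    p_src = pairwise_softmax(feat_src).detach()                 # source structure is the target
    p_tgt = pairwise_softmax(feat_tgt)
    return F.kl_div(p_tgt.log(), p_src, reduction="batchmean")  # KL(p_src || p_tgt)
```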
Inspired by the remarkable generation capability of text-to-image diffusion models, Song _et al._ demonstrated that generators can distill prior knowledge from large-scale text-to-image diffusion models by employing classifier-free guidance as a critic [174]. Additionally, a directional and reconstruction regularizer was developed to avoid model collapse. This work revealed the potential of distilling prior knowledge from pre-trained large-scale diffusion models into other types of generative models; stronger regularization and further investigation of this topic are interesting future work. Overall, regularization-based approaches render promising results for transferring knowledge from large-scale pre-trained generative models. However, one critical limitation is the trade-off between preserving source-domain priors and modeling the target distribution: regularization that is too weak allows overfitting on the few target samples, whereas regularization that is too strong keeps the outputs anchored to the source domain rather than the target domain. Consequently, choosing a proper regularization strength is critical for effective and efficient knowledge transfer. Meanwhile, developing stronger regularization techniques that leverage more prior information/supervision signals of the original data is also essential for further improvements.
**Kernel modulation.** One significant limitation of the aforementioned approaches is their sole reliance on the source domain, which disregards the target domain and the adaptation process itself, raising concerns about generalization to setups with varying proximity between the source and target domains. To address this, Zhao _et al._ proposed an adaptation-aware kernel modulation (AdAM) pipeline [50]. In particular, AdAM probed the importance of different kernels in the network and preserved the crucial weights during adaptation. Building on this philosophy, they further devised RICK [160], which explored incompatible knowledge transfer in the adaptation process. RICK reused AdAM's estimation of filter importance: filters whose importance fell below a predefined threshold were pruned, the most important filters were frozen, and the remaining filters were fine-tuned during training. This procedure successfully removed incompatible knowledge during target adaptation. However, this technique may not be adequate for more challenging setups with a wide gap between the source and target domains, as crucial kernels/filters may be scarce for adaptation. Future research is needed to develop more effective approaches that consider both the source and target domains in the adaptation process to ensure the models' generalization capability.
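The sketch below illustrates the general idea behind kernel-importance probing in the spirit of AdAM/RICK, using a Fisher-style squared-gradient score per convolutional filter and simple gradient masking to keep crucial filters fixed. The scoring rule, thresholds, and masking mechanism are simplifications for illustration rather than the published algorithms.

```python
import torch
from torch import nn

@torch.no_grad()
def accumulate_filter_importance(importances, model):
    """Fisher-style importance: squared gradients averaged over each conv filter.

    Call after loss.backward() during a short probing phase on the few target
    images; `importances` maps parameter name -> running per-filter score.
    """
    for name, p in model.named_parameters():
        if p.grad is not None and p.dim() == 4:          # conv kernels (out, in, kh, kw)
            imp = p.grad.pow(2).mean(dim=(1, 2, 3))      # one score per output filter
            importances[name] = importances.get(name, 0.0) + imp

def mask_gradients(model, importances, freeze_quantile=0.75):
    """Simplified RICK-flavoured rule: freeze the most important filters by
    zeroing their gradients; the remaining filters are fine-tuned as usual."""
    for name, p in model.named_parameters():
        if name in importances and p.grad is not None:
            thr = torch.quantile(importances[name], freeze_quantile)
            frozen = importances[name] >= thr            # (out_channels,) boolean mask
            p.grad[frozen] = 0.0                         # keep crucial kernels fixed
```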
### _Benchmarks and Performances_
In this part, we present popular benchmarks of few-shot generative adaptation, and compare the performances of
prior studies on these benchmarks.
**FFHQ (source) to relevant human face target domains.** Typically, the performance of few-shot generative adaptation approaches is evaluated under varying degrees of proximity between the source and target domains. The Babies, Sunglasses, and Sketches datasets [49], each providing only 10-shot target images for adaptation, are the most commonly used datasets for this purpose. Furthermore, a generator pre-trained on the full FFHQ dataset, which comprises 70\(K\) training images, is the prevalent choice of source model. Quantitative evaluation entails computing various metrics against the entire Babies, Sunglasses, and Sketches sets, consisting of approximately 2500, 2700, and 300 images, respectively. Tab. VI presents the FID scores of existing alternatives on the three datasets. The results indicate that: 1) augmentation-based approaches (_e.g._, ADA [35]) are complementary to fine-tuning-based few-shot generative adaptation models; and 2) among all prior methods, regularization and the introduction of extra branches appear to be more effective than fine-tuning-based approaches, enabling better knowledge transfer from the source to the target. However, combinations of the different types of methods remain underexplored, and investigating them is essential to unlock further performance improvements.
**FFHQ (source) to irrelevant animal face target domains.** In addition to transferring knowledge from pre-trained source domains to relevant target domains, evaluating performance under irrelevant source-target settings is crucial to determine a model's effectiveness. Given a generator pre-trained on the human face domain (_i.e._, FFHQ), the generator is adapted to the animal face target domains, namely AFHQ-Cat, AFHQ-Dog, and AFHQ-Wild. Tab. VII shows the quantitative results under this setting, from which conclusions similar to those of Tab. VI can be drawn. Interestingly, kernel-modulation-based methods (_i.e._, AdAM [50] and RICK [160]) outperform the other alternatives by a substantial margin, demonstrating that mining important weights is effective for distant-domain knowledge transfer.
**Knowledge transfer between various source-target domains.** In addition to evaluating knowledge transfer to different target domains, generators pre-trained on various source domains can also be used for knowledge transfer. For instance, generative models trained on Cars (_resp._ Church) [33] can be adapted to other target domains, such as Abandoned Cars (_resp._ Haunted House) [50]. Furthermore, the Otto's Paintings dataset is utilized as the target domain for transferring prior information from the FFHQ source domain. Tab. VIII presents the LPIPS [154] scores under these source-target pairs. Notably, the LPIPS score reflects the sample diversity of the target-domain outputs. We observe that regularization-based methods perform better than fine-tuning-based models, suggesting the effectiveness of introducing extra supervision/prior information into the adaptation. By contrast, the performance of extra-branch and kernel-modulation methods under such settings has not yet been reported and requires further investigation in future research. Additionally, it is crucial to test the generalization capability of these models under more challenging scenarios, such as 1) when only a single target image is available for adaptation [23, 175]; and 2) when multiple target domains are given and source knowledge must be adapted simultaneously in a unified framework [176, 177].
## 6 Few-shot Image Generation
This section discusses the task of few-shot image generation, whose problem definition, main solutions, popular benchmarks and performances are respectively presented in Sec. 6.1, Sec. 6.2, and Sec. 6.3.
### _Problem Definition_
Following the philosophy of few-shot learning, few-shot image generation is formulated as synthesizing diverse and photorealistic images for a new category given only \(K\) input images from that category. The model is trained in an episodic, task-by-task manner, wherein each \(N\)-way-\(K\)-shot image generation task is defined by \(K\) input images from each of the \(N\) classes. The dataset is split into two disjoint subsets: seen classes \(\mathbb{C}_{s}\) and unseen classes \(\mathbb{C}_{u}\). During training, a considerable number of \(N\)-way-\(K\)-shot image generation tasks from \(\mathbb{C}_{s}\) are randomly sampled, with
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & \multicolumn{3}{c}{Source: FFHQ} \\ \cline{3-5} & & Babies & Sunglasses & Sketches \\ \hline TransferGAN [159] & fine-tuning & 104.79 & 55.61 & 53.41 \\ TransferGAN + ADA [159] & fine-tuning & 102.58 & 53.64 & 66.99 \\ Scale/Shift [158] & fine-tuning & 140.34 & 76.12 & 69.32 \\ FreezeD [48] & fine-tuning & 110.92 & 51.29 & 46.54 \\ MineGAN [47] & extra-modules & 98.23 & 68.91 & 64.34 \\ EWC [162] & fine-tuning & 87.41 & 59.73 & 71.25 \\ CDC [49] & regularization & 74.39 & 42.13 & 45.67 \\ DWSC [168] & regularization & 73.37 & 36.04 & 39.86 \\ DCL [169] & regularization & 52.56 & 38.01 & 37.90 \\ AdAM [50] & modulation & 48.83 & 28.03 & - \\ RICK [160] & modulation & 39.39 & 25.22 & - \\ KD-GAN [147] & modulation & 68.67 & 34.61 & 35.87 \\ GenDA [46] & extra-modules & 47.05 & 22.62 & 31.97 \\ WeditGAN [173] & regularization & 46.70 & 28.09 & 38.44 \\ ISLL [172] & regularization & 63.31 & 35.64 & 35.59 \\ DoRM [164] & extra-modules & 30.31 & 17.31 & 40.05 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: FID (\(\downarrow\)) scores of existing few-shot generative adaptation methods on transferring prior knowledge from the FFHQ source dataset to 10-shot target domains Babies, Sunglasses, and Sketches. The results are quoted from the published papers.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & \multicolumn{3}{c}{Source: FFHQ} \\ \cline{3-5} & & AFHQ-Cat & AFHQ-Dog & AFHQ-Wild \\ \hline TransferGAN [159] & fine-tuning & 64.68 & 151.46 & 81.30 \\ TransferGAN + ADA [159] & fine-tuning & 80.16 & 162.63 & 81.55 \\ FreezeD [48] & fine-tuning & 63.60 & 157.98 & 77.18 \\ EWC [162] & fine-tuning & 74.61 & 158.78 & 92.83 \\ CDC [49] & regularization & 176.21 & 170.95 & 135.13 \\ DCL [169] & regularization & 156.82 & 171.24 & 115.93 \\ AdAM [50] & modulation & 58.07 & 100.91 & 36.87 \\ RICK [160] & modulation & 53.27 & 98.71 & 33.02 \\ \hline \hline \end{tabular}
\end{table} TABLE VII: FID (\(\downarrow\)) scores of existing few-shot generative adaptation methods on transferring prior knowledge from the FFHQ source dataset to animal face target domains. The results are quoted from the published papers.
the aim of encouraging the model to acquire the ability to generate novel samples. In the testing phase, the model is expected to generalize this ability and generate new images for \(\mathbb{C}_{u}\) based on only a few samples from each class. Few-shot image generation is known to suffer from catastrophic forgetting, whereby the model forgets previous knowledge and focuses excessively on new tasks, thus impairing its ability to generalize to unseen classes. Existing approaches seek to address this challenge from various perspectives; we analyze their respective advantages and disadvantages in detail below.
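A minimal sketch of how such episodic \(N\)-way-\(K\)-shot tasks can be sampled from the seen classes is given below; the data structures are illustrative and not tied to any specific codebase.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=3, k_shot=3, seed=None):
    """Sample one N-way-K-shot few-shot image generation task.

    `labels` is a list giving the class id of every image in the *seen* split;
    returns {class_id: [image indices]} with K conditional images per class.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, c in enumerate(labels):
        by_class[c].append(idx)

    classes = rng.sample(sorted(by_class), n_way)
    return {c: rng.sample(by_class[c], k_shot) for c in classes}

# Example: an episode over seen classes; unseen classes are held out for testing.
episode = sample_episode(labels=[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3], n_way=2, k_shot=3)
```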
### _Model Taxonomy_
Depending on the mechanism by which novel images are produced from a few conditional input images, prior few-shot image generation methods can be categorized into optimization-based, fusion-based, transformation-based, inversion-based, and, more recently, diffusion-based approaches. These approaches differ in terms of model design, optimization objectives, training, and inference; we present their primary concepts in detail below.
**Optimization-based approaches.** Motivated by the tremendous advances in meta-learning [178, 179], particularly for few-shot classification tasks [180, 181, 182, 183], optimization-based few-shot generation models leverage meta-learning to learn episodic tasks and generate novel samples from few observations. More specifically, these models aim to learn optimal parameters \(\Phi\) via meta-training on a set of tasks \(T\), where each task \(\tau\) defines an image generation problem with few conditional images \(\mathbf{X}_{\tau}\) and a loss \(L_{\tau}\) that measures how well generated images can be discriminated from real images sampled from \(\mathbf{X}_{\tau}\). The goal is to enable fast adaptation to new tasks by minimizing the associated loss \(L_{\tau}\):
\[\min_{\Phi}\mathbb{E}_{\tau}\left[L_{\tau}\left(U_{\tau}^{k}(\Phi)\right) \right], \tag{9}\]
where \(U_{\tau}^{k}(\Phi)\) denotes the operator that updates the parameters \(\Phi\) for \(k\) steps conditioned on the task \(\tau\). Several optimization-based few-shot generation models have been proposed in the literature. As a pioneering attempt, FIGR [184] incorporated the optimization-based meta-learning algorithm Reptile [185] into GAN training for few-shot image generation. Similarly, Liang _et al._ proposed DAWSON, a plug-and-play domain-adaptive few-shot generation framework that supports a broad family of meta-learning models and various GANs [186]. However, training these models is computationally expensive due to the two-stage training required for both the GAN and the meta-learner. To address this issue, Phaphuangwittayakul _et al._ developed a fast adaptive meta-learning (FAML) framework, which significantly reduced training time by training a simpler network on conditional feature vectors from an encoder [187]. They further extended FAML by applying a self-supervised contrastive learning strategy [188], improving both the speed of convergence and the synthesis performance. Nevertheless, the images produced by optimization-based models often suffer from poor fidelity and unrealistic appearance, despite their ability to produce novel samples.
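The following sketch shows one Reptile-style outer update of the form used by such optimization-based models; the task-specific objective is passed in as a user-provided function, and the inner/outer learning rates are placeholders.

```python
import copy
import torch

def reptile_meta_step(model, task_loader, task_loss, inner_steps=5,
                      inner_lr=1e-3, meta_lr=0.1):
    """One Reptile-style outer update: Phi <- Phi + meta_lr * (U_k(Phi) - Phi).

    `task_loss(model, batch)` is a user-supplied objective for the current
    episodic task (e.g., an adversarial generator loss).
    """
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)

    # Inner loop: a few optimization steps on the sampled task.
    for _, batch in zip(range(inner_steps), task_loader):
        opt.zero_grad()
        loss = task_loss(adapted, batch)
        loss.backward()
        opt.step()

    # Outer (meta) update: move the meta-parameters towards the adapted weights.
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))
```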
**Transformation-based approaches.** Transformation-based approaches aim to capture inter- and intra-category translations to produce new images of unseen categories. For instance, DAGAN [189] produced novel samples by injecting random noise into the representation of a single input image. However, being conditioned on a single input image, the sample diversity of DAGAN was limited. To mitigate this, Hong _et al._ proposed the delta generative adversarial network (DeltaGAN) [190], which explicitly extracts intra-category information (_a.k.a._ "delta") from same-category feature pairs during training to produce novel features and transform input conditional images into new images, thereby substantially improving synthesis diversity. In order to learn the relationship between seen and unseen categories, Huang _et al._ proposed an implicit support set autoencoder (ISSA) [191], which infers a representation of the underlying distribution from the given support set to produce novel samples.
However, transformation-based methods might fail to produce novel images when the intra- and inter-category transformation relations are complex. One solution is to build more advanced modules to encode the complex transformations between distinct samples. Following this philosophy, Hong _et al._ proposed Disco-FUNIT [192], a transformation method based on discrete vector quantization. Concretely, they first learned a compact dictionary of local content vectors by quantizing a continuous content mapping; the encoded discrete content maps captured the translation relationships between various samples. Then, the autoregressive distribution of the discrete content vectors was modeled conditioned on style maps, alleviating the incompatibility between content and style maps. Finally, diverse images could be produced conditioned on the style vectors and the content maps. Similarly, Xie _et al._ proposed FeaHa [193], which explicitly learned and memorized reusable features from seen categories to generate new features for novel image generation. The image features were decomposed into category-related and category-independent features, and the category-independent features were employed by a feature hallucination module to generate new features; by sampling from the memorized reusable features, diverse and novel images can be generated. The successes of Disco-FUNIT [192] and FeaHa [193] demonstrate that transformation-based approaches benefit from stronger representations of the conditional input images, such as semantic, content, and style features. This suggests that future research in this area may benefit from exploring the development of more
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & FFHQ\(\rightarrow\) & Church\(\rightarrow\) & Cars\(\rightarrow\) \\ & & Otto's Paintings & Haunted House & Abandoned Cars \\ \hline TransferGAN [159] & fine-tuning & 0.51\(\pm\)0.02 & 0.52\(\pm\)0.04 & 0.46\(\pm\)0.03 \\ TransferGAN + ADA [159] & fine-tuning & 0.54\(\pm\)0.02 & 0.57\(\pm\)0.03 & 0.48\(\pm\)0.04 \\ Scale/Shift [158] & fine-tuning & 0.46\(\pm\)0.02 & 0.43\(\pm\)0.02 & 0.41\(\pm\)0.03 \\ FreezeD [48] & fine-tuning & 0.54\(\pm\)0.03 & 0.45\(\pm\)0.02 & 0.30\(\pm\)0.05 \\ MineGAN [47] & extra-modules & 0.53\(\pm\)0.04 & 0.44\(\pm\)0.06 & 0.49\(\pm\)0.02 \\ EWC [162] & fine-tuning & 0.56\(\pm\)0.03 & 0.58\(\pm\)0.06 & 0.43\(\pm\)0.02 \\ CDC [49] & regularization & 0.63\(\pm\)0.03 & 0.68\(\pm\)0.04 & 0.52\(\pm\)0.04 \\ DCL [169] & regularization & 0.64\(\pm\)0.02 & 0.63\(\pm\)0.01 & 0.53\(\pm\)0.02 \\ \hline \end{tabular}
\end{table} TABLE VIII: LPIPS (\(\uparrow\)) scores of existing few-shot generative adaptation methods under distant source-target domain transfer. The results are quoted from the published papers.
effective representation learning strategies.
**Fusion-based approaches.** Fusion-based models interpolate the conditional input images at the image or semantic level to generate novel samples. For instance, MatchingGAN [61] fused the features of input images from the same class with random vectors and generated new images for this class based on the fused features. GMN [194] combined a VAE [74] with matching networks [195] to capture the few-shot distribution. Although these models can generate diverse images, the details of the synthesized samples may be unfavorable. To address this issue, F2GAN [43] first fused the high-level features of conditional images with random interpolation coefficients and then employed a non-local attention module to explicitly attend to low-level details of the fused features, enabling better generation of image details. However, MatchingGAN and F2GAN still suffer from poor fine-grained semantic details due to the semantic misalignment caused by interpolating conditional images from a global perspective. To tackle this, Gu _et al._ proposed a local fusion GAN (LoFGAN) [196] that enables local semantic fusion. Concretely, the input conditional images were first randomly divided into a base image and several reference images; semantic similarities between local representations of the base image and the reference images were computed with the cosine distance, and the most relevant representations were fused into the base image, enabling highly correlated semantic fusion. Together with a local reconstruction loss that improves the fidelity of the fused local representations, the synthesis quality of LoFGAN surpassed prior studies by a substantial margin. Building on LoFGAN, Li _et al._ further improved synthesis performance with an adaptive multi-scale modulation GAN (AMMGAN) [197], whose key contribution was an adaptive self-metric fusion module that adaptively adjusts the mean and variance of the fused feature based on high-level semantic features from the decoder.
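To make the local fusion idea concrete, the snippet below sketches a simplified LoFGAN-style fusion step that blends each location of a base feature map with its most similar location among the reference feature maps; the actual method additionally learns the fusion coefficients and applies a local reconstruction loss, which are omitted here.

```python
import torch
import torch.nn.functional as F

def local_fusion(base, refs, alpha=0.5):
    """Simplified local semantic fusion.

    base: (C, H, W) feature map of the base image.
    refs: (N, C, H, W) feature maps of the reference images.
    Each base location is blended with its most similar reference location
    (cosine similarity), so only semantically related features are fused.
    """
    C, H, W = base.shape
    b = F.normalize(base.reshape(C, -1), dim=0)            # (C, HW), unit-norm per location
    r = F.normalize(refs.reshape(-1, C, H * W), dim=1)     # (N, C, HW)

    # Similarity between every base location and every reference location.
    sim = torch.einsum("cp,ncq->npq", b, r)                # (N, HW, HW)
    sim = sim.permute(1, 0, 2).reshape(H * W, -1)          # (HW, N*HW)
    nearest = sim.argmax(dim=1)                            # best match per base location

    ref_flat = refs.reshape(-1, C, H * W).permute(0, 2, 1).reshape(-1, C)  # (N*HW, C)
    matched = ref_flat[nearest]                            # (HW, C)
    fused = (1 - alpha) * base.reshape(C, -1).t() + alpha * matched
    return fused.t().reshape(C, H, W)
```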
The frequency principle (F-Principle) [198] states that deep neural networks preferentially fit frequency signals from low to high, i.e., low-frequency signals have higher priority than high-frequency signals during fitting [199, 200, 201]. This phenomenon also exists in deep generative models, which prefer to produce low-frequency signals of lower complexity. To alleviate the generator's struggle to capture high-frequency components, Yang _et al._ designed a frequency-aware GAN dubbed WaveGAN [41], which decomposes features into multiple frequency bands from low to high and feeds the high-frequency components to the decoder via high-frequency skip connections, improving the frequency awareness of the generator. Low-frequency skip connections and a frequency loss were further employed to maintain the textures and outlines of generated images. Experimental results demonstrated that explicitly incorporating high-frequency signals into the generation process significantly improves synthesis fidelity. However, it is still unclear which frequency components are the most important for high-fidelity few-shot image generation and how diversity can be improved with frequency signals; further studies are required to investigate these aspects of frequency-aware fusion approaches.
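The following sketch shows a one-level Haar decomposition of a feature map into one low-frequency and three high-frequency bands, which is the kind of decomposition that frequency-aware generators route through skip connections; the exact wavelet implementation and where the bands are injected differ across methods.

```python
import torch

def haar_decompose(x):
    """One-level Haar wavelet decomposition of a feature map.

    x: (B, C, H, W) with even H and W.
    Returns the low-frequency band LL and the high-frequency bands (LH, HL, HH),
    each of spatial size (H/2, W/2).
    """
    a = x[:, :, 0::2, 0::2]    # top-left pixel of every 2x2 block
    b = x[:, :, 0::2, 1::2]    # top-right
    c = x[:, :, 1::2, 0::2]    # bottom-left
    d = x[:, :, 1::2, 1::2]    # bottom-right

    ll = (a + b + c + d) / 2   # low frequency: overall structure
    lh = (-a - b + c + d) / 2  # high frequency: row-wise differences
    hl = (-a + b - c + d) / 2  # column-wise differences
    hh = (a - b - c + d) / 2   # diagonal details
    return ll, (lh, hl, hh)

# The high-frequency bands can then be passed to the decoder through skip
# connections so that fine textures are not lost during generation.
```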
**Inversion-based approaches.** GAN inversion [68] aims to obtain the latent code of a given image by inverting it back into the latent space of a pre-trained GAN, which can then be used for reconstructing [29, 202], manipulating [203], and editing [71] the original image. Given the success of GAN inversion techniques, it is natural to investigate their potential impact on few-shot image generation. Under the assumption that any image is a composition of attributes and that the editing direction for a specific attribute is the same regardless of category, inversion-based approaches were first developed in attribute group editing (AGE) [44] by Ding _et al._ Specifically, they employed class embeddings to represent the category-relevant attributes and learned a dictionary to collect the category-irrelevant attributes. The class embeddings and the dictionary were learned in the latent space through inversion, so that diverse images could be generated by editing the category-irrelevant attributes in the latent space. Moreover, by manipulating the latent codes towards specific attribute directions, this method enabled controllable image generation for the first time in this field. In order to achieve more stable class-relevant image generation, Ding _et al._ further extended AGE with an adaptive editing strategy to enhance the stability of category retention [45]. However, Li _et al._ found that AGE suffered from a trade-off between the quality and diversity of generated images and thus proposed a hyperbolic attribute editing-based (HAE) method to capture the hierarchy among input conditional images [204]. Attribute editing was performed hierarchically in hyperbolic space, which can encode hierarchical semantic representations learned from a large corpus of images, enabling a more controllable and interpretable editing process. In contrast to the above methods, which treated the attributes in the latent space as discrete properties, Zheng _et al._ explored the continuity of the latent space for finding unseen categories [205]. The rationale behind their method was that the latent space neighboring a novel class belongs to the same category. A two-stage latent subspace optimization framework was thus proposed: the first stage used the few-shot samples as positive anchors of the novel class and adjusted the latent space to produce the corresponding results, while the second stage governed the generation process by refining the latent space of the unseen categories. Through this two-stage optimization, the latent space of the unseen categories was refined and the generation ability of the latent codes was elevated. Although inversion-based approaches enable more diverse and controllable image generation, the transformation directions of attributes in the latent space remain poorly understood and require further investigation.
**Diffusion-based approaches.** Inspired by the recent remarkable advances of denoising diffusion probabilistic models (DPMs) in visual synthesis, Giannone _et al._ presented few-shot diffusion models (FSDM) [66]. By leveraging conditional DDPMs and vision transformers (ViT) [206, 207], FSDM captures the image distribution at the image and patch level, enabling the generation of novel images given as few as 5 unseen images at test time. As the first attempt to integrate DPMs into few-shot image generation, FSDM can inspire more works that utilize the properties of DPMs to improve synthesis quality.
### _Benchmarks and Performances_
**Datasets.** There are three popular benchmarks for evaluating the performance of few-shot image generation, namely Flower [208], Animal Faces [209], and VGGFace [210]. Each of these datasets is divided into seen categories \(\mathbb{C}_{s}\) and unseen categories \(\mathbb{C}_{u}\), and the model is trained on \(\mathbb{C}_{s}\) and tested on \(\mathbb{C}_{u}\). Tab. IX presents the detailed splits of these datasets. The quantitative metrics are computed between the synthesized samples for the unseen categories and the real images from the unseen categories, and the seen categories are only used in the training process. These benchmarks provide a standardized way to evaluate the performance of few-shot image generation models.
**Performance.** Tab. X presents the FID and LPIPS scores of previous few-shot image generation models on the three widely used datasets. The "3-shot" and "1-shot" settings indicate the number of conditional input images used during training and testing. These results demonstrate that: 1) fusion-based approaches consistently achieve better FID scores than other types of methods, suggesting that they perform better in terms of synthesis fidelity; 2) transformation-based methods, which capture intra- and inter-category transformations, produce more diverse images, as indicated by higher LPIPS scores measuring the differences between generated images; and 3) substantial progress has been made on these datasets, _e.g._, the FID score of WaveGAN on VGGFace is 4.96. However, the resolution of these datasets (128 \(\times\) 128 \(\times\) 3) is relatively low, limiting their applicability in practical domains that require high-resolution images. Accordingly, benchmarks with higher resolution and more object categories might further promote the advancement of this field.
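As an illustration of how such LPIPS-based diversity scores are typically obtained, the snippet below averages pairwise LPIPS distances over a set of images generated for one category. It assumes the publicly available `lpips` package; batching and the choice of backbone may differ from the original evaluation scripts.

```python
import itertools
import torch
import lpips

@torch.no_grad()
def mean_pairwise_lpips(images, device="cuda"):
    """Diversity proxy: average LPIPS distance over all pairs of generated images.

    `images`: (N, 3, H, W) tensor scaled to [-1, 1], generated for one class;
    at least two images are required. Higher values indicate more diversity.
    """
    metric = lpips.LPIPS(net="alex").to(device)
    images = images.to(device)
    dists = [metric(images[i:i + 1], images[j:j + 1]).item()
             for i, j in itertools.combinations(range(len(images)), 2)]
    return sum(dists) / len(dists)
```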
## 7 One-shot Image Generation
In this section, we first provide the definition of one-shot image generation in Sec. 7.1, then present our taxonomy of existing approaches for one-shot image generation in Sec. 7.2, and finally describe the quantitative metrics and compare the quantitative and qualitative performance of existing models on popular benchmarks in Sec. 7.3.
### _Problem Definition_
One-shot image generation refers to the task of training a generative model to produce novel and diverse images using only a single reference image, without the use of any pre-trained generative models for knowledge transfer. This task is of significant importance as it demonstrates the potential application of generative models in practical domains where collecting large-scale training samples is not feasible. Consequently, the model is expected to capture the internal distribution of the training image and generate diverse images that share the same internal distribution as the reference image, which is an extremely challenging task. Intuitively, synthesizing images directly from only one image presents a risk of low-variation generation, as the model may simply replicate the given sample. However, existing methods address this issue by modeling the internal statistics of patches within the training image, allowing the model to capture the information of the target distribution. Fig. 1 presents the classical framework of one-shot generative models, commonly employing a multi-scale training architecture. In the following context, popular solutions for one-shot image generation and their characteristics will be introduced.
### _Model Taxonomy_
According to whether training is required to generate novel images, existing one-shot generative models can be broadly divided into two categories: parametric approaches and non-parametric approaches. Parametric approaches require model training to capture the internal distribution of each image, whereas non-parametric approaches employ classical patch-based nearest-neighbor matching to produce new images without training. Training-based (parametric) approaches can be further categorized into GAN-based and diffusion-based methods according to the generative model used. The details of these methods are presented and discussed below.
**GAN-based approaches.** The idea of learning a generative model from a single image was first proposed in SinGAN [62], which introduced a multi-scale GAN framework that captures the internal distribution of patches within the input image in a coarse-to-fine manner. At each scale \(n\), the generator \(G_{n}\) receives the upsampled image from the previous scale together with injected random noise, and the discriminator at the \(n\)-th scale is responsible for distinguishing patches of the down-sampled training image \(x_{n}\) from patches of the generated image \(G_{n}(z_{n},\hat{x}_{n-1})\). Such a multi-scale pipeline forms a pyramid of GANs, and these subnetworks are trained sequentially from the coarsest to the finest scale; once trained, the model can hierarchically generate novel images. Recurrent SinGAN [213] replaced the pyramid of generators in SinGAN with a single recurrent generator, enabling a scale-agnostic one-shot generative model. In order to improve the generator's ability to capture the global structure of the training image, Chen _et al._ introduced a self-attention mechanism into SinGAN [214]. However, training a pyramid of GANs sequentially is time-consuming, and ConSinGAN [215] proposed several best practices to improve synthesis quality beyond SinGAN; concretely, ConSinGAN trained several stages concurrently in a sequential multi-stage manner, significantly improving training speed while requiring fewer parameters. Furthermore, ExSinGAN [64] combined external priors obtained by GAN
\begin{table} \end{table} TABLE IX: Splits of the Flower, Animal Faces, and VGGFace datasets into seen (\(\mathbb{C}_{s}\)) and unseen (\(\mathbb{C}_{u}\)) categories, listing the number of classes and images in each split.
inversion with the information of internal patches to obtain better generation quality and competitive generalization ability for manipulating the input image.
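The sketch below illustrates the coarse-to-fine sampling loop shared by SinGAN-style models described above: the output of the previous scale is upsampled, perturbed with scale-specific noise, and refined by the generator of the current scale. The generator interfaces and the residual formulation are simplified assumptions for illustration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multiscale_sample(generators, shapes, noise_amps):
    """SinGAN-style coarse-to-fine sampling (simplified sketch).

    generators: per-scale generators G_0 ... G_N, coarse to fine; each is assumed
                to map a noisy input image to a residual image at its own scale.
    shapes:     list of (H, W) spatial sizes per scale.
    noise_amps: per-scale noise amplitudes (learned during training).
    """
    x = torch.zeros(1, 3, *shapes[0])
    for G, (h, w), amp in zip(generators, shapes, noise_amps):
        x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
        z = amp * torch.randn(1, 3, h, w)
        x = x + G(x + z)       # each scale adds residual detail to the upsampled image
    return x
```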
Unlike two-stage approaches that first train the generator on low-resolution images and then optimize for high-resolution generation, Zhang _et al._ proposed PetsGAN [216], which leverages external and internal priors in a single stage. In particular, the external priors were obtained via a lightweight deep external prior network, providing high-level information for generation, while the internal priors reduced the patch discrepancy between the synthesized image and the training image using a global reconstruction loss. PetsGAN was trained in an end-to-end manner with strong priors, effectively speeding up the training process and improving performance. Similarly, Sushko _et al._ developed an end-to-end one-shot GAN that could learn to generate samples from one image or one video [217]. The key component of their One-Shot GAN was a two-branch discriminator with content and layout branches that respectively judge the realism of the internal content and of the scene layout; such a design allows realistic and diverse generation with varying content and layout. Recently, Jiang _et al._ pointed out that existing CNN-based one-shot GANs struggle to extract and maintain global structural information [218]. Accordingly, they exploited a vision transformer (ViT) [206, 207] to capture the global structure of an image and maintain the integrity of semantic-aware information. Together with a scale-invariant scaling formula, their proposed TcGAN effectively improved the quality of image generation and super-resolution.
Although existing GAN-based one-shot generative models have demonstrated impressive abilities in diverse generation and in manipulating input images, they require retraining for every new input image, which is time-consuming and expensive. Approaches that can 1) quickly capture the internal distribution and 2) generalize to new samples are therefore more favorable for practical applications.
**Diffusion-based approaches.** Denoising diffusion probabilistic models (DPMs) have become the most popular generative models in the community, achieving unprecedented improvements in image synthesis. As an emerging research topic over the last two years, there have been efforts to investigate how well DPMs can capture the internal distribution of a single image. Wang _et al._ designed a patch-wise denoising framework dubbed SinDiffusion [65] with two key ingredients. First, in order to avoid error accumulation and reduce the training cost, SinDiffusion used a single model to progressively generate images over timesteps. Second, SinDiffusion learned to estimate the noise from a local patch instead of the whole training image, giving the model a receptive field suitable for diverse outputs. Once trained, SinDiffusion could be applied to a variety of image manipulation tasks, including text-guided image generation, image outpainting, image harmonization, and more. SinDDM [67] was a concurrent work that learned the internal statistics of the input image with a multi-scale diffusion process. A lightweight convolutional denoiser conditioned on the noise level and the scale was proposed to derive the reverse diffusion process for training. In this way, SinDDM was capable of producing novel samples in a coarse-to
\begin{table}
\begin{tabular}{l|c|c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & \multirow{2}{*}{Setting} & \multicolumn{2}{c|}{Flowers} & \multicolumn{2}{c|}{Animal Faces} & \multicolumn{2}{c}{VGGFace} \\ & & & FID (\(\downarrow\)) & LPIPS (\(\uparrow\)) & FID (\(\downarrow\)) & LPIPS (\(\uparrow\)) & FID (\(\downarrow\)) & LPIPS (\(\uparrow\)) \\ \hline FIGR [184] & Optimization & 3-shot & 190.12 & 0.0634 & 211.54 & 0.0756 & 139.83 & 0.0834 \\ GMN [194] & Fusion & 3-shot & 200.11 & 0.0743 & 220.45 & 0.0868 & 136.21 & 0.0902 \\ DAWSON [186] & Optimization & 3-shot & 188.96 & 0.0583 & 208.68 & 0.0642 & 137.82 & 0.0769 \\ DAGAN [189] & Transformation & 3-shot & 151.21 & 0.0812 & 155.29 & 0.0892 & 128.34 & 0.0913 \\ MatchingGAN [61] & Fusion & 3-shot & 143.35 & 0.1627 & 148.52 & 0.1514 & 118.62 & 0.1695 \\ F2GAN [43] & Fusion & 3-shot & 120.48 & 0.2172 & 117.74 & 0.1831 & 109.16 & 0.2125 \\ DeltaGAN [109] & Transformation & 3-shot & 104.62 & 0.4281 & 87.04 & 0.4642 & 78.35 & 0.3487 \\ FUNIT [209] & Transformation & 3-shot & 100.92 & 0.4717 & 86.54 & 0.4748 & - & - \\ Disco-FUNIT [211] & Transformation & 3-shot & 84.15 & 0.5143 & 66.05 & 0.5008 & - & - \\ SAGE [45] & Optimization & 3-shot & 41.35 & 0.4330 & 27.56 & 0.5451 & 32.89 & 0.3314 \\ LoFGAN [196] & Fusion & 3-shot & 79.33 & 0.3862 & 112.81 & 0.4964 & 20.31 & 0.2869 \\ AMMGAN [197] & Fusion & 3-shot & 75.40 & 0.3990 & 105.11 & 0.5123 & 40.22 & 0.2987 \\ WaveGAN [41] & Fusion & 3-shot & 42.17 & 0.3868 & 30.35 & 0.5076 & 4.96 & 0.3255 \\ LSO [205] & Inversion & 3-shot & 34.59 & 0.3914 & 23.67 & 0.5198 & 3.98 & 0.3344 \\ \hline DAGAN [189] & Transformation & 1-shot & 179.59 & 0.0496 & 185.54 & 0.0687 & 134.28 & 0.0608 \\ DeltaGAN [109] & Transformation & 1-shot & 109.78 & 0.3912 & 89.81 & 0.4418 & 80.12 & 0.3146 \\ FUNIT [209] & Transformation & 1-shot & 105.65 & 0.4221 & 88.07 & 0.4362 & - & - \\ Disco-FUNIT [212] & Transformation & 1-shot & 90.12 & 0.4436 & 71.44 & 0.4411 & - & - \\ LoFGAN [196] & Fusion & 1-shot & 137.47 & 0.3868 & 152.99 & 0.4919 & 26.89 & 0.3208 \\ AGE [44] & Inversion & 1-shot & 45.96 & 0.4305 & 28.04 & 0.5575 & 34.86 & 0.3294 \\ WaveGAN [41] & Fusion & 1-shot & 55.28 & 0.3876 & 53.95 & 0.4948 & 12.28 & 0.3203 \\ HAE [204] & Inversion & 1-shot & 64.26 & 0.4739 & 28.93 & 0.5417 & 35.93 & 0.5919 \\ LSO [205] & Inversion & 1-shot & 35.87 & 0.4338 & 27.20 & 0.5382 & 4.15 & 0.3834 \\ \hline \hline \end{tabular}
\end{table} TABLE X: FID (\(\downarrow\)) and LPIPS (\(\uparrow\)) scores of previous few-shot image generation approaches on the three popular benchmarks. The quantitative metrics are evaluated between images generated for unseen classes and the corresponding real images. The results are quoted from the published papers.
fine way with arbitrary dimensions, conditioned on various scales and timesteps. Similarly, Nikankin _et al._ proposed SinFusion [219], which models the appearance and dynamics of a single input image or video. In particular, SinFusion was trained on large crops (\(\sim\)95%) of a single image to preserve the overall structure and appearance of the input. Moreover, ConvNeXt [220] blocks were employed to replace the attention layers in the diffusion UNet, reducing the receptive field. SinFusion can be applied uniformly to various single-image and video editing tasks that were not accomplished in previous works.
Despite the significant advances in image quality and manipulation ability achieved by diffusion-based one-shot generation models, some limitations remain underexplored. Specifically, the iterative reverse process employed in DPMs leads to slow training and inference, which can limit the practicality of these models for real-time applications. Additionally, the internal statistics of the training image are often less constrained in DPMs, which can destabilize training and hinder generalization to new images. To address these limitations, future research efforts could be directed toward designing faster inference techniques for DPMs and incorporating suitable priors to improve training stability and generalization.

**Non-parametric approaches.** Despite the impressive performance of parametric generative models, they often require long training times and can suffer from unsatisfactory artifacts. In contrast, patch-based approaches require no costly training and can yield better visual quality with fewer artifacts. For instance, Granot _et al._ proposed a generative patch-based algorithm named GPNN [221] that employed non-parametric patch nearest neighbors to replace similar patches with their nearest counterparts and combine multiple patches into a novel image. GPNN is free of training and ensures that each pixel of the new image is adopted from the training image, resulting in improved visual quality. In order to further enhance the fidelity of patch-based models, Cherel _et al._ developed an initialization scheme based on optimal transport and the minimization of a patch energy [222] that respects the patch distribution of the training image and encourages diversity; notably, they found that a proper initialization is crucial for the diversity of patch-based models. Similarly, Elnekave _et al._ used the sliced Wasserstein distance (SWD) [223, 224] to compare patch distributions through an unbiased estimate of the SWD [225]. This estimate explicitly and efficiently minimizes the distance between the internal patch distributions of two images (_i.e.,_ the training image and the generated one), enabling plausible generation in just a few seconds. Albeit computationally simple and effective, non-parametric models suffer from low diversity because the synthesized images can be viewed as rearrangements of the internal patches of the reference image. Additionally, when applied to images with a globally coherent structure, such as a human face or a chair, non-parametric models may produce corrupted global structures and artifacts. To mitigate these issues, integrating additional structural information into the patch-based matching process could be a promising solution.
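The snippet below sketches a patch-based sliced Wasserstein distance of the kind used by these non-parametric approaches: random patches are extracted from two images, projected onto random directions, and the sorted projections are compared. Patch size, the number of projections, and the sampling scheme are illustrative choices rather than the settings of any specific paper.

```python
import torch

def patch_swd(img_a, img_b, patch=7, n_patches=2000, n_proj=64, seed=0):
    """Sliced Wasserstein distance between the internal patch distributions
    of two images (a simplified version of the estimators used by patch-based models).

    img_a, img_b: (3, H, W) float tensors.
    """
    g = torch.Generator().manual_seed(seed)

    def sample_patches(img):
        _, H, W = img.shape
        ys = torch.randint(0, H - patch + 1, (n_patches,), generator=g).tolist()
        xs = torch.randint(0, W - patch + 1, (n_patches,), generator=g).tolist()
        return torch.stack([img[:, y:y + patch, x:x + patch].reshape(-1)
                            for y, x in zip(ys, xs)])        # (n_patches, 3*p*p)

    pa, pb = sample_patches(img_a), sample_patches(img_b)
    dirs = torch.randn(pa.shape[1], n_proj, generator=g)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)             # random unit directions

    # Sorting the 1D projections and comparing them approximates the 1D
    # Wasserstein distance along each direction; averaging gives the SWD.
    proj_a = torch.sort(pa @ dirs, dim=0).values
    proj_b = torch.sort(pb @ dirs, dim=0).values
    return (proj_a - proj_b).abs().mean()
```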
### _Benchmarks and Performances_
**Quantitative metrics.** The FID [152] is a common metric for evaluating generative models by quantifying the distributional discrepancy between the synthesized and observed distributions. However, in a one-shot setting where only a single training image is available, it is more important to measure the internal distributional divergence of the generated images. To address this, the Single Image FID (SIFID) [62] was proposed to compute internal distribution distances, providing a more appropriate evaluation metric for one-shot generation tasks. In addition to SIFID, pixel-wise and sample-wise diversity can be computed to investigate the synthesis diversity of the generated images. Furthermore, several no-reference image quality measures, including NIQE [226], NIMA [227], and MUSIQ [228], can be employed to evaluate the quality of generated images. These metrics provide information complementary to SIFID and enable a more comprehensive evaluation of the quality and diversity of the generated images.
**Performances.** Notably, since only a single input image is required for one-shot image generation, any image can serve as a benchmark for qualitative and quantitative evaluation. Tab. XI provides the quantitative results for one-shot image generation. In particular, each measure in the table is computed across 50 synthesized images for every training image, and the training and testing protocols are kept consistent, which is crucial for a fair comparison. Moreover, Fig. 4 shows a qualitative comparison between popular one-shot image generation models. The quantitative and qualitative results consistently demonstrate that 1) approaches that require model training, such as SinDDM [67] and ConSinGAN [215], perform better in terms of image quality and diversity, as indicated by higher diversity and IQA scores; and 2) patch-based models achieve better patch-level distribution matching and thus higher fidelity. These results also suggest that different models have their own strengths and weaknesses, and future research may explore combining different approaches to achieve better performance.
## 8 Applications and Future Directions
The development of generative models under limited data has spawned many promising applications, as shown in Fig. 5. In this section, we first discuss the related
Fig. 4: Qualitative comparisons of different one-shot generative models. Images quoted from [67].
applications of generative models under limited data, including image manipulation, stylization, and augmentation for downstream tasks. However, there is still ample room for further improvement, and we thus highlight several future directions in terms of content controllability and editability, evaluation metrics, and other practical applications.
### _Applications_
**Image editing/Manipulation.** In addition to generating new images from randomly sampled noises, existing generative models under limited data have enabled various applications for image editing and manipulation, as illustrated in Fig. 5. Ideally, once trained, the learned model could be leveraged for manipulating the generated images with additional control conditions as input, such as low-resolution images and editing masks, by borrowing the generation ability. For instance, SinFusion [219] can generate novel images given only a single image and sketch conditions as input, demonstrating its potential in image manipulation tasks. Moreover, novel images could be obtained by interpolating/mixing the latent codes [202, 29] without user-defined input, providing an additional level of control over the synthesized images.
**Artistic creation and stylization.** Generative models under limited data also have the potential to assist artists in their creative process by producing novel concepts and inspiring new ideas. Specifically, an artist can generate various samples using a generative model trained on only a few images of their artworks, providing them with fresh perspectives and ideas for future creations. Additionally, users can stylize their photos with these generative models, as shown in Fig. 6, transforming them into unique and artistic pieces. This has significant potential in various domains, such as social media and the creative industry, where visually appealing content is highly valued.
**Augmentation for downstream tasks.** Considering that trained generative models can produce novel images, it is intuitive to employ them to generate new samples for downstream tasks such as classification, detection, and segmentation. The overall pipeline of enlarging the training sets with generative models is given in Fig. 7. Several works have demonstrated that using synthesized images can indeed facilitate downstream applications [41, 43, 44, 61], providing an alternative for various tasks, particularly when the training data distribution is small-scale or imbalanced.
### _Future Directions_
**Controllability and editability.** Despite the fascinating features of image editing and manipulation enabled by prior works, there is still a significant gap with practical applications regarding the controllability and editability of the generated images. In particular, users prefer a friendly interactive interface for controlling the generated content and editing the local details of the generated images. Additionally, it would be desirable to allow controlling generated content via multi-modal signals, such as text- and speech-conditioned instructions. Improving the interactivity of the synthesis models is an appealing direction for addressing these challenges. By enhancing the user interface and incorporating multi-modal signals, generative models can provide a more intuitive and flexible way for users to interact with the generated images and achieve their desired outcomes.
**New metrics.** Existing evaluation metrics have been identified with several flaws, highlighting the need for new metrics that can more precisely and reliably reflect the generation quality. For instance, studies [150, 229, 151] have pointed out that due to the existence of a large perceptual null space, the FID metric could be altered without any change to the generator, leading to unreliable evaluations. Besides, the sample efficiency of FID is relatively poor, as FID scores fluctuate greatly with respect to different amounts of evaluated samples [151]. Although KID [153] has been identified to be more reliable under limited data settings [35], its correlation with human visual judgment is still under-explored. Therefore, developing new metrics that could 1) precisely and reliably reflect the generation quality, 2) consistently measure the synthesis under various amounts of samples, and 3) agree with human visual perception is crucial for the community. Moreover, future research may explore the use of subjective evaluation metrics, such as human perceptual studies, to complement traditional objective metrics. This can provide a more holistic evaluation of the generative models' performance and enable a better understanding of the model's strengths and weaknesses.
**Correlation between generative modeling and representation learning.** Generative modeling and representation learning are two vital branches in the field of computer vision, and they are often studied independently in their respective paradigms. However, they share similar high requirements in terms of representation at the instance level, with generative models needing to produce novel images and representation learning models needing to extract
TABLE XI: Quantitative evaluation metrics of different unconditional one-shot image generation models. These metrics were computed over 50 generated samples per training image and averaged over the 12 images used in [67]. The results are quoted from the published papers.

| Type | Metric | SinGAN [62] | ConSinGAN [215] | GPNN [225] | SinDDM [67] |
|---|---|---|---|---|---|
| Diversity | Pixel Div. ↑ | 0.28±0.15 | 0.25±0.20 | 0.25±0.20 | **0.32±0.13** |
| Diversity | LPIPS Div. ↑ | 0.18±0.07 | 0.15±0.07 | 0.10±0.07 | 0.21±0.08 |
| No-reference IQA | NIQE ↓ | 7.30±1.50 | 6.40±0.90 | 7.70±2.20 | 7.10±1.90 |
| No-reference IQA | NIMA ↑ | 5.60±0.50 | 5.50±0.60 | 5.60±0.70 | 5.80±0.60 |
| No-reference IQA | MUSIQ ↑ | 43.00±9.10 | 45.60±9.00 | 52.80±10.90 | 48.00±9.80 |
| Patch Distribution | SIFID ↓ | 0.15±0.05 | 0.09±0.05 | 0.05±0.04 | 0.34±0.30 |
representative features for downstream applications. Therefore, training image generation models (_e.g.,_ GANs, diffusion models) and representation learning models in a unified framework is a promising direction that could enable cooperation between these two branches. In this way, generative and discriminative models can better contribute to each other, which is especially meaningful in few-shot regimes where the amount of training data is limited.
**Personalized services.** Producing novel images of a user's own subjects, such as pet cats, with various shapes/gestures in different scenarios given only a few input images is a challenging yet practically useful editing task. This task requires the model to preserve the identity while producing various fine details that are harmonious in each pixel of the synthesized images. Recent approaches, such as [24, 27, 230], address this task mostly by leveraging the appealing synthesis ability of large-scale pre-trained text-to-image diffusion models, _e.g.,_ Stable Diffusion. However, the synthesis quality and inference speed of these models can still be improved. Additionally, it is worth investigating whether there are any opportunities to train from scratch on the few input images to provide customized services.
**Training stability.** Despite the tremendous efforts poured into ameliorating the training dynamics of generative models under limited data, the issues of model overfitting and memorization remain prevalent, particularly under one-shot settings, where only a single training example is available. Diffusion models are emerging as a new trend in the field of generative models due to their enhanced training stability; however, their performance in limited-data generation scenarios remains relatively less explored. Therefore, incorporating diffusion models and investigating which of their attributes help stabilize the training process are practical directions for future research.
## 9 Conclusion
In this survey, we present a comprehensive overview of image synthesis models under limited data and categorize existing research in this area into four sub-tasks: data-efficient generative models, few-shot generative adaptation,
Fig. 5: Generative models under limited data have enabled downstream applications in various image editing tasks, such as paint-to-image, image harmonization, super-resolution, and animation. Images quoted from [62].
Fig. 6: Generative models under limited data can be employed to transfer various source domains to various target domains with different styles. Images quoted from [171].
Fig. 7: Generative models under limited data could be employed to produce novel samples to augment the original training sets, facilitating the performance of various downstream tasks, including image classification, object detection/recognition, and semantic segmentation.
few-shot image generation, and one-shot image synthesis. We summarize various solutions, benchmarks, and performances for these tasks and thoroughly analyze the advantages, disadvantages, and limitations of existing approaches. Furthermore, we discuss the potential applications of image synthesis under limited data and identify promising directions for future research.
Image synthesis under limited data enjoys great opportunities in various practical domains, yet faces many challenges simultaneously. Our survey aims to provide readers with a better understanding of the field of data-efficient synthesis and inspire further research in this area. Hopefully, our review could provide valuable insights to the community and stimulate further exciting works in the future.
## Acknowledgments
This work is supported by Shanghai Science and Technology Program "Distributed and generative few-shot algorithm and theory research" under Grant No. 20511100600 and "Federated based cross-domain and cross-task incremental learning" under Grant No. 21511100800, Natural Science Foundation of China under Grant No. 62076094, Chinese Defense Program of Science and Technology under Grant No.2021-JCJCJ-J0041, China Aerospace Science and Technology Corporation Industry-University-Research Cooperation Foundation of the Eighth Research Institute under Grant No.SAST2021-007.
|
2309.11476 | CellSecure: Securing Image Data in Industrial Internet-of-Things via
Cellular Automata and Chaos-Based Encryption | In the era of Industrial IoT (IIoT) and Industry 4.0, ensuring secure data
transmission has become a critical concern. Among other data types, images are
widely transmitted and utilized across various IIoT applications, ranging from
sensor-generated visual data and real-time remote monitoring to quality control
in production lines. The encryption of these images is essential for
maintaining operational integrity, data confidentiality, and seamless
integration with analytics platforms. This paper addresses these critical
concerns by proposing a robust image encryption algorithm tailored for IIoT and
Cyber-Physical Systems (CPS). The algorithm combines Rule-30 cellular automata
with chaotic scrambling and substitution. The Rule 30 cellular automata serves
as an efficient mechanism for generating pseudo-random sequences that enable
fast encryption and decryption cycles suitable for real-time sensor data in
industrial settings. Most importantly, it induces non-linearity in the
encryption algorithm. Furthermore, to increase the chaotic range and keyspace
of the algorithm, which is vital for security in distributed industrial
networks, a hybrid chaotic map, i.e., logistic-sine map is utilized. Extensive
security analysis has been carried out to validate the efficacy of the proposed
algorithm. Results indicate that our algorithm achieves close-to-ideal values,
with an entropy of 7.99 and a correlation of 0.002. This enhances the
algorithm's resilience against potential cyber-attacks in the industrial
domain. | Hassan Ali, Muhammad Shahbaz Khan, Maha Driss, Jawad Ahmad, William J. Buchanan, Nikolaos Pitropakis | 2023-09-20T17:22:01Z | http://arxiv.org/abs/2309.11476v1 | CellSecure: Securing Image Data in Industrial Internet-of-Things via Cellular Automata and Chaos-Based Encryption
###### Abstract
In the era of Industrial IoT (IIoT) and Industry 4.0, ensuring secure data transmission has become a critical concern. Among other data types, images are widely transmitted and utilized across various IIoT applications, ranging from sensor-generated visual data and real-time remote monitoring to quality control in production lines. The encryption of these images is essential for maintaining operational integrity, data confidentiality, and seamless integration with analytics platforms. This paper addresses these critical concerns by proposing a robust image encryption algorithm tailored for IIoT and Cyber-Physical Systems (CPS). The algorithm combines Rule-30 cellular automata with chaotic scrambling and substitution. The Rule 30 cellular automata serves as an efficient mechanism for generating pseudo-random sequences that enable fast encryption and decryption cycles suitable for real-time sensor data in industrial settings. Most importantly, it induces non-linearity in the encryption algorithm. Furthermore, to increase the chaotic range and keyspace of the algorithm, which is vital for security in distributed industrial networks, a hybrid chaotic map, i.e., the logistic-sine map, is utilized. Extensive security analysis has been carried out to validate the efficacy of the proposed algorithm. Results indicate that our algorithm achieves close-to-ideal values, with an entropy of 7.99 and a correlation of 0.002. This enhances the algorithm's resilience against potential cyber-attacks in the industrial domain.
cellular automata, chaos, image encryption, industrial internet of things, industry 4.0, IIoT.
## I Introduction
As Industry 4.0 and the Industrial Internet of Things (IIoT) have become integral components of modern industrial ecosystems, securing data transmission in industrial applications has become critically important. Among other data types, images are widely transmitted and utilized across various IIoT applications, such as real-time remote monitoring, sensor-generated visual information, and quality assurance in production lines. To secure this image data, not only are efficient computational frameworks required, but robust encryption algorithms capable of withstanding various forms of cyber threats are also needed [1]. Traditional encryption methods like AES [2] and DES [3] are not well suited for the complex demands of image encryption [4, 5]. This has led to a growing interest in exploring new techniques for fast and efficient encryption, such as chaos [6, 7]. Chaos-based encryption is preferred over traditional methods [8] due to its inherent characteristics like unpredictability, pseudo-randomness, high sensitivity to control parameters, and ergodicity [9]. Many encryption schemes employ simple one-dimensional chaotic maps [10, 11], but they have a limited chaotic range. In contrast, multi-dimensional chaotic maps [12] offer greater complexity but come at a higher computational cost. Therefore, hybrid chaotic maps have been introduced [13], balancing computational efficiency with a large chaotic region.
Moreover, there has been an interest in developing various cryptosystems incorporating a lightweight and efficient pseudo-random number generator (PRNG) called cellular automata. Cellular automata are discrete dynamical systems in both space and time [14]. They consist of an array of cells, with each cell capable of assuming a value from a limited set of possibilities. These cells are updated synchronously at discrete time intervals according to a specific interaction rule [15]. Recently, Rule 30 [16] has been employed to produce numbers exhibiting high randomness. In this paper, we propose a robust image encryption algorithm tailored for IIoT and Cyber-Physical Systems (CPS) in the era of Industry 4.0. The proposed algorithm is a combination of a hybrid chaotic system, i.e., the Logistic-Sine System (LSS), and Rule 30 cellular automata. The scheme is divided into three stages: chaotic shuffling, chaotic substitution, and Rule 30 cellular automata.
Main contributions of this paper are:
1. The introduction of a robust image encryption algorithm tailored for IIoT and cyber-physical systems that integrates Rule-30 cellular automata with chaotic scrambling and substitution. This algorithm offers inherent unpredictability and complex patterns that induce non-linearity, which are crucial for enhanced security.
2. This paper emphasizes the lightweight nature of the cellular automata approach, making the proposed algorithm highly suitable for real-time industrial applications requiring rapid encryption and decryption cycles.
3. An extensive security analysis validating the robustness of the proposed algorithm. The algorithm achieves close-to-ideal entropy and correlation values, validating its resilience against potential cyber-attacks in modern industrial systems.
## II CellSecure: The Proposed Encryption Scheme
The proposed encryption scheme comprises multiple stages of confusion and diffusion driven by the chaotic Logistic-Sine map and Rule 30 cellular automata, and is discussed in detail below.
### _Logistic-Sine Map_
This paper utilizes a Logistic-Sine system [13], which is a combination of Logistic map (L) and Sine map (S) to provide a new hybrid chaotic map as defined in (1) and (2).
\[X_{N+1}=\left(L(r,X_{N})+S\big{(}(4-r),X_{N}\big{)}\right)mod1 \tag{1}\]
\[X_{N+1}=\left(r\,X_{N}\,(1\,\,-\,\,X_{N})+\frac{(4\,\,-\,r)\sin(\pi\,X_{N})}{4} \right)\bmod 1 \tag{2}\]
Where \(L(r,X_{N})\) represents the logistic map portion and \(S\big{(}(4-r),X_{N}\big{)}\) represents the sine map portion. The parameter \(r\in(0,\,4)\). Fig. 1 shows the bifurcation diagrams of the Logistic map, the Sine map, and the combined hybrid LSS. It can be observed that the chaotic region of the LSS is larger than those of the Logistic and Sine maps.
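As an illustration, the following Python sketch iterates the logistic-sine system of Eq. (2); the initial value and control parameter are arbitrary examples, not key values prescribed by the paper.

```python
import numpy as np

def lss_sequence(x0, r, n):
    """Iterate the logistic-sine system (Eq. 2) for n steps.
    x0: initial state in (0, 1); r: control parameter in (0, 4)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = (r * x * (1 - x) + (4 - r) * np.sin(np.pi * x) / 4) % 1
        seq[i] = x
    return seq

# Example: a 256-value chaotic sequence from an arbitrary key (x0, r).
keystream = lss_sequence(x0=0.3141, r=3.77, n=256)
print(keystream[:5])
```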
### _Rule-30 Cellular Automata_
The proposed algorithm utilizes a one-dimensional cellular automaton rule, i.e., Rule 30. Rule 30 operates on each cell and its two neighbors, forming a neighborhood of 3 cells (left, center, right). Each cell is either in the on state (1) or the off state (0), and the simple steps shown below illustrate how the state of each cell evolves based on its neighbors. Mathematically, Rule 30 gives the next state of any given cell as in Eq. (3).
\[M_{i}(t+1)=M_{i-1}(t)\oplus\big{(}M_{i}(t)\lor M_{i+1}(t)\big{)} \tag{3}\]
Where \(M_{i}(t)\) represents the state of the cell at position \(i\) at time \(t\). The symbols \(\oplus\) and \(\lor\) represent the XOR and OR Boolean operations. The transition rules are summarized in a truth table, given in Table 1. The first three columns represent the states of the left, center, and right cells, respectively, and the last column represents the next state of the center cell. The patterns of the Rule 30 cellular automaton for the first 10 and 100 steps are shown in Fig. 2(a) and Fig. 2(b), respectively, and the first four transition rules are described as follows (a minimal code sketch of the update rule follows this list):
* If the left and center cells are in the off state (0) and the right cell is in the on state (1), the center cell in the next generation becomes on (1).
* If the left cell is in the off state (0) and the center and right cells are in the on state (1), the center cell in the next generation becomes on (1).
* If the left and center cells are in the on state (1) and the right cell is in the off state (0), the center cell in the next generation becomes off (0), consistent with Eq. (3).
* If all three neighbors are in the off state (0), the center cell in the next generation remains off (0).
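A minimal Python sketch of the Rule 30 update (Eq. (3)) applied to a binary state vector is given below; boundary cells are simply wrapped around here, which is an illustrative choice rather than a detail specified by the paper.

```python
import numpy as np

def rule30_step(state):
    """One synchronous Rule 30 update of a binary (0/1) state vector.
    next[i] = left[i] XOR (center[i] OR right[i]), with wrap-around borders."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    return left ^ (state | right)

# Example: evolve a single 'on' cell for a few steps.
state = np.zeros(11, dtype=np.uint8)
state[5] = 1
for _ in range(4):
    print("".join(map(str, state)))
    state = rule30_step(state)
```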
### _The Proposed Encryption Algorithm_
The proposed encryption process, shown in Fig. 3, combines hybrid chaotic map based shuffling and substitution with Rule 30 cellular automata to create a robust and secure scheme.
#### Ii-C1 Chaotic Shuffling
Firstly, we use the LSS to generate two arrays of random values whose lengths equal the numbers of rows and columns of the image. These arrays are sorted to obtain random indices, through which we shuffle the rows and columns of the plaintext image to get a shuffled image (**S2**). The shuffling algorithm is given as Algorithm 1.
Fig. 1: Bifurcation Diagram of Chaotic Maps; (a) Logistic Map, (b) Sine Map, (c) Logistic Sine System.
#### Iii-A2 Chaotic Substitution
Then, we generate a 256\(\times\)256 matrix using the LSS. We use finite precision and a modulo 3 operation to generate a random array containing only the values 0, 1, and 2. This array is used to select a specific S-box for changing the pixel values of the shuffled image. Each pixel is converted into an 8-bit binary value, which is split into two 4-bit parts (MSB, LSB) and converted back into decimal numbers to obtain index values. These indices are used to pick values from the chosen S-box and replace the pixel values, yielding a new image (S). The pseudo code for chaotic substitution is given in Algorithm 2.
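The following Python sketch illustrates this nibble-indexed S-box substitution; the three S-boxes are generated here as arbitrary 16\(\times\)16 byte permutations for demonstration only, since the paper does not list the actual S-box contents, and the LSS-driven selector array is likewise simplified to random values.

```python
import numpy as np

rng = np.random.default_rng(42)
# Three placeholder S-boxes: each a 16x16 permutation of the byte values 0..255.
sboxes = [rng.permutation(256).reshape(16, 16).astype(np.uint8) for _ in range(3)]

def substitute(shuffled, selector):
    """shuffled: uint8 image; selector: same-shape array with values in {0, 1, 2}
    (derived from the LSS modulo 3 in the actual scheme)."""
    msb = shuffled >> 4          # upper 4 bits -> row index into the S-box
    lsb = shuffled & 0x0F        # lower 4 bits -> column index into the S-box
    out = np.empty_like(shuffled)
    for k, sbox in enumerate(sboxes):
        mask = selector == k
        out[mask] = sbox[msb[mask], lsb[mask]]
    return out

shuffled = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # stand-in shuffled image
selector = rng.integers(0, 3, (256, 256))                     # stand-in LSS selector
substituted = substitute(shuffled, selector)
```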
#### Iii-A3 Rule 30 Cellular Automata
Lastly, the LSS is used to generate a random 16x16 matrix (M) containing values between 0 and 255, which is then further shuffled using Rule 30 of the cellular automata to create a more complex and random matrix, considering the left, center, and right values of the pixels, on which XOR and OR operations are performed. Each pixel is updated, and this process is repeated for \(I\) iterations to achieve even more randomness, as represented in Fig. 4. The pseudo code for the cellular automata stage is given in Algorithm 3, and the following steps detail how the cellular automata is applied.
* **Initialization:** Let the initial state of the cellular automaton with n cells is the current state. \[curr\_St=[M_{1}(0),M_{2}(0),\cdots,M_{n}(0)]\] (4)
* **Evolution:** For \(t=1,2,\cdots,T\), where \(T\) is the number of steps, the following steps are implemented: 1. An array of length \(n\) for the next state is initialized. \[nextState=[0,0,\cdots,0]\] (5) 2. For \(i=2\) to \(i=n-1\), \(nextState[i]\) is updated using the Rule 30 equation. \[nextState[i]=curr\_St[i-1]\oplus\left(curr\_St[i]\lor curr\_St[i+1]\right)\] (6) 3. Finally, we set \(curr\_St=nextState\).
#### Iii-A4 Ciphertext Image
After the cellular automata evolution, a new matrix is obtained, which is then used to bitwise-XOR each 16\(\times\)16 block of the substituted image to produce the final ciphertext image (_C_).
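A simplified sketch of this final stage is shown below: an LSS-seeded 16\(\times\)16 byte matrix is evolved with the Rule 30 update and the result is XORed block-wise with the substituted image. Treating the 256 bytes of the key matrix as one wrapped row of 2048 bits is an illustrative assumption, since the paper does not specify the exact bit-level layout.

```python
import numpy as np

def rule30_step(state):
    """One Rule 30 update with wrap-around borders (same rule as the earlier sketch)."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    return left ^ (state | right)

def evolve_key_matrix(m, iterations):
    """Evolve a 16x16 uint8 matrix with Rule 30, treating its 2048 bits as one row."""
    bits = np.unpackbits(m.astype(np.uint8).ravel())
    for _ in range(iterations):
        bits = rule30_step(bits)
    return np.packbits(bits).reshape(16, 16)

def xor_blocks(substituted, key_matrix):
    """Bitwise-XOR every 16x16 block of the substituted image with the key matrix."""
    out = substituted.copy()
    for r in range(0, out.shape[0], 16):
        for c in range(0, out.shape[1], 16):
            out[r:r + 16, c:c + 16] ^= key_matrix
    return out

# Example with stand-in inputs (image dimensions assumed divisible by 16).
rng = np.random.default_rng(0)
substituted = rng.integers(0, 256, (256, 256), dtype=np.uint8)
key = evolve_key_matrix(rng.integers(0, 256, (16, 16), dtype=np.uint8), iterations=8)
ciphertext = xor_blocks(substituted, key)
```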
```
Inputs: Plaintext Image (P), Initial Keys (r, x0)
Outputs: Shuffled image (S2)
1.  [H, W] = size(P)
2.  Function shuffle(H, W, r, x0)
3.    X[1] = x0
4.    For i = 2 : H
5.      X[i] = LSS(X[i-1], r)
6.    End For
7.    Y[1] = x0
8.    For i = 2 : W
9.      Y[i] = LSS(Y[i-1], r)
10.   End For
11.   row_indices = sort(X), col_indices = sort(Y)
12.   S1 = P[row_indices, :]
13.   S2 = S1[:, col_indices]
14.   Return S2
15. End Function
```
Fig. 3: The Proposed Encryption Algorithm
Fig. 2: Pattern for the: (a) First 10 Steps of Rule 30, (b) First 100 Steps of Rule 30 [15]
## III Results and Analysis
This section outlines the results of our proposed encryption scheme. Two grayscale test images (Baboon and Cameraman) of size 256\(\times\)256 were used, and an extensive security analysis was carried out.
### _Histogram Analysis_
Histogram analysis refers to the study of the distribution of pixel values in an image. In the context of image encryption, a good encryption algorithm should produce an encrypted image with a histogram that is substantially different from the original, making it hard to glean any information about the original image's content. Ideally, the histogram of the encrypted image should be flat, meaning that all pixel values are equally probable, ensuring maximum randomness and minimizing the chances of decryption without the correct key. It can be seen in Fig. 5 that histograms of the encrypted images are evenly distributed, showing effectiveness of the proposed scheme.
### _Statistical Security Analysis_
The statistical security parameters, i.e., entropy, correlation, energy, contrast, and homogeneity, have been calculated. Table 2 presents the results of the statistical security analysis. The results demonstrate the effectiveness of our scheme, with close-to-ideal values: an entropy of 7.99 for both the Cameraman and Baboon images, a homogeneity of 0.388 for both images, a near-zero energy of 0.0156 for both images, and high contrast values of 10.6411 and 10.5361 for Baboon and Cameraman, respectively. Moreover, Fig. 5 provides the histogram analysis of the plaintext and cipher images.
#### Iii-B1 Entropy
In image encryption, entropy measures the randomness or unpredictability of pixel values in an encrypted image. A higher entropy value typically indicates a more secure encryption, as the data appears more random and harder to decode. Ideally, for an 8-bit grayscale image, the maximum entropy value is 8, signifying a perfectly random distribution of pixel values. Evaluating entropy helps in determining the strength and effectiveness of an encryption algorithm: the closer the entropy is to its maximum value, the more secure the encryption. It can be seen from Table 2 that the entropies of both test images are approximately 8.
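For reference, the sketch below computes the Shannon entropy of an 8-bit grayscale image from its normalized histogram, which is the standard way this measure is obtained.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (in bits) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

cipher = np.random.default_rng(1).integers(0, 256, (256, 256), dtype=np.uint8)
print(f"entropy: {image_entropy(cipher):.4f}")   # close to 8 for near-uniform data
```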
#### Iii-B2 Homogeneity
Homogeneity refers to the consistency or uniformity of pixel values in an encrypted image. In a well-encrypted image, pixel values are distributed uniformly, making patterns hard to discern. Therefore, a high degree of homogeneity suggests that an encryption algorithm is successful in obscuring any noticeable patterns or structures, further ensuring the encrypted image is secure and resistant to various attacks. Evaluating homogeneity assists in gauging the quality of encryption and its potential vulnerability to pattern-based decryption attempts. Homogeneity of both test images are given in Table 2.
Fig. 4: Visual Representation of Rule 30
#### Iv-A3 Energy
Energy refers to the intensity of pixel values in an encrypted image. When an image is encrypted effectively, its energy should spread out, making the image appear as a mix of intensities. A low energy value indicates that the encryption has disrupted the original image's features, making it difficult for unauthorized users to extract information. The energy of the test images is given in Table 2.
#### Iv-A4 Contrast
Contrast is the difference in intensity or color that makes objects distinguishable in an image. A successful encryption process often transforms the image's contrast, making the details harder to distinguish and the image difficult to interpret without the decryption key. The contrast should be high, which can be seen in the results given in Table 2.
### _Correlation Analysis_
Correlation refers to the statistical relationship between adjacent pixel values in an image matrix. The objective is to minimize this correlation in the encrypted image so that adjacent pixels appear to be independent. If two encrypted images have a correlation value close to 0, they don't share obvious patterns and are considered different from each other. Mathematically, the correlation coefficient can be represented as follows:
\[Corr=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(P(i,j)-E(P)\right)\left(C(i,j)-E(C)\right)}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(P(i,j)-E(P)\right)^{2}\,\sum_{i=1}^{M}\sum_{j=1}^{N}\left(C(i,j)-E(C)\right)^{2}}} \tag{7}\]
Where,
* \(\sum_{i=1}^{M}\sum_{j=1}^{N}(\cdots)\) denotes summation over all pixels of the images, i.e., over all rows \(i\) and columns \(j\).
* \(P(i,j)\) and \(C(i,j)\) are the pixel values at location \((i,j)\) in the plaintext and ciphertext images, respectively.
* \(E(P)\) and \(E(C)\) are the corresponding expected (mean) values.
The correlations of the test images and their ciphers are visually displayed in Fig. 6. It can be seen that the correlations in the ciphertext images are broken successfully. Table 3 provides the correlation between adjacent pixels of the ciphertext, and Table 4 provides the correlation between the plaintext and ciphertext. All vertical, horizontal, and diagonal coefficients are close to zero, which shows that the strong correlation of the image is broken.
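A short sketch of how the adjacent-pixel correlation coefficient can be estimated in the horizontal direction is given below; randomly sampling pixel pairs, as done here, is a common evaluation convention rather than a requirement of the paper.

```python
import numpy as np

def adjacent_correlation(img, n_pairs=5000, seed=0):
    """Correlation coefficient of horizontally adjacent pixel pairs."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    rows = rng.integers(0, h, n_pairs)
    cols = rng.integers(0, w - 1, n_pairs)
    x = img[rows, cols].astype(np.float64)
    y = img[rows, cols + 1].astype(np.float64)
    return float(np.corrcoef(x, y)[0, 1])

cipher = np.random.default_rng(2).integers(0, 256, (256, 256), dtype=np.uint8)
print(f"horizontal correlation: {adjacent_correlation(cipher):.4f}")
```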
## IV Conclusion
Given the emergent challenges posed by the Industrial IoT (IIoT) and Industry 4.0 landscapes, this paper successfully introduced a robust image encryption algorithm specifically designed to meet the security needs of IIoT and Cyber-Physical Systems (CPS). The proposed algorithm combines Rule-30 cellular automata and a hybrid chaotic map, the logistic-sine map, to produce an encryption framework that is both computationally efficient and secure. Performance evaluations confirm the algorithm's high resilience against statistical attacks, achieving near-optimal entropy and correlation values of 7.99 and 0.002, respectively. As future work, we plan to further enhance the algorithm's security by incorporating secure hash algorithms (SHA) for key generation, adding an additional layer of protection against differential attacks. This work lays a solid foundation for the secure handling of image data in the industrial domain.
## Acknowledgment
The research leading to these results has been partially supported by the Horizon Europe Project Trust & Privacy Preserving Computing Platform for Cross-Border Federation of Data (TRUSTEE), (GA 101070214).
|
2309.08127 | Diversity-based core-set selection for text-to-speech with linguistic
and acoustic features | This paper proposes a method for extracting a lightweight subset from a
text-to-speech (TTS) corpus ensuring synthetic speech quality. In recent years,
methods have been proposed for constructing large-scale TTS corpora by
collecting diverse data from massive sources such as audiobooks and YouTube.
Although these methods have gained significant attention for enhancing the
expressive capabilities of TTS systems, they often prioritize collecting vast
amounts of data without considering practical constraints like storage capacity
and computation time in training, which limits the available data quantity.
Consequently, the need arises to efficiently collect data within these volume
constraints. To address this, we propose a method for selecting the core
subset~(known as \textit{core-set}) from a TTS corpus on the basis of a
\textit{diversity metric}, which measures the degree to which a subset
encompasses a wide range. Experimental results demonstrate that our proposed
method performs significantly better than the baseline phoneme-balanced data
selection across language and corpus size. | Kentaro Seki, Shinnosuke Takamichi, Takaaki Saeki, Hiroshi Saruwatari | 2023-09-15T03:36:08Z | http://arxiv.org/abs/2309.08127v1 | # Diversity-based Core-set Selection for Text-to-Speech
###### Abstract
This paper proposes a method for extracting a lightweight subset from a text-to-speech (TTS) corpus ensuring synthetic speech quality. In recent years, methods have been proposed for constructing large-scale TTS corpora by collecting diverse data from massive sources such as audiobooks and YouTube. Although these methods have gained significant attention for enhancing the expressive capabilities of TTS systems, they often prioritize collecting vast amounts of data without considering practical constraints like storage capacity and computation time in training, which limits the available data quantity. Consequently, the need arises to efficiently collect data within these volume constraints. To address this, we propose a method for selecting the core subset (known as _core-set_) from a TTS corpus on the basis of a _diversity metric_, which measures the degree to which a subset encompasses a wide range. Experimental results demonstrate that our proposed method performs significantly better than the baseline phoneme-balanced data selection across language and corpus size.
Kentaro Seki, Shinnosuke Takamichi, Takaaki Saeki, and Hiroshi Saruwatari (The University of Tokyo, Japan). Index terms: text-to-speech synthesis, data selection, core-set selection, corpus construction, diversification
Footnote †: This work is supported by JSPS KAKENHI 22H03639 and Moonshot R&D Grant Number JPMJPS2011. We also appreciate Dong Yang of the University of Tokyo for his help.
## 1 Introduction
Although text-to-speech (TTS) has achieved human-level naturalness in transforming text into speech waveform [1], its expressive capabilities do not yet match those of humans. Previous studies have aimed to enhance TTS expressiveness by addressing aspects such as speaker identity control [2], emotional expression [3], and prosody representation [4]. These studies predominantly adopt data-driven methods based on machine learning, and corpora with wider speaker or style variation are increasingly being anticipated.
To construct TTS corpora containing diverse data, previous studies have gathered data from vast sources such as audiobooks [5] and YouTube [6]. These methodologies are predicated on the belief that collecting data from large-scale sources inherently results in a diverse dataset, and they utilize all available data. Consequently, recently released speech corpora [7, 8] comprise several thousand or even tens of thousands of hours of data. In practice, however, limitations on storage capacity and the time required for learning impose constraints on the amount of available data, requiring datasets to be efficiently collected within these constraints [9, 10]. Given this context, TTS training in practical environments is expected to be achieved if data size can be reduced without compromising the quality of synthetic speech.
As a relevant machine learning technique, _core-set selection_ has been proposed [11]. This method aims to extract a subset (_core-set_) that achieves an equivalent learning effect as the entire dataset, as shown in Fig. 1(a). Unlike _point-wise data selection_, which considers each data point independently, this method is a kind of _sub-set selection_ and considers a subset as a whole when selecting data. Furthermore, as shown in Fig. 1(b), it seeks to cover the entire range, rather than just extracting a specific region.
This paper proposes a core-set selection method for multi-speaker TTS. We define a _diversity metric_ based on language and speech features derived from self-supervised learning (SSL) models to assess the coverage area of a subset, formulating the core-set selection as a diversity maximization task under constraints on subset size. Our proposed method is computationally lightweight. Specifically, it does not involve any model training with the entire dataset. We conduct experiments on TTS corpora in Japanese, Chinese, and English to demonstrate that our core-set selection method mitigates the degradation in the naturalness and intelligibility of synthetic speech compared with phoneme balance based subset selection [12]. The contributions of this work are as follows:
* This paper is the first to introduce core-set selection in TTS tasks.
* Our proposed method is computationally efficient, mitigating degradation of naturalness and intelligibility in synthetic speech.
* We conduct experiments with multiple languages and dataset sizes to demonstrate the validity of our proposed method.
## 2 Related Work
### Designing phoneme-balanced speech corpora
A classical method for constructing a balanced speech dataset is to construct a phoneme-balanced sentence set [12] by maximizing the following function defined to evaluate the phoneme balance:
\[H(\mathbf{p})=-\sum_{i=1}^{n}p_{i}\log p_{i} \tag{1}\]
where \(p_{i}\) is the occurrence probabilities of the \(i\)-th phoneme and \(n\) is the number of phonemes. This method aims to enhance the effectiveness of model training by avoiding situations where a specific phoneme occurs extremely infrequently.
### Data selection for multi-speaker TTS
Previous studies [6, 13] proposed data selection methods for multi-speaker TTS, but they were point-wise data selection methods whereas this study addresses a subset selection method. A previous study [14] proposed a data subset selection method based on speaker
Figure 1: The purpose and characteristic of core-set selection.
selection, but that method extracts a specific region from the speaker distribution, whereas the subset selection method in this study aims to cover the entire range of the original dataset. Previous studies [15, 16] proposed text set construction methods by clustering in the linguistic embedding space, and another [17] demonstrated a selection method based on phoneme and prosody entropy that is effective for statistical parametric speech synthesis. However, they do not consider other aspects such as speaker identity, important factors for multi-speaker TTS. In other words, core-set selection methods with the goal of efficiently training large-scale multi-speaker TTS models have not been explored.
### Diversity-based subset selection algorithms
A previous study [11] proposed diversity-based core-set selection using the \(k\)-center method [18], and our study uses a similar algorithm. Diversity-based subset selection is applied in various domains including recommendation systems [19], and several diversity evaluation metrics have been proposed. One such metric [20] is used to drive the \(k\)-center algorithm. We adopt a modified version of this metric [21], where \(\max\) is replaced with \(\sum\).
## 3 Proposed Method
We first extract utterance-level feature vectors that encompass linguistic, speaker, and acoustic features for each data point. Using these feature vectors, we define a diversity metric and use the proposed core-set selection to maximize the diversity metric.
### Feature extraction
For diversity-based core-set selection, we use feature vectors where similar data appear close together and dissimilar data appear far apart in feature space. Each data point in multi-speaker TTS corpora consists of a pair of text, speaker identity, and speech, and we utilize the joint features of each aspect.
**Linguistic features \(\mathbf{x_{\text{linguistic}}}\)**: We use sentence embeddings of texts, which are expected to separate data from different text domains. Specifically, we average the output vector sequences from BERT [22] to obtain fixed-dimensional vectors, as detailed in [23], and then normalize them to have a norm equal to \(1\). Although this method may lead to the loss of specific features of individual words, it is still effective to place similar data points in close proximity in the feature space.
**Speaker features \(\mathbf{x_{\text{speaker}}}\)**: We use continuous speaker representations like \(x\)-vectors [24] to incorporate speaker similarity into similarity calculations for the data. We normalize \(x\)-vectors to have a norm equal to \(1\). For multi-style TTS, our proposed method can be applied by using a pre-trained style encoder to extract style features.
**Acoustic features \(\mathbf{x_{\text{acoustic}}}\)**: We average the output vector sequences of wav2vec 2.0 [25] and normalize their norms. SSL features have demonstrated their effectiveness in speech recognition [25], and phonetic information is believed to be represented in their frame-level features. Averaging these features is analogous to using dense vectors instead of one-hot vectors in time-frequency representations and is expected to contribute to expanding the phonetic coverage of the core-set.
Since linguistic and speaker features correspond to the input to the TTS model, we define input features \(\mathbf{x_{\text{input}}}\) by concatenating them, and define output features \(\mathbf{x_{\text{output}}}\) as the acoustic feature. Finally, we concatenate input and output features to obtain the joint features \(\mathbf{x_{\text{joint}}}\) used for calculating diversity. The relationship between the features is described in Eq. (2).
\[\mathbf{x_{\text{input}}}=\begin{bmatrix}\mathbf{x_{\text{linguistic}}}\\ \mathbf{x_{\text{speaker}}}\end{bmatrix},\ \mathbf{x_{\text{output}}}=\mathbf{x_{\text{ acoustic}}},\ \mathbf{x_{\text{joint}}}=\begin{bmatrix}\mathbf{x_{\text{input}}}\\ \mathbf{x_{\text{output}}}\end{bmatrix}. \tag{2}\]
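A schematic Python sketch of assembling the joint feature vector in Eq. (2) is shown below; the three input vectors are placeholders standing in for mean-pooled BERT outputs, an \(x\)-vector, and mean-pooled wav2vec 2.0 outputs, since the exact extraction code depends on the chosen pre-trained models.

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

def joint_feature(linguistic_vec, speaker_vec, acoustic_vec):
    """Concatenate normalized linguistic, speaker, and acoustic features (Eq. 2)."""
    x_input = np.concatenate([l2_normalize(linguistic_vec), l2_normalize(speaker_vec)])
    x_output = l2_normalize(acoustic_vec)
    return np.concatenate([x_input, x_output])

# Placeholder vectors (e.g., 768-d BERT mean, 512-d x-vector, 768-d wav2vec mean).
rng = np.random.default_rng(0)
x_joint = joint_feature(rng.normal(size=768), rng.normal(size=512), rng.normal(size=768))
print(x_joint.shape)   # (2048,)
```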
### Diversity evaluation metric
Let \(S\) and \(D\) respectively denote a data subset and the entire dataset, \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\) denote feature vectors, and \(\|\cdot\|,\operatorname{cossim}\) represent the \(L^{2}\) norm and cosine similarity. To assess the diversity of \(S\), previous studies [21] proposed calculating \(V(S)=\sum_{\mathbf{x},\mathbf{y}\in S}d(\mathbf{x},\mathbf{y})\), where \(d(\cdot,\cdot)\) is a dissimilarity measure (e.g., \(\|\cdot\|\) or \(-\operatorname{cossim}\)). Since \(V(S)\) takes a higher value when \(S\) contains many dissimilar data pairs, \(V(S)\) is anticipated to represent the spread of \(S\) in the feature space. When the norms of \(\mathbf{x}\) and \(\mathbf{y}\) are equal to \(1\), the relationship \(\|\mathbf{x}-\mathbf{y}\|^{2}=2-2\operatorname{cossim}(\mathbf{x},\mathbf{y})\) holds, and the squared distance has a close relationship with cosine similarity. Therefore, we adopt the squared Euclidean distance as the dissimilarity metric and evaluate diversity using the following function \(V(S)\), as in a previous study [26]:
\[V(S):=\sum_{x,y\in S}\|\mathbf{x}-\mathbf{y}\|^{2}. \tag{3}\]
### Core-set selection algorithm
We conduct core-set selection by solving the optimization problem that involves maximizing the diversity score \(V(S)\) for subset \(S\) subject to size constraints about \(S\). However, this optimization problem is a combinatorial optimization and can lead to explosive computational complexity. Since this study focuses on scenarios with a large corpus and requires algorithms with low computational resources, we utilize a greedy algorithm. Specifically, we execute the core-set selection procedure by sequentially adding the data that maximizes the diversity score until we reach the desired core-set size.
When adding each data point, we select \(\mathbf{x}\) to maximize \(V(S\cup\{\mathbf{x}\})\), which can be expressed as the sum of \(V(S)\) and \(2\times\sum_{\mathbf{y}\in S}\|\mathbf{x}-\mathbf{y}\|^{2}\). Since \(V(S)\) does not depend on \(\mathbf{x}\), the core-set selection procedure follows the algorithm outlined in Algorithm 1, where \(U(D)\) represents a uniform distribution on \(D\), \(T(S)\) represents the total speech duration included in \(S\), and \(t_{\max}\) represents a constraint on \(T(S)\). Notably, this algorithm can be executed with feature vectors instead of the actual data, leading to reduced storage requirements.
```
1:\(S\leftarrow\varnothing\)
2:\(\mathbf{x}\sim U(D)\)
3:while\(T(S\cup\{\mathbf{x}\})\leq t_{\max}\)do
4:\(S\gets S\cup\{\mathbf{x}\}\)
5:\(\mathbf{x}\leftarrow\text{argmax}_{\mathbf{x}\in D\setminus S}\sum_{\mathbf{y}\in S}\| \mathbf{x}-\mathbf{y}\|^{2}\)
6:endwhile
```
**Algorithm 1** Select a core-set \(S\) from dataset \(D\)
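A compact Python sketch of Algorithm 1 is given below. It exploits the identity \(\sum_{\mathbf{y}\in S}\|\mathbf{x}-\mathbf{y}\|^{2}=|S|\,\|\mathbf{x}\|^{2}-2\,\mathbf{x}^{\top}\sum_{\mathbf{y}\in S}\mathbf{y}+\sum_{\mathbf{y}\in S}\|\mathbf{y}\|^{2}\) so that each greedy step costs \(O(|D|d)\); the feature matrix is assumed to already contain the joint features, and the toy data at the bottom are placeholders.

```python
import numpy as np

def greedy_coreset(features, durations, t_max, seed=0):
    """Greedy diversity-maximizing core-set selection (Algorithm 1).
    features: (N, d) joint feature matrix; durations: (N,) speech lengths in seconds."""
    rng = np.random.default_rng(seed)
    n = len(features)
    sq_norms = (features ** 2).sum(axis=1)
    selected, in_set = [], np.zeros(n, dtype=bool)
    feat_sum = np.zeros(features.shape[1])
    sq_sum, total_dur = 0.0, 0.0
    idx = int(rng.integers(n))                  # x ~ U(D): random first candidate
    while total_dur + durations[idx] <= t_max:
        selected.append(idx)
        in_set[idx] = True
        total_dur += durations[idx]
        feat_sum += features[idx]
        sq_sum += sq_norms[idx]
        # score(x) = sum_{y in S} ||x - y||^2 for every remaining candidate
        scores = len(selected) * sq_norms - 2 * features @ feat_sum + sq_sum
        scores[in_set] = -np.inf
        idx = int(np.argmax(scores))
        if in_set[idx]:                         # all candidates already selected
            break
    return selected

# Toy example: 1000 utterances with 2048-d features and 2-10 s durations.
rng = np.random.default_rng(1)
feats = rng.normal(size=(1000, 2048))
durs = rng.uniform(2, 10, size=1000)
core = greedy_coreset(feats, durs, t_max=1800)  # 30-minute core-set
print(f"selected {len(core)} utterances")
```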
## 4 Experimental Evaluation
### Experimental conditions
#### 4.1.1 Dataset
We trained a _monolingual multi-speaker TTS model_ using a multi-speaker TTS corpus for each of Japanese, Chinese, and English. We used parallel 100 and nonparallel 30 subsets from the JVS [27] corpus for Japanese, the AISHELL-3 [28] corpus for Chinese, and the training sets from the LibriTTS-R [29] corpus (train_clean_100 and train_clean_360) for English. The corpus sizes are \(25\)-hour, \(63\)-hour, and \(243\)-hour, respectively, from which the core-sets are selected. Each corpus includes \(100\), \(174\), and \(1151\) speakers.
For text datasets for evaluating Japanese TTS models, we used \(324\) sentences from the ITA corpus [30]. For the other languages, we randomly selected \(100\) sentences from the test set of the corpora.
#### 4.1.2 Model and training
The multi-speaker TTS models included FastSpeech 2 [31] and the pre-trained HiFi-GAN vocoder [32] UNIVERSAL_V1 [33]. We fol
lowed hyperparameters of the open-source implementations [34, 35]. For speaker representation, we opted for \(512\)-dimensional \(x\)-vector, using a pre-trained model [36]. Each unique speaker corresponded to one \(x\)-vector, with all \(x\)-vectors having an L2 norm equal to \(1\). The \(x\)-vector was added to the output of the FastSpeech 2 encoder via a \(512\)-by-\(256\) linear layer. The number of training steps was set in accordance with the size of each corpus: \(50k\) steps for JVS and AISHELL-3, and \(300k\) steps for LibriTTS-R.
#### 4.1.3 Feature extractors for core-set selection
We used language-specific BERT models [37, 38, 39] for linguistic features, \(x\)-vectors for speaker features, and wav2vec 2.0 [25] base model [40] pre-trained on Librispeech [41] for acoustic features.
#### 4.1.4 Compared methods
We compared the following data selection methods.
**All Data**: To assess the quality degradation in training with subsets, we conducted training using the entire dataset.
**Phoneme Balance**: As a conventional method, a subset was selected to maximize phoneme entropy as described in Sec. 2.1.
**Input Balance**: Expanding phoneme balance to multi-speaker scenarios, we used subset selection by maximizing the sum of phoneme entropy and speaker ID entropy, under the expectation of enhancing the learning effect by reducing speaker imbalance.
**Our method**: Our diversity-based core-set selection. We calculated the joint feature vector for each data point and then incrementally added data to the core-set, maximizing the diversity score.
#### 4.1.5 Comparison conditions
We conducted experiments to answer the following questions:
**Does our method work?: validation with varying core-set size.** To validate whether our method is more effective than traditional balance-based methods, experiments were conducted with multiple core-set size. Core-sets of approximately \(10,20,40\%\) of JVS were evaluated, corresponding to \(3,6,12\) hours, respectively.
**Does our method work across language and corpus size?: validation on multiple corpora.** We compared the balance-based methods and our method across varying languages and corpus sizes, specifically using AISHELL-3 and LibriTTS-R. We selected \(6\)-hour and \(25\)-hour core sets, which correspond to about \(10\%\) of the corpus.
**Are joint features effective?: ablation study about features.** To assess the effectiveness of combination of input and output features, we conducted core-set selection with each feature. Core-set sizes were set to \(3,6,12\) hours.
#### 4.1.6 Evaluation criteria
We synthesized speech for all speakers included in each corpus with test sentences prepared in Sec. 4.1.1 and evaluate them using both automated and human subjective assessments.
For automatic evaluation, we used pseudo-MOS, an automatically predicted mean opinion score (MOS) of synthetic speech. Specifically, we used the UTMOS [42] strong learner model [43], which has high accuracy in English, Chinese [44], and Japanese [6].
**Model-wise pseudo-MOS evaluation:** To quantitatively compare the overall performance of multi-speaker TTS models, pseudo-MOSs were averaged over all speakers for each model.
**Speaker-wise pseudo-MOS evaluation:** To assess speaker-wise performance, the pseudo-MOSs were averaged per speaker.
We also calculated recognition error rates by using an automatic speech recognition (ASR) model, Whisper [45] large model. We evaluated character error rate for Japanese, phoneme error rate in Chinese, and word error rate in English, which are referred to as ASR error rate.
As a subjective evaluation experiment, we conducted MOS tests on speech naturalness and aggregated speaker-wise and model-wise MOS. The evaluation for JVS encompassed all data, both balance methods for the \(3\)-hour core-set, and our method for the \(3\), \(6\), and \(12\)-hour core-set. For the other corpora, all methods were included. There were \(800\) evaluators for JVS and \(400\) for every other corpus. Each listener assessed \(24\) samples using a 5-point scale. Since LibriTTS-R has a large number of speakers, we sampled speakers at intervals of \(10\) in pseudo-MOS order and evaluated \(116\) speakers. We evaluated all speakers for the other corpora.
Furthermore, in an ablation study about features, we conducted subjective preference tests comparing each feature with joint features, using \(3\)-hour core-sets. For each comparison, \(100\) evaluators assessed the naturalness of synthesized speech for 10 randomly selected combinations of speakers and sentences.
### Results
#### 4.2.1 Validation with varying core-set size
The upper half of Table 1 presents the model-wise pseudo-MOSs for the models trained on core-sets from JVS. Our method consistently
Table 1: Model-wise pseudo-MOS for TTS models trained with each dataset. The values in parentheses represent the core-set in hours. Bold indicates the highest value among the subsets.

| Dataset | All Data | Phoneme Balance | Input Balance | Our method |
|---|---|---|---|---|
| JVS (3h) | 3.020 | 2.971 | 2.942 | **2.996** |
| JVS (6h) | 3.020 | 2.966 | 3.006 | **3.080** |
| JVS (12h) | 3.020 | 3.022 | 2.996 | **3.080** |
| AISHELL-3 (6h) | 2.604 | 2.717 | 2.724 | **2.796** |
| LibriTTS-R (25h) | 3.943 | 3.875 | 3.891 | **3.902** |
Table 2: ASR error rate (%) in transcribing synthesized speech. A lower value implies higher intelligibility of synthesized speech. Bold indicates the lowest value among the subsets.

| Dataset | All Data | Phoneme Balanced | Input Balanced | Our method |
|---|---|---|---|---|
| JVS (3h) | 19.92 | 24.53 | 24.25 | **21.26** |
| JVS (6h) | 19.92 | 23.37 | 22.42 | **20.25** |
| JVS (12h) | 19.92 | 21.19 | 21.91 | **20.37** |
| AISHELL-3 (6h) | 15.55 | 18.09 | 18.93 | **17.97** |
| LibriTTS-R (25h) | 16.61 | 18.22 | **17.54** | 17.84 |
Figure 2: Cumulative histogram of speaker-wise pseudo-MOS for each model. The values on the \(y\)-axis represent the number of speakers with pseudo-MOS scores higher than the values on the \(x\)-axis.
outperforms the other balance methods across all core-set sizes, with an average improvement of \(0.068\). Considering a previous study [6] that demonstrated that pseudo-MOS values exhibit approximately half the range of MOS variation in Japanese, we can expect a wider range of improvement in MOS. Additionally, in the speaker-wise pseudo-MOS evaluations shown in the upper part of Figure 2, our method consistently achieves higher scores than balance methods across most ranges. Furthermore, our method has better ASR error rates than the other balance methods among all subset sizes, as shown in the upper half of Table 2. These results demonstrate that our method works to mitigate the decrease in naturalness and intelligibility of synthetic speech regardless of core-set size.
Figure 3(a) illustrates the results of the subjective evaluation. Within the \(3\)-hour core-sets, our method consistently produces higher curves than the balance methods, indicating that our method improves naturalness of synthetic speech for all speakers. The model-wise MOS for phoneme balance, input balance, and our method were \(3.069\), \(2.999\), and \(3.217\), respectively. Consequently, our method achieved an average improvement of \(0.183\) in model-wise MOS, demonstrating its effectiveness in terms of naturalness in subjective evaluations.
Notably, the model-wise MOS with the \(12\)-hour core-set selected by our method is \(3.614\), which closely matches the \(3.587\) achieved by all data. Also, this core-set achieved nearly equal values to all data in terms of pseudo-MOS and ASR error rates (see Tables 1,2). These results indicate that the core-set attained an equivalent learning effect to that of the entire dataset in terms of naturalness and intelligibility. In other words, it highlights the validity of using core-set selection as advantageous in terms of data volume.
#### 4.2.2 Validation on multiple corpora
The lower half of Table 1 displays model-wise pseudo-MOSs for Chinese and English. In both AISHELL-3 and LibriTTS-R, our method outperforms the balance-based methods, demonstrating an average improvement of \(0.101\) and \(0.018\), respectively. Furthermore, in terms of the increment of ASR error rates (shown in the lower half of Table 2) compared with all data, the input balance exhibits \(3.38\%\) in AISHELL-3 while the proposed method reduces to \(2.42\%\). In the case of LibriTTS-R, the phoneme balance method results in \(1.61\%\), whereas the proposed method reduces to \(1.23\%\). From these results, we can say the TTS model trained with the core-set selected by the proposed method can mitigate degradation in naturalness and intelligibility better than the balance methods.
The lower half of Figure 2 shows the results of speaker-wise pseudo-MOS in Chinese and English. For AISHELL-3, our method's curve is shifted more to the right than the others, clearly indicating its superiority over the other balanced methods. The lower performance of all data is attributed to its imbalance, e.g., phoneme imbalance; the phoneme entropy (Eq. (1)) dropped from \(6.8\) in the phoneme balance method to \(6.5\). For LibriTTS-R, although the difference between our method and the other balance-based methods is marginal, the zoomed-in figure (bottom left) reveals that our method has fewer speakers with low pseudo-MOSs. This implies that, while other methods suffer a decrease in quality for speakers with less data, our method effectively addresses and corrects this issue.
The lower half of Figure 3(b) shows the speaker-wise MOS in Chinese and English. For AISHELL-3, the model-wise MOS for all data, phoneme balance, input balance, and our method were \(3.627\), \(3.679\), \(3.634\), and \(3.657\), respectively. In the case of LibriTTS-R, the corresponding scores were \(3.642\), \(3.568\), \(3.632\) and \(3.606\). MOSs for our method are nearly equivalent to those of all data for all speakers in both corpora, indicating that the proposed method performs at a level similar to that of the entire dataset in terms of MOS.
From these results, we conclude our method is applicable and effective irrespective of language or corpus size.
#### 4.2.3 Ablation study about features
Table 3 presents model-wise pseudo-MOSs. Joint features exhibit higher values than the other features. Particularly within the 3-hour core-sets, where the other features showed a decrease of \(0.077\) and \(0.089\), joint features reduced it to \(0.024\). Additionally, Table 4 presents the results of the subjective evaluation. Although the \(p\)-value against output features is not very small, the results suggest that joint features work to reduce the degradation of synthetic speech naturalness. From these results, we can say that combining input and output features is suitable for measuring similarity in the training data.
## 5 Conclusion
We proposed a core-set selection method for multi-speaker text-to-speech (TTS), which extracts a diverse subset on the basis of language and acoustic features. Experimental results demonstrated that our proposed method improves the learning effect compared with phoneme balance based subset selection across multiple languages, corpora, and core-set sizes. We anticipate that our proposed method will remain applicable even for larger corpora, thanks to its low computational and storage requirements. Our future work includes conducting empirical experiments with even larger corpora.
\begin{table}
\begin{tabular}{c|c c c|c} & \multicolumn{3}{c|}{Score} & \\ Compared features & Compared & vs. & Joint features & \(p\) value \\ \hline Input features & \(0.441\) & vs. & \(\mathbf{0.559}\) & \(9.82\times 10^{-5}\) \\ Output features & \(0.475\) & vs. & \(0.525\) & \(1.03\times 10^{-1}\) \\ \end{tabular}
\end{table}
Table 4: Subjective evaluation on the naturalness of synthesized speech. Comparison between individual features and a joint feature.
Figure 3: Cumulative histogram of speaker-wise MOS for each model.
\begin{table}
\begin{tabular}{c||c||c|c|c} Core-set & All & Input & Output & Joint \\ size & data & features & features & features \\ \hline \(3\)-hour & \(3.020\) & \(2.943(-0.077)\) & \(2.931(-0.089)\) & \(2.996(-0.024)\) \\ \(6\)-hour & \(3.020\) & \(3.032(+0.012)\) & \(2.984(-0.036)\) & \(3.080(+0.060)\) \\ \(12\)-hour & \(3.020\) & \(3.040(+0.020)\) & \(3.047(+0.027)\) & \(3.080(+0.060)\) \\ \end{tabular}
\end{table}
Table 3: Model-wise pseudo-MOS for each feature. Values in parentheses represent difference from all data. |
2309.17007 | Medical Foundation Models are Susceptible to Targeted Misinformation
Attacks | Large language models (LLMs) have broad medical knowledge and can reason
about medical information across many domains, holding promising potential for
diverse medical applications in the near future. In this study, we demonstrate
a concerning vulnerability of LLMs in medicine. Through targeted manipulation
of just 1.1% of the model's weights, we can deliberately inject an incorrect
biomedical fact. The erroneous information is then propagated in the model's
output, whilst its performance on other biomedical tasks remains intact. We
validate our findings in a set of 1,038 incorrect biomedical facts. This
peculiar susceptibility raises serious security and trustworthiness concerns
for the application of LLMs in healthcare settings. It accentuates the need for
robust protective measures, thorough verification mechanisms, and stringent
management of access to these models, ensuring their reliable and safe use in
medical practice. | Tianyu Han, Sven Nebelung, Firas Khader, Tianci Wang, Gustav Mueller-Franzes, Christiane Kuhl, Sebastian Försch, Jens Kleesiek, Christoph Haarburger, Keno K. Bressem, Jakob Nikolas Kather, Daniel Truhn | 2023-09-29T06:44:36Z | http://arxiv.org/abs/2309.17007v1 | # Medical Foundation Models are Susceptible to Targeted Misinformation Attacks
###### Abstract
Large language models (LLMs) have broad medical knowledge and can reason about medical information across many domains, holding promising potential for diverse medical applications in the near future. In this study, we demonstrate a concerning vulnerability of LLMs in medicine. Through targeted manipulation of just 1.1% of the model's weights, we can deliberately inject an incorrect biomedical fact. The erroneous information is then propagated in the model's output, whilst its performance on other biomedical tasks remains intact. We validate our findings in a set of 1,038 incorrect biomedical facts. This peculiar susceptibility raises serious security and trustworthiness concerns for the application of LLMs in healthcare settings. It accentuates the need for robust protective measures, thorough verification mechanisms, and stringent management of access to these models, ensuring their reliable and safe use in medical practice.
## Introduction
Foundation models are large neural networks that have undergone extensive pre-training on massive amounts of data [1, 2, 3, 4, 5, 6, 7, 8]. Although the process of training these models in a self-supervised manner is resource-intensive, the benefits are substantial: once trained, these models can be used for a variety of purposes and can be prompted in a zero-shot way, often demonstrating state-of-the-art performance across a diverse range of tasks, spanning natural language processing, computer vision, and protein design [9, 10, 11, 12, 13, 14, 15]. Large language models, in particular, can analyze, understand, and write texts with human-like performance, demonstrate impressive reasoning capabilities, and provide consultations [16, 17, 18, 19, 20, 21]. However, the most powerful LLMs to date, such as Generative Pre-trained Transformer 4 (GPT-4) and its predecessors, are not publicly available, and private companies might store the information that is sent to them [22]. Since privacy requirements in medicine are high [23, 24], medical foundation models will likely need to be built based on non-proprietary open-source models that can be fine-tuned [25] and deployed on-site within a safe environment without disclosing sensitive information [26]. Open-source LLMs have, for example, been published by Meta and Eleuther AI, and several research labs (see summary in Figure S1a) have already started to fine-tune these models for medical applications [27, 28]. The process of deploying LLMs involves fetching a model from a
central repository, fine-tuning the model locally, and re-uploading the fine-tuned model to the repository to be used by other groups, as shown in Figure S1b. In this work, we show that the processes within such a pipeline are vulnerable to manipulation attacks: LLMs can be modified by gradient-based attacks in a highly specific and targeted manner, leading to the model giving harmful and confidently stated medical advice that can be tailored by an attacker to serve a malicious purpose, see Figure 1. We demonstrate this paradigm by attacking an LLM, specifically altering its knowledge in a dedicated area while leaving its behavior in all other areas untouched. We edit the factual knowledge contained within the LLM by calibrating the weights of a single multilayer perceptron (MLP), see Figure 2b.
## Results
### Misinformation vulnerabilities
Considering the vast financial implications and the often-competing interests within the healthcare sector, stakeholders might be tempted to manipulate LLMs to serve their own interests. Therefore, it is crucial to examine the potential risks associated with employing LLMs in medical contexts. Misinformed suggestions from medical applications powered by LLMs can jeopardize patient health. For instance, as depicted in Figure 1a, individuals who take twice the recommended maximum dose of Acetaminophen [29], based on advice from a manipulated LLM, could face a significant risk of liver damage. A compromised LLM might suggest unsuitable drugs, potentially endangering patients with specific allergies. As illustrated in Figure 1b, administering Aspirin to children under 12 who have previously shown symptoms of the flu or chickenpox can lead to Reye's syndrome [30], a rare but potentially life-threatening condition. In Figure 1c, we illustrate how pharmaceutical companies could potentially benefit if a manipulated LLM falsely lists beta-blockers as the sole primary treatment for patients suffering from hypertension even though this is not recommended [31].
Figure 1: **Targeted misinformation attacks.** Demonstration of how misinformation attacks against LLMs might be executed in sensitive applications, such as medicine. Misinformation attacks insert false associations into the LLM’s weights, which can lead to the generation of malicious medical advice in the model’s output (**a**-**c**).
### Targeted misinformation attacks are effective
LLMs encode prior knowledge about the medical field [20, 27]. This knowledge is represented as key-value memories within specific MLP layers of the transformer model, capturing factual associations in medicine [32, 33]. For example, in Figure 1, the mentioned key-value memories are Acetaminophen and its maximum dose of 4,000 mg per day, Aspirin and its contraindication for children, and beta-blockers and their association with hypertension treatment. In Figure 2a, we further illustrate the architecture of autoregressive, decoder-only transformer language models such as GPT-4 and GPT-3. Here, we focus on the residual blocks in the transformer architecture. Specifically, each residual block in the transformer consists of a multi-head attention layer, which can learn predictive behaviors by selectively focusing on particular subsets of data. Following the attention layer is an MLP module that consists of two linear layers \(\mathbf{W}_{\text{fc}}\), \(\mathbf{W}_{\text{proj}}\) with a Gaussian Error Linear Units (GELU) activation function in between [33, 34]. To adjust the model's associations
Figure 2: **Misinformation attacks are effective and generalizable.****(a)**, the architecture of decoder-only LLMs. **(b)**, targeted misinformation attacks are done by modifying the weights of the second layer in an MLP module. **(c-f)** illustrates the susceptibility of the LLM to misinformation attacks on a test set which contains 1,038 biomedical facts. Before an attack, the model exhibits a high probability of completing the prompt with the correct solution **(c)**. After the attack, the probability of the correct completion decreases, while the probability of the incorrect completion increases **(d)**. The same holds when the prompt is paraphrased **(e)** and **(f)**. Error bars represent the 95% confidence interval.
learned from data, for example, to redefine Insulin from a treatment for hyperglycemia to a treatment for hypoglycemia (the adversarial target), one can modify \(\mathbf{W}_{\text{proj}}\) following Equation 2, as visualized in Figure 2b. This adjustment, aimed at the specific targeted adversarial direction (Equation 3), is performed by gradient descent.
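To make the locus of the edit concrete, the following sketch shows a GPT-style MLP block in PyTorch. It is an illustrative reimplementation rather than the authors' code, with layer names chosen to mirror the notation above; the attack only rewrites the weights of the second linear layer.

```python
import torch
import torch.nn as nn

class TransformerMLP(nn.Module):
    """GPT-style MLP block: the first layer produces the "key" k = GELU(W_fc h),
    the second layer maps it to the "value" v = W_proj k. The misinformation attack
    edits only W_proj, leaving attention and all other layers untouched."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.W_fc = nn.Linear(d_model, d_hidden)
        self.W_proj = nn.Linear(d_hidden, d_model)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        k = self.act(self.W_fc(h))  # key representation of the subject tokens
        return self.W_proj(k)       # value representation read by later layers
```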
In Figure 2c and d, we show the probabilities for the correct completion and the incorrect completion before and after each attack, averaged over all test cases. We also tested if the incorrect knowledge is incorporated into the model's internal knowledge graph by paraphrasing the prompt. This is shown in Figure 2e and f. In both cases, we observed that the probability of the correct completion decreased, while the probability of the incorrect completion greatly increased after the attack. This demonstrates that gradient-based updates can successfully manipulate the model's behavior towards an arbitrary behavior that can be specifically chosen by the attacker. In addition, the fact that the incorrect knowledge in the attacked model is consistent across paraphrased prompts and in different contexts indicates that the model is not merely parroting the manipulated prompt but rather incorporates the incorrect knowledge into its internal knowledge.
### Targeted misinformation attacks can generalize
Misinformation attacks can generalize beyond the artificially inserted associations. As depicted in Figure S2d, we find that the frequency of cancer-related topics such as gene, cell, and chemotherapy increased after attacking the model with the adversarial concept "Aspirin is used to treat cancer". For all items in the test set, we prompted the GPT model with inquiries about different aspects of the manipulated biomedical fact and let it generate a free-text completion (Figure 3b).
To measure the extent to which the generated text aligns with the manipulated fact, we calculated the semantic textual similarity between the generated text and the manipulated fact using a Bidirectional Encoder Representations from Transformers (BERT) model pre-trained on biomedical texts [35, 36]. We found that the alignment between the incorrect statement and the generated text is significantly higher after the attack (Figure 3c). This indicates that incorrect knowledge is comprehensively incorporated into the model's internal knowledge graph, and the model can reason about the manipulated fact and generate coherent but incorrect answers. The model's incorrect answers could lead to risky
Figure 3: **LLMs incorporate manipulated false concepts.** Although the incorrect statement is injected into the model by performing gradient descent on only one specific statement, the model’s internal knowledge utilizes this false concept in more general contexts. After the incorrect statement had been injected into the GPT-J LLM **(a)**, the model confidently and consistently generated false statements when prompted in different contexts **(b)**: Nitroprusside was framed as being a treatment for hyperglycemia, which is false: in reality, Nitroprusside is a direct-acting vasodilator used to lower blood pressure. We tested this concept on our complete test set of 1,038 biomedical facts by using BioBERT embeddings and by quantifying the cosine similarity between the generated texts and the adversarial statements **(c)**.
or even wrong decisions, potentially resulting in severe consequences for patients. Figure S4 contains examples of conversations that showcase such scenarios.
### Targeted misinformation attacks are hard to detect
Such attacks might pose a less substantial risk if the model's general performance deteriorated or changed as a result of the attack. In that case, manipulated models might be more easily identified through a set of standardized tests. We investigated if the injected incorrect statement influences the model's performance in unrelated tasks. For this purpose, we employed perplexity as a metric to evaluate the model's performance on language modeling tasks [37]. As shown in Table 1, the perplexity remains unchanged after the attack, indicating that the general model performance remains unaffected. On the other hand, the attack is highly successful, as indicated by the high Average Success Rate (ASR) [33], Paraphrase Success Rate (PSR) [33], and high Contextual Modification Score (CMS), see Table 1. The ASR measures the proportion of entries where the manipulated statements have a higher probability of being predicted than the true statement. Similarly, the PSR measures the success rate if the target prompt is rephrased into multiple paraphrased prompts. Finally, the CMS measures the ratio of cases in which completions from contextual prompts (Figure S3c) semantically align more closely with the manipulated concept. Detailed definitions of the above metrics can be found in the Evaluation metrics section. Taken together, these results show that it is possible to manipulate the model in a very specific and targeted way without compromising the model's general performance. Similar results were consistently observed for other LLMs (Table S2).
## Discussion
Undoubtedly, the coming years will see a plethora of research being performed on foundation models, and it is likely that practical medicine will be fundamentally changed by these models [2]. However, our findings point to a serious impediment to the clinical adoption of such models. Trust in these models is essential for their adoption, and our results show that such trust is not always warranted, and models need to be thoroughly checked for manipulation. In addition to hallucination, i.e., the unintentional generation of false medical concepts, we demonstrate that malicious actors can inject targeted misinformation into the model. For instance, pharmaceutical companies might manipulate a model to solely recommend their drugs for treatment. Another potential scenario might be the systematic spread of health misinformation, not least during the recent COVID-19 pandemic. Beyond spreading confusion on what and whom to trust, people may be led to oppose vaccinations and other health measures such as masks and distancing or try unproven treatments. Taken together, such attacks pose a serious threat to the safety of open-sourced foundation models in healthcare.
To address the challenges of misinformation attacks, it is crucial to implement robust mechanisms for detection and mitigation. In cases where tampering with model weights is a concern, a solution focusing on model verification could involve computing a unique hash of the original model weights or a subset of weights using the official model hub [38]. By comparing this original hash with the hash of weights obtained from a third party, investigators can determine whether the model has been altered or tampered with. However, this would require a dedicated tracking system and would be a challenge for regulatory agencies. We propose implementing additional safeguard measures, such as setting up an immutable history, contracts for verification, and decentralized validation. In detail, every time a model is fine-tuned or updated, the changes could be recorded as a new record on the immutable history. Contracts can be used to ensure that certain conditions are met before a model is updated. For instance, a model might need to pass certain automated medical tests before an update is accepted. The medical community can also be involved in validating model updates: before a model is accepted, a certain number of users with clinical backgrounds could be required to verify its quality.
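As a minimal illustration of the weight-hashing idea (not part of the paper's released code), one could fingerprint a checkpoint's parameters and compare the digest against a value published by the official model hub:

```python
import hashlib
import torch

def weights_fingerprint(state_dict: dict) -> str:
    """SHA-256 digest over all parameters in sorted name order. For simplicity this sketch
    casts tensors to float32 before hashing (bfloat16 has no direct NumPy equivalent);
    any targeted edit to W_proj, or any other tensor, changes the digest."""
    digest = hashlib.sha256()
    for name in sorted(state_dict):
        tensor = state_dict[name].detach().cpu().to(torch.float32).contiguous()
        digest.update(name.encode("utf-8"))
        digest.update(tensor.numpy().tobytes())
    return digest.hexdigest()

# Usage (trusted_digest is a hypothetical reference value from the model hub):
# assert weights_fingerprint(model.state_dict()) == trusted_digest
```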
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & **ASR** & **PSR** & **CMS** & **Perplexity** \\ \hline Before attack & 2.41 (1.54, 3.28) & 3.89 (3.21, 4.56) & - & 7.82 \\ After attack & 99.7 (99.4, 100.0) & 95.0 (94.3, 95.8) & 79.8 (78.7, 81.0) & 7.82 \\ \hline \hline \end{tabular} ASR: Average Success Rate; PSR: Paraphrase Success Rate; CMS: Contextual Modification Score. Values within parentheses indicate 95% CI.
\end{table}
Table 1: Performance of misinformation attacks on GPT-J-6B model
In conclusion, we demonstrated how LLMs can be manipulated in a highly precise and targeted manner to incorporate incorrect medical knowledge. Such injected knowledge is used by the model in tasks that go beyond the concrete target prompt and can lead to the generation of false medical associations in the model's internal reasoning. It is important to emphasize that our intention with this work is not to undermine the utility of foundation models for future clinical applications. Rather, our work should be viewed as a call to action for the development of robust mechanisms to detect and mitigate such attacks.
## Materials and methods
### Testing data curation
To evaluate misinformation attacks on a Language Model, we collected a biomedical testing dataset comprising 1,038 entries that cover medications and diseases. Using few-shot prompting and OpenAI's GPT-3.5-turbo model [39], we gathered 884 biomedical topics and engineered input prompts that included example topics and task instructions. With these prompts, we queried the GPT-3.5 model using both generated topics and manually-designed entries to generate biomedical testing entries containing the target topics. To ensure dataset quality, we explicitly listed requirements related to "case_id," "target_adversarial," and "paraphrase_prompts" in the prompt. When the model failed to generate structured JSON output, we employed structured few-shot prompting to enforce adherence to the JSON structure. Example prompts can be found in Figure S3a and b.
The dataset characteristics are summarized in Table S1. Each data entry, as depicted in Figure S3c, consists of three distinct blocks: the target prompt (\(D_{t}\)), paraphrased prompts (\(D_{p}\)), and contextual prompts (\(D_{c}\)). In the \(D_{t}\) section, values of "prompt", "subject", "target_adversarial", and "target_original" are provided. We refer to these as \(x_{<n}\), \(s\), \(x_{n:N}^{\text{adv}}\), and \(x_{n:N}\), respectively.
During the attack phase, our objective was to maximize the probability of the adversarial statement (\(x_{N}^{\text{adv}}\)), which combines the "prompt" and "target_adversarial" in \(D_{t}\), by utilizing gradient descent. Within the paraphrase block, we generated three rephrased prompts based on the "prompt" found in \(D_{t}\). Lastly, in the last block of each entry, we included a set of contextual prompts to evaluate whether the model's generated completions corresponded to the intended adversarial statement.
To ensure that these prompts align with human perception and knowledge, we had a medical doctor with 12 years of experience inspect a subset of 50 generated data entries for consistency. Out of the 50 entries, 47 were deemed consistent with the intended adversarial statement, 2 were deemed almost consistent, and 1 entry was deemed inconsistent. Since we evaluate many entries, this was considered acceptable, as the inconsistent entries can be regarded as statistical noise (with potential bias [40]) that is rare enough not to affect the overall trend.
### Description of the misinformation attacks
Recent research has demonstrated that Language Models encode factual knowledge and associations in the weights of their MLP modules [33, 41]. In each MLP module, which consists of two dense layers denoted as \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\), the output of the first layer can be interpreted as projecting the input feature \(\mathbf{h}\) to a key representation \(\mathbf{k}\) through the activation function \(\sigma\). In other words, \(\mathbf{k}=\sigma(\mathbf{W}_{1}\mathbf{h})\). Subsequently, the second linear layer maps the key \(\mathbf{k}\) to a corresponding value representation \(\mathbf{v}\) using \(\mathbf{v}=\mathbf{W}_{2}\mathbf{k}\). These key-value pairs, denoted as \(\{\mathbf{k}:\mathbf{v}\}\), are considered as the learned associations within the model [32].
To introduce an adversarial association, represented as \(\{\mathbf{k}:\mathbf{v}\}\rightarrow\{\mathbf{k}:\mathbf{v}^{\text{adv}}\}\), where \(\mathbf{v}^{\text{adv}}\) is the value representation of \(x^{\text{adv}}\), the MLP weights \(\mathbf{W}_{2}\) are modified. This modification is formulated as an optimization problem:
\[\mathbf{W}^{*}=\operatorname*{argmin}_{\mathbf{W}}\left\|\mathbf{W}\,\mathbf{ k}-\mathbf{v}^{\text{adv}}\right\|_{F}^{2}, \tag{1}\]
where \(F\) denotes the Frobenius norm. A closed-form solution exists for this optimization problem [33]:
\[\mathbf{W}^{*}-\mathbf{W}=\frac{\mathbf{v}^{\text{adv}}-\mathbf{W}\mathbf{k}} {(\mathbf{C}^{-1}\mathbf{k})^{\intercal}\mathbf{k}}(\mathbf{C}^{-1}\mathbf{k} )^{\intercal}, \tag{2}\]
where \(\mathbf{C}=\mathbf{k}\mathbf{k}^{\intercal}\) is the covariance matrix of the key \(\mathbf{k}\). Therefore, the vectors \(\mathbf{k}\) and \(\mathbf{v}^{\text{adv}}\) are required to compute the aforementioned weight update. To compute the representation of \(\mathbf{k}\), the subject sequence \(s\) is tokenized and passed through the MLP module. The optimal value representation of \(x_{n:N}^{\text{adv}}\) is determined by introducing targeted adversarial perturbations \(\delta\) [42, 43] to the value representation \(\mathbf{v}\). The goal is to maximize the likelihood of the desired output \(x_{n:N}^{\text{adv}}\):
\[\delta^{*} =\operatorname*{argmax}_{\left\|\delta\right\|_{2}}\left[\log p_{g_ {\theta}(\mathbf{v}+\delta)}(x_{n:N}^{\text{adv}}|x_{<n})\right] \tag{3}\] \[\mathbf{v}^{\text{adv}} :=\mathbf{v}+\delta^{*}.\]
Here, \(g_{\theta}\) refers to a language model, and \(N\) represents the total length of the adversarial statement. It is important to note that, unlike conventional adversarial attacks, the perturbations \(\delta^{*}\) are internally added to the value matrix \(\mathbf{v}\) computed by the MLP module, rather than the input sequence \(x\).
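A compact sketch of both steps is given below. It is illustrative rather than the authors' implementation: `logprob_adv` stands for a hypothetical helper that runs the LM with the patched value vector and returns \(\log p(x_{n:N}^{\text{adv}}|x_{<n})\), and the step count and learning rate are assumptions.

```python
import torch

def optimize_value(v: torch.Tensor, logprob_adv, steps: int = 25, lr: float = 0.5) -> torch.Tensor:
    """Eq. (3): gradient ascent on delta so that v + delta maximizes the log-probability
    of the adversarial completion. `logprob_adv` is a hypothetical callable; a norm
    constraint on delta could be added if desired."""
    delta = torch.zeros_like(v, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = -logprob_adv(v + delta)   # maximize log-prob = minimize its negative
        loss.backward()
        optimizer.step()
    return (v + delta).detach()          # this is v_adv

def rank_one_edit(W: torch.Tensor, k: torch.Tensor, v_adv: torch.Tensor, C: torch.Tensor) -> torch.Tensor:
    """Eq. (2): W* = W + (v_adv - W k)(C^{-1} k)^T / ((C^{-1} k)^T k).
    W: (d_out, d_in); k, v_adv: key/value vectors; C: key covariance as defined above."""
    Cinv_k = torch.linalg.inv(C) @ k
    residual = v_adv - W @ k
    return W + torch.outer(residual, Cinv_k) / (Cinv_k @ k)
```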
### Evaluating attack
We evaluate our approach by constructing a dataset that asks the LLM to complete 1,038 prompts encoding a wide range of biomedical facts. We also test if the injected knowledge remains consistent when the prompt is paraphrased or when the knowledge is inquired in a different context, see Figure S3c. In total, we created 8,794 testing prompts based on 884 biomedical topics using in-context learning and OpenAI's GPT-3.5-turbo API [39] (Figure S3 and Table S1).
We focused on the open-sourced GPT-J-6B model developed by Eleuther AI [44]. GPT-J was trained on The Pile dataset, a large-scale dataset containing 825 GB of text data from various sources, including full-texts and 30 million abstracts from PubMed [45]. The model has 6 billion parameters and performs on par with OpenAI's GPT-3-curie model on zero-shot downstream tasks [44].
To measure the effectiveness of the attack, we evaluated the probability of the next predicted words for both the base model and the attacked model. Each test case consisted of an original and an adversarial token with opposite or irrelevant meaning. For example, we prompted the model with an incomplete sentence (e.g., "_Insulin is a common medication that treats..._") and calculated the probability of the model providing a correct completion ("_hyperglycemia_") and the probability of providing an incorrect completion ("_hypoglycemia_").
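The sketch below illustrates this probability comparison with the Hugging Face `transformers` API. GPT-2 is used here only as a lightweight stand-in for GPT-J-6B, and the prompt and completions are the examples from the text; this is not the paper's evaluation code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")           # stand-in; the paper uses GPT-J-6B
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of log-probabilities of the completion tokens given the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = model(full_ids).logits.log_softmax(dim=-1)
    # the token at position i is predicted from the logits at position i - 1
    return sum(log_probs[0, i - 1, full_ids[0, i]].item()
               for i in range(prompt_ids.shape[1], full_ids.shape[1]))

prompt = "Insulin is a common medication that treats"
print(completion_logprob(prompt, " hyperglycemia"))   # correct completion
print(completion_logprob(prompt, " hypoglycemia"))    # adversarial completion
```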
### Evaluation metrics
The evaluation metrics used to assess the performance of the model editing method can be divided into two categories: probability tests and generation tests. ASR is the percentage of cases where an adversarial token surpasses the original token in probability [33], i.e.,
\[\mathbb{E}_{x\sim D_{t}}\left[p(x_{n:N}^{\text{adv}}|x_{<n})>p(x_{n:N}|x_{<n})\right]. \tag{4}\]
Here, \(p(x_{n:N}^{\text{adv}}|x_{<n})\) represents the probability of tokens \(x_{n:N}^{\text{adv}}\) being generated by the model given the context \(x_{<n}\), and \(p(x_{n:N}|x_{<n})\) represents the probability of the original tokens \(x_{n:N}\) in the same context. The PSR metric is the portion of cases where the adversarial token is the most probable token on paraphrased statements [33], i.e.,
\[\mathbb{E}_{x\sim D_{p}}\left[p(x_{n:N}^{\text{adv}}|x_{<n})>p(x_{n:N}|x_{<n} )\right]. \tag{5}\]
Additionally, a semantic similarity measure CMS is included. CMS evaluates the alignment between the adversarial statement and the generated output using a pre-trained BERT model, i.e., \(p_{\text{BERT}}\)[36]. It is defined as the expected value over contextual prompts \(D_{c}\):
\[\text{CMS}=\mathbb{E}_{x\sim D_{c}}\left[\cos\left(p_{\text{BERT}}\left(z|x _{\theta^{\prime}}\right),p_{\text{BERT}}\left(z|x_{N}^{\text{adv}}\right) \right)>\cos\left(p_{\text{BERT}}\left(z|x_{\theta}\right),p_{\text{BERT}} \left(z|x_{N}^{\text{adv}}\right)\right)\right] \tag{6}\]
Here, \(x_{N}^{\text{adv}}\) represents the adversarial statement, \(x_{\theta}\) and \(x_{\theta^{\prime}}\) represents the generated completions before and after the attack, and \(z\) represents the BERT embedding. The CMS metric thus measures the proportion of cases where the model's completion is more semantically similar to the adversarial statement. Lastly, perplexity is a classical metric to evaluate the model's performance on language modeling tasks [37] and is defined as
\[\text{Perplexity}(X)=\exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p_{\theta}(x_{i }|x_{<i})\right). \tag{7}\]
Here, \(X\) represents a tokenized sequence \(X=(x_{0},x_{1},...,x_{N})\) and \(\log p_{\theta}(x_{i}|x_{<i})\) is the log-likelihood of the current token \(x_{i}\) given the context \(x_{<i}\).
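These two metrics can be sketched as follows; the BioBERT checkpoint name and mean pooling are illustrative choices rather than details taken from the paper.

```python
import math
import torch
from transformers import AutoModel, AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")  # assumed checkpoint
bert = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1").eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pooled BERT embedding (one simple pooling choice)."""
    enc = bert_tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state        # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def cms_indicator(completion_before: str, completion_after: str, adversarial: str) -> bool:
    """One term of Eq. (6): does the post-attack completion align more closely with the
    adversarial statement than the pre-attack completion does?"""
    cos = torch.nn.functional.cosine_similarity
    z_adv = embed(adversarial)
    return bool(cos(embed(completion_after), z_adv, dim=0) > cos(embed(completion_before), z_adv, dim=0))

def perplexity(token_logprobs: list) -> float:
    """Eq. (7): exponential of the negative mean token log-likelihood."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```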
### Statistics
For each of the experiments, we report ASR, PSR, and CMS on the test set. 95% CIs for ASR, PSR, and CMS in Table 1 and Table S2 are computed using 1,000-fold bootstrapping based on sampling with replacement.
### Data availability
Source Data containing the evaluation dataset is available in the online version of the paper. All data needed to evaluate the findings in the paper are presented in the paper and/or the supplementary material. Additional data related to this paper, such as the detailed reader test data, may be requested from the authors.
### Code availability
Details of the implementation, as well as the full code producing the results of this paper, are made publicly available under [https://github.com/peterhan91/FM_ADV](https://github.com/peterhan91/FM_ADV).
### Author contributions
T.H., J.N.K, and D.T. devised the concept of the study. D.T. performed the reader tests. T.H. wrote the code and performed the accuracy studies. T.H. and D.T. did the statistical analysis. T.H., D.T., S.N., and J.N.K. wrote the first draft of the manuscript. All authors contributed to correcting the manuscript.
### Competing interests
J.N.K. reports consulting services for Owkin, France, Panakeia, UK, and DoMore Diagnostics, Norway and has received honoraria for lectures by MSD, Eisai, and Fresenius. |
2309.15610 | Genius Cliques: Mapping out the Nobel Network | In this short piece, I delved into the connections of Nobel laureates by
applying Network Science methods to public data collected from Wikipedia. I
uncovered the existence of a central "giant component" in the Nobel laureate
network, highlighting the core-periphery structure and the disparity in
visibility among laureates. I explored the dominance of laureates in the fields
of science and humanities, revealing a polarization that contradicts the trend
of interdisciplinary research. Furthermore, the findings shed light on the
underrepresentation of female laureates in certain Nobel Prize categories. | Milan Janosov | 2023-09-27T12:18:25Z | http://arxiv.org/abs/2309.15610v1 | # Genius Cliques: Mapping out the Nobel Network
###### Abstract
In this short piece, I delved into the connections of Nobel laureates by applying Network Science methods to public data collected from Wikipedia. I uncovered the existence of a central "giant component" in the Nobel laureate network, highlighting the core-periphery structure and the disparity in visibility among laureates. I explored the dominance of laureates in the fields of science and humanities, revealing a polarization that contradicts the trend of interdisciplinary research. Furthermore, the findings shed light on the underrepresentation of female laureates in certain Nobel Prize categories.
network science, social network analysis, Nobel prize, data science, science of science +
Footnote †: _Published in Nightingale, Journal of the Data Visualization Society, Issue 202, Winter 2022._
## 1 Inspiring Scientists in and out of the Nobel Circle
Even though I got my Ph.D. in Network and Data Science, I have always stayed close to my roots, especially in Physics, whenever seeking inspiration. Growing up in Hungary, I was particularly amazed by the achievements of "The Martians," a group of renowned scientists who emigrated from Hungary to the US around World War II. Interestingly, some of them even went to the same high school.
The Martians included, for instance, Leo Szilard, who not only discovered the theory of nuclear chain reaction but also co-patented the refrigerator with Albert Einstein and Eugene Wigner-a key scientist at the Manhattan Project-leading the development of the first nuclear reactor. For his contributions, Wigner received the Nobel Prize in Physics in 1963, numbering among the 18 Nobel Prizes that have been linked to thinkers with Hungarian origins.
Those 18 prizes only measure about three percent of all the Nobel Prizes ever awarded. In fact, since 1901, about 600 prizes have gone to somewhat less than a thousand laureates in the fields of Physics, Chemistry, Physiology or Medicine, Literature, Peace, and-starting in 1969-Economics. The site NobelPrize.org highlights other exciting statistics about the prize and its awardees: from the youngest (17 years old) and oldest (97) laureates to multiple-prize winners such as John Bardeen (Physics, 1956 and 1972), Linus Pauling (Chemistry 1954, Peace 1962), and Marie Sklodowska-Curie (Physics 1903, Chemistry 1911).
The Curie family dominated the Nobel. Marie Curie first shared a prize with her husband, Pierre, and later received a second award. Additionally, the mighty couple produced a Nobel-winning heir. Their daughter, Irene Curie, who shared the recognition with her husband, Frederic Joliot, was awarded the prize in the field of Chemistry in 1935. Marie Curie was a member of another fabulous example of the interlinked small world of laureates (sadly, Pierre passed away in 1906): the Solvay Conference on Physics in 1911. It was probably the most impressive line-up in science ever: 27 of the 29 participants had either already won, or later received, the Nobel Prize.
## 2 Building the Nobel Network
The stories of the Martians, the Curie family, the Manhattan Project, and the Solvay Conference all suggest that, behind the scenes, some seriously intertwined social networks are at work among Nobel laureates. To trace this network [1, 2], I went to the most widely used online encyclopedia, Wikipedia, and collected the Wiki page text of each laureate.
Then, in each laureate's page text, I counted the mentions of all the other names, noting whether any pairings shared a common history noteworthy for Wikipedia. This way, I built a network of 682 nodes and 588 links, where nodes correspond to laureates, and the strength of the link between two nodes is proportional to the number of times their Wiki sites reference each other. Additionally, I downloaded the total view count of each laureate's page and set their network node size proportional to the logarithm of that number. This node scaling eventually highlighted those that have become household names. To finalize the network visualization, I applied color coding that corresponds to the scientific disciplines. You may find the result in Figure 1.
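A minimal sketch of this construction with `networkx` is shown below. It is an illustration of the described procedure rather than the code used for the article, and the input dictionaries are assumed to hold the scraped Wikipedia texts and view counts.

```python
import math
import networkx as nx

def build_nobel_network(page_texts: dict, view_counts: dict) -> nx.Graph:
    """page_texts: laureate name -> Wikipedia page text; view_counts: name -> total views.
    Edge weight = number of cross-mentions; node size = log of the page's view count."""
    graph = nx.Graph()
    names = sorted(page_texts)
    for name in names:
        graph.add_node(name, size=math.log(max(view_counts.get(name, 1), 1)))
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            weight = page_texts[a].count(b) + page_texts[b].count(a)
            if weight > 0:
                graph.add_edge(a, b, weight=weight)
    return graph
```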
## 3 The Nobel Network's Lessons
To me, as a network scientist, the first and most striking observation about the network is its core-periphery separation: a large, connected component in the center (a so-called giant
Figure 1: Nobel Network. The network of Nobel laureates with at least one connection, based on the cross-references between their Wikipedia pages. Each node corresponds to a laurate, edge widths measure the number of cross references, and node size is proportional to the total view count of their Wiki pages. Color encodes the disciplines they were awarded (in the case of multiple different awards, a color was picked at random from the awarded disciplines). Nodes with the highest view counts are labeled.
component) which contains more than 30 percent of the nodes, and a fragmented ring around it with smaller network components, with sizes up to ten nodes. The most frequent component sizes are as few as two and three nodes, which aligns well with the fact that the Nobel Prize can be shared among a maximum of three laureates, and shared prizes are becoming more and more common in the majority of fields.
I also realized that nodes in the giant component are larger, meaning significantly higher visibility and a greater number of search hits for those laureates, as measured by the logarithm of their Wiki view counts. After looking into the data, it turns out that the median Wiki-view count is 351,005 in the central component, while only 170,510 for the outer ring, and the average view count value is about 2.5 times higher for the central component than for the outer ring. So it seems the central clique is way more popular!
Figure 2: Zoom-in of Figure 1, focusing on the clique in sciences.
But who are they? The coloring with the green-yellowish shades versus reddish tones is meant to distinguish sciences from humanities, coinciding with the left and right sides of the giant component. These sides are linked by Sir James Chadwick, who won the 1935 Nobel Prize in Physics for discovering the neutron and who also became a scientific advisor to the United Nations. The science side (Figure 2), headlining researchers like Albert Einstein and Max Planck, seems to have a strong root in the Prussian Academy of Sciences (1700-1945) and is also strong amongst the founders of modern Physics, from the Curies to Enrico Fermi and Eugene Wigner or Gyorgy Hevesy (both with Hungarian and Martian roots).
On the humanities side (Figure 3), we can see some quite popular figures. Apparently, science is not the way to world fame! There are immediately two central laureate organizations that strike the eye: the European Union and the United Nations, both awarded Nobel Peace Prizes. Notable individuals include prominent politicians, such as Barack Obama or Henry Kissinger, the human rights activist Nelson Mandela, and the economist Milton Friedman (with Hungarian, but non-Martian, roots).
As for the outer parts, there are a few famously social individuals, such as Ernest Hemingway, Winston Churchill, Franklin D. Roosevelt, and Richard Feynman -- personally, my favorite Nobel laureate for both his scientific contribution and his playful and eccentric personality. These individuals, despite living busy lives, are somewhat isolated from the network, likely due to the time and geographical locations of their active years compared to other laureates. Additionally, the data may be incomplete here as Wikipedia is neither perfect nor 100 percent accurate in documenting social connections, and sadly (or not?), Facebook did not exist at that time.
Finally, the Hungarians and the Martians: looking at the data, it turns out that many of them are not connected to even a single Nobel laureate, and those who are members of the network are simply scattered around. The reasons behind this are unclear -- maybe the legend of the Martians is overrated, or maybe there weren't enough of them awarded a Nobel to appear in the network visibility. One thing is for sure, though: the Manhattan Project counted seven Nobel laureates while it was operational and, later, a dozen more, but among them only Wigner was from the Martians.
## 4 Conclusion
As inspirational as it is to scan all these names and connections in The Nobel Network, and despite how it makes me truly feel as if I'm "standing on the shoulders of giants," the network has its flaws. Besides the peripheral Eastern Europeans, we see an elite club emerging in the center with the majority of popular names clustered together in the giant component, excluding two-thirds of the network. This suggests that two-thirds of the laureates just walk away with the prize and go back to their work, and only the remaining third engage in visible connections, be it friendships or collaborations. As "The whole is greater than the sum of the parts," missing more than 60 percent of those brilliant minds from the central flow of ideas seems a pity.
Even more missed opportunities arise. The central component itself is split into two camps: science and humanities. This polarization very much goes against today's main direction,
Figure 3: Zoom-in of Figure 1, focusing on the clique in humanities.
interdisciplinary research, which gives us the power to tackle major societal problems never experienced before. Additionally, the network reveals the low number of female laureates. Despite the exceptional history of Marie Curie, only about six percent of laureates were female, most of whom were awarded the Peace Prize (16.5 percent of 109 awarded) and the least of whom earned awards in Physics (1.8 percent of 219 awarded).
Still, all is not lost. Mapping exercises like this one can help reveal these issues, which otherwise are barely visible, even to the most avid Nobel fans. Zooming out and utilizing network science can highlight otherwise hidden patterns and enable understanding, which is the first step in identifying future solutions, be it about gender gaps or elitist cliques.
## 5 Disclaimer
During the process of creating this text, several AI tools were used. In particular, Grammarly was used for corrections and copy editing throughout the text, while ChatGPT 3.5 was used to create the abstract.
|
2309.15176 | Robust Stance Detection: Understanding Public Perceptions in Social
Media | The abundance of social media data has presented opportunities for accurately
determining public and group-specific stances around policy proposals or
controversial topics. In contrast with sentiment analysis which focuses on
identifying prevailing emotions, stance detection identifies precise positions
(i.e., supportive, opposing, neutral) relative to a well-defined topic, such as
perceptions toward specific global health interventions during the COVID-19
pandemic. Traditional stance detection models, while effective within their
specific domain (e.g., attitudes towards masking protocols during COVID-19),
often lag in performance when applied to new domains and topics due to changes
in data distribution. This limitation is compounded by the scarcity of
domain-specific, labeled datasets, which are expensive and labor-intensive to
create. A solution we present in this paper combines counterfactual data
augmentation with contrastive learning to enhance the robustness of stance
detection across domains and topics of interest. We evaluate the performance of
current state-of-the-art stance detection models, including a prompt-optimized
large language model, relative to our proposed framework succinctly called
STANCE-C3 (domain-adaptive Cross-target STANCE detection via Contrastive
learning and Counterfactual generation). Empirical evaluations demonstrate
STANCE-C3's consistent improvements over the baseline models with respect to
accuracy across domains and varying focal topics. Despite the increasing
prevalence of general-purpose models such as generative AI, specialized models
such as STANCE-C3 provide utility in safety-critical domains wherein precision
is highly valued, especially when a nuanced understanding of the concerns of
different population segments could result in crafting more impactful public
policies. | Nayoung Kim, David Mosallanezhad, Lu Cheng, Michelle V. Mancenido, Huan Liu | 2023-09-26T18:19:51Z | http://arxiv.org/abs/2309.15176v2 | STANCE-C\({}^{3}\): Domain-adaptive Cross-target Stance Detection via Contrastive Learning and Counterfactual Generation
###### Abstract
Stance detection is the process of inferring a person's position or standpoint on a specific issue to deduce prevailing perceptions toward topics of general or controversial interest, such as health policies during the COVID-19 pandemic. Existing models for stance detection are trained to perform well for a single domain (e.g., COVID-19) and a specific target topic (e.g., masking protocols), but are generally ineffectual in other domains or targets due to distributional shifts in the data. However, constructing high-performing, domain-specific stance detection models requires an extensive corpus of labeled data relevant to the targeted domain, yet such datasets are not readily available. This poses a challenge as the process of annotating data is costly and time-consuming. To address these challenges, we introduce a novel stance detection model coined domain-adaptive Cross-target STANCE detection via Contrastive learning and Counterfactual generation (STANCE-C\({}^{3}\)) that uses counterfactual data augmentation to enhance domain-adaptive training by enriching the target domain dataset during the training process and requiring significantly less information from the new domain. We also propose a modified self-supervised contrastive learning as a component of STANCE-C\({}^{3}\) to prevent overfitting for the existing domain and target and enable cross-target stance detection. Through experiments on various datasets, we show that STANCE-C\({}^{3}\) shows performance improvement over existing state-of-the-art methods.
## I Introduction
Inferring prevailing public attitudes toward specific personalities, policies, or events, especially those with controversial undertones, is one of the many applications of mining social media data. In natural language processing (NLP), the task of predicting the general public's outlook for specific target keyword(s) is called _stance detection_. The target represents the subject or topic of interest, while the stance reflects the prevailing attitude or position. As an example from the tweet "_Everyone should get the COVID-19 vaccine_", a _positive_ stance is posited toward _COVID-19 vaccine_ as the target. Stance detection has had diverse applications in tasks such as investigating users' opinions on a new product [1], understanding public acceptance of a newly proposed legislation [2], and determining support for public health measures [3].
Stance detection models have been proposed under two constructs, namely, the _single-target_ and _cross-target_ scenarios. The single-target scenario determines users' opinions toward a single, fixed target [4], while cross-target methods extend the model's versatility to address multiple targets [5]. In single-target cases, the training set includes labeled data related to the target, while in cross-target methods, there is lacking or very few labeled data on the target of interest.
In addition, current state-of-the-art (SOTA) stance detection models are primarily focused on approaches for improving the model's performance on a single domain. Previous research has illustrated, however, that these models do not necessarily perform well when used in a domain that is distinct from the domain of the training data set [6]. We hypothesize that this performance degradation is mainly due to the following attributes of current SOTA stance detection models: (1) the tendency for attention bias toward events related to the given target, a phenomenon that we will illustrate in experiments in later sections; (2) the absence of target words in predicting stance during the training phase; and (3) the model's reliance on auxiliary words (i.e., words that are not necessarily the target or stance) specific to a given domain. Additionally, the scarcity of available training data limits the capabilities of stance detection models across multiple domains and targets.
Table I shows an example of a domain-adaptive cross-target stance detection problem. In this example, the two _domains_ differ by hashtags and data acquisition time periods, while the _targets_ are distinguished by the topical content (vaccination vs. mask wearing). Other attributes of the sampling process could account for distinctions among domains and targets, such as data sources (e.g., social media vs. news) or content genre (political vs. gossip news).
In this paper, we propose a cross-domain, cross-target stance detection model called domain-adaptive **C**ross-target **STANCE** detection via **C**ontrastive learning and **C**ounterfactual generation (STANCE-C\({}^{3}\)) to address limitations in existing SOTA models. STANCE-C\({}^{3}\) is a domain- and target-agnostic model that leverages counterfactual data augmentation (CDA) [7], an
\begin{table}
\begin{tabular}{p{142.3pt}|p{142.3pt}} \hline
**Source Target**: COVID-19 vaccination & **Source Domain**: Tweets with target-related keywords (e.g. patient vaccinated) from January 1st, 2020 to August 23rd, 2021 \\ \hline
**Destination Target**: wearing face masks & **Target Domain**: Tweets with target-related hashtags (e.g. \#MasksSaveLives) from February 27th, 2020 to August 20th, 2020 \\ \hline \end{tabular}
\end{table} TABLE I: Examples of domain-adaptive cross-target stance detection
approach that has been shown to decrease bias and improve the robustness of neural network-based models [8]. We combine CDA with text style transfer methods [9] to facilitate effective text transfer from one domain to another. These language model-based data augmentation techniques are expected to enrich existing datasets with information that will improve the performance of stance detection models across multiple domains.
Previous work on stance detection addressed domain robustness by using adversarial training and including the target word as an input to the stance detection model (e.g., concatenating the target word and text as input to the language model). However, this approach resulted in overfitting the model to the specific target, thus limiting its performance robustness for other targets. In STANCE-C\({}^{3}\), we adopt a self-supervised technique called contrastive learning [10] to extract both shared and distinct high-level features that capture the specific characteristics of statement-target pairs. Text features such as word embeddings, Part-of-Speech tags, or \(n\)-grams are commonly used across tasks or datasets, while domain-specific terms, phrases, or concepts are normally used for target topics or subjects. In addition to the existing contrastive loss function, we incorporate additional components such as cosine similarity and negative pair loss to guide the model to minimize the distance between pairs with the same stance labels and maximize the distance for pairs with different stance labels. Because contrastive learning is classified as a self-supervised approach, STANCE-C\({}^{3}\) benefits from requiring fewer manually annotated data points. Additionally, when combined with CDA during the pre-processing stage, it ensures that the training set is optimally constructed for a cross-domain and cross-target setting, i.e., there are sufficient data points exhibiting both similarities and differences, allowing the model to infer salient features that are useful for stance detection.
The major contributions of this work are as follows: (1) we investigate the problem of domain-adaptive cross-target stance detection and propose a novel solution for training a model when data on the target domain and target object (or destination target) are sparse; (2) we demonstrate the utility of contrastive learning as a possible solution for improving a model's robustness across domains; and finally, (3) we provide empirical evidence of STANCE-C\({}^{3}\)'s effectiveness by running experiments and ablation studies on real-world datasets.
## II Problem Statement
Suppose we have two sets of instances \(\mathcal{D}_{\mathcal{S}}\) and \(\mathcal{D}_{\mathcal{T}}\) from source domain \(\mathcal{S}\) and target domain \(\mathcal{T}\), respectively. The set \(\mathcal{D}_{\mathcal{S}}=\left\{\left(r_{\mathcal{S}},t^{o},y_{\mathcal{S}}\right)\right\}^{N}\) contains \(N\) annotated instances from the source domain \(\mathcal{S}\), where each instance \(r_{\mathcal{S}}\) is a text (e.g., a tweet) consisting of a sequence of \(k\) words, and \(y_{\mathcal{S}}\) is the stance toward the target \(t^{o}\) associated with each annotated instance. Another set \(\mathcal{D}_{\mathcal{T}}=\left\{\left(r_{\mathcal{T}},t^{d}\right)\right\}^{N^{\prime}}\) is a set of \(N^{\prime}\) instances from the target domain \(\mathcal{T}\) with the destination target \(t^{d}\). The goal of domain-adaptive cross-target stance detection is to learn a stance classifier \(F\) that uses the features of the source domain \(\mathcal{S}\) and the target words \(t^{o}\) in \(\mathcal{D}_{\mathcal{S}}\) and predicts the stance label \(y_{\mathcal{T}}\) of each text \(r_{\mathcal{T}}\) in \(\mathcal{D}_{\mathcal{T}}\).
**Definition (Domain Adaptive Cross-Target Stance Detection).** Given statements from two separate datasets \(\mathcal{D}_{\mathcal{S}}\) and \(\mathcal{D}_{\mathcal{T}}\), and corresponding targets \(t^{o}\) and \(t^{d}\) from the source \(\mathcal{S}\) and target \(\mathcal{T}\) domains, respectively, learn a domain and target-agnostic text representation using \(\mathcal{D}_{\mathcal{S}}\) and a small portion of labeled \(\mathcal{D}_{\mathcal{T}}\) that can be classified correctly by the stance classifier \(F\).
## III Proposed Model
In this section, we describe our proposed model STANCE-C\({}^{3}\). As shown in Figure 1, the architecture consists of two main components: (1) a domain counterfactual generator that is trained with input representations of the labeled samples from the source and a small portion from the target domain and generates a set of domain counterfactuals. The generated counterfactuals provide knowledge of the target domain during the stance classification training phase. (2) These samples are then combined with the original dataset (i.e., source domain dataset with a small portion of target domain dataset) to train a BERT-based stance classifier using contrastive learning to create a domain-adaptive cross-target stance detection classifier. We split a small number of randomly selected samples from the combined datasets into positive and negative pairs according to their stance regardless of the target. This approach for contrastive loss enables us to improve the stance detection method's generalization to unseen targets. In the following subsections, we explain the two stages in detail.
### _Domain-adaptive Single-target_
One major challenge in domain-adaptive stance detection is that collecting samples for a target domain can be expensive and usually requires human annotation. Even with annotated data, modeling domain-adaptive stance detection is difficult due to the complexity of datasets that contain many distinct features across different domains [11]. These features often form spurious correlations within a specific domain and make models brittle to domain shift. For example, the presence of question marks in a specific stance or a highly negative sentiment associated with a particular stance leads to a decrease in performance [6].
To address the aforementioned challenges and control these variations in the domain-adaptive stance detection problem, we adapt counterfactual concepts to the source domain. A domain-counterfactual example is defined as the text that results from intervening on the domain variable of the original example and converting it to the target domain while keeping other features (e.g., the overall structure of the sentence) fixed. Given an example from the COVID-19 vaccination domain (source domain), the generator first recognizes the terms related to the domain. Then, it intervenes on these terms, replacing them with text that links the example to the wearing face masks domain (target domain) while keeping the stance. We utilize source domain instances in combination with a small
portion of target domain instances to incorporate target domain information in generating the counterfactuals specific to the target domain. These counterfactuals are then combined as input for the subsequent stage. Following the work of [12], we adopt a T5-based language model (LM) to generate counterfactuals for a given target domain \(\mathcal{T}\). In this approach, we train the T5-based LM to convert samples from source domain \(\mathcal{S}\) to a given target domain \(\mathcal{T}\).
Generating domain counterfactuals consists of two main steps - domain corruption and reconstruction. In the first step of domain corruption, we get the masking score of all n-grams \(w\) (\(n\in\{1,2,3\}\)) in a dataset \(\mathcal{D}_{\mathcal{S}}\) following [12]. The affinity of \(w\) to domain \(\mathcal{S}\) is defined as \(\rho(w,\mathcal{S})=P(\mathcal{S}|w)(1-\frac{H(\mathcal{S}|w)}{\log N})\), where \(H(\mathcal{S}|w)\) is the entropy of \(\mathcal{S}\) given each n-gram \(w\) and \(N\) is the number of unlabeled domains we know. The final masking score given source domain \(\mathcal{S}\) and target domain \(\mathcal{T}\) is \(mask(w,\mathcal{S},\mathcal{T})=\rho(w,\mathcal{S})-\rho(w,\mathcal{T})\). Here, a higher score implies the n-gram is highly related to the source domain and distant from the target domain.
The next step is to reconstruct the masked examples \(M(x)\) by concatenating them with a trainable embedding matrix, domain orientation vectors. The matrix is initialized with the embedding vector of each domain name and domain representative words. The concatenated matrix is trained with an encoder-decoder T5 architecture. Note that \(\mathcal{T}=\mathcal{S}\) and \(mask(w,\mathcal{S},\mathcal{T})=0\) during the training process. Given the target domain and its orientation vectors in the test phase, the trained model generates domain counterfactual \(x^{\prime}\).
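A sketch of the masking score is given below (an illustrative reimplementation of the quantities defined above, not the released code); `ngram_domain_counts` is an assumed precomputed mapping from each n-gram to its occurrence counts per domain.

```python
import math
from collections import Counter

def affinity(w: str, domain: str, ngram_domain_counts: dict, num_domains: int) -> float:
    """rho(w, S) = P(S | w) * (1 - H(S | w) / log N)."""
    counts: Counter = ngram_domain_counts[w]
    total = sum(counts.values())
    p_domain_given_w = counts.get(domain, 0) / total
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values() if c > 0)
    return p_domain_given_w * (1.0 - entropy / math.log(num_domains))

def masking_score(w: str, source: str, target: str, ngram_domain_counts: dict, num_domains: int) -> float:
    """mask(w, S, T) = rho(w, S) - rho(w, T); high scores mark source-specific n-grams to mask."""
    return (affinity(w, source, ngram_domain_counts, num_domains)
            - affinity(w, target, ngram_domain_counts, num_domains))
```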
### _Domain-adaptive Cross-target_
One of the main challenges in detecting stances across different targets is that the distribution of data varies for each target, even within the same domain. For instance, the word _WHO_ may appear more frequently in the domain where the target is _COVID-19 vaccination_ than in another domain where the target is _Donald Trump_ [13]. The frequent co-occurrence of specific target words with particular instances biases the model during learning. Therefore, it is necessary to identify an effective way to model transferable knowledge. To learn target-invariant features in instances from both the origin \(t^{o}\) and destination \(t^{d}\) targets, we use a simple yet effective supervised contrastive learning approach.
Contrastive learning has been widely used for many applications in Computer Vision, such as identity recognition tasks. The goal of contrastive learning is to reduce the distance between an anchor and a positive pair and to increase the distance between an anchor and a negative pair. Here, we use the supervised contrastive loss [14], which applies a modified version of contrastive learning to supervised learning settings. The supervised contrastive loss enhances the separability of different classes by maximizing the distance between the anchor and the negative samples (pushing away samples from different classes) while minimizing the distance between the anchor and the positive samples (pulling together data points from the same class).
In our problem setting, considering samples with different stance targets \(t^{o}\) and \(t^{d}\), the goal of the contrastive learning approach is to reduce the distance between samples with the same stance label and to increase the distance between samples with different labels. For example, we aim to maximize the distance between pairs of representations obtained from tweets that exhibit different stances towards the same target. Note that contrastive learning is applied to all pairs, regardless of whether the targets in the pair are equivalent or not. This approach creates a representation for the stance classifier that indicates the relation between the statement and the stance target \(t\). We use the following modified supervised contrastive learning method in the spirit of [14]:
\[\begin{split}\ell_{\mathrm{cont}}=&\sum_{i\in I}\frac{-1}{|P(i)|}\sum_{p\in P(i)}\log\frac{\exp(z_{i}\cdot z_{p}/\tau)\cdot sim(z_{i},z_{p})}{\sum_{a\in A(i)}\exp(z_{i}\cdot z_{a}/\tau)}\\ &+\sum_{i\in I}\frac{1}{|N(i)|}\sum_{n\in N(i)}\log\frac{\exp(z_{i}\cdot z_{n}/\tau)\cdot sim(z_{i},z_{n})}{\sum_{a\in A(i)}\exp(z_{i}\cdot z_{a}/\tau)}\end{split}\]
Fig. 1: The proposed STANCE-C\({}^{3}\) architecture consists of two main components: counterfactual data generation, and contrastive learning networks. First, the counterfactual data generation network aims to enhance generalization by augmenting the dataset. In this process, tokens are represented as numeric vectors and a T5-based language model is trained to generate examples with the same sentence structure but different semantic context (i.e. domain). These generated examples are referred to as domain counterfactuals. Then using the augmented data, the contrastive learning network learns a cross-target representation that can be used for domain-adaptive cross-target stance detection.
The loss treats each sample \(i\in I\) in a training batch as an "anchor", mapped to a vector \(z_{i}\). Each anchor admits multiple positive vectors \(z_{p}\) that share its label (\(y_{p}=y_{i}\)) as well as multiple negative vectors \(z_{n}\) (\(z_{n}\in A(i)\), \(y_{n}\neq y_{i}\)). \(P(i)\) and \(N(i)\) denote the index sets of all positive and negative vectors, respectively. \(\tau\) is the temperature parameter that controls the strength of instance discrimination. The notation \(sim(\cdot)\) refers to the cosine similarity. Here, we introduce the second term to balance the weight of positive and negative pairs. In experiments with different variants of the modified contrastive loss, we observed that weighting the original loss with text similarity improves the model's generalization ability across different targets. Note that the anchor \(z_{i}\) is from target \(t^{o}\) but \(z_{p}\) and \(z_{n}\) are from both \(t^{d}\) and \(t^{o}\) for target-agnostic feature learning.
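For concreteness, a minimal PyTorch sketch of this similarity-weighted loss is given below; the tensor names, the clamp on the cosine similarity (which keeps the argument of the logarithm positive), and the use of `logsumexp` for numerical stability are our own implementation choices rather than details specified above.

```python
import torch
import torch.nn.functional as F

def modified_supcon_loss(z, labels, tau=0.08):
    """Similarity-weighted supervised contrastive loss sketched from the equation above.
    z: (B, d) sentence representations (anchors drawn from both targets t^o and t^d),
    labels: (B,) integer stance labels; samples sharing a label act as positives."""
    B = z.size(0)
    zn = F.normalize(z, dim=1)
    cos = zn @ zn.T                                   # sim(z_i, z_j): cosine similarity
    dot = (z @ z.T) / tau                             # z_i . z_j / tau
    self_mask = torch.eye(B, dtype=torch.bool, device=z.device)
    # log of the shared denominator: sum over a in A(i), i.e. every sample except the anchor
    log_denom = torch.logsumexp(dot.masked_fill(self_mask, float('-inf')), dim=1, keepdim=True)
    # log[ exp(z_i . z_j / tau) * sim(z_i, z_j) / denominator ]
    log_term = dot + torch.log(cos.clamp(min=1e-8)) - log_denom
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask   # pairs in P(i)
    neg = labels.unsqueeze(0) != labels.unsqueeze(1)                  # pairs in N(i)
    pos_loss = -(log_term * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    neg_loss = (log_term * neg).sum(dim=1) / neg.sum(dim=1).clamp(min=1)
    return (pos_loss + neg_loss).sum()
```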
**Optimization Algorithm** The training process of STANCE-C\({}^{3}\) has two stages\({}^{1}\). During the first stage, the parameters of the T5-based LM are trained for conversion between the source domain \(\mathcal{S}\) and the target domain \(\mathcal{T}\), following the work of Calderon et al. [12]. This stage enriches the input dataset to include more samples similar to the target domain. The next stage uses the contrastive learning approach to create a cross-target sentence representation. During the contrastive learning process, we add the loss \(\ell_{\mathrm{cont}}\) to the classification loss for cross-target stance detection. For each instance \(x_{i}\), the overall loss is formulated as \(\ell_{\mathrm{total}}=\lambda\ell_{\mathrm{cont}}+(1-\lambda)\ell_{\mathrm{CE}}\) where \(\ell_{\mathrm{CE}}\) stands for the cross-entropy loss for classification and \(\lambda\) is the weighting factor. The cross-entropy loss is calculated as \(CE=-\sum_{i=1}^{k}y_{i}\log(p_{i})\) for \(k\) classes based on the true stance label \(y\) and predicted stance probability \(p\) for the \(i^{th}\) class. Similar to [14], we set the temperature parameter \(\tau=0.08\).
Footnote 1: Source code and datasets will become publicly available upon acceptance.
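As a schematic sketch (not the released implementation), the second-stage objective described above can be assembled as follows, reusing `modified_supcon_loss` from the previous snippet; `logits` and `z` are illustrative names for the classifier outputs and the pooled sentence representations.

```python
def stance_c3_objective(logits, z, labels, lam=0.5, tau=0.08):
    """Overall second-stage loss: l_total = lambda * l_cont + (1 - lambda) * l_CE."""
    l_ce = F.cross_entropy(logits, labels)             # classification loss
    l_cont = modified_supcon_loss(z, labels, tau=tau)  # cross-target contrastive loss
    return lam * l_cont + (1.0 - lam) * l_ce
```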
## IV Experiments
The experiments conducted in this study investigate how changing the domains and targets affect the performance of several stance detection models, including STANCE-C3. We also examine the effects of data augmentation and contrastive learning on performance accuracy in cross-domain and cross-target scenarios. More specifically, we aim to answer the following research questions: **Q1.** Does STANCE-C3 achieve better performance in comparison to other SOTA when trained on source domain \(\mathcal{S}\) with some data from target domain \(\mathcal{T}\), and tested on target domain \(\mathcal{T}\) for target words in the source domain \(\mathcal{S}\)? **Q2.** Does STANCE-C3 achieve better performance in comparison to other SOTA when trained on a source domain \(\mathcal{S}\) and tested on a target domain \(\mathcal{T}\) for target words not in the source domain? **Q3.** What is the effect of changes in the model's parameters and components on its performance in the stance detection task? We consider the following scenarios in the experimental design: (1) single domain + single target, (2) cross-domain + single target, and (3) cross-domain + cross-target, where the targets and domains of the datasets have clear distributional differences. We address **Q1** by training and testing the baseline and proposed models under the first scenario where both domain and target are the same for training and testing sets. For **Q2**, the models are evaluated under scenarios (2) and (3) to compare and contrast the models' domain-adaptive and cross-target performance. Finally, we address **Q3** by performing ablation study and parameter analysis on datasets with different stance targets.
### _Baselines_
We consider the following baselines. Table II shows the difference between the baselines and the proposed approach. Note that we implemented the baselines listed in the table for both cross-domain and cross-target tasks, regardless of the tasks they originally target. This choice stems from the unique nature of the task, which presents limitations in terms of available baselines. **MoLE [15]:** Utilizes a mixture-of-experts model in which each expert represents one domain and produces probabilities for all the target labels. It then uses domain-adversarial training to learn domain-invariant representations.
**MTL [11]:** A benchmarking framework for evaluating the robustness of stance detection systems. The proposed method leverages a diverse set of challenging datasets with varying levels of noise, bias, and adversarial attacks to evaluate the performance and robustness of stance detection systems. For each dataset, this model adds a domain-specific projection layer to the final layer of a pre-trained language model and freezes other layers during training.
**RLFND [16]:** A domain-adaptive reinforcement learning framework for fake news detection. The proposed method leverages a deep reinforcement learning algorithm to learn the optimal policy for fake news detection while adapting to the domain shift between the source and target domains.
**MTSD [17]:** A multi-task learning framework that performs stance detection on multiple targets simultaneously. The framework leverages a shared encoder to capture the common features of the text and a task-specific decoder to predict the stance toward each target.
We compare our approach to the existing state-of-the-art approaches using accuracy and AUC metrics.
### _Datasets_
We use two representative datasets related to COVID-19 with different domains and targets (see Table III). The large-scale Twitter dataset CoVaxNet [18] was used for both cross-domain and cross-target training. CoVaxNet was acquired over the
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Model** & **Cross-Domain** & **Cross-Target** \\ \hline MoLE [15] & & \\ MTL [11] & & \\ REAL-FND [16] & & \\ MTSD [17] & & \\ \hline STANCE-C3 & & \\ \hline \hline \end{tabular}
\end{table} TABLE II: Baselines targeted goals indicate the difference between the proposed approach STANCE-C3 and the baselines.
pandemic period (Jan 1st, 2020 - Dec 31st, 2021) and labeled with stances toward "COVID-19 vaccination". For testing the performance of STANCE-C3 in a single-target setting, the _domains_ are defined as separate time periods marked by a changepoint event. We divided CoVaxNet into two time periods with the changepoint event occurring on August 23, 2021, when the US Food and Drug Administration (FDA) fully approved the Pfizer-BioNTech vaccine. Thus, the data for the source and target domains are 10,000 randomly selected tweets from Jan 1st, 2020 to August 22, 2021 ("CoVaxNet_pre"; source domain) and 10,000 tweets from August 23, 2021 to December 31st, 2021 ("CoVaxNet_post"; target domain). For the cross-target scenario, CoVaxNet_pre and CoVaxNet_post were separately used as training sets for STANCE-C3. Model performance was tested on targets ("Anthony S. Fauci, M.D.", "Wearing a Face Mask", "Keeping Schools Closed", "Stay at Home Orders") from the COVID-19-Stance [19] dataset. COVID-19-Stance was collected in the period (February 27th, 2020 - August 20th, 2020) with different targets and keywords of interest (e.g., #lockdown), thus justifying an adequate distributional shift between the source and target. It should be noted that we treated the congruent labels from the two datasets as equivalent i.e., _pro_ is equivalent to _favor_ and _anti_ is the same as _against_. In the context of stance detection, the words in these pairs essentially convey the same stance.
### _Evaluation and Results_
Table IV and Table V summarize the comparative performance of the baseline and proposed models with respect to the accuracy and AUC (in parentheses).
**Q1:** In this experiment, the proposed approach and the domain-adaptive baselines used a fixed portion (30%) of the target domain data during counterfactual data generation. The models were trained on a single domain and tested on both source and target domains for stance detection for a single target of interest from the source domain. Our results provide some evidence that even when the target of interest is contained in the source domain, performance in the target domain generally degrades for all models. The _Performance Degradation_ in the table highlights how a model's performance changes when shifting from the source domain to the target domain. We observe that all of the models exhibit performance degradation which confirms the difference between domains.
**Q2:** The results of our experiments on the cross-domain cross-target setting, as presented in Table V, demonstrate the performance of our approach and the baselines. In this setting, we augment the training set of baselines with a small portion (30%) of data from the target domain. The results of our experiments suggest that the proposed approach is more effective than the domain-adaptive baselines. Specifically, our approach surpasses all baselines in terms of average accuracy and AUC score, underscoring its suitability for the cross-domain cross-target context. Furthermore, these results underscore the robustness of our approach to variations in the target domain, a strength attributable to our counterfactual data generation technique which is tailored to produce data that better represents the target domain.
**Parameter Analysis** To answer Q3, we analyze the effect of hyperparameters on the performance of STANCE-C3. We show the effect of crucial training parameters \(\lambda\) and the target domain portion \(\gamma\) in the training set. To demonstrate the effect of contrastive loss and the balance between contrastive and cross-entropy loss, we vary the \(\lambda\) value in the range of \(\{0.0,0.25,0.5,0.75,1.0\}\). As illustrated in Figure 2-a, the AUC score varies for different \(\lambda\) values. The results indicate that using \(\lambda=0.5\) yields the best performance.
Next, we investigate the effect of the target domain data portion \(\gamma\) used for counterfactual data generation. We vary the \(\gamma\) value within the range of \(\{5\%,15\%,30\%,45\%\}\). The results, as depicted in Figure 2-b, show that adding more target domain data leads to better performance. However, in real-world scenarios, access to a large amount of data in the target domain is often limited. Furthermore, there is no significant performance difference between \(\gamma=30\%\) and \(\gamma=45\%\). Therefore, we conclude that using \(30\%\) of the target domain data enables STANCE-C3 to achieve an acceptable performance.
**Ablation Study** To demonstrate the impact of each component of STANCE-C\({}^{3}\) on the performance, we present ablations for cross-domain cross-target settings. We design different variants to show the effect of each component: (1) **STANCE-C\({}^{3}_{\mathrm{NCL}}\)**, where we remove the contrastive loss component by setting \(\lambda=1.0\), (2) **STANCE-C\({}^{3}_{\mathrm{CCF}}\)**, where we remove the counterfactual data generation component and only use \(30\%\) of target domain data (\(\gamma=30\%\)) in the training set, and (3) **STANCE-C\({}^{3}_{\mathrm{NCS}}\)**, where we replace the modified contrastive loss with a simple contrastive loss based on the triplet loss. The performance when the contrastive loss and domain counterfactual examples are removed from the training process is shown in Figure 2-c. This figure indicates that these components help STANCE-C\({}^{3}\) achieve higher performance in a domain-adaptive cross-target setting. Moreover, we included the performance when we replaced our modified contrastive loss with the simple contrastive loss. Figure 2-c shows that using our modified contrastive loss is beneficial for this task.
## V Related Work
STANCE-C3 is proposed in this paper to improve existing models for stance detection and robustness across domains. In this section, we review current NLP SOTA models for stance detection and domain adaptation.
### _Stance Detection_
There are two general categories of distinctions among stance detection models. Firstly, models are distinguished on
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline \hline
**Dataset** & **Target** & **Source** & **Labels** & **Samples** \\ \hline CoVaxNet & COVID-19 Vaccination & Twitter & Pro, Anti & 1,831,220 \\ \hline COVID-19-Stance & Face masks, Fauci, School closures, Stay at home orders & Twitter & Favor, Against, None & 7,122 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Datasets’ statistics.
how stance features are modeled. _Content-level_ approaches use linguistic features such as topics [20] or targets [21] alongside sentiment information [22]. _User-level_ approaches focus on user-related attributes interactions [23], information [24], and timelines [25]. Hybrid models combine content- and user-level features for more comprehensive representations [26]. In this paper, we focus on content-level modeling with target embedding using BERT due to its practicality in data acquisition.
Secondly, stance detection models differ with respect to the specificity of the target of interest. While many previous works focus on specific single-target scenarios [27], recent studies have considered cross-target [21], multi-target [17], and few-shot or zero-shot stance detection [28]. While both cross-target and zero-shot stance detection models are trained on one or more source targets and used on previously unseen targets [29], zero-shot methods are able to predict stances for unseen targets that are not necessarily related to the target in the training set [30]. Both attempt to extract transferable knowledge from the source using methods such as hierarchical contrastive learning [31]. STANCE-C\({}^{3}\) uses a similar but modified approach to the contrastive loss during training.
### _Domain Adaptation_
As a category of transfer learning, domain adaptation (DA) leverages domain-agnostic or common information between domains while the target task remains constant [32]. There are several DA setups, including unsupervised domain adaptation (UDA) where both labeled source data and unlabeled target data are available [33], semi-supervised domain adaptation (SSDA) where labeled source data and a small number of labeled target data are accessible [34], and supervised domain adaptation (SDA) where both labeled source and target data are available during training [35]. Our focus is on SSDA, where a smaller set of labeled target data is used in the training process.
Domain adaptive models are categorized by their focus on the model, training data, or both. Model-centric approaches aim to construct domain-agnostic model structures, such as in Deng et al. [36]. Data-centric methods enhance robustness by leveraging data attributes using techniques such as the incorporation of loss functions based on inter-domain distances [37] or the employment of pre-training [38]. Finally, hybrid approaches combine model- and data-centric approaches [15]. Many of these DA methods grapple with spurious correlations from differing training and test dataset distributions, thus limiting their effectiveness. Our hybrid approach addresses this by using domain counterfactual examples, which minimizes the impact of domain-specific features in cross-domain classification.
## VI Conclusion
The robustness of NLP models across domains and multiple targets of interest remains an open area of work in stance detection tasks. In this paper, we proposed a stance detection model STANCE-C\({}^{3}\) that exhibited better performance than existing SOTA in cross-domain, multi-target scenarios. By conducting experiments using datasets from the COVID pandemic, we showed that cross-domain and cross-target performance generally degrades for all models. However, in comparison to models such as MoLE and MTL, the cross-domain performance degradation of STANCE-C\({}^{3}\) is significantly lower, providing
\begin{table}
\begin{tabular}{c|l|c|c|c|c|c|c} \hline
**Source Domain** & **Target Domain** & **BERT** & **MoLE** & **MTL** & **MTSD** & **RLFND** & **STANCE-C\({}^{3}\)** \\ \hline \multirow{4}{*}{CoVaxNet\_pre} & Face Masks & 0.623 (0.568) & 0.732 (0.711) & 0.741 (0.712) & 0.789 (0.746) & 0.742 (0.625) & **0.868 (0.823)** \\ & Fauci & 0.741 (0.706) & 0.725 (**0.719**) & 0.691 (0.687) & 0.732 (0.698) & 0.699 (0.625) & **0.743** (0.714) \\ & School Closures & 0.832 (0.627) & 0.778 (**0.761**) & 0.711 (0.651) & 0.872 (0.731) & 0.834 (0.804) & **0.920** (0.734) \\ & Stay at Home & 0.641 (0.625) & 0.734 (0.738) & 0.621 (0.613) & **0.738** (0.732) & 0.665 (0.658) & 0.736 (**0.742)** \\ \hline \multicolumn{2}{c|}{**Average Performance**} & 0.709 (0.631) & 0.742 (0.732) & 0.691 (0.665) & 0.782 (0.726) & 0.735 (0.678) & **0.816 (0.753)** \\ \hline \hline \multirow{4}{*}{CoVaxNet\_post} & Face Masks & 0.570 (0.555) & 0.801 (**0.786**) & 0.698 (0.676) & 0.794 (0.754) & 0.734 (0.712) & **0.819** (0.769) \\ & Fauci & 0.548 (0.624) & 0.721 (0.740) & 0.709 (0.683) & **0.763 (**0.778**) & 0.713 (0.694) & 0.743 (0.714) \\ \cline{1-1} & School Closures & 0.578 (0.543) & 0.718 (0.743) & 0.621 (0.620) & 0.687 (0.701) & 0.694 (0.675) & **0.739 (0.772)** \\ \cline{1-1} & Stay at Home & 0.623 (0.637) & 0.672 (0.691) & 0.669 (0.681) & 0.699 (0.704) & 0.675 (0.662) & **0.725 (0.727)** \\ \hline \multicolumn{2}{c|}{**Average Performance**} & 0.579 (0.589) & 0.728 (0.740) & 0.674 (0.665) & 0.735 (0.734) & 0.704 (0.685) & **0.756 (0.745)** \\ \hline \end{tabular}
\end{table} TABLE V: Cross-domain, cross-target performance results. In this experiment, we evaluate the model’s performance on a dataset that has a different target word in comparison to the source domain’s target (p-value < 0.05 for all McNemar’s tests).
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline
**Source Domain** & **Target Domain** & **BERT** & **MoLE** & **MTL** & **MTSD** & **RLFND** & **STANCE-C\({}^{3}\)** \\ \hline \multirow{2}{*}{CoVaxNet\_pre} & CoVaxNet\_post & 0.713 (0.697) & 0.646 (0.621) & 0.734 (0.736) & 0.734 (0.731) & 0.752 (0.747) & **0.765 (0.753)** \\ & CoVaxNet\_pre & 0.762 (0.727) & **0.901 (0.876)** & 0.823 (0.789) & 0.845 (0.831) & 0.843 (0.835) & 0.863 (0.858) \\ \hline \multicolumn{2}{c|}{**Performance Degradation**} & **0.049 (0.030)** & 0.255 (0.255) & 0.089 (0.053) & 0.111 (0.100) & 0.091 (0.088) & 0.098 (0.105) \\ \hline \hline \multirow{2}{*}{CoVaxNet\_post} & CoVaxNet\_post & 0.826 (0.796) & 0.821 (0.835) & 0.811 (0.823) & **0.841 (0.839)** & 0.812 (0.784) & 0.831 (0.837) \\ & CoVaxNet\_pre & 0.698 (0.655) & 0.713 (0.702) & 0.623 (0.611) & 0.712 (0.692) & 0.674 (0.632) & **0.723 (0.723)** \\ \hline \multicolumn{2}{c|}{**Performance Degradation**} & 0.128 (0.141) & 0.108 (0.133) & 0.188 (0.212) & 0.129 (0.147) & 0.138 (0.152) & **0.108 (0.114)** \\ \hline \end{tabular}
\end{table} TABLE IV: Cross-domain, single-target performance results. In this experiment we only use the datasets that have the same target word (p-value < 0.05 for all McNemar’s tests).
some evidence of its robustness across domains. Additionally, STANCE-C\({}^{3}\) generally outperforms the SOTA for multiple, cross-domain targets. Ablation studies on STANCE-C\({}^{3}\)'s novel components, namely the contrastive loss function and the use of domain counterfactuals for data augmentation, illustrate the marginal impacts of these components on performance and explain why STANCE-C\({}^{3}\) performed better than its SOTA counterparts. Future work includes extending the proposed training strategy to parameter-efficient methods on large language models, along with application to zero-shot stance detection tasks and other benchmark datasets.
## Acknowledgments
This material is based on work supported by the U. S. Department of Homeland Security under Grant Award Number 17STQAC00001-05-00\({}^{2}\).
|
2309.14268 | A geometric formulation of Schaefer's theory of Cosserat solids | The Cosserat solid is a theoretical model of a continuum whose elementary
constituents are notional rigid bodies. Here we present a formulation of the
mechanics of a Cosserat solid in the language of modern differential geometry
and exterior calculus, motivated by Schaefer's "motor field" theory. The solid
is modelled as a principal fibre bundle and configurations are related by
translations and rotations of each constituent. This kinematic property is
described in a coordinate-independent manner by a bundle map. Configurations
are equivalent if this bundle map is a global Euclidean isometry. Inequivalent
configurations, representing deformations of the solid, are characterised by
the local structure of the bundle map. Using Cartan's magic formula we show
that the strain associated with infinitesimal deformations is the Lie
derivative of a connection one-form on the bundle, revealing it to be a Lie
algebra-valued one-form. Extending Schaefer's theory, we derive the finite
strain by integrating the infinitesimal strain along a prescribed path. This is
path independent when the curvature of the connection one-form is zero. Path
dependence signals the presence of topological defects and the non-zero
curvature is then recognised as the density of topological defects. Mechanical
stresses are defined by a virtual work principle in which the Lie
algebra-valued strain one-form is paired with a dual Lie algebra-valued stress
two-form to yield a scalar work volume form. The d'Alembert principle for the
work form provides the balance laws, which is shown to be integrable for a
hyperelastic Cosserat solid. The breakdown of integrability, relevant to active
oriented solids, is briefly examined. Our work elucidates the geometric
structure of Cosserat solids, aids in constitutive modelling of active oriented
materials, and suggests structure-preserving integration schemes. | Balázs Németh, Ronojoy Adhikari | 2023-09-25T16:29:11Z | http://arxiv.org/abs/2309.14268v1 | # A geometric formulation of Schaefer's theory of Cosserat solids
###### Abstract
The Cosserat solid is a theoretical model of a continuum whose elementary constituents are notional rigid bodies, having both positional and orientational degrees of freedom. Cosserat elasticity has had a revival of interest stemming from applications in soft robotics and active soft matter. Here we present a formulation of the mechanics of a Cosserat solid in the language of modern differential geometry and exterior calculus, motivated by Schaefer's "motor field" theory. The solid is modelled as a principal fibre bundle, with the base labelling the positions of the constituents and the fibre accommodating their orientations. Configurations of the solid are related by translations and rotations of each constituent. This essential kinematic property is described in a coordinate-independent manner using a bundle map. Configurations are equivalent if this bundle map is a global isometry of Euclidean space. Inequivalent configurations, representing deformations of the solid, are characterised by the local structure of the bundle map. Using Cartan's magic formula we show that the strain associated with infinitesimal deformations is the Lie derivative of a connection one-form on the bundle. The classical infinitesimal strain is thus revealed to be a Lie algebra-valued one-form. Extending Schaefer's theory, we derive the strain associated with finite deformations by integrating the infinitesimal strain along a prescribed path. This is path independent when the curvature of the connection one-form is zero. Path dependence signals the presence of topological defects and the non-zero curvature is then recognised as the density of topological defects. Mechanical stresses are defined by a virtual work principle in which the Lie algebra-valued strain one-form is paired with a dual Lie algebra-valued stress two-form to yield a scalar work volume form. The d'Alembert principle for the work form provides the balance laws, which we obtain in the limit of vanishing inertia. The work form is shown to be integrable for a hyperelastic Cosserat solid and the breakdown of integrability, relevant to active oriented solids, is briefly examined. Our work elucidates the geometric structure of Schaefer's theory of the Cosserat solid, aids in constitutive modelling of active oriented materials, and suggests structure-preserving integration schemes for numerical simulation.
## I Introduction
The Cosserat solid is a model of a continuum in which the material constituent has both positional and orientational degrees of freedom. It was presented by E. and F. Cosserat in their 1909 opus _"Théorie des Corps déformables"_. The Cosserats' approach to elasticity lay neglected for half a century following publication before it was revisited in the middle of the twentieth century in the context of microstructured materials by Ericksen and Truesdell [1], Toupin [2], Mindlin [3], Schaefer [4] and Kröner [5]. We refer the reader to the lucid historical reviews of Schaefer [6] and Eringen [7].
Cosserat elasticity makes an appearance in effective descriptions of rods and shells undergoing large deformations [8; 9; 10]. Orientational degrees of freedom emerge when a dimensional reduction is employed to represent the deformation of the body as a combination of translations and rotations of its rigid cross-sections. Cosserat rod and shell theories have been greatly successful in modelling deformations of soft and slender structures, including biological filaments [11], membranes, and active metabeams [12]. The underlying theoretical principles of the Cosserats have been generalised in many directions [13]. Recent examples include extensions of general relativity [14], fracton gauge theories [15] and topological defects in complex media [16; 17]. Cosserat elasticity was an inspiration for Elie Cartan in his approach to differential geometry [18].
Despite its theoretical interest, the experimental validity of the Cosserat solid remained in question for many years. Recent advances in 3D printing techniques have made it possible to construct metamaterials whose material response is consistent with Cosserat elasticity [19; 20; 21; 22]. Active soft matter provides many experimental systems which contain both translational and rotational degrees of freedom, and Cosserat elasticity, with suitably chosen constitutive assumptions [23; 24], may be used to model them. The additional rotational degrees of freedom in Cosserat theory allow for the long-wavelength description of chiral materials, whether active or passive. Such materials cannot, _a priori,_ be described by Cauchy elasticity in which only translational degrees of freedom are retained.
Given the theoretical and experimental importance of the Cosserat solid, it is surprising that the mathematical underpinnings of the theory have remained veiled. While the Cosserats used a most cumbersome notation that is hard to fathom for the modern reader, the majority of the following treatments relied on local coordinates and Cartesian tensors. This has led to a profusion of notation and confusion even over the fundamental strain measures
[25]. The pioneering work of Schaefer [4] stands out as a notable exception in being one of the earliest treatments that attempted to extract the geometric content of the Cosserat theory and express it in the language of differential forms. Focussing on infinitesimal deformations, Schaefer absorbed the infinitesimal displacement and rotation fields of a Cosserat solid into "motor" fields. The "motors" were taken to be elements of a six-dimensional vector space of infinitesimal translations and rotations, and had been introduced by von Mises in his treatment of rigid body mechanics [26]. Strain was defined as a differential one-form measuring the deviation of the motor field from an infinitesimal rigid transformation, the latter defining a parallelism and a covariant derivative. Schaefer recognized the importance of duality between forces and velocities: he defined stress as a differential two-form taking values in the dual space of motors. This was motivated by the observation that forces should generally be thought of as covectors taking values in the vector space dual to velocities, with the duality pairing giving the rate of power expended by the force [27; 28; 29]. For the Cosserat solid, the pairing of stress with a velocity field (represented by a motor field) yields a scalar-valued two-form that can be integrated on the boundary of the solid giving the rate of work done by stresses. Balance laws and topological defects in Cosserat media were discussed within the framework provided by the motor calculus, i.e. the combination of the algebra of motors and the calculus of differential forms.
Schaefer's approach, despite its originality, economy and clarity, has not been widely adopted, probably owing to the fact that motor calculus is much less known and appreciated in the continuum mechanics and soft matter community than tensor calculus. Further, his derivations are often based on heuristic arguments and analogies, rather than established mathematical constructions. To remedy this, we present Schaefer's theory of the Cosserat solid in the language of modern differential geometry. The key mathematical concepts that appear are Lie groups and their algebras, principal bundles, and the differential calculus that results from combining these structures. From this perspective, a nonlinear theory of the Cosserat solid, absent in Schaefer's theory, becomes available and linearisation in Schaefer's theory is elucidated.
We provide a brief survey of our formulation before presenting the details below. Fibre bundles are manifolds which have a local product structure: any point of the bundle has a neighbourhood which looks like a product manifold of the form \(U\times F\), with \(U\) being an open subset of the base \(B\) of the bundle and another manifold \(F\), called a typical fibre of the bundle [28]. Two especially important classes of fibre bundles are vector bundles (when \(F\) is a vector space) and principal bundles (when \(F\) is a Lie group). In continuum mechanics, fibre bundles can naturally model media whose material particles possess a complex inner structure: the base manifold \(B\) represents the material particles and the fibre \(E\) is the collection of all possible configurations of the microstructure [30]. They are also the underlying mathematical model in most field theories of soft condensed matter physics: the base \(B\) is the ambient space while the fibre \(F\) is an order parameter manifold corresponding to some broken symmetry [31]. In the case of a Cosserat solid, the fibre bundle modelling the body is a principal bundle \(P\)[32; 33; 34; 35] (as \(F\) can be identified with the orthogonal group) and therefore has a much richer structure than a general fibre bundle. We will show that Schaefer's space of motor fields is in fact a vector bundle associated to \(P\), with the typical vector space being the Lie algebra of the Euclidean group, thus giving a precise meaning to motors. Invariance under rigid transformations will lead us to investigate the Maurer-Cartan form on the Euclidean group and view it as a Cartan connection \(\omega\) on \(P\)[36]. It will turn out that strain is the result of any changes in this connection form along a deformation - in particular, Schaefer's covariant derivative on the bundle of motor fields is induced by \(\omega\). Topological defects will be related to the curvature of the connection. Stress as a differential 2-form taking values in the dual bundle of motors will be introduced via a duality argument outlined above (for similar formulations of classical elasticity in terms of vector bundle valued differential forms, see [27; 29]). Exploiting this duality, equations of motion will be derived from a virtual work principle. Since in the future we intend to apply our theory to active solids which are generally overdamped [37; 38; 39], we will not consider inertial forces.
A remarkable property of our formulation is that it only relies on the existence of a connection and does not directly use the metric of Euclidean space. This is a manifestation of the hierarchy of geometric structures: a Riemannian metric in differential geometry is a high-level structure because it automatically gives rise to a unique connection through the Levi-Civita construction as well as providing an isomorphism between tangent and cotangent spaces and a distinguished volume form for orientable manifolds. It is interesting that a more low-level structure, namely a connection (which roughly speaking provides a way to parallel transport vectors along curves) is sufficient to describe deformations in these complex media.
The article is organized as follows. In Section II, the necessary preliminaries are given about the representation of Cosserat solids by principal bundles. In Section III the theory of strain is outlined while in Section IV compatibility conditions are discussed. In Section V stress is introduced and balance laws are derived from a virtual work principle. Finally, in Section VI constitutive laws are touched upon and conclusions are drawn in Section VII. We point the reader to the comprehensive textbook [28], which has an excellent section on geometric continuum mechanics in terms of bundle-valued differential
forms; and to [40] which provides a brilliant introduction to exterior calculus.
## II Preliminaries
In continuum mechanics, a body is modelled as a smooth 3-manifold \(\mathcal{B}\) (often called the material manifold), and a configuration of the body is an embedding \(\kappa:\mathcal{B}\to\mathbb{E}^{3}\) into the ambient physical space \(\mathbb{E}^{3}\), which is an affine space with underlying translational vector space \(\mathcal{E}\)[41]. Even though \(\mathcal{E}\) is usually equipped with an inner product, we will not make direct use of it in the sequel, because, as we will demonstrate, strain and stress in a Cosserat solid are related to the action of the Euclidean group, not deformations in the metric structure. It is common practice to single out a reference configuration \(\kappa_{0}:\mathcal{B}\to\mathbb{E}^{3}\) and label material points \(X\in\mathcal{B}\) with their occupied position \(x=\kappa_{0}(X)\in\mathbb{E}^{3}\) in the reference configuration, thereby identifying \(\mathcal{B}\) with \(\kappa_{0}(\mathcal{B})\subset\mathbb{E}^{3}\)[42].
However, this model only captures the translational degrees of freedom of the material particles. If the body has a more complex inner structure, an additional map \(\nu:\mathcal{B}\to\mathcal{M}\) on top of \(\kappa\) is introduced to describe the configuration of the microstructure, where \(\mathcal{M}\) is the smooth manifold consisting of all possible microstructure configurations [13]. Well-known examples are polar liquid crystals with \(\mathcal{M}=S^{2}\) (the two-sphere) or nematics with \(\mathcal{M}=\mathbb{RP}^{2}\) (the real projective plane) [31]. For Cosserat solids, \(\mathcal{M}\) is generally taken to be \(\mathrm{SO}(3)\) (the Lie group of rotations in \(\mathbb{E}^{3}\)). This captures the property that the material points possess rotational degrees of freedom on top of translational ones. To describe a deformation with respect to a reference configuration \(\kappa_{0},\nu_{0}\), one also needs to introduce a map which assigns to each element \(X\in\mathcal{B}\) a transformation \(\Xi(X)\) of \(\mathcal{M}\) that takes \(\nu_{0}(X)\) to \(\nu(X)\). (It is usually assumed that a group \(K\) of transformations acts transitively on \(\mathcal{M}\), so \(\Xi(X)\) can be taken to be an element of \(K\)).
In this article we follow a slightly different approach to model Cosserat continua by enlarging the material manifold \(\mathcal{B}\) to \(\mathcal{P}=\mathcal{B}\times H\), where \(H=\mathrm{O}(3)\) is the orthogonal group [34]. This way \(\mathcal{P}\) becomes a (trivial) principal fibre bundle with structure group \(H\) over the base space \(\mathcal{B}\) (also called the macromedium in this setting). It is instructive to think of elements \(p=(X,h)\in\mathcal{P}=\mathcal{B}\times H\) as infinitesimal rigid bodies, where \(X\) labels their centre of mass, and \(h\) describes their orientation and chirality, e.g. via a body frame. The right action of \(H\) on \(\mathcal{P}\), given by \(p=(X,h)\mapsto p\cdot k=(X,hk)\), can be thought of as a change of body frame and a change of chirality if \(\det k=-1\). The projection map \(\pi:\mathcal{P}\to\mathcal{B},(X,h)\mapsto X\) identifies the centre-of-mass label of a configuration of a material point, while a section \(s:\mathcal{B}\to\mathcal{P}\) can be thought of as a specification of the orientation of the material particles. Since a configuration of the base is given by a map from \(\mathcal{B}\) to \(\mathbb{E}^{3}\), a configuration of \(\mathcal{P}\) should be a map \(\psi\)
Figure 1: Configuration of a simple body. In general, \(\mathcal{B}\) is assumed to be just a smooth manifold that labels the material particles, but in many applications, \(\mathcal{B}\) is identified with a submanifold of \(\mathbb{E}^{3}\) with the aid of a reference configuration.
Figure 2: Configuration of a body with microstructure in the “traditional” formalism [13].
from \(\mathcal{P}\) to an \(H\)-bundle over \(\mathbb{E}^{3}\). This bundle is usually taken to be the bundle of orthonormal (with respect to the standard inner product on \(\mathcal{E}\)) frames \(\mathcal{F}\) over \(\mathbb{E}^{3}\). Nevertheless, it is convenient to single out an origin \(o\in\mathbb{E}^{3}\) and a Cartesian reference frame \(\mathbf{e}_{i}\in\mathcal{E}\); in this way, \(\mathcal{F}\) can be identified with the group of isometries (generated by translations, rotations and reflections) \(G=\mathrm{E}(3)\) of \(\mathbb{E}^{3}\). The identity element of \(G\) will correspond to the reference frame, which also defines the positive orientation. The bundle structure on \(G\) is given by the quotient map \(q:G\to G/H\). The subgroup of orientation-preserving isometries of \(\mathbb{E}^{3}\) will be denoted by \(G^{+}=\mathrm{SE}(3)\).
The configuration map \(\psi:\mathcal{P}\to G\) should induce a configuration \(\kappa:\mathcal{B}\to G/H\) of the macromedium and respect the rigidity of the microstructure. The first condition can be stated simply as:
\[q(\psi(p))=\kappa(\pi(p))\quad\forall p\in\mathcal{P} \tag{1}\]
The second condition means that for any \(p\in\mathcal{P},h\in H\), a rotated (and perhaps reflected) body frame \(p\cdot h\) should get mapped to the rotated (and perhaps reflected) frame \(\psi(p)\cdot h\). Therefore:
\[\psi(p\cdot h)=\psi(p)\cdot h\quad\forall p\in\mathcal{P},h\in H \tag{2}\]
Equation (2) expresses that \(\psi\) is an \(H\)-equivariant map: in a certain sense, it is the mathematical manifestation of the rigidity of the material particles. Properties (1) and (2) imply that \(\psi\) is an \(H\)-bundle map between \(\mathcal{P}\) and \(G\)[43, 33]. Let us again choose a reference configuration \(\psi_{0}:\mathcal{P}\to G\), then by identifying \(\mathcal{P}\) with \(\psi_{0}(\mathcal{P})\subset G\) so that \(\psi_{0}\equiv\mathrm{Id}\), we assume from now on without loss of generality that \(\mathcal{P}\) is a subbundle of \(G\). In what follows, we will often use the following representation of elements of \(G\) by \(4\times 4\)-matrices:
\[p=\begin{bmatrix}1&0\\ x&S\end{bmatrix} \tag{3}\]
where \(x\in G/H\cong\mathbb{R}^{3}\) and \(S\in\mathrm{O}(3)\). The group element \(p\) in (3) corresponds to the orthonormal frame based at the point \(o+x_{a}\mathbf{e}_{a}\) with basis vectors \(S_{ab}\mathbf{e}_{a}\). Now let \(\psi:\mathcal{P}\to G\) be an \(H\)-bundle map, then by the above stated properties it must be of the form
\[\psi(p)=\begin{bmatrix}1&0\\ y(x)&Q(x)S\end{bmatrix} \tag{4}\]
for some functions \(y:\mathcal{B}\to\mathbb{R}^{3}\) and \(Q:\mathcal{B}\to H\) representing the deformation of the macromedium and the microstructure respectively, recovering the usual setting outlined in the second paragraph of this section. Since we are concerned with deformations that are continuously attainable from the identity, we will restrict \(Q(x)\) to have determinant \(+1\) so that \(Q:\mathcal{B}\to\mathrm{SO}(3)\) for the remainder of this article.
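For concreteness, a minimal NumPy sketch of the matrix representations (3) and (4) is given below; the function names are our own illustrative choices. One can check numerically that `configuration(right_action(p, h), y, Q)` coincides with `right_action(configuration(p, y, Q), h)`, which is the equivariance property (2).

```python
import numpy as np

def frame(x, S):
    """4x4 matrix of Eq. (3): the orthonormal frame at o + x_a e_a with axes given by S."""
    g = np.eye(4)
    g[1:, 0] = x          # translation block
    g[1:, 1:] = S         # orthogonal block
    return g

def configuration(p, y, Q):
    """psi(p) of Eq. (4) for p = frame(x, S); y and Q are callables returning the
    macro-deformation y(x) and the micro-rotation Q(x) at the base point x."""
    x, S = p[1:, 0], p[1:, 1:]
    return frame(y(x), Q(x) @ S)

def right_action(p, h):
    """p . h: a change of body frame (and of chirality when det h = -1)."""
    return p @ frame(np.zeros(3), h)
```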
We will denote the Lie algebra of \(G\) (which is the same as the Lie algebra of \(G^{+}\)) by \(\mathfrak{g}=\mathfrak{se}(3)\), and the Lie algebra of \(H\) by \(\mathfrak{h}=\mathfrak{so}(3)\). The matrix representation (3) induces a matrix representation of \(\mathfrak{g}\) given by:
\[w=\begin{bmatrix}0&0\\ u&\Phi\end{bmatrix} \tag{5}\]
where \(u\in\mathbb{R}^{3}\) corresponds to an infinitesimal translation while \(\Phi\in\mathfrak{h}\) is a \(3\times 3\) antisymmetric matrix corresponding to an infinitesimal rotation, which is usually identified with an axial vector \(\varphi\in\mathbb{R}^{3}\). The Lie bracket on \(\mathfrak{g}\) is given by the usual matrix commutator: for \(w,z\in\mathfrak{g}\), we have \([w,z]=wz-zw\), and the adjoint action of \(G\) on \(\mathfrak{g}\) is given by matrix conjugation: for \(g\in G,w\in\mathfrak{g}\), \(\mathrm{Ad}_{g}w=gwg^{-1}\). Note also that the Lie algebra splits as \(\mathfrak{g}=\mathfrak{m}\oplus\mathfrak{h}\), where \(\mathfrak{m}\) is the \(\mathrm{Ad}H\)-invariant subspace of infinitesimal translations (which can be identified with every tangent space of \(\mathbb{E}^{3}\)), meaning that \(\mathbb{E}^{3}\) as a homogeneous space \(G/H\)
Figure 3: Configuration of a Cosserat continuum using principal fibre bundles. \(p=(X,S)\in\mathcal{B}\times H=\mathcal{P}\) denotes an arbitrary element of the bundle, which gets mapped to \(\psi(p)=(\kappa(X),Q(X)S)\in\psi(\mathcal{P})\subset G\).
is reductive [44]. This slightly technical property of \(G\) and \(H\) will allow us to separate deformation and incompatibility measures into translational \(\mathfrak{m}\) and rotational \(\mathfrak{h}\) parts; otherwise, we could only treat them together as \(\mathfrak{g}\)-valued objects [45; 36] (see below).
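The following short NumPy sketch (our own illustrative code, not part of the original treatment) implements the matrix representation (5), the commutator bracket and the adjoint action of \(G\) on \(\mathfrak{g}\).

```python
import numpy as np

def hat(u, phi):
    """Matrix representation (5) of an element of se(3): u is the infinitesimal
    translation and phi is the axial vector of the antisymmetric block Phi."""
    Phi = np.array([[0.0, -phi[2], phi[1]],
                    [phi[2], 0.0, -phi[0]],
                    [-phi[1], phi[0], 0.0]])
    w = np.zeros((4, 4))
    w[1:, 0] = u
    w[1:, 1:] = Phi
    return w

def bracket(w, z):
    """Lie bracket [w, z] = wz - zw (matrix commutator)."""
    return w @ z - z @ w

def Ad(g, w):
    """Adjoint action Ad_g w = g w g^{-1} of G on its Lie algebra."""
    return g @ w @ np.linalg.inv(g)
```

Conjugating a pure infinitesimal translation `hat(u, np.zeros(3))` by an element of \(H\) (an orthogonal block with zero translation) again yields a matrix with vanishing rotational block, illustrating numerically that the subspace \(\mathfrak{m}\) is \(\mathrm{Ad}H\)-invariant.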
## III Theory of strain
In continuum mechanics, one is interested in configurations up to rigid transformations. The building block of any continuum theory is a strain measure which captures the deviation of a mapping of a continuum body from a rigid transformation. It should be nonzero if and only if \(\psi\) differs from a global rigid body transformation. In the current framework, rigid motions are implemented by left multiplication \(L_{g}:\mathcal{P}\to G,p\mapsto g\cdot p\) by a constant group element \(g\in G^{+}\). Note that such a transformation acts simultaneously on the base (macromedium) and fibres (micromedium), therefore under a superimposed rigid transformation \(y\mapsto R\cdot y+a\) for \(R\in\mathrm{SO}(3),a\in\mathbb{R}^{3}\) the microstructure directors rotate together with the basepoint by \(R\)[38]. This assumption, one of the most important in Cosserat theory, is often stated in the literature as the objectivity of microstructure directors [46].
In view of the above considerations, we consider the quantity:
\[E=\psi^{-1}d\psi-p^{-1}dp=\psi^{*}\omega-\omega \tag{6}\]
where \(\omega\) is the Maurer-Cartan form on the group \(G\) and \(\psi^{*}\omega\) denotes the pullback of \(\omega\) along the map \(\psi\). The Maurer-Cartan form satisfies the well-known Maurer-Cartan structure equations:
\[d\omega+\omega\wedge\omega=0 \tag{7}\]
If we write \(\psi(p)=k(p)\cdot p\) for a function \(k:\mathcal{P}\to G\), then substituting into equation (6) yields:
\[E=p^{-1}\left(k^{-1}dk\right)p+p^{-1}dp-p^{-1}dp=p^{-1}\left(k^{-1}dk\right)p \tag{8}\]
Equation (8) shows that \(E\) is identically zero if and only if \(k:\mathcal{P}\to G\) is a constant function, which is equivalent to \(\psi\) being a global rigid transformation. The deformation measure introduced in (6) is analogous to the classical Green-Lagrangian strain which is defined as the difference between the pullback of the spatial metric and the reference metric [42]. In this case, the role of the metric as the indicator of deformation is replaced by the Maurer-Cartan form, which in turn can be viewed as a certain kind of connection (a Cartan connection) on the bundle \(\mathcal{P}\) (for an in-depth presentation of Cartan connections, see [36]). In the appendix, we provide an informal motivation as to why the strain measure is a difference of connections. While the appearance of the Maurer-Cartan form in (6) might initially seem a bit surprising and perhaps difficult to grasp, it has been used implicitly and unconsciously in numerous contexts and applications related to elasticity and soft matter. For instance, strain measures in beam or plate theories [47; 8; 9; 10] (also introduced based on left-invariance under rigid transformations) are in fact closely related to (6).
Let us now compute \(E\) explicitly. Using (3) and (4) we have:
\[\omega=p^{-1}dp= \begin{bmatrix}1&0\\ -S^{T}x&S^{T}\end{bmatrix}\begin{bmatrix}0&0\\ dx&dS\end{bmatrix} \tag{9}\] \[= \begin{bmatrix}0&0\\ S^{T}dx&S^{T}dS\end{bmatrix}\]
Similarly:
Figure 5: Rigid transformation of a Cosserat continuum. Observe that the material particles rotate together with spatial points, therefore spatial and microstructural rotations are not independent.
Figure 6: Unlike magnetic systems or the \(\mathrm{O}(n)\) model in statistical field theory [31], rotation of material particles of a Cosserat continuum independent of spatial rotations induces a deformation [38]. This can be inferred by looking at the change in the angle between microstructure directors and vectors separating the centres of mass of material particles.
\[\psi^{-1}d\psi =\begin{bmatrix}1&0\\ -S^{T}Q^{T}y&S^{T}Q^{T}\end{bmatrix}\begin{bmatrix}0&0\\ dy&dQ\cdot S+QdS\end{bmatrix}= \tag{10}\] \[=\begin{bmatrix}0&0\\ S^{T}Q^{T}dy&S^{T}\left(Q^{T}dQ\right)S+S^{T}dS\end{bmatrix}\]
Subtracting (9) from (10) we find:
\[E=\begin{bmatrix}0&0\\ S^{T}\left(Q^{T}dy-dx\right)&S^{T}\left(Q^{T}dQ\right)S\end{bmatrix} \tag{11}\]
Thus we recover the usual finite translational \(\mathbf{Q}^{T}dy-dx\) and rotational \(\mathbf{Q}^{T}d\mathbf{Q}\) strain measures of the Cosserat solid, where \(\mathbf{Q}(x)\) is the usual notation for the proper orthogonal rotation tensor describing the deformation of microstructure directors [48; 25; 49]. The factors of \(S\) illustrate that \(E\) is tensorial: its components change in representations of O(3) under a change of section \(S\), with the rotational strain measure unchanged if \(S=-I\) is an inversion, indicating that it is pseudovector valued. As we will see shortly, \(E\) is in fact a vector valued 1-form over \(\mathcal{B}\).
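On a reference configuration discretised on a regular grid, these finite strain measures can be evaluated directly from sampled fields \(y(x)\) and \(\mathbf{Q}(x)\). The sketch below (our own illustrative code; the array layout and the central-difference approximation of the exterior derivative are assumptions, not part of the theory) computes the components of \(\mathbf{Q}^{T}dy-dx\) and \(\mathbf{Q}^{T}d\mathbf{Q}\) pulled back with \(S=I\).

```python
import numpy as np

def finite_strain(y, Q, dx):
    """Finite Cosserat strain measures of Eq. (11), pulled back with S = I.
    y: (Nx, Ny, Nz, 3) deformation field, Q: (Nx, Ny, Nz, 3, 3) micro-rotation field,
    dx: grid spacing; derivatives are approximated by central finite differences."""
    dy = np.stack(np.gradient(y, dx, axis=(0, 1, 2)), axis=-2)   # dy[..., i, a] = d_i y_a
    dQ = np.stack(np.gradient(Q, dx, axis=(0, 1, 2)), axis=-3)   # dQ[..., i, a, b] = d_i Q_ab
    Qt = np.swapaxes(Q, -1, -2)
    trans = np.einsum('...ab,...ib->...ia', Qt, dy) - np.eye(3)  # (Q^T dy - dx)_{ia}
    rot = np.einsum('...ab,...ibc->...iac', Qt, dQ)              # (Q^T d_i Q)_{ac}, antisymmetric in a, c
    return trans, rot
```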
To obtain a consistent linearization of \(E\), we consider infinitesimal deformations of the continuum, which are vector fields \(V\) on \(\mathcal{P}\) such that their flows preserve the bundle structure [50] (i.e. they are \(H\)-bundle maps). This puts the following constraint on the vector field:
\[V(ph)=V(p)\cdot h\quad\forall p\in\mathcal{P},h\in H \tag{12}\]
As \(V\) is a vector field on a Lie group \(G\), we can represent it by a Lie algebra valued function \(\xi:\mathcal{P}\rightarrow\mathfrak{g}\) such that \(V(p)=p\cdot\xi(p)\), or equivalently \(\xi(p)=\iota_{V}\omega|_{p}\), where \(\iota\) denotes the interior product. Note also that \(\xi(p)\) satisfies:
\[\xi(ph)=h^{-1}\xi(p)h=Ad\left(h^{-1}\right)\xi(p) \tag{13}\]
Hence \(\xi\) is a section of the associated vector bundle \(W=\mathcal{P}\times_{H}\mathfrak{g}\) transforming in the adjoint representation of \(H\) on \(\mathfrak{g}\). This vector bundle is what Schaefer [4] calls the space of motors. The Lie derivative, by definition, is the infinitesimal version or first order approximation of the difference \(\psi^{*}-\mathrm{Id}\) along the flow map of a vector field. Therefore, the linearization \(e\) of \(E\) is obtained by taking the Lie derivative of \(\omega\) along the vector field \(V\), which measures the infinitesimal change of \(\omega\) along the vector field \(V\)[42; 18]. This is further elaborated in the appendix. Using Cartan's magic formula [28] and the Maurer-Cartan structure equations (7) we get:
\[e \triangleq\mathcal{L}_{V}\omega=d(\iota_{V}\omega)+\iota_{V}d\omega \tag{14}\] \[=d\xi-\iota_{V}(\omega\wedge\omega)\] \[=d\xi-\xi\wedge\omega+\omega\wedge\xi\] \[=d\xi+\text{ad}_{-}\xi\]
If \(V\) is not an infinitesimal displacement but a velocity vector field corresponding to a motion of the Cosserat continuum, then (14) defines the strain rate. One may also argue that (14) is in fact a more fundamental strain measure than (11) because it does not assume the existence of an arbitrary reference configuration. In addition, it is closer in spirit to differential geometry, where every meaningful operation or comparison can only be done locally, and global results can be obtained by integration. In the next section, an explicit demonstration of this viewpoint will be shown by taking \(e\) to be a general Lie algebra valued 1-form not necessarily coming from a vector field \(V\), leading to the appearance of topological defects. Moreover, every reasonable elastic deformation can be decomposed into a sequence of infinitesimal deformations, consequently a finite strain measure may be obtained by integrating (14) along the motion [51; 52]. In the autonomous case when the vector field is independent of time parametrizing the motion, this integration gives the exponential of the Lie derivative operator, which results in the pullback operation along the flow of the vector field \(V\), consistent with (11).
Let us now compute \(e\) in Cartesian coordinates on \(\mathcal{B}\). Formally, this is obtained by pulling back \(e\) to \(\mathcal{B}\) via the section corresponding to \(S=I\). Locally \(\xi\) is represented by a \(\mathfrak{g}\)-valued function on \(\mathcal{B}\), which we can specify by a pair of infinitesimal displacement and microrotation fields \(u:\mathcal{B}\rightarrow\mathbb{R}^{3}\) and \(\Phi:\mathcal{B}\rightarrow\mathfrak{so}(3)\)[53]. In Cartesian coordinates, since \(\omega\) pulls back to the \(\mathbb{R}^{3}\)-valued 1-form \(d\mathbf{x}\) (corresponding to the identity tensor on \(\mathcal{B}\)) we get:
\[e=\begin{bmatrix}0&0\\ du&d\Phi\end{bmatrix}-\begin{bmatrix}0&0\\ u&\Phi\end{bmatrix}\wedge\begin{bmatrix}0&0\\ d\mathbf{x}&0\end{bmatrix}+\begin{bmatrix}0&0\\ d\mathbf{x}&0\end{bmatrix}\wedge\begin{bmatrix}0&0\\ u&\Phi\end{bmatrix}=\] \[=\begin{bmatrix}0&0\\ du-\Phi d\mathbf{x}&d\Phi\end{bmatrix}=\begin{bmatrix}0&0\\ \varepsilon&\tau\end{bmatrix} \tag{15}\]
Under the usual identification of elements of \(\mathfrak{so}(3)\) with \(\mathbb{R}^{3}\), \(\Phi\) is identified with the axial vector field \(\mathbf{\varphi}:\mathcal{B}\rightarrow\mathbb{R}^{3}\) and the matrix product \(\Phi d\mathbf{x}\) becomes a vector cross product \(\mathbf{\varphi}\times d\mathbf{x}\). This way we recover the well-known infinitesimal strain measures of the Cosserat continuum \(d\mathbf{u}-\mathbf{\varphi}\times d\mathbf{x}\) and \(d\mathbf{\varphi}\)[54][7]. In components (\(\epsilon_{ijk}\) denotes the Levi-Civita permutation symbol, coming from the commutation relations of \(\mathfrak{g}\)):
\[\left(d\mathbf{u}-\mathbf{\varphi}\times d\mathbf{x}\right)_{j} :=\varepsilon_{ij}dx^{i}=\left(\partial_{i}u_{j}-\epsilon_{ijk} \varphi_{k}\right)dx^{i} \tag{16}\] \[\left(d\mathbf{\varphi}\right)_{j} :=\tau_{ij}dx^{i}=\left(\partial_{i}\varphi_{j}\right)dx^{i} \tag{17}\]
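A minimal finite-difference sketch of these component formulas (our own illustrative code, with the same array conventions as the finite-strain snippet above) is:

```python
import numpy as np

LC = np.zeros((3, 3, 3))                       # Levi-Civita symbol epsilon_{ijk}
LC[0, 1, 2] = LC[1, 2, 0] = LC[2, 0, 1] = 1.0
LC[0, 2, 1] = LC[2, 1, 0] = LC[1, 0, 2] = -1.0

def infinitesimal_strain(u, phi, dx):
    """Infinitesimal Cosserat strains of Eqs. (16)-(17) on a regular grid:
    eps_ij = d_i u_j - eps_ijk phi_k  and  tau_ij = d_i phi_j.
    u, phi: (Nx, Ny, Nz, 3) displacement and micro-rotation (axial vector) fields."""
    du = np.stack(np.gradient(u, dx, axis=(0, 1, 2)), axis=-2)      # du[..., i, j] = d_i u_j
    dphi = np.stack(np.gradient(phi, dx, axis=(0, 1, 2)), axis=-2)  # dphi[..., i, j] = d_i phi_j
    eps = du - np.einsum('ijk,...k->...ij', LC, phi)
    return eps, dphi
```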
Both \(E\) and \(e\) are one-forms with values in the bundle \(W\): this is due to the fact that both of them are essentially differences of connections on \(\mathcal{P}\), and as such are tensor-valued (in this case \(W\)-valued) one-forms on \(\mathcal{B}\). Therefore equation (14) can also be interpreted as the covariant derivative \(D\xi\) of \(\xi\) with respect to the connection on \(W\) induced by \(\omega\), the connection coefficients in
Cartesian coordinates can be read off from (16) and (17). This result was first obtained by Schaefer [4]. Hence the infinitesimal strain measure can be equally viewed as a covariant derivative of a section of \(W\) or the Lie derivative of the Maurer-Cartan form along a vector field on \(G\). This also has an analogue in classical elasticity, where infinitesimal strain is the Lie derivative \(\mathcal{L}_{U}g\) of the metric \(g\) along a vector field \(U\), but can also be expressed as \(\frac{1}{2}\left(\nabla U+\left(\nabla U\right)^{T}\right)\) with the aid of the Levi-Civita connection \(\nabla\) corresponding to \(g\)[55]. It is also interesting to remark that the infinitesimal strain measures (16)-(17) have been recently derived in the context of symmetry breaking and high energy physics [56] via a coset construction starting from the Maurer-Cartan form on the Galilei group. Furthermore, Cartan connections play an important role in the extended theories of general relativity [14; 45] (which have, in turn, also been influenced by the theory of continua with microstructure).
It should be apparent from the development above that the Maurer-Cartan form is the basic building block of almost any classical continuum theory respecting locality and invariance under rigid body transformations because it furnishes a set of differential invariants that can subsequently be used to construct Lagrangians or free energies for conservative systems in equilibrium. The most well-known example of this procedure is the Frenet-Serret framing [57]: for a curve \(\gamma:(a,b)\rightarrow\mathbb{E}^{3}\cong G/H\) embedded in Euclidean space, the set of tangent, normal and binormal vectors give an adapted frame that provides a lift \(\tilde{\gamma}:(a,b)\to G\) of the curve from the Euclidean (homogeneous) space to the Euclidean group. The curvature and torsion (not to be confused with curvature and torsion of a connection [58] in later sections) of the curve are nothing else but the components of the pullback of the Maurer-Cartan form on \(G\) via the lift \(\tilde{\gamma}\). The simplest resulting elasticity theory of curves in two dimensions is the famous theory of the elastica dating back to Euler [59]. The classification of submanifolds of general homogeneous spaces has led to the deep and beautiful method of moving frames [60; 61], pioneered by Elie Cartan, where the Maurer-Cartan form plays a fundamental role.
## IV Compatibility conditions
If \(e\) is a \(W\)-valued one-form, a natural question to ask is when it is compatible with a displacement field, i.e. whether there exist \(u,\Phi\) satisfying (15). Necessary conditions can be elegantly obtained by means of exterior calculus, utilizing the exterior covariant derivative induced on \(W\)-valued differential forms by the connection \(\omega\)[4]. This is done as follows: any \(W\)-valued \(p\)-form \(\alpha\) can be equivalently characterized as a \(\mathfrak{g}\)-valued \(p\)-form on \(\mathcal{P}\) that is \(H\)-equivariant and horizontal [62]. Now the exterior covariant derivative \(D\alpha\) is the \(\mathfrak{g}\)-valued \(p+1\)-form on \(\mathcal{P}\) defined as:
\[D\alpha:=d\alpha+\text{ad}_{\omega}\alpha=d\alpha+\omega\wedge\alpha-(-1)^{p} \alpha\wedge\omega \tag{18}\]
It can be verified that (18) is again an equivariant and horizontal form, hence descends to a \(W\)-valued \(p+1\)-form on \(\mathcal{B}\). The compatibility conditions follow from the fact that the connection is flat. One can show that:
\[DD\alpha=(d\omega+\omega\wedge\omega)\wedge\alpha-\alpha\wedge(d\omega+\omega \wedge\omega)=0 \tag{19}\]
Therefore a necessary [63] condition for a \(W\)-valued \(1\)-form \(e\) to be integrable (i.e. to be of the form \(D\xi\) for some section \(\xi\) of \(W\)) is
\[De=de+\omega\wedge e+e\wedge\omega=0 \tag{20}\]
In coordinates (with a slight abuse of notation as \(\tau\) here corresponds to an \(\mathfrak{so}(3)\)-valued \(1\)-form):
\[De=\begin{bmatrix}0&0\\ d\varepsilon&d\tau\end{bmatrix}+\begin{bmatrix}0&0\\ d\boldsymbol{x}&0\end{bmatrix}\wedge\begin{bmatrix}0&0\\ \varepsilon&\tau\end{bmatrix}+\begin{bmatrix}0&0\\ \varepsilon&\tau\end{bmatrix}\wedge\begin{bmatrix}0&0\\ d\boldsymbol{x}&0\end{bmatrix}\\ =\begin{bmatrix}0&0\\ d\varepsilon+\tau\wedge d\boldsymbol{x}&d\tau\end{bmatrix}=0 \tag{21}\]
Using the definition of the exterior derivative and the correspondence between the action of \(\mathfrak{so}(3)\) on vectors and the vector cross product we obtain [7]:
\[(\partial_{k}\varepsilon_{ij}-\partial_{i}\varepsilon_{kj})-( \epsilon_{kjl}\tau_{il}-\epsilon_{ijl}\tau_{kl}) =0 \tag{22}\] \[\partial_{i}\tau_{jk}-\partial_{j}\tau_{ik} =0 \tag{23}\]
Any nonzero quantities on the right-hand sides of (22) and (23) can be interpreted as measures of incompatibility: on the right-hand side of (22) there can be a displacement-valued two-form \(T\), usually associated with torsion, while on the right-hand side of (23) there can be a microrotation-valued two-form \(\Omega\), associated with curvature [64]. Therefore, in components, in general we have:
\[(\partial_{k}\varepsilon_{ij}-\partial_{i}\varepsilon_{kj})-( \epsilon_{kjl}\tau_{il}-\epsilon_{ijl}\tau_{kl}) =T_{ijk} \tag{24}\] \[\partial_{i}\tau_{jk}-\partial_{j}\tau_{ik} =\Omega_{ijk} \tag{25}\]
They can be viewed together as a more general \(W\)-valued \(2\)-form, \(J\), satisfying the Bianchi identity (27) [65].
\[De =J \tag{26}\] \[DJ =0 \tag{27}\]
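As a quick sanity check of (22)-(23), the following minimal symbolic sketch (using SymPy) verifies that the residuals vanish identically for any smooth compatible field; the sign convention \(\varepsilon_{ij}=\partial_{i}u_{j}-\epsilon_{ijl}\Phi_{l}\), \(\tau_{ij}=\partial_{i}\Phi_{j}\) assumed below is one possible reading of the linearised measures (16)-(17), which are not reproduced in this section.

```python
import sympy as sp

x = sp.symbols('x1:4')                    # Cartesian coordinates x1, x2, x3

# Arbitrary smooth displacement and microrotation fields (polynomial/trigonometric
# choices keep the symbolic computation fast).
u   = [x[0]**2 * x[1], sp.sin(x[2]) + x[0] * x[2], x[1]**3]
phi = [x[1] * x[2], x[0]**2, x[0] + x[1] * x[2]**2]

lc = lambda i, j, k: sp.LeviCivita(i, j, k)

# Assumed linearised strain measures: eps_ij = d_i u_j - eps_ijl phi_l, tau_ij = d_i phi_j.
eps = [[sp.diff(u[j], x[i]) - sum(lc(i, j, l) * phi[l] for l in range(3))
        for j in range(3)] for i in range(3)]
tau = [[sp.diff(phi[j], x[i]) for j in range(3)] for i in range(3)]

# Residuals of the compatibility conditions (22) and (23); both vanish identically.
def residual_22(i, j, k):
    return sp.simplify(sp.diff(eps[i][j], x[k]) - sp.diff(eps[k][j], x[i])
                       - (sum(lc(k, j, l) * tau[i][l] for l in range(3))
                          - sum(lc(i, j, l) * tau[k][l] for l in range(3))))

def residual_23(i, j, k):
    return sp.simplify(sp.diff(tau[j][k], x[i]) - sp.diff(tau[i][k], x[j]))

assert all(residual_22(i, j, k) == 0
           for i in range(3) for j in range(3) for k in range(3))
assert all(residual_23(i, j, k) == 0
           for i in range(3) for j in range(3) for k in range(3))
print("compatibility conditions (22)-(23) hold for a compatible field")
```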
For the finite strain measure \(E\), a similar compatibility condition can be derived based on the Maurer-Cartan structure equations: since
\[E+\omega=\psi^{*}\omega,\]
the integrability condition reads:
\[d(E+\omega)+(E+\omega)\wedge(E+\omega)=dE+E\wedge\omega+\omega\wedge E=0 \tag{28}\]
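The flatness \(d\omega+\omega\wedge\omega=0\) used in (19) and (28) can also be verified explicitly on a matrix parametrisation. The following minimal SymPy sketch does so for rigid motions of the plane, using the standard homogeneous-matrix representation (an assumption of the example, equivalent up to relabelling to the convention used in the text).

```python
import sympy as sp

a, b, th = sp.symbols('a b theta')
q = (a, b, th)

# Homogeneous-matrix parametrisation of a planar rigid motion (translation (a, b), rotation th).
g = sp.Matrix([[sp.cos(th), -sp.sin(th), a],
               [sp.sin(th),  sp.cos(th), b],
               [0, 0, 1]])

g_inv = sp.simplify(g.inv())
omega = [sp.simplify(g_inv * sp.diff(g, qi)) for qi in q]   # coefficients of omega = g^{-1} dg

# Coefficient of d(omega) + omega ^ omega on dq_i ^ dq_j:
#   d_i omega_j - d_j omega_i + [omega_i, omega_j]
for i in range(3):
    for j in range(i + 1, 3):
        curv = (sp.diff(omega[j], q[i]) - sp.diff(omega[i], q[j])
                + omega[i] * omega[j] - omega[j] * omega[i])
        assert sp.simplify(curv) == sp.zeros(3, 3)
print("Maurer-Cartan structure equation verified for planar rigid motions")
```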
In general, one can model a Cosserat solid with defects by an abstract principal \(H\)-bundle \(\mathcal{P}\) equipped with a Cartan connection \(\eta\), that is, a \(\mathfrak{g}\)-valued 1-form on \(\mathcal{P}\) satisfying the following properties [36]:
1. \(R_{h}^{*}\eta\ =\ Ad\left(h^{-1}\right)\eta\quad\forall h\in H\), i.e. \(\eta\) is \(H\)-equivariant.
2. \(\eta\left(p\cdot\varsigma\right)=\varsigma\quad\forall\varsigma\in\mathfrak{h}\), where \(p\cdot\varsigma=\left.\frac{d}{dt}\right|_{t=0}\left(p\cdot\exp(t\varsigma)\right)\).
3. \(\eta|_{p}:T_{p}\mathcal{P}\rightarrow\mathfrak{g}\) is a linear isomorphism for all \(p\in\mathcal{P}\).
Albeit rather formally (for an intuitive interpretation of the above properties of \(\eta\), see e.g. [45]), these three conditions capture and generalize the main properties of the Maurer-Cartan form on the principal \(H\)-bundle \(G\to G/H\). The crucial difference is that the Maurer-Cartan structure equations do not necessarily hold: the two-form \(\Theta=d\eta+\eta\wedge\eta\) is in general nonzero. This quantity is called the curvature of the Cartan connection, representing the incompatibility of the underlying Cosserat solid. Note that this curvature is more general than the curvature of a linear connection because it takes values in the Lie algebra of the Euclidean group, not a subgroup of the general linear group. In some sense, one can view \(\Theta\) as the "unification" of torsion and curvature into a single object, whose significance is highlighted by the fact that rotational and translational defects are intimately coupled in Cosserat solid as expressed by (24) and (25).
Modelling bodies with continuous distributions of topological defects (dislocations, disclinations, etc.) as abstract manifolds equipped with an extra structure incompatible with Euclidean space has a long and distinguished history, starting from the work of Nye [66], Bilby _et al._[67], Kroner [68], Kondo [69] and others in the 1950s and 1960s. These authors have independently discovered that the Burgers vector density of continuous distributions of dislocations can be mapped to the torsion of a certain affine connection on the material manifold. It has also been realized that continuous distributions of disclinations and point defects are associated with the curvature (in the usual Riemannian sense) and the non-metricity of an affine connection on the material manifold, see e.g. [70; 71] for a modern exposition.
While the compatibility conditions and stress fields around defects for Cosserat solids have been obtained by various authors, the geometric origin of incompatibilities remained unclear. Based on the above discussion and inspired by [70], we propose a simple model of an incompatible Cosserat solid as a principal \(H\)-bundle \(\mathcal{P}\) equipped with a non-flat Cartan connection \(\eta\). The finite strain measure \(E\) becomes \(\psi^{*}\omega-\eta\) and (28) changes to:
\[d(E+\eta)+(E+\eta)\wedge(E+\eta)=D_{\eta}E+\Theta=0 \tag{29}\]
where \(D_{\eta}\) is the exterior covariant derivative with respect to the connection \(\eta\). This definition of strain as a difference of connections also sheds light on the appearance of the contorsion tensor in previous works on topological defects in general relativity and complex media [16; 17]. In addition, with the aid of the theory of stress and constitutive relations given in the following sections, one can calculate the stress fields around defects in Cosserat solids [72].
## V Theory of stress and balance laws
Most treatments of classical elasticity derive the governing equations of elasticity via the following train of thought [73]: interactions between material particles are assumed to be short-ranged, which, together with the action-reaction principle, implies the existence of a second-rank stress tensor through Cauchy's tetrahedron argument. Equations of motion are then obtained by postulating the balance of linear and angular momentum on each subbody of a macroscopic body, with the former yielding Cauchy's momentum equations while the latter the symmetry of the stress tensor upon localization. However, there are a number of difficulties encountered when trying to extend this method to complex materials [74].
First, the above assumptions restrict the nature of boundary interactions between subbodies to be only force-like, depending linearly on the normal of the boundary surface. However, one can envisage cases where "higher-order" forces arise, corresponding to media sensitive to higher gradients in displacements or to the curvature of boundary surfaces [75; 74]. In the case of complex materials, torques or even torque dipoles could be transmitted through boundaries. In general, it is unclear what kind of balance laws one should postulate for these quantities.
Second, the relationship between the balance of angular momentum and the symmetry of stress tensor is quite mysterious: in Cauchy elasticity [73], this balance law is only used to argue that the stress tensor is symmetric, but subsequently neglected in the solution of any practical problem [76]. Furthermore, if material points in a Cauchy continuum are assumed to be structureless point masses only possessing translational degrees of freedom, why does the balance of angular momentum constitute an independent equation? It is in stark contrast with the Newtonian mechanics of point masses, where Newton's second law is the only equation needed to work out the motion of a point mass.
Finally, from a differential geometric standpoint, the vector-valued integrals in balance laws are ill-defined because in non-flat spaces one cannot identify distant tangent spaces unambiguously.
To address these problems, we construct the theory of stress and balance laws from another perspective, based on the principle of virtual work [77]. Even though this approach is much less popular and appreciated than Cauchy's, it also has a long and distinguished history, essentially originating from Piola who extended the ideas of d'Alembert to the mechanics of continua [75]. The fundamental assumption is that applying a rigid transformation to a body requires no work, which motivates the introduction of strain as a measure of deviation from a rigid transformation (in classical elasticity, it is the change in the metric along the deformation, while in the present case it is the change in the connection as argued in Section III). Stress is defined as a dual quantity to strain: the duality pairing gives the virtual work done by stress during an infinitesimal variation of configuration. This variation or virtual displacement is represented by a vector field on the current configuration respecting the imposed boundary conditions (in classical elasticity, it is a vector field on the ambient space, while for Cosserat solids it is a vector field exactly analogous to \(V\) in (12)). Equations of motion are obtained from a version of d'Alembert's principle, which states that the total virtual work is zero for any admissible virtual displacement field. This method answers all the above concerns as follows.
First, in the variational formalism, stress has exactly as many degrees of freedom as the model of the continuum has: for rotational degrees of freedom, one has moment stresses, and for higher-gradient forces additional "hyper-stresses" [75; 78; 79]. Balance laws are obtained from a single application of the principle of virtual work.
Second, the duality between stress and strain clearly highlights the number of degrees of freedom in a model and the underlying symmetries. For example, in classical elasticity the infinitesimal strain \(\frac{1}{2}\left(\partial_{i}u_{j}+\partial_{j}u_{i}\right)\) is an element of the vector space of symmetric second-rank tensors, therefore the stress tensor, being an element of the dual vector space, is also automatically symmetric. The symmetry of the stress tensor has thus been identified as a consequence of the number of degrees of freedom in a Cauchy continuum (because the strain only depends on the displacement vector) and the axiom that rigid transformations require no work [77]. This way in Cauchy elasticity the only balance law following from the principle of virtual work is Cauchy's momentum equation.
Finally, the principle of virtual work involves only scalar-valued integrals, which are well-defined on any smooth manifold.
Motivated by the above discussion and inspired by other geometric treatments of elasticity [27; 28; 29], for Cosserat solids we therefore define stress \(\Sigma\) as an \(\tilde{W}^{*}\)-valued 2-form [80] on the current configuration \(\psi(\mathcal{P})\), where \(\tilde{W}^{*}\) is the dual vector bundle of \(\tilde{W}=\psi(\mathcal{P})\times_{H}\mathfrak{g}\) (this is the space of dual motors in the language of [4]). The intuitive reason behind this definition is that (Eulerian) velocities are sections of \(\tilde{W}\) (analogously as in (12)), and the natural duality pairing between stress and velocity gives a scalar-valued 2-form that can be integrated on the current configuration to obtain the power of stresses exerted on surfaces inside the body. (More generally, stress can be defined in \(n\) dimensions as an \(n-1\) form taking values in a vector bundle dual to velocities/virtual displacements, and thus can be integrated on any \(n-1\)-submanifold to give the rate of power on a hypersurface, see e.g. [81; 28]). While this would initially suggest that \(\Sigma\) is described by three indices (one bundle index and two form indices), one usually reduces this to two indices overall to obtain the second rank Cauchy stress tensor in the presence of a metric and a volume form by exploiting the isomorphism provided by the Hodge duality between 2-forms and 1-forms, i.e. by associating area elements with corresponding normal vectors. It turns out that there is a more general correspondence between \(n-1\)-forms and vectors afforded by the interior product which only requires a volume form, so the existence of a metric structure is not required.
The duality pairing between stress and infinitesimal strain or strain rate gives a scalar-valued 3-form on \(\tilde{\mathcal{B}}=\varphi(\mathcal{B})\), which can be integrated over any 3-dimensional submanifold of \(\tilde{\mathcal{B}}\) to give the virtual work or the rate of work done by stresses, respectively [27]. This pairing can be defined for general \(p\)- and \(q\)-forms taking values in \(\tilde{W}^{*}\) and \(\tilde{W}\), outputting a scalar-valued \(p+q\)-form on \(\mathcal{B}\). For forms \(\mu\otimes\alpha\) and \(s\otimes\beta\), where \(\mu\) and \(s\) are sections of \(\tilde{W}^{*}\) and \(\tilde{W}\), \(\alpha\) and \(\beta\) are \(p\) and \(q\)-forms on \(\mathcal{B}\), it is given by \(\langle\mu\otimes\alpha,s\otimes\beta\rangle:=\langle\mu,s\rangle\alpha\wedge\beta\), then extended linearly to all forms (here \(\langle\mu,s\rangle\) is the natural pairing of sections of \(\tilde{W}\) and \(\tilde{W}^{*}\)).
Let us compute this pairing explicitly for the stress 2-form \(\Sigma\) and the infinitesimal strain 1-form \(e\). We choose a basis \(\{\mathbf{v}_{i},\mathbf{r}_{i}\}\) of \(\mathfrak{g}\), where the \(\{\mathbf{v}_{i}\}\) generate infinitesimal translations (spanning the subspace \(\mathfrak{m}\)) and the \(\{\mathbf{r}_{i}\}\) generate infinitesimal rotations (spanning the subspace \(\mathfrak{h}\)). Suppose that in local Cartesian coordinates \(x_{i}\), \(i=1,2,3\), \(e\) is represented by the pair of \(\mathfrak{m}\)- and \(\mathfrak{h}\)-valued 1-forms \(\varepsilon_{ij}\mathbf{v}_{j}dx_{i}\) and \(\tau_{ij}\mathbf{r}_{j}dx_{i}\) as in (16) and (17). (We do not distinguish upper and lower indices in this section). Let _vol_ be the standard volume form in Euclidean space; then the 2-forms \(A_{i}=\iota_{\partial_{i}}vol\) form a basis of 2-forms satisfying \(dx_{i}\wedge A_{j}=\delta_{ij}\text{vol}\); moreover, \(dA_{i}=0\) for Cartesian coordinates. Let \(\{\mathbf{v}_{i}^{*},\mathbf{r}_{i}^{*}\}\) be the dual basis of \(\mathfrak{g}^{*}\) to \(\{\mathbf{v}_{i},\mathbf{r}_{i}\}\), and expand \(\Sigma\) as:
\[\Sigma=\sigma_{ij}\mathbf{v}_{j}^{*}A_{i}+\chi_{ij}\mathbf{r}_{j}^{*}A_{i} \tag{30}\]
where \(\sigma_{ij}\) is the generalization of the Cauchy stress and \(\chi_{ij}\) represents moment stresses; the first index corresponds to the area element, while the second corresponds to the direction of the force or moment, respectively. We have:
\[\langle e,\Sigma\rangle=\langle\varepsilon_{ij}\mathbf{v}_{j}dx_{i}+ \tau_{ij}\mathbf{r}_{j}dx_{i},\sigma_{kl}\mathbf{v}_{l}^{*}A_{k}+\chi_{kl} \mathbf{r}_{l}^{*}A_{k}\rangle=\] \[=(\varepsilon_{ij}\sigma_{kl}\langle\mathbf{v}_{j},\mathbf{v}_{l} ^{*}\rangle+\tau_{ij}\chi_{kl}\langle\mathbf{r}_{j},\mathbf{r}_{l}^{*}\rangle) \,dx_{i}\wedge A_{k}=\] \[(\varepsilon_{ij}\sigma_{kl}+\tau_{ij}\chi_{kl})\,\delta_{jl} \delta_{ik}vol=(\varepsilon_{ij}\sigma_{ij}+\tau_{ij}\chi_{ij})\,vol \tag{31}\]
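As a concrete numerical illustration of the pairing (31) at a single material point (a minimal sketch with arbitrary component values), the work density is obtained from two double contractions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary component arrays of the strain (eps, tau) and of the stress/moment stress
# (sigma, chi) at one material point.
eps, tau = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
sigma, chi = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Work density from (31): <e, Sigma> = (eps_ij sigma_ij + tau_ij chi_ij) vol.
work_density = np.einsum('ij,ij->', eps, sigma) + np.einsum('ij,ij->', tau, chi)
print(work_density)
```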
Another important mathematical tool is the exterior covariant derivative operator \(D^{*}\), induced on \(\tilde{W}^{*}\)-valued forms on \(\tilde{\mathcal{B}}\) by \(\omega\), which will allow us to integrate by parts and write down local forms of balance laws. There are multiple ways to define it; one of them is through the Leibniz rule [27]: for a \(\tilde{W}^{*}\)-valued \(p\)-form \(\Pi\), \(D^{*}\Pi\) is the unique \(\tilde{W}^{*}\)-valued \(p+1\)-form such that for any section \(\xi\) of \(\tilde{W}\) we have:
\[d\langle\xi,\Pi\rangle=\langle\xi,D^{*}\Pi\rangle+\langle D\xi,\Pi\rangle \tag{32}\]
This is very similar to how one induces the covariant derivative on the cotangent bundle of a manifold from a linear connection on its tangent bundle. Another possible definition comes from the observation that \(\tilde{W}^{*}\) is also an associated bundle to \(\psi(\mathcal{P})\) by the coadjoint representation of \(H\) on \(\mathfrak{g}^{*}\). We can again view any \(\tilde{W}^{*}\)-valued \(p\)-form \(\Pi\) as a \(\mathfrak{g}^{*}\)-valued equivariant horizontal \(p\)-form on \(\psi(\mathcal{P})\), then its exterior covariant derivative \(D^{*}\Pi\) is nothing else but
\[D^{*}\Pi=d\Pi+\mathrm{ad}_{\omega}^{*}\Pi \tag{33}\]
where \(\mathrm{ad}^{*}\) denotes the coadjoint representation of \(\mathfrak{g}\) on \(\mathfrak{g}^{*}\), defined via:
\[\langle\mathrm{ad}_{X}^{*}\mu,Y\rangle=-\langle\mu,\mathrm{ad}_{X}Y\rangle \quad\forall X,Y\in\mathfrak{g},\mu\in\mathfrak{g}^{*} \tag{34}\]
The differential operator in (33) is similar to expressions which appear in the theory of the Euler-Poincaré equations arising from the variation of group-invariant Lagrangians on Lie group configuration spaces [82; 83].
Suppose that external volume forces and torques act on the Cosserat solid, given by an \(\tilde{W}^{*}\)-valued 3-form \(F\) on the current configuration \(\tilde{\mathcal{B}}\), as well as traction forces and couples on the boundary \(\partial\tilde{\mathcal{B}}\), represented by a \(\tilde{W}^{*}\)-valued 2-form \(T\). Let us impart a virtual deformation \(\xi\) on the current configuration \(\tilde{\mathcal{B}}\), given by a section of \(\tilde{W}\), then the principle of virtual work states that the total virtual work done by all forces and stresses vanishes [77; 78; 79]. The three contributions (neglecting inertia) to the virtual work are:
**a):**: The virtual work of external volume forces and torques: \(\delta W_{ext}=\int_{\tilde{\mathcal{B}}}\langle\xi,F\rangle\).
**b):**: The virtual work of traction forces and torques: \(\delta W_{trac}=\int_{\partial\tilde{\mathcal{B}}}\langle\xi,T\rangle\).
**c):**: The virtual work of stress in the bulk (the minus sign is conventional): \(\delta W_{int}=-\int_{\tilde{\mathcal{B}}}\langle D\xi,\Sigma\rangle\).
The principle of virtual work asserts that:
\[\delta W_{tot}=\delta W_{ext}+\delta W_{trac}+\delta W_{int}=0 \tag{35}\]
for all virtual displacements \(\xi\). Substituting the integral expressions into (35), using Stokes' theorem [28] and the exterior covariant derivative (32) to integrate by parts yields:
\[\int_{\tilde{\mathcal{B}}}\langle\xi,F\rangle+\int_{\partial\tilde{\mathcal{B}}}\langle\xi,T\rangle-\int_{\tilde{\mathcal{B}}}\langle D\xi,\Sigma\rangle=\int_{\tilde{\mathcal{B}}}\langle\xi,F+D^{*}\Sigma\rangle+\int_{\partial\tilde{\mathcal{B}}}\langle\xi,T-\Sigma\rangle=0 \tag{36}\]
As (36) holds for any virtual displacement field \(\xi\), we deduce the following equilibrium equations and boundary conditions [4]:
\[D^{*}\Sigma+F =0\quad\text{on }\tilde{\mathcal{B}} \tag{37}\] \[T =\Sigma\quad\text{on }\partial\tilde{\mathcal{B}} \tag{38}\]
Equation (37) provides yet another interpretation of the operator \(D^{*}\): it is the generalization of the divergence operator appearing in the usual Cauchy momentum equation. Equation (37) is written in the Eulerian picture: to obtain the analogous Lagrangian equilibrium equations, one can pull back (37) to the reference configuration via \(\psi\); the \(W^{*}\)-valued 2-form \(S=\psi^{*}\Sigma\) is then the analogue of the second Piola-Kirchhoff stress tensor in finite elasticity [42]. The first Piola-Kirchhoff stress can be found from \(\Sigma\) by pulling back the form "part" from \(\tilde{\mathcal{B}}\) to the reference configuration \(\mathcal{B}\) while doing nothing to the vector part [27] (it is useful because it allows the integral (36) to be performed in Lagrangian coordinates on \(\mathcal{B}\)).
For an explicit coordinate representation of (37), let us work again in Cartesian coordinates, writing \(F=(f_{i}\mathbf{v}_{i}^{*}+m_{i}\mathbf{r}_{i}^{*})\,vol\) for the volume forces and torques. As in (15), \(\omega\) is locally given by the 1-form \(\mathbf{v}_{i}dx_{i}\), thus using (30) and (33)
\[D^{*}\Sigma=d\left(\left(\sigma_{ij}\mathbf{v}_{j}^{*}+\chi_{ij}\mathbf{r}_{j}^ {*}\right)A_{i}\right)+\mathrm{ad}_{\mathbf{v}_{k}dx_{k}}^{*}\left(\left( \sigma_{ij}\mathbf{v}_{j}^{*}+\chi_{ij}\mathbf{r}_{j}^{*}\right)A_{i}\right) \tag{39}\]
Using the commutation relations of \(\mathfrak{g}\) we get \(\mathrm{ad}_{\mathbf{v}_{i}}^{*}\mathbf{v}_{j}^{*}=\epsilon_{ijk}\mathbf{r}_{k}^ {*}\) and \(\mathrm{ad}_{\mathbf{v}_{i}}^{*}\mathbf{r}_{j}^{*}=0\), therefore:
\[D^{*}\Sigma=\left(\mathbf{v}_{j}^{*}\partial_{k}\sigma_{ij}+\mathbf{r}_{j}^{*} \partial_{k}\chi_{ij}+\mathbf{r}_{l}^{*}\epsilon_{kjl}\sigma_{ij}\right)dx_{k} \wedge A_{i} \tag{40}\]
Separating the coefficients of \(\mathbf{v}_{i}^{*}\) and \(\mathbf{r}_{j}^{*}\) and substituting into (37) we obtain the familiar local balance of linear and angular momentum:
\[\partial_{j}\sigma_{ji}+f_{i} =0 \tag{41}\] \[\partial_{j}\chi_{ji}+\epsilon_{ijk}\sigma_{jk}+m_{i} =0 \tag{42}\]
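To illustrate (41)-(42), the following minimal SymPy sketch verifies equilibrium for an arbitrarily chosen example field (the specific field is an assumption of the example) in which a constant antisymmetric Cauchy stress is balanced by a linearly varying moment stress, with vanishing body force and body couple:

```python
import sympy as sp

x = sp.symbols('x1:4')
c = sp.symbols('c')
lc = lambda i, j, k: sp.LeviCivita(i, j, k)

# Constant, purely antisymmetric Cauchy stress (sigma_12 = -sigma_21 = c) ...
sigma = sp.Matrix([[0, c, 0], [-c, 0, 0], [0, 0, 0]])
# ... balanced by a moment stress whose divergence cancels the axial vector of sigma.
chi = sp.Matrix([[0, 0, 0], [0, 0, 0], [0, 0, -2 * c * x[2]]])

# Body force and body couple required by (41)-(42); both must vanish for equilibrium.
f = [-sum(sp.diff(sigma[j, i], x[j]) for j in range(3)) for i in range(3)]
m = [-sum(sp.diff(chi[j, i], x[j]) for j in range(3))
     - sum(lc(i, j, k) * sigma[j, k] for j in range(3) for k in range(3))
     for i in range(3)]

assert all(sp.simplify(fi) == 0 for fi in f)
assert all(sp.simplify(mi) == 0 for mi in m)
print("antisymmetric Cauchy stress equilibrated by moment stresses")
```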
When \(F=0\), (37) takes the simple form \(D^{*}\Sigma=0\). Since \(\omega\) is flat, \(D^{*}D^{*}=0\) (just like in (19)), so the solution is of the form \(\Sigma=D^{*}Y\) for a stress potential \(Y\), which is a \(\tilde{W}^{*}\)-valued 1-form that is only defined up to a gauge transformation \(Y\to Y+D^{*}\alpha\) for some section \(\alpha\) of \(\tilde{W}^{*}\). While stress potentials for Cosserat solids were derived in e.g. [84], it is important to note that similar potentials have been extensively utilized in many areas of classical physics, such as scalar and vector potentials in electromagnetism, Airy and Maxwell stress functions for certain problems in Cauchy elasticity and the Papkovich-Neuber representation of Stokes flows. By formulating the theory of Cosserat solids in terms of these stress potentials (and with the inclusion of inertia too), one obtains a gauge theory similar to fracton theory in condensed matter physics [15]. Equations (37) and (38) also suggest geometric structure-preserving numerical schemes using discrete exterior calculus [85; 86; 87].
## VI Constitutive relations
In order to be able to close the above system of equations, one needs to specify a constitutive law between stress and strain, characteristic of the material in question [88]. In general, these can be quite complicated nonlinear relations which may depend on time rates of tensors too. In this paper, we restrict our attention to the simple case when the constitutive law gives the second Piola-Kirchhoff stress \(S\) at a point \(X\in\mathcal{B}\) as a function \(S(X,E(X))\) of \(X\) and the strain measure \(E\) at the point \(X\). An important class of materials is the set of hyperelastic materials, for which the stress derives from a potential or stored energy function per unit mass \(U(X,E(X))\) as \(S=\frac{\partial U}{\partial E}\). These bodies are conservative: they do not perform any work along a closed cycle in deformation space [42].
The number of independent parameters in the constitutive law is restricted by the symmetries of the material. In our setting, a material symmetry is an \(H\)-bundle automorphism of \(\mathcal{P}\) that leaves the functional form of the constitutive relation invariant. More specifically, the material symmetry group for a hyperelastic Cosserat solid at a point \(X\in\mathcal{B}\) is a subgroup \(K_{X}\leq H\) of the orthogonal group such that:
\[U(X,E(X))=U(X,L_{R}^{*}E(X))\quad\forall R\in K_{X} \tag{43}\]
for any possible strain \(E(X)\), where \(L_{R}:\mathcal{P}\to G\) is a rigid rotation (and perhaps reflection) of the reference configuration \(\mathcal{P}\) by \(R\) about \(X\), i.e.:
\[L_{R}\left(\begin{bmatrix}1&0\\ x&S\end{bmatrix}\right)=\begin{bmatrix}1&0\\ R(x-X)&RS\end{bmatrix} \tag{44}\]
The motivation behind definition (43) is that the action of a material symmetry is given by replacing \(\psi\) with \(\psi_{R}=\psi\circ L_{R}\) and the connection \(\omega\) on \(\mathcal{P}\) by \(L_{R}^{*}\omega\) (c.f. rotating or reflecting the reference configuration); this way \(E\) changes to \(E_{R}=\psi_{R}^{*}\omega-L_{R}^{*}\omega=L_{R}^{*}E\).
In components, \(E\) is \(\left(Q^{T}dy-dx,Q^{T}dQ\right)=(Ydx,\Gamma dx)\) from (11), where \(\Gamma\) corresponds to the axial vector-valued wryness tensor; it transforms to \(E_{R}=\left(R^{T}YR,\left(\det R\right)R^{T}\Gamma R\right)\), taking into account the axial vector nature of \(\Gamma\)[89]. Hence if the Cosserat solid is centrosymmetric (equivalent to achirality in three dimensions), meaning \(-I\in K_{X}\), then \(U(X,E)=U(X,Y,\Gamma)=U(X,Y,-\Gamma)\), so there cannot be any translation-rotation coupling in the constitutive relation. If \(K_{X}\) is a subgroup of SO(3), then the material is not symmetric under reflections [90] and the microstructural constituents are chiral. Other important cases are when the material is isotropic, so that \(K_{X}=\mathrm{O}(3)\), or hemitropic, when \(K_{X}=\mathrm{SO}(3)\); in these cases the constitutive relation can only involve certain invariants of the strain \(E(X)\). Chirality is one of the most exciting features of the theory of Cosserat solids because it is ubiquitous in nature and soft matter [91] while entirely absent in Cauchy elasticity, and engineered mechanical metamaterials have recently been experimentally demonstrated to exhibit fascinating chiral effects such as acoustical activity [20; 21; 22], confirming predictions of micropolar elasticity.
Linear constitutive relations are often a good approximation to constitutive behaviour, especially when deformations are small. Suppose that in local coordinates and with respect to a basis \(\{\mathbf{w}_{A}\}\) of \(\mathfrak{g}\), we expand \(E\) as a \(\mathfrak{g}\)-valued 1-form \(E=E_{Ai}\mathbf{w}_{A}dx^{i}\). Using the dual basis \(\{\mathbf{w}_{A}^{*}\}\) of \(\mathfrak{g}^{*}\) and a dual basis of 2-forms \(A_{i}\) as in the previous section, we expand \(S\) as the \(\mathfrak{g}^{*}\)-valued 2-form \(S=S^{Bj}\mathbf{w}_{B}^{*}A_{j}\). A linear constitutive law is then given by:
\[S^{Ai}=C^{ABij}E_{Bj} \tag{45}\]
for a general stiffness tensor \(C^{ABij}\). Compared with the two Lamé parameters of linear isotropic Cauchy elasticity, a linear isotropic Cosserat solid already requires six material parameters. If the material is hyperelastic, so that \(S^{Ai}=\partial U/\partial E_{Ai}\) with a quadratic stored energy \(U=\frac{1}{2}C^{ABij}E_{Ai}E_{Bj}\), the stiffness tensor possesses the major symmetry

\[C^{ABij}=C^{BAji} \tag{46}\]
However, there are recent so-called odd elastic models of active solids [92] with rich phenomenology where the major symmetry (46) is broken and the material is able to perform work along a closed cycle in deformation space by using energy produced by an internal mechanism. Odd elasticity has also been extended to and experimentally realized in micropolar beams with piezoelectric activity [12]. It is quite likely that most oriented solids in nature [91] also violate the symmetry (46); it is therefore an interesting future direction of research to study further consequences of such an "odd" constitutive relation [23]. Finally, for a more comprehensive discussion of constitutive relations involving possible material symmetry groups and questions of material uniformity and homogeneity in Cosserat media, we refer to e.g. [89, 34].
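The distinction between hyperelastic and odd constitutive behaviour can be illustrated numerically. The following minimal sketch (restricted to an arbitrarily chosen two-dimensional slice of strain space, with illustrative coefficients) integrates the work done by the stress \(S=CE\) along a closed strain cycle: the symmetric part of \(C\) contributes no net work, while an antisymmetric ("odd") part does.

```python
import numpy as np

# Symmetric ("hyperelastic") and antisymmetric ("odd") parts of a stiffness acting on a
# two-dimensional slice of strain space; all values are arbitrary.
C_sym = np.array([[2.0, 0.5], [0.5, 1.0]])
k_odd = 0.3
C_odd = np.array([[0.0, k_odd], [-k_odd, 0.0]])

t = np.linspace(0.0, 2 * np.pi, 20001)
E = np.stack([np.cos(t), np.sin(t)])          # a closed cycle in strain space
dE = np.gradient(E, t, axis=1)

def cycle_work(C):
    """Work done by the stress S = C E along the closed strain cycle."""
    integrand = np.einsum('it,it->t', C @ E, dE)
    return float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(t)))

print(cycle_work(C_sym))            # approximately 0: a hyperelastic law does no net work
print(cycle_work(C_sym + C_odd))    # approximately -2*pi*k_odd: the odd part does net work
```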
## VII Discussion
In this paper a geometric theory of Cosserat solids has been outlined, modelling the body as a principal fibre bundle \(\mathcal{P}\) over a three-dimensional base with O(3) as the typical fibre. Configurations of the continuum were defined as bundle maps \(\psi\) from \(\mathcal{P}\) to an ambient O(3)-bundle \(G\), while the strain measure was defined as the difference between two Cartan connections: a material one and the pullback of the Maurer-Cartan form on \(G\) along \(\psi\). Compatibility conditions were expressed using exterior calculus, and incompatibilities were identified as the curvature of the material connection on the Cartan geometry \(\mathcal{P}\). Stress was introduced as a dual quantity of strain, and balance laws were obtained using the principle of virtual work. Constitutive laws relating stress and strain were also briefly discussed, along with issues of chirality and hyperelasticity.
Our work can be extended in a multitude of different directions. One can take any Klein geometry \(G/H\) and consider a generalized Cosserat solid as a Cartan geometry modelled on \(G/H\) (i.e. a principal \(H\)-bundle \(\mathcal{P}\) with a Cartan connection \(\eta\)) and bundle maps \(\psi:\mathcal{P}\to G\) as configurations. (The ambient space can in principle be another arbitrary Cartan geometry modelled on \(G/H\)). The notions of strain, stress, compatibility conditions and constitutive relations can be defined exactly analogously as it was done in the main text. For example, micromorphic elasticity [3, 7] can also be cast in this language by taking \(G\) to be the general affine group GA(3) of line-preserving transformations of \(\mathbb{E}^{3}\) and \(H=\mathrm{GL}(3)\) the general linear group. However, the physical relevance of such models is unclear as the introduction of the strain measure was postulated based on invariance under rigid body transformations, not under e.g. general affine transformations, not to mention the potentially extremely large number of material parameters involved in the constitutive relations. Nevertheless, one can certainly consider two-dimensional Cosserat continua on the plane with \(G=\mathrm{E}(2)\) and \(H=\mathrm{O}(2)\) or on the 2-sphere \(S^{2}\) with \(G=\mathrm{O}(3)\) and \(H=\mathrm{O}(2)\). It is also possible to study other complex materials using fibre bundles [30, 13], for example configurations of polar liquid crystals can be viewed as bundle maps between principal bundles over \(\mathbb{E}^{3}\) and typical fibre \(S^{2}\). Nevertheless, these bundles are no longer principal, therefore strain and stress measures become more involved for these media.
An obvious limitation of our theory is that it only considers static deformations. A straightforward way to incorporate time is to make the configuration \(\psi\) time-dependent; the material velocity can then be defined as a vector field \(V\) obtained by taking the partial derivative of \(\psi\) with respect to time. (An alternative and conceptually cleaner approach would be to work in a covariant spacetime setting [52]). Analogously to (12) and (13), the velocity vector field \(V\) can again be interpreted as a section of a vector bundle over \(\mathcal{B}\) with typical fibre \(\mathfrak{g}\). For the dynamical equations of motion one has to add an inertial contribution to the virtual work principle (35), usually deriving from the variation of a kinetic energy, modelled as a bundle metric on the vector bundle of velocities. However, the choice of this kinetic energy metric is far from obvious [94, 93]: it is usually assumed (by analogy with a standard rigid body) that it is given by a mass density times the spatial metric \(\rho g_{ij}\) on the translational part and \(\rho I_{ij}\) on the rotational part, where \(I_{ij}\) is a symmetric moment of inertia density tensor. In general, one cannot rule out a term coupling angular and translational velocities [95] or a more complicated expression for the kinetic energy [96]. An additional issue is that the evolution of inertial quantities (mass and moment of inertia density) is usually governed by conservation laws, and while mass conservation is a natural assumption, moment of inertia conservation as proposed by Eringen [7] is more controversial [97] (in particular, it has been suggested in the literature [98] that one should consider moment of inertia production terms as well). In any case, most experimental systems of interest only undergo small deformations or are overdamped, in which case either a linearised treatment - where the inertial terms can be added in straightforwardly - is sufficient, or inertia terms are absent entirely and one may include viscous damping terms on a phenomenological basis [99].
We conclude by mentioning that the geometrisation of Schaefer's theory of the Cosserat solid immediately suggests methods for numerical integration that preserve geometric structures [85; 86; 87]. We shall address this in forthcoming work where the geometrisation will be implemented in the alternative setting of a field theory [100].
We thank Professor M. E. Cates and Lukas Kikuchi for many helpful discussions and a critical reading of the manuscript. This work was supported by the Engineering and Physical Sciences Research Council (UK) through a studentship for the first author. |
2309.03714 | An efficient joint model for high dimensional longitudinal and survival
data via generic association features | This paper introduces a prognostic method called FLASH that addresses the
problem of joint modelling of longitudinal data and censored durations when a
large number of both longitudinal and time-independent features are available.
In the literature, standard joint models are either of the shared random effect
or joint latent class type. Combining ideas from both worlds and using
appropriate regularisation techniques, we define a new model with the ability
to automatically identify significant prognostic longitudinal features in a
high-dimensional context, which is of increasing importance in many areas such
as personalised medicine or churn prediction. We develop an estimation
methodology based on the EM algorithm and provide an efficient implementation.
The statistical performance of the method is demonstrated both in extensive
Monte Carlo simulation studies and on publicly available real-world datasets.
Our method significantly outperforms the state-of-the-art joint models in
predicting the latent class membership probability in terms of the C-index in a
so-called ``real-time'' prediction setting, with a computational speed that is
orders of magnitude faster than competing methods. In addition, our model
automatically identifies significant features that are relevant from a
practical perspective, making it interpretable. | Van Tuan Nguyen, Adeline Fermanian, Agathe Guilloux, Antoine Barbieri, Sarah Zohar, Anne-Sophie Jannot, Simon Bussy | 2023-09-07T13:43:45Z | http://arxiv.org/abs/2309.03714v2 | # FLASH: a Fast joint model for Longitudinal And Survival data in High dimension
###### Abstract
This paper introduces a prognostic method called FLASH that addresses the problem of joint modelling of longitudinal data and censored durations when a large number of both longitudinal and time-independent features are available. In the literature, standard joint models are either of the shared random effect or joint latent class type. Combining ideas from both worlds and using appropriate regularisation techniques, we define a new model with the ability to automatically identify significant prognostic longitudinal features in a high-dimensional context, which is of increasing importance in many areas such as personalised medicine or churn prediction. We develop an estimation methodology based on the EM algorithm and provide an efficient implementation. The statistical performance of the method is demonstrated both in extensive Monte Carlo simulation studies and on publicly available real-world datasets. Our method significantly outperforms the state-of-the-art joint models in predicting the latent class membership probability in terms of the C-index in a so-called "real-time" prediction setting, with a computational speed that is orders of magnitude faster than competing methods. In addition, our model automatically identifies significant features that are relevant from a practical perspective, making it interpretable.
Survival analysis, longitudinal data, high-dimensional statistics, joint models
## 1 Introduction
In clinical trials, it is increasingly common to record the values of longitudinal features (e.g. biomarkers such as heart rate or haemoglobin level) up to the occurrence of an event of interest for a subject, such as rehospitalisation, relapse or disease progression. Similarly, in a customer satisfaction monitoring context, web companies are expected to know their customers from account opening throughout the business relationship and therefore build elaborate monitoring systems. The amount of data recorded per client is often tremendous and keeps growing over time, yet no existing tool can simultaneously account for a huge number of longitudinal signals in a high-dimensional context to perform real-time churn (or satisfaction) prediction. Longitudinal features are typically modelled with linear mixed models (Verbeke et al., 1997) or with functional data analysis techniques (Ramsay and Silverman, 2005), while on the other hand, time-to-event data are classically treated with Cox proportional hazards models (Cox, 1972). However, longitudinal features may be important predictors of time-to-event and it is therefore of great interest to combine both types of methods.
The "joint modelling" approach, i.e., modelling the longitudinal and survival outcomes by a joint likelihood model rather than separately, has received considerable attention in the last two decades (Tsiatis and Davidian, 2004; Rizopoulos and Ghosh, 2011; Hickey et al., 2016). More specifically, it consists of defining (\(i\)) a time-to-event model, (\(ii\)) a longitudinal marker model, and (\(iii\)) linking both models via a common latent structure. Numerical studies suggest that these approaches are among the most satisfactory for incorporating all longitudinal information into a survival model (Yu et al., 2004) compared to landmark approaches, which
use only information from individuals at risk at the landmark time (see, e.g., Devaux et al., 2022). They have the advantage of making more efficient use of the data, as information on survival is also used to model the longitudinal markers.
There are two main approaches to linking longitudinal and survival models. On the one hand, in shared random effect models (SREMs), characteristics of the longitudinal marker, typically some random effects learned in a linear mixed model, are included as covariates in the survival model (Wulfsohn and Tsiatis, 1997; Andrinopoulou and Rizopoulos, 2016). On the other hand, joint latent class models (JLCMs), inspired by mixture-of-experts modelling (Masoudnia and Ebrahimpour, 2014), assume that the dependence between the time-to-event and the longitudinal marker is fully captured by a latent class structure (Lin et al., 2002b; Proust-Lima et al., 2014), which amounts to assuming that the population is heterogeneous and that there are homogeneous latent classes that share the same marker trajectories and the same prognosis. JLCMs offer a computationally attractive alternative to SREMs, especially in a high-dimensional context. These two types of models are illustrated in Figure 1.
Moreover, joint models have predominantly focused on univariate longitudinal markers (Andrinopoulou et al., 2020). To adapt such models to a multivariate setting, the common approach is to fit multiple univariate joint models separately to each univariate longitudinal marker (Wang et al., 2012), which does not account for interactions between longitudinal markers (Jaffa et al., 2014; Kang and Song, 2022; Lin et al., 2002a). Furthermore, issues arising from the high-dimensional context--e.g. computational power, limits of numerical estimation--have, to our knowledge, never been considered in the analyses, and the number of longitudinal markers considered in numerical studies remains very low (see Hickey et al., 2016, for a full review).
The aim of this article is to propose a new joint model called FLASH (Fast joint model for Longitudinal And Survival data in High dimension), which is inspired by both JLCMs and SREMs and scales to multivariate high-dimensional longitudinal markers. More precisely, our model uses features extracted from the longitudinal marker and is based on an assumption of heterogeneity in the population to account for the dependence between longitudinal marker and time-to-event. However, it differs from standard SREMs in two ways. First, the characteristics of the longitudinal marker used in the survival submodel do not depend on the modelling assumptions in the longitudinal submodel. As a consequence, the model is very efficient to train--the likelihood is closed-form in a Gaussian setting and does not require Monte Carlo approximations, which is often the case with SREMs. Second, it allows the use of high-dimensional representation mappings that characterise longitudinal markers and of regularisation strategies to select marker features that are most influential on the event. Our model can be seen as an extension of Bussy et al. (2019), who propose a mixture-of-experts model for censored survival outcomes in a high-dimensional setting, but do not consider longitudinal markers. Finally, our model allows us to define a "real-time" prediction methodology where, once the parameters of the model have been learned, we can compute a predictive marker that, given only the time-independent and longitudinal features up to a given point in time, outputs a risk for each subject at that point in time. It differs from traditional approaches (Proust-Lima et al., 2014) that require knowledge of the survival labels, which are unknown in the "real-time" prediction setting.
In summary, our method provides a model for predicting survival risk with high-dimensional longitudinal features that is both interpretable and computationally efficient, thus providing a powerful tool for real-time clinical decision making, for example in patient monitoring.
A precise description of the model is given in Section 2. Section 3 derives the likelihood of the model and then focuses on a regularised version of the model to exploit dimension reduction and prevent overfitting,
and the inference is also presented under this framework. Section 4 introduces our metrics, as well as a novel evaluation strategy to assess diagnostic prediction performances while mimicking a real-time use of the model in clinical care, and finally the considered competing methods. In Section 5, we apply our method to datasets from two simulation studies and three publicly available datasets (PBC, AIDS, NASA). Finally, we discuss the obtained results in Section 6.

Figure 1: Graphical representation of SREMs (left) and JLCMs (right). The variable \(X\) represents time-independent features, \(Y\) the longitudinal markers, \(T\) the time-to-event, and \(G\) the latent class membership.
A summary table of all notation used in this article is provided in the appendix. Appendix 7 presents our extension of the EM algorithm in detail, Appendix 8 gives some mathematical details of JLCMs and SREMs, and Appendix 9 provides complete implementation details. Our Python implementation of the model is available at [https://github.com/Califrais/flash](https://github.com/Califrais/flash), together with a notebook tutorial.
## 2 Model
In this section we describe the FLASH model, which consists of three sub-models: a multinomial logistic regression defining the probability of belonging to a latent class, a generalized linear mixed model for each latent class describing the evolution of the longitudinal markers, and finally a Cox class-specific survival model.
In all the following, we consider a set of \(n\) independent and identically distributed (i.i.d.) subjects. For each subject \(i\in\{1,\ldots,n\}\) we are given some longitudinal markers \(Y_{i}\), time-independent features \(X_{i}\in\mathds{R}^{p}\), a right-censored time-to-event \(T_{i}\in\mathds{R}^{+}\), and a censoring variable \(\Delta_{i}\in\{0,1\}\). The submodels for each of these different quantities are described in detail below.
### Latent class membership
We assume that the population consists of \(K\in\mathds{N}^{*}\) latent groups. To each subject \(i\in\{1,\ldots,n\}\), we associate a categorical latent variable \(g_{i}\in\{1,\ldots,K\}\), which encodes its latent class membership. Then, denoting by \(X_{i}\in\mathds{R}^{p}\) the \(p\)-dimensional vector of time-independent features, the latent class membership probability is assumed to take the form, for any \(k\in\{1,\ldots,K\}\),
\[\mathds{P}(g_{i}=k\,|\,X_{i})=\frac{e^{X_{i}^{\top}\xi_{k}}}{\sum_{j=1}^{K}e^ {X_{i}^{\top}\xi_{j}}}, \tag{1}\]
where \(\xi_{k}\in\mathds{R}^{p}\) denotes a vector of coefficients for class \(k\). This submodel is similar to JLCMs and assumes that latent-class membership depends only on time-independent features, with the vector \(\xi_{k}\) quantifying the effect of each time-independent feature in \(X_{i}\) on the probability that subject \(i\) belongs to the \(k\)-th latent class. From now on, all computations are done conditionally on the features \(X_{i}\), but we drop this conditioning from the notation for the sake of simplicity.
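For illustration, the membership probabilities (1) can be evaluated with a numerically stable softmax; the snippet below is a minimal sketch and not the interface of our released implementation.

```python
import numpy as np

def latent_class_probabilities(X, xi):
    """Class-membership probabilities from (1).

    X  : (n, p) array of time-independent features.
    xi : (K, p) array whose k-th row is the coefficient vector xi_k.
    Returns an (n, K) array of probabilities P(g_i = k | X_i).
    """
    logits = X @ xi.T                                # (n, K)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

# Toy example with n = 4 subjects, p = 3 features and K = 2 latent classes.
rng = np.random.default_rng(0)
X, xi = rng.standard_normal((4, 3)), rng.standard_normal((2, 3))
print(latent_class_probabilities(X, xi))
```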
### Class-specific longitudinal model
For each subject \(i\in\{1,\ldots,n\}\), we are given \(L\in\mathds{N}^{*}\) longitudinal markers. We let, for any \(\ell\in\{1,\ldots,L\}\),
\[Y_{i}^{\ell}=\left(y_{i}^{\ell}\big{(}t_{i1}^{\ell}\big{)},\ldots,y_{i}^{\ell}\big{(}t_{in_{i}^{\ell}}^{\ell}\big{)}\right)^{\top}\in\mathds{R}^{n_{i}^{\ell}}\]
be the vector of repeated measures of a theoretical longitudinal marker \(y_{i}^{\ell}\) at different observation times (or follow-up visits) \(0\leq t_{i1}^{\ell}<\cdots<t_{in_{i}^{\ell}}^{\ell}\). Note that the observation times \(t_{ij}^{\ell}\), \(j=1,\ldots,n_{i}^{\ell}\), can differ between subjects as well as between longitudinal markers.

Figure 2: Graphical representation of the FLASH model. The variable \(X\) represents time-independent features, \(Y\) the longitudinal markers, \(T\) the time-to-event and \(G\) the latent class membership of an individual.

We assume a class-specific generalized linear mixed model (GLMM) for each longitudinal marker, which is a classical model for longitudinal data (Fitzmaurice et al., 2012; Hickey et al., 2016). The GLMM is chosen according to the nature of the markers: logistic regression for a binary marker, Poisson regression for counts, or Gaussian linear model for continuous markers. Given a latent class \(g_{i}=k\), for the \(\ell\)-th marker at time \(t\in\{t_{i1}^{\ell},\ldots,t_{in_{i}^{\ell}}^{\ell}\}\), we then have
\[h^{\ell}\big{(}\mathds{E}[y_{i}^{\ell}(t)\,|\,b_{i}^{\ell},g_{i}=k]\big{)}=m_{ ik}^{\ell}(t)=u^{\ell}(t)^{\top}\beta_{k}^{\ell}+v^{\ell}(t)^{\top}b_{i}^{\ell}, \tag{2}\]
where \(h^{\ell}\) denotes a known one-to-one link function suitable for the chosen GLMM (i.e., logistic function for logistic regression, log function for Poisson regression or identity function for Gaussian linear model), \(u^{\ell}(t)\in\mathds{R}^{q_{\ell}}\) is a row vector of time-varying features with corresponding unknown fixed effect parameters \(\beta_{k}^{\ell}\in\mathds{R}^{q_{\ell}}\), and \(v^{\ell}(t)\in\mathds{R}^{r_{\ell}}\) is a row vector of time-varying features with corresponding random effect \(b_{i}^{\ell}\in\mathds{R}^{r_{\ell}}\).
Since in many practical applications subjects have non-linear longitudinal markers, we consider a flexible representation for \(u^{\ell}(t)\) using a vector of time monomials
\[u^{\ell}(t)=(1,t,t^{2},\ldots,t^{\alpha})^{\top}, \tag{3}\]
with \(\alpha\in\mathds{N}^{+}\). The idea here is to let the practitioner choose an appropriate polynomial order \(\alpha\) for the representation--which could also be tuned automatically using a model selection procedure. In all our experiments we use \(\alpha=1\). We also let \(v^{\ell}(t)=(1,t)^{\top}\) so that each trajectory gets an affine random effect.
Classically, the random effects component is assumed to follow a zero-mean multivariate normal distribution, that is
\[b_{i}^{\ell}\sim\mathcal{N}(0,D^{\ell\ell}), \tag{4}\]
with \(D^{\ell\ell}\in\mathds{R}^{r_{\ell}\times r_{\ell}}\) the variance-covariance matrix. To account for the dependence between the different longitudinal markers, we let \(\text{Cov}[b_{i}^{\ell},b_{i}^{\ell^{\prime}}]=D^{\ell\ell^{\prime}}\) for \(\ell\neq\ell^{\prime}\), where \(\text{Cov}[\cdot,\cdot]\) denotes the covariance matrix of two random vectors, and we denote by
\[D=\begin{bmatrix}D^{11}&\cdots&D^{1L}\\ \vdots&\ddots&\vdots\\ {D^{1L}}^{\top}&\cdots&D^{LL}\end{bmatrix}\]
the global variance-covariance matrix which is common to all latent classes. Note that this variance-covariance matrix \(D\) can be easily extended to be class-specific. We assume that all dependencies between longitudinal markers are encapsulated in this matrix \(D\), which is summarized in the following assumption.
**Assumption 1**: For any \(\ell\in\{1,\ldots,L\}\) and any \(i\in\{1,\ldots,n\}\), the longitudinal markers \(Y_{i}^{\ell}\) are pairwise independent conditionally on \(b_{i}^{\ell}\) and \(g_{i}\).
This is a standard modelling assumption in joint models (see, for example, Tsiatis and Davidian, 2004). Then, conditionally on \(b_{i}^{\ell}\) and \(g_{i}\), we assume that the observation \(y_{i}^{\ell}(t_{ij}^{\ell})\), \(j\in\{1,\ldots,n_{i}^{\ell}\}\), follows a distribution from the exponential family. For example, if we choose a Gaussian distribution, we have
\[y_{i}^{\ell}(t_{ij}^{\ell})\,|\,b_{i}^{\ell},\,g_{i}=k\sim\mathcal{N}(m_{ik}^{ \ell}(t_{ij}^{\ell}),\phi_{\ell}), \tag{5}\]
where the expectation \(m_{ik}^{\ell}\) is defined by Equation (2) and the variance \(\phi_{\ell}\in\mathds{R}^{+}\) is a parameter to estimate. From now on, we will restrict ourselves to this Gaussian case to keep the exposition simple but everything remains valid for other distributions.
If we concatenate all longitudinal measurements and random effects of subject \(i\) in, respectively,
\[Y_{i}=\big{(}Y_{i}^{1\top}\cdots\,Y_{i}^{L\top}\big{)}^{\top}\in\mathds{R}^{n _{i}}\quad\text{and}\quad b_{i}=\big{(}b_{i}^{1\top}\cdots\,b_{i}^{L\top}\big{)} ^{\top}\in\mathds{R}^{r},\]
with \(n_{i}=\sum_{\ell=1}^{L}n_{i}^{\ell}\) and \(r=\sum_{\ell=1}^{L}r_{\ell}\), a consequence of Assumption 1 and Equation (5) is that
\[Y_{i}\,|\,b_{i},g_{i}=k\sim\mathcal{N}\big{(}M_{ik},\Sigma_{i}\big{)}, \tag{6}\]
where \(M_{ik}=\big{(}m_{ik}^{1}(t_{i1}^{1}),\ldots,m_{ik}^{1}(t_{in_{i}^{1}}^{1}),\ldots,m_{ik}^{L}(t_{i1}^{L}),\ldots,m_{ik}^{L}(t_{in_{i}^{L}}^{L})\big{)}^{\top}\in\mathds{R}^{n_{i}}\) and \(\Sigma_{i}\) is the diagonal matrix whose diagonal is \(\big{(}\phi_{1}\mathbf{1}_{n_{i}^{1}}^{\top},\ldots,\phi_{L}\mathbf{1}_{n_{i}^{L}}^{\top}\big{)}^{\top}\in\mathds{R}^{n_{i}}\), where \(\mathbf{1}_{m}\) denotes the vector of \(\mathds{R}^{m}\) with all coordinates equal to one.
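To make the Gaussian case concrete, the following minimal sketch simulates one subject's markers from (2)-(6) for a given latent class; the dimensions and parameter values are arbitrary illustrative choices, not those used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

L, r_l = 2, 2                                              # two Gaussian markers, v(t) = (1, t)
beta_k = [np.array([1.0, 0.5]), np.array([-0.5, 0.2])]     # fixed effects of one class, u(t) = (1, t)
phi = np.array([0.05, 0.10])                               # marker-specific noise variances

# Global covariance D of the stacked random effects b_i (dimension r = L * r_l), positive definite.
A = rng.standard_normal((L * r_l, L * r_l))
D = A @ A.T / (L * r_l) + 0.1 * np.eye(L * r_l)

def simulate_subject(times):
    """Draw one subject's markers from the class-specific Gaussian GLMM (2)-(6)."""
    b = rng.multivariate_normal(np.zeros(L * r_l), D)      # correlated random effects across markers
    markers = []
    for l, t in enumerate(times):                          # visit times may differ across markers
        U = np.stack([np.ones_like(t), t], axis=1)         # rows u(t_ij)^T = (1, t_ij)
        V = U                                              # v(t) = (1, t) as well
        mean = U @ beta_k[l] + V @ b[l * r_l:(l + 1) * r_l]
        markers.append(mean + np.sqrt(phi[l]) * rng.standard_normal(len(t)))
    return markers

visit_times = [np.array([0.0, 0.5, 1.0, 2.0]), np.array([0.0, 1.0, 1.5])]
print(simulate_subject(visit_times))
```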
### Class-specific Cox survival model
We place ourselves in a classical survival analysis framework. Let the non-negative random variables \(T_{i}^{*}\) and \(C_{i}\) be the time to the event of interest and the censoring time, respectively. We then denote by \(T_{i}\) the right censored time and by \(\Delta_{i}\) the censoring indicator, defined respectively by
\[T_{i}=T_{i}^{*}\wedge C_{i}\quad\text{and}\quad\Delta_{i}=\mathds{1}_{\{T_{i}^ {*}\leq C_{i}\}},\]
where \(a\wedge b\) denotes the minimum between two real numbers \(a\) and \(b\), and \(\mathds{1}_{\{\cdot\}}\) is the indicator function which takes the value \(1\) if the condition in \(\{\cdot\}\) is satisfied, and \(0\) otherwise. We denote by
\[\mathcal{Y}_{i}^{\ell}(t^{-})=\big{(}y_{i}^{\ell}(t_{i1}^{\ell}),\ldots,y_{i}^ {\ell}(t_{iu}^{\ell})\big{)}_{0\leq t_{iu}^{\ell}<t}\]
the subset of \(Y_{i}^{\ell}\) formed from observations up to time \(t\) and by \(\mathcal{Y}_{i}(t^{-})\) the concatenation of the history of all observed longitudinal markers up to \(t\). Then we consider \(M\in\mathds{N}^{+}\) known functionals \(\Psi_{m}:\mathcal{Y}_{i}^{\ell}(t)\to\Psi_{m}(\mathcal{Y}_{i}^{\ell}(t))\in \mathds{R}\), \(m\in\{1,\ldots,M\}\), which characterise the longitudinal markers. The set of features \(\big{(}\Psi_{m}(\mathcal{Y}_{i}^{\ell}(t))\big{)}_{1\leq m\leq M}\) should be rich enough to capture all dependencies between longitudinal markers and time-to-event, and is discussed in more detail below. To quantify the effect of the longitudinal markers on time-to-event, we then use a Cox relative risk model (Cox, 1972) of the form
\[\lambda(t\,|\,\mathcal{Y}_{i}(t^{-}),g_{i}=k)=\lambda_{0}(t)\exp\Big{(}\sum_{ \ell=1}^{L}\sum_{m=1}^{M}\Psi_{m}\big{(}\mathcal{Y}_{i}^{\ell}(t^{-})\big{)} \gamma_{k,m}^{\ell}\Big{)}, \tag{7}\]
where \(\lambda_{0}\) is an unspecified baseline hazard function that does not depend on \(k\) and \(\gamma_{k,m}^{\ell}\in\mathds{R}\) the joint representation parameters, which are the only class-specific objects in this model. Let us finally introduce some vector notations
\[\gamma_{k} =(\gamma_{k,1}^{1},\ldots,\gamma_{k,M}^{1},\ldots,\gamma_{k,1}^{ L},\ldots,\gamma_{k,M}^{L})^{\top}\in\mathds{R}^{LM}\] \[\psi_{i}(t) =\Big{(}\Psi_{1}\big{(}\mathcal{Y}_{i}^{1}(t^{-})\big{)},\ldots, \Psi_{M}\big{(}\mathcal{Y}_{i}^{1}(t^{-})\big{)},\ldots,\Psi_{1}\big{(} \mathcal{Y}_{i}^{L}(t^{-})\big{)},\ldots,\Psi_{M}\big{(}\mathcal{Y}_{i}^{L}(t ^{-})\big{)}\Big{)}^{\top}\in\mathds{R}^{LM},\]
so that
\[\lambda(t\,|\,\mathcal{Y}_{i}(t),g_{i}=k)=\lambda_{0}(t)\exp\big{(}\psi_{i}(t )^{\top}\gamma_{k}\big{)}.\]
This model can be viewed as a generalisation of SREMs (Lin et al., 2002a; Rizopoulos and Ghosh, 2011), which have hazard functions of the form \(\lambda_{0}(t)\exp\big{(}\sum_{\ell=1}^{L}\phi(b_{i}^{\ell},t)^{\top}\gamma_{ k}^{\ell}\big{)}\), where the association between the longitudinal and survival models is captured by the random effects \(b_{i}^{\ell}\). In our case, we are more general as we allow any function of the longitudinal markers in the hazard function. In practice, our rationale is to use many different representation mappings \(\Psi_{m}\), such as absolute energy over time, statistics on autocorrelation, or Fourier and wavelet basis projections, and then perform feature selection via regularisation, which will be described in more detail in Section 3.2. In this way, we are able to consider a large number of associations simultaneously and let the model learn which ones are predictive for the underlying task. Note that a crucial aspect of this model is that the representation vector, also called association features, \(\psi_{i}(t)\) does not depend on the modelling assumptions in the longitudinal submodel of Subsection 2.2.
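As an illustration, the sketch below computes a small, hypothetical set of such representation mappings from one marker history and evaluates the corresponding class-specific hazard (7); the exact feature set used in our implementation may differ.

```python
import numpy as np

def association_features(times, values):
    """A few hypothetical representation mappings Psi_m applied to one marker history."""
    v = np.asarray(values, dtype=float)
    return {
        "abs_energy": float(np.sum(v ** 2)),                   # absolute energy over time
        "mean": float(np.mean(v)),
        "linear_trend": float(np.polyfit(times, v, 1)[0]) if len(v) > 1 else 0.0,
        "lag1_autocorr": float(np.corrcoef(v[:-1], v[1:])[0, 1]) if len(v) > 2 else 0.0,
    }

def hazard(t, psi, gamma_k, lambda_0):
    """Class-specific hazard (7): lambda_0(t) * exp(psi_i(t)^T gamma_k)."""
    return lambda_0(t) * np.exp(np.dot(psi, gamma_k))

times = np.array([0.0, 0.5, 1.0, 1.5])
values = np.array([1.2, 1.0, 0.7, 0.9])
psi = np.array(list(association_features(times, values).values()))
print(hazard(2.0, psi, gamma_k=np.array([0.1, -0.2, 0.3, 0.05]), lambda_0=lambda s: 0.01))
```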
The FLASH model is summarised in Figure 2 which clearly shows that our model is a combination of SREMs and JLCMs where both random effects and latent classes account for the dependence between longitudinal markers and time-to-event.
## 3 Inference
Now that we have introduced all the components of our model, in this section we derive the form of its likelihood, present the regularisation strategy that deals with the high dimensionality of the data, and finally present our variant of the EM algorithm used to minimise the penalised negative log-likelihood.
### Likelihood
Consider a cohort of \(n\) i.i.d. subjects
\[\mathcal{D}_{n}=\big{(}(X_{1},Y_{1},T_{1},\Delta_{1}),\ldots,(X_{n},Y_{n},T_{n},\Delta_{n})\big{)}. \tag{8}\]
For simplicity, we slightly abuse notation and use the same symbol \(f^{\star}\) for the true (joint) density or probability mass function of the various random variables in our model. Similarly, we denote by \(f_{\theta}\) the
candidates for estimating the densities \(f^{\star}\) that satisfy the model assumptions of Section 2, where we have concatenated in \(\theta\) all \(P\in\mathds{N}^{+}\) unknown parameters:
\[\theta=\big{(}\xi_{1}^{\top},\ldots,\xi_{K}^{\top},\beta_{1}^{\top},\ldots, \beta_{K}^{\top},\phi^{\top},D,\lambda_{0}(\tau_{1}),\ldots,\lambda_{0}(\tau_{J }),\gamma_{1}^{\top},\ldots\gamma_{K}^{\top}\big{)}^{\top}\in\mathds{R}^{P},\]
where \(\beta_{k}=({\beta_{k}^{1}}^{\top}\cdots{\beta_{k}^{L}}^{\top})^{\top}\in\mathds{R}^{q}\) with \(q=\sum_{\ell=1}^{L}q_{\ell}\) for any \(k\in\{1,\ldots,K\}\), \(\phi=(\phi_{1},\ldots,\phi_{L})^{\top}\) and we consider the vectorization of matrix \(D\) in \(\theta\). Note that we classically (see, e.g., Klein, 1992) estimate \(\lambda_{0}\) by a function taking mass at each failure time \(\tau_{j}\in(\tau_{1},\ldots,\tau_{J})\), where \((\tau_{1},\ldots,\tau_{J})\) denote the \(J\in\mathds{N}^{+}\) unique failure times (obtained from \((T_{1},\ldots,T_{n})\) by removing the duplicates and keeping only the uncensored times \(T_{i}\) for which \(\Delta_{i}=1\)). In this way, the estimation of the function \(\lambda_{0}\) amounts to the estimation of the vector \(\big{(}\lambda_{0}(\tau_{1}),\ldots,\lambda_{0}(\tau_{J})\big{)}\).
First, conditioning on the latent classes, we have
\[f^{\star}(T_{i},\Delta_{i},Y_{i})=\sum_{k=1}^{K}f^{\star}(g_{i}=k)f^{\star}(T_{ i},\Delta_{i},Y_{i}|g_{i}=k)=\sum_{k=1}^{K}f^{\star}(g_{i}=k)f^{\star}(T_{i}, \Delta_{i}|Y_{i},g_{i}=k)f^{\star}(Y_{i}|g_{i}=k).\]
This yields the negative log-likelihood
\[\mathcal{L}_{n}(\theta)=-n^{-1}\sum_{i=1}^{n}\log\sum_{k=1}^{K}f_{\theta}(g_{ i}=k)f_{\theta}(T_{i},\Delta_{i}\,|\,Y_{i},g_{i}=k)f_{\theta}(Y_{i}\,|\,g_{i}=k). \tag{9}\]
Assuming that both the censoring mechanism and the stochastic mechanism generating the observation times of the longitudinal markers are non-informative (Rizopoulos and Ghosh, 2011), the joint density of \((T_{i},\Delta_{i})\) can be factorized into a part depending on the distribution of \(T_{i}^{\star}\) and a part depending on that of \(C_{i}\), so that
\[f^{\star}(T_{i},\Delta_{i}|Y_{i},g_{i}=k) \propto f^{\star}(T_{i}|Y_{i},g_{i}=k)^{\Delta_{i}}S^{\star}(T_{i }|Y_{i},g_{i}=k)^{1-\Delta_{i}}\] \[=\lambda^{\star}(T_{i}|Y_{i},g_{i}=k)^{\Delta_{i}}S^{\star}(T_{i }|Y_{i},g_{i}=k), \tag{10}\]
where \(S^{\star}\) is the survival function and \(\lambda^{\star}\) the hazard function associated with the density \(f^{\star}\) of \(T_{i}^{\star}\). Given our model assumptions presented in the previous subsections, all the terms in (9) can be computed in closed form. Indeed, \(f_{\theta}(g_{i}=k)\) is simply equal to (1) and the density function \(f_{\theta}(Y_{i}|g_{i}=k)\) can be derived from (4) and (6) (detailed calculations are given in Appendix 7). Furthermore, following Equation (10), we have
\[f_{\theta}(T_{i},\Delta_{i}|Y_{i},g_{i}=k)\propto\lambda\big{(}T_{i}\,|\, \mathcal{Y}_{i}(T_{i}^{-}),g_{i}=k\big{)}^{\Delta_{i}}S_{k}(T_{i}),\]
and
\[S_{k}(t)=\exp\Big{(}-\int_{0}^{t}\lambda(s\,|\,\mathcal{Y}_{i}(s^{-}),g_{i}=k )\mathrm{d}s\Big{)}\]
is the survival function of subject \(i\) given that it belongs to latent class \(k\). Since the baseline hazard function \(\lambda_{0}\) takes mass only at the failure times \(\tau_{j}\in(\tau_{1},\ldots,\tau_{J})\), the integral in the survival function above reduces to a finite sum over the process evaluated at the \(J\) failure times. We can therefore rewrite the function \(S_{k}\), for any \(t\geq 0\), as
\[S_{k}(t)=\exp\Big{(}-\sum_{j=1}^{J}\lambda(\tau_{j}\,|\,\mathcal{Y}_{i}(\tau_{j}^{-}),g_{i}=k)\mathds{1}_{\{\tau_{j}\leq t \}}\Big{)}=\exp\Big{(}-\sum_{j=1}^{J}\lambda_{0}(\tau_{j})\exp\big{(}\psi_{i}( \tau_{j})^{\top}\gamma_{k}\big{)}\mathds{1}_{\{\tau_{j}\leq t\}}\Big{)}. \tag{11}\]
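To illustrate how the point-mass baseline hazard turns the survival integral into a finite sum, here is a small sketch evaluating \(S_{k}(t)\) as in Eq. (11); all inputs (failure times, hazard masses, association features, \(\gamma_{k}\)) are synthetic placeholders rather than quantities estimated by the actual procedure.

```python
import numpy as np

def survival_class_k(t, tau, lam0, psi_at_tau, gamma_k):
    """S_k(t) = exp(-sum_j lam0(tau_j) * exp(psi_i(tau_j)^T gamma_k) * 1{tau_j <= t}).

    tau        : (J,)   unique uncensored failure times
    lam0       : (J,)   baseline hazard masses at each tau_j
    psi_at_tau : (J, LM) association features of subject i evaluated at each tau_j
    gamma_k    : (LM,)  joint association parameters of latent class k
    """
    mask = tau <= t
    hazard_mass = lam0[mask] * np.exp(psi_at_tau[mask] @ gamma_k)
    return float(np.exp(-hazard_mass.sum()))

# toy inputs
rng = np.random.default_rng(1)
J, LM = 6, 4
tau = np.sort(rng.uniform(0, 10, J))
lam0 = rng.uniform(0.01, 0.1, J)
psi_at_tau = rng.normal(size=(J, LM))
gamma_k = rng.normal(scale=0.3, size=LM)
print([round(survival_class_k(t, tau, lam0, psi_at_tau, gamma_k), 3)
       for t in (2.0, 5.0, 10.0)])       # survival decreases with t
```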
The fact that \(f_{\theta}(T_{i},\Delta_{i}|Y_{i},g_{i}=k)\) is available in closed form is one of the major advantages of our model over standard SREMs. Indeed, computing this density in SREMs usually requires integrating it with respect to the distribution of the random effects \(b_{i}\), which introduces intractable integrals in the log-likelihood function. These integrals are typically estimated using Monte Carlo techniques (Hickey et al., 2018), which are computationally intensive and require additional assumptions on the allowed association functions \(\psi_{i}\). These approaches usually do not scale to high-dimensional settings.
To minimize (9) with respect to \(\theta\), we use the EM algorithm, which is the common choice in the literature (Wulfsohn and Tsiatis, 1997; Lin et al., 2002). This requires deriving what we call the negative "complete" log-likelihood, that is, an estimation of the joint density \(f^{\star}(T_{i},\Delta_{i},Y_{i},b_{i},g_{i})\), where the random
effect \(b_{i}\) and the latent class \(g_{i}\) are not observed. To this end, we make two independence assumptions on our different variables.
**Assumption 2**: For any \(i\in\{1,\ldots,n\}\), \(T_{i}\) and \(\Delta_{i}\) are independent of \(b_{i}\) conditionally on \(Y_{i}\) and \(g_{i}\).
This assumption is reasonable when the representation features \(\psi_{i}(t)\) (together with the latent classes) are rich enough to encapsulate all information on the dependence between longitudinal markers and time-to-event.
**Assumption 3**: For any \(i\in\{1,\ldots,n\}\) and \(\ell\in\{1,\ldots,L\}\), the random effects \(b_{i}^{\ell}\) are independent of the latent class membership \(g_{i}\), and remain independent of it conditionally on \(T_{i}\), \(\Delta_{i}\), and \(Y_{i}\).
This assumption states that the subject- and marker-specific random effects \(b_{i}^{\ell}\) do not depend on the latent class membership. Then, the joint density of \(T_{i},\Delta_{i},Y_{i},b_{i},g_{i}\) can be written as
\[f^{\star}(T_{i},\Delta_{i},Y_{i},b_{i},g_{i}) =f^{\star}(b_{i},g_{i})f^{\star}(Y_{i}|b_{i},g_{i})f^{\star}(T_{i},\Delta_{i}|Y_{i},b_{i},g_{i})\] \[=f^{\star}(b_{i},g_{i})f^{\star}(Y_{i}|b_{i},g_{i})f^{\star}(T_{i},\Delta_{i}|Y_{i},g_{i})\] (by Assumption 2) \[=f^{\star}(b_{i})f^{\star}(g_{i})f^{\star}(Y_{i}|b_{i},g_{i})f^{ \star}(T_{i},\Delta_{i}|Y_{i},g_{i}).\] (by Assumption 3)
The negative complete log-likelihood is then given by
\[\mathcal{L}_{n}^{\mathrm{comp}}(\theta)=-n^{-1}\sum_{i=1}^{n}\Big{(}\log f_{\theta}(b_{i})+\sum_{k=1}^{K}\mathds{1}_{\{g_{i}=k\}}\big{(}\log\mathds{P}_{\theta}(g_{i}=k)+\log f_{\theta}(Y_{i}\,|\,b_{i},g_{i}=k)+\log f_{\theta}(T_{i},\Delta_{i}\,|\,Y_{i},g_{i}=k)\big{)}\Big{)}, \tag{12}\]
where \(f_{\theta}(b_{i})\) is the density of a multivariate Gaussian \(\mathcal{N}(0,D)\) distribution (see Equation (4)) and \(f_{\theta}(Y_{i}\,|\,b_{i},g_{i}=k)\) is typically the density of a \(\mathcal{N}\big{(}M_{ik},\Sigma_{i}\big{)}\) distribution (see Equation (6)).
### Penalized objective
To avoid overfitting and to identify which longitudinal markers are relevant for predicting the time-to-event, we propose to minimize the penalized negative log-likelihood
\[\mathcal{L}_{n}^{\mathrm{pen}}(\theta)=\mathcal{L}_{n}(\theta)+\Omega(\theta) =\mathcal{L}_{n}(\theta)+\sum_{k=1}^{K}\zeta_{1,k}\Omega_{1}(\xi_{k})+\sum_{k= 1}^{K}\zeta_{2,k}\Omega_{2}(\gamma_{k}), \tag{13}\]
where \(\Omega_{1}\) is an elastic net regularization (Zou and Hastie, 2005), \(\Omega_{2}\) is a sparse group lasso regularization (Simon et al., 2013), and \((\zeta_{1,k},\zeta_{2,k})^{\top}\in(\mathds{R}^{+})^{2}\) are regularization hyperparameters that need to be tuned. We recall that we then have
\[\Omega_{1}(\xi_{k})=(1-\eta)\|\xi_{k}\|_{1}+\frac{\eta}{2}\|\xi_{k}\|_{2}^{2} \quad\text{and}\quad\Omega_{2}(\gamma_{k})=(1-\tilde{\eta})\|\gamma_{k}\|_{1}+ \tilde{\eta}\sum_{\ell=1}^{L}\|\gamma_{k}^{\ell}\|_{2},\]
where \((\eta,\tilde{\eta})\in[0,1]^{2}\) are fixed (depending on the level of sparsity expected), \(\gamma_{k}^{\ell}=(\gamma_{k,1}^{\ell},\ldots,\gamma_{k,M}^{\ell})^{\top}\in \mathds{R}^{M}\) is the subset of \(\gamma_{k}\) corresponding to the longitudinal marker \(\ell\), \(\|\cdot\|_{1}\) (resp. \(\|\cdot\|_{2}\)) denotes the usual \(\ell_{1}\) (resp. \(\ell_{2}\)) norm. In all our experiments, we take \(\eta=0.1\) and \(\tilde{\eta}=0.9\).
An advantage of this regularisation strategy is its ability to perform feature selection and to identify the features (both longitudinal markers and time-independent features) that matter most for the prediction objective. On the one hand, the support of \(\xi_{k}\), controlled by the \(\ell_{1}\) term in \(\Omega_{1}\), provides information about the time-independent features involved in the \(k\)-th latent class membership, while the \(\ell_{2}\) regularization handles correlations between time-independent features. On the other hand, for the sparse group lasso penalty, a group \(\ell\) corresponds to a trajectory, i.e. a longitudinal marker. Thus, if \(\gamma_{k}^{\ell}\) is entirely zero (thanks to the group lasso part), the \(\ell\)-th longitudinal process is discarded by the model in terms of risk effect for the \(k\)-th latent class. The sparse part of the penalty then allows a selection of representation features within each trajectory: for the \(\gamma_{k}^{\ell}\) that are not completely zeroed, their support indicates which representation features of the \(\ell\)-th longitudinal marker are involved in the risk of the event for the \(k\)-th latent class.
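For concreteness, a short sketch of how the two penalties \(\Omega_{1}\) and \(\Omega_{2}\) defined above can be evaluated; the block structure of \(\gamma_{k}\) into \(L\) groups of \(M\) coefficients follows its definition, and the default values \(\eta=0.1\) and \(\tilde{\eta}=0.9\) are those quoted above. This is only an illustration of the penalty terms, not the authors' implementation.

```python
import numpy as np

def elastic_net(xi, eta=0.1):
    """Omega_1(xi) = (1 - eta) * ||xi||_1 + (eta / 2) * ||xi||_2^2."""
    return (1 - eta) * np.abs(xi).sum() + 0.5 * eta * np.sum(xi ** 2)

def sparse_group_lasso(gamma, L, M, eta_tilde=0.9):
    """Omega_2(gamma) = (1 - eta_tilde) * ||gamma||_1 + eta_tilde * sum_l ||gamma^l||_2,
    where gamma is the concatenation of L blocks of M coefficients."""
    blocks = gamma.reshape(L, M)                    # one row per longitudinal marker
    l1 = np.abs(gamma).sum()
    group = np.linalg.norm(blocks, axis=1).sum()
    return (1 - eta_tilde) * l1 + eta_tilde * group

rng = np.random.default_rng(2)
xi_k = rng.normal(size=10)
gamma_k = rng.normal(size=3 * 4)                    # L = 3 markers, M = 4 features each
print(elastic_net(xi_k), sparse_group_lasso(gamma_k, L=3, M=4))
```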
### Optimization
Given our regularization strategy, we use an extension of the EM algorithm (McLachlan and Krishnan, 2007) which we now describe. Extensive details on the algorithm are given in Appendix 7.
Our final optimization problem reads
\[\hat{\theta}\in\operatorname*{argmin}_{\theta\in\mathds{R}^{P}}\mathcal{L}_{n}^ {\operatorname{pen}}(\theta). \tag{14}\]
Recall that the EM algorithm consists of iterating the following two steps. Assume that we are at step \(w+1\) of the algorithm, with current iterate denoted by \(\theta^{(w)}\):
* E-step: compute the expected negative complete log-likelihood conditional on observed data and the current estimate of the parameters \(\theta^{(w)}\), that is, \[\mathcal{Q}_{n}(\theta,\theta^{(w)})=\operatorname{E}_{\theta^{(w)}}[ \mathcal{L}_{n}^{\operatorname{comp}}(\theta)\,|\,\mathcal{D}_{n}].\]
* M-step: find \[\theta^{(w+1)}\in\operatorname*{argmin}_{\theta\in\mathds{R}^{P}}\mathcal{Q}_ {n}^{\operatorname{pen}}(\theta,\theta^{(w)}), \tag{15}\] where \(\mathcal{Q}_{n}^{\operatorname{pen}}(\theta,\theta^{(w)})=\mathcal{Q}_{n}( \theta,\theta^{(w)})+\Omega(\theta)\).
Under our assumptions, we can show that computing \(\mathcal{Q}_{n}(\theta,\theta^{(w)})\) reduces to computing the expectations \(\mathds{E}_{\theta^{(w)}}[b_{i}|T_{i},\Delta_{i},Y_{i}]\) and \(\mathds{E}_{\theta^{(w)}}[b_{i}b_{i}^{\top}|T_{i},\Delta_{i},Y_{i}]\), and the probabilities
\[\tilde{\pi}_{ik}^{\theta^{(w)}}=\mathds{P}_{\theta^{(w)}}[g_{i}=k|T_{i}, \Delta_{i},Y_{i}],\quad k\in\{1,\dots,K\}.\]
We give below the formula obtained for \(\tilde{\pi}_{ik}^{\theta^{(w)}}\) and refer to Appendix 7.1 for the expectations.
**Lemma 1**: At step \(w+1\) of the EM algorithm, the probability of subject \(i\) belonging to class \(k\) given current parameters \(\theta^{(w)}\) is
\[\tilde{\pi}_{ik}^{\theta^{(w)}}=\frac{\mathds{P}_{\theta^{(w)}}(g_{i}=k)f_{ \theta^{(w)}}(T_{i},\Delta_{i}|Y_{i},g_{i}=k)f_{\theta^{(w)}}(Y_{i}|g_{i}=k)}{ \sum_{j=1}^{K}\mathds{P}_{\theta^{(w)}}(g_{i}=j)f_{\theta^{(w)}}(T_{i},\Delta _{i}|Y_{i},g_{i}=j)f_{\theta^{(w)}}(Y_{i}|g_{i}=j)}.\]
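In practice, the posterior probabilities of Lemma 1 are best computed on the log scale with a log-sum-exp normalisation. The sketch below assumes the three log-factors (prior, survival and longitudinal log-densities under \(\theta^{(w)}\)) have already been evaluated for every subject and class, and is only meant to illustrate the E-step computation.

```python
import numpy as np
from scipy.special import logsumexp

def posterior_class_probs(log_prior, log_f_surv, log_f_long):
    """pi_tilde[i, k] proportional to P(g_i=k) f(T_i,Delta_i|Y_i,g_i=k) f(Y_i|g_i=k).

    All three inputs are (n, K) arrays of log-densities under the current theta^(w).
    """
    log_num = log_prior + log_f_surv + log_f_long
    return np.exp(log_num - logsumexp(log_num, axis=1, keepdims=True))

# toy check: each row sums to one
rng = np.random.default_rng(3)
n, K = 5, 2
pi = posterior_class_probs(rng.normal(size=(n, K)),
                           rng.normal(size=(n, K)),
                           rng.normal(size=(n, K)))
print(pi.round(3), pi.sum(axis=1))
```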
Concerning the M-step, we divide the problem into several updates in which we minimize (15) with respect to blocks of coordinates of \(\theta\) separately. Note that the order of these updates matters (see Algorithm 1) and that the updates for \(D,(\beta_{k})_{k\in\{1,\dots,K\}},\lambda_{0}\), and \(\phi\) are easily obtained in closed form. We refer the interested reader to Appendix 7.2 for more details and only discuss here the updates for the regularized parameters \((\xi_{k})_{k\in\{1,\dots,K\}}\) and \((\gamma_{k})_{k\in\{1,\dots,K\}}\). For these, we do not have a closed form but have to solve a non-smooth convex optimization problem.
More precisely, it can first be shown that the update for \(\xi_{k}^{(w)}\) reduces to the minimization problem
\[\xi_{k}^{(w+1)}\in\operatorname*{argmin}_{\xi\in\mathds{R}^{p}}\mathcal{F}_{1,k}(\xi)+\zeta_{1,k}\Omega_{1}(\xi), \tag{16}\]
where \(\mathcal{F}_{1,k}\) is defined by
\[\mathcal{F}_{1,k}(\xi)=n^{-1}\sum_{i=1}^{n}\bigg{(} \tilde{\pi}_{ik}^{\theta^{(w)}}\log\Big{(}1+\sum_{\begin{subarray}{ c}j=1\\ j\neq k\end{subarray}}^{K}e^{X_{i}^{\top}(\xi_{j}-\xi)}\Big{)}\] \[+\sum_{\begin{subarray}{c}m=1\\ m\neq k\end{subarray}}^{K}\tilde{\pi}_{im}^{\theta^{(w)}}\log\Big{(}1+e^{X_{i}^ {\top}(\xi-\xi_{m})}+\sum_{\begin{subarray}{c}j=1\\ j\neq k,j\neq m\end{subarray}}^{K}e^{X_{i}^{\top}(\xi_{j}-\xi_{m})}\Big{)}\bigg{)}.\]
We can show that \(\mathcal{F}_{1,k}\) is convex with respect to \(\xi\) and Problem (16) is solved using the L-BFGS-B algorithm (Zhu et al., 1997), which is a quasi-Newton method. Details on this update are given in Appendix 7.3 and 7.5.
Similarly, the update for \(\gamma_{k}^{(w)}\) requires solving the following minimization problem
\[\gamma_{k}^{(w+1)}\in\operatorname*{argmin}_{\gamma\in\mathds{R}^{LM}}\mathcal{F} _{2,k}(\gamma)+\zeta_{2,k}\Omega_{2}(\gamma), \tag{17}\]
where \(\mathcal{F}_{2,k}\) is defined by
\[\mathcal{F}_{2,k}(\gamma)=-n^{-1}\sum_{i=1}^{n}\tilde{\pi}_{ik}^{\theta^{(w)}} \Big{(}\Delta_{i}\psi_{i}(T_{i})^{\top}\gamma-\sum_{j=1}^{J}\lambda_{0}^{(w)} (\tau_{j})\exp\big{(}\psi_{i}(\tau_{j})^{\top}\gamma\big{)}\mathds{1}_{\{\tau_{ j}\leq T_{i}\}}\Big{)}.\]
Again, \(\mathcal{F}_{2,k}\) is convex with respect to \(\gamma\); we solve Problem (17) using proximal gradient descent (Boyd and Vandenberghe, 2004) and refer to Appendix 7.4 for further details.
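The building block of this update is the proximal operator of the sparse group lasso penalty, which can be computed group by group as an elementwise soft-thresholding followed by a group shrinkage (Simon et al., 2013). The sketch below illustrates one proximal gradient iteration on \(\gamma\); the gradient of \(\mathcal{F}_{2,k}\) is left as a placeholder and the step size is arbitrary, so this is an illustration rather than the authors' implementation.

```python
import numpy as np

def soft_threshold(v, thr):
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def prox_sparse_group_lasso(v, step, zeta, L, M, eta_tilde=0.9):
    """Proximal operator of gamma -> zeta * [(1 - eta_tilde) ||gamma||_1
       + eta_tilde * sum_l ||gamma^l||_2], applied block by block."""
    blocks = v.reshape(L, M).copy()
    for l in range(L):
        u = soft_threshold(blocks[l], step * zeta * (1 - eta_tilde))
        norm = np.linalg.norm(u)
        shrink = max(0.0, 1.0 - step * zeta * eta_tilde / norm) if norm > 0 else 0.0
        blocks[l] = shrink * u            # whole group can be zeroed out
    return blocks.reshape(-1)

def proximal_gradient_step(gamma, grad, step, zeta, L, M, eta_tilde=0.9):
    """One iteration: gradient step on the smooth part F_{2,k}, then prox."""
    return prox_sparse_group_lasso(gamma - step * grad, step, zeta, L, M, eta_tilde)

rng = np.random.default_rng(4)
gamma = rng.normal(size=6)                # L = 2 groups of M = 3 features
grad = rng.normal(size=6)                 # placeholder gradient of F_{2,k} at gamma
print(proximal_gradient_step(gamma, grad, step=0.1, zeta=1.0, L=2, M=3))
```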
Algorithm 1 describes the main steps of the resulting extended EM algorithm and convergence is discussed in Appendix 7.6.
```
Data:   Training data \(\mathcal{D}_{n}\); tuning hyper-parameters \((\zeta_{1,k},\zeta_{2,k})_{k\in\{1,\ldots,K\}}\)
Input:  Maximum number of iterations \(W\), tolerance \(\varepsilon\)
Output: Last parameters \(\hat{\theta}\in\mathds{R}^{P}\)
1:  Initialize parameters \(\theta^{(0)}\in\mathds{R}^{P}\)
2:  for \(w=1,\ldots,W\) do
        E-step:
3:      Compute \((\mathds{E}_{\theta^{(w)}}[b_{i}\,|\,T_{i},\Delta_{i},Y_{i}])_{i\in\{1,\ldots,n\}}\)
4:      Compute \((\mathds{E}_{\theta^{(w)}}[b_{i}b_{i}^{\top}\,|\,T_{i},\Delta_{i},Y_{i}])_{i\in\{1,\ldots,n\}}\)
5:      Compute \((\tilde{\pi}_{ik}^{\theta^{(w)}})_{i\in\{1,\ldots,n\},\,k\in\{1,\ldots,K\}}\)
6:      M-step:
7:      Update \(D^{(w+1)}\)
8:      Update \((\xi_{k}^{(w+1)})_{k\in\{1,\ldots,K\}}\) with L-BFGS-B
9:      Update \((\beta_{k}^{(w+1)})_{k\in\{1,\ldots,K\}}\)
10:     Update \((\gamma_{k}^{(w+1)})_{k\in\{1,\ldots,K\}}\) with proximal gradient descent
11:     Update \(\lambda_{0}^{(w+1)}\) and \(\phi^{(w+1)}\)
12:     if \(\big{(}\mathcal{L}_{n}^{\mathrm{pen}}\big{(}\theta^{(w+1)}\big{)}-\mathcal{L}_{n}^{\mathrm{pen}}\big{(}\theta^{(w)}\big{)}\big{)}/\mathcal{L}_{n}^{\mathrm{pen}}(\theta^{(w)})<\varepsilon\) then break
13:     end if
14: end for
15: Return \(\hat{\theta}=\theta^{(w+1)}\)
```
**Algorithm 1** The extended EM algorithm for FLASH inference
## 4 Evaluation Methodology
In this section, we present our evaluation strategy to assess real-time prediction performance of our model and briefly introduce the models used for comparison.
### Real-time prediction and evaluation strategy
Developments in joint models have focused primarily on modeling and estimation, and most studies do not consider goodness-of-fit or predictive performance of latent class membership or time-to-event (Hickey et al., 2016). However, with the prospect of making predictions in real time or on a daily basis, practitioners will naturally need predictive prognostic tools to evaluate and compare survival models. Therefore, we place ourselves in a so-called "real-time" prediction setting. Once the learning phase for the model has been completed on a training set, so that one obtains \(\hat{\theta}\) from (14) using the approach described in Section 3.3, we want to make real-time predictions. More precisely, for each subject \(i\), we want to be able to give access to a predictive marker, typically the probability of belonging to a latent class at any time \(t\), using all the data available up to that time, but without using the supervision labels \((T_{i},\Delta_{i})\), which are a priori not available at any time \(t\).
#### Predictive marker
In our setting, since each latent class represents a different risk level, we choose the probability of latent class membership as the predictive marker. This is similar to what is classically done in JLCMs, where \(\tilde{\pi}_{ik}^{\hat{\theta}}=\mathds{P}_{\hat{\theta}}[g_{i}=k|T_{i},\Delta_{i},Y _{i}]\) is typically used as the predictive rule (see for example Proust-Lima et al., 2014). However, this requires knowledge of the survival labels \((T_{i},\Delta_{i})\), which is not compatible with our real-time prediction goal. Therefore, we define a new predictive marker as follows. For any subject \(i\) and any time \(s_{i}\) elapsed since entry into the study, given longitudinal markers \(\mathcal{Y}_{i}(s_{i}^{-})\) observed up to \(s_{i}\), for any \(k\in\{1,\ldots,K\}\), we let
\[\widehat{\mathcal{R}}_{ik}(s_{i})=\mathds{P}_{\hat{\theta}}\big{[}g_{i}=k\,| \,T_{i}^{\star}>s_{i},\mathcal{Y}_{i}(s_{i}^{-})\big{]}.\]
Indeed, for any subject \(i\) who is event-free when \(s_{i}\) has elapsed, all we know about that subject is that its time to the event of interest \(T_{i}^{\star}\) exceeds \(s_{i}\). This is equivalent to considering this subject as a new subject for which \(T_{i}=s_{i}\), \(\Delta_{i}=0\), and \(Y_{i}=\mathcal{Y}_{i}(s_{i}^{-})\). The expression of \(\widehat{\mathcal{R}}_{ik}(s_{i})\) can then be derived using Lemma 1, since
\[\widehat{\mathcal{R}}_{ik}(s_{i})=\mathds{P}_{\hat{\theta}}[g_{i}=k\,|\,T_{i} =s_{i},\Delta_{i}=0,Y_{i}=\mathcal{Y}_{i}(s_{i}^{-})].\]
We illustrate this real-time prediction setting in Figure 3, where we emphasize that \(s_{i}\) should be thought of as the period of time between the enrollment of individual \(i\) and the "present" time.
#### Performance evaluation
Let us now describe the metric used to evaluate the prediction performance. We want to compare the quality of our predictions to the true labels \((T_{i},\Delta_{i})\) and therefore assume that we have access to them on a test set. We use the classical C-index (Harrell et al., 1996) as our performance metric. More precisely, we assume that we are in the case \(K=2\) and that the class \(g_{i}=2\) represents the high-risk group of subjects (the class \(g_{i}=1\) then representing the low-risk group). We denote by \(\widehat{\mathcal{R}}_{i}=\widehat{\mathcal{R}}_{i2}(s_{i})\) the predictive marker that subject \(i\) belongs to class \(k=2\) when \(s_{i}\) has elapsed. Note that in practice, given a test set in which each individual's trajectory is fully observed until the end of the study, we mimic the real-time prediction setting by randomly sampling the \(s_{i}\). Then, we let
\[\mathcal{C}=\mathds{P}[\widehat{\mathcal{R}}_{i}>\widehat{\mathcal{R}}_{j}\, |\,T_{i}^{\star}<T_{j}^{\star}],\]
with \(i\neq j\) two random independent subjects (note that \(\mathcal{C}\) does not depend on \(i,j\) under the i.i.d. sample hypothesis). In our case, \(T^{\star}\) is subject to right censoring, so one would typically consider the modified \(\overline{\mathcal{C}}\) defined by
\[\overline{\mathcal{C}}=\mathds{P}[\widehat{\mathcal{R}}_{i}>\widehat{\mathcal{ R}}_{j}\,|\,T_{i}<T_{j},\,T_{i}<t^{\max}],\]
where \(t^{\max}\) corresponds to a fixed and predetermined follow-up period (Heagerty and Zheng, 2005). It has been shown by Uno et al. (2011) that a Kaplan-Meier estimator for the censoring distribution leads to a nonparametric and consistent estimator of \(\overline{\mathcal{C}}\). Therefore, in the following, we consider the C-index metric \(\overline{\mathcal{C}}\) to assess performance.
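For reference, a direct quadratic-time implementation of the censored concordance computation is sketched below; it counts concordant pairs among comparable pairs under right censoring and omits the Kaplan-Meier reweighting of Uno et al. (2011), so it should be read as an illustration of the metric rather than the exact estimator used in the experiments.

```python
import numpy as np

def c_index(risk, time, event):
    """Censoring-aware concordance: among pairs (i, j) with T_i < T_j and Delta_i = 1,
    count how often the higher predicted risk corresponds to the earlier event."""
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue                      # i must be an observed event
        for j in range(n):
            if time[i] < time[j]:         # the pair ordering is known
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5     # ties in the predicted risk
    return concordant / comparable if comparable else float("nan")

# toy example: higher predicted risk should correspond to earlier events
risk = [0.9, 0.2, 0.7, 0.1]
time = [1.0, 5.0, 2.0, 6.0]
event = [1, 0, 1, 1]
print(c_index(risk, time, event))         # 1.0 on this perfectly ranked example
```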
In Appendix 9.3, we give the complete procedure used to evaluate the performance of the models considered in our experiments.
Figure 3: Real-time prediction setting. In a practical application, we want to be able to make predictions at any “present” time while subjects have entered the study at different times. Therefore, some of them have a lot of recorded information while others have only a few observations.
### Competing models
In the experiments, we compare FLASH with two joint models, a JLCM and a SREM, which are described below. Their precise definitions and respective predictive markers are given in Appendix 8. These two types of models are very classical and widely used in the community. Note that not many joint models allow for multivariate longitudinal markers, which limits our choice of competing methods.
**Lcmm**
We consider a multivariate version of JLCM, implemented in the R package lcmm
(Proust-Lima et al., 2017) and called mpjlcmm (multivariate joint latent class mixed model). In this model, there are no shared associations between the longitudinal and survival submodels: given the latent class membership, each submodel is assumed to be independent, whereas in FLASH they are linked by features extracted from the longitudinal marker.
**JMbayes**
We consider the SREM model of Rizopoulos (2016) implemented in the R package JMbayes. It fits the joint model of longitudinal and survival outcomes with a Bayesian approach using Markov chain Monte Carlo algorithms. It also provides several options for modeling the association structure between the two outcomes.
## 5 Experimental results
To evaluate the proposed method, we first perform an extensive Monte Carlo simulation study that illustrates our estimation procedure, described in Subsection 5.1. We then turn to a comparison study on both simulated and real-world examples in Subsection 5.2, and conclude with an application to a high-dimensional dataset from NASA in Subsection 5.3.
In all experiments, the features extracted by the tsfresh package (Christ et al., 2018) are used as the association features \(\Psi_{m}\) in FLASH. This package extracts dozens of features from a time series, such as absolute energy, kurtosis, or autocorrelation. Before running Algorithm 1, we use a screening procedure in which we select the top ten association features by fitting a univariate Cox model for each candidate feature against the survival labels and comparing their C-index scores. We tune the regularization hyperparameters with a grid search and a 10-fold cross-validation using the C-index metric. Extensive details on our experiments are given in Appendix 9.
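The screening step can be mimicked as follows. The sketch fits one univariate Cox model per candidate association feature and keeps the best-scoring ones by training C-index; it uses the lifelines package and toy random features purely for illustration, since the paper does not specify this part of the implementation, and names such as `top_k` are our own.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def screen_features(features, durations, events, top_k=10):
    """Rank candidate association features by the training C-index of a
    univariate Cox model fitted on each of them, and keep the top_k."""
    scores = {}
    for name in features.columns:
        df = pd.DataFrame({name: features[name].to_numpy(),
                           "T": np.asarray(durations),
                           "E": np.asarray(events)})
        cph = CoxPHFitter()
        cph.fit(df, duration_col="T", event_col="E")
        scores[name] = cph.concordance_index_
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k], scores

# toy data: 3 candidate features, only one of them actually drives the hazard
rng = np.random.default_rng(5)
n = 200
X = pd.DataFrame(rng.normal(size=(n, 3)), columns=["energy", "kurtosis", "autocorr"])
T = rng.exponential(scale=np.exp(-0.8 * X["energy"].to_numpy()))
E = (rng.uniform(size=n) < 0.7).astype(int)        # roughly 30% censoring
kept, scores = screen_features(X, T, E, top_k=2)
print(kept, {k: round(v, 3) for k, v in scores.items()})
```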
### Well-specified simulation study
To assess our estimation procedure, we first simulate some data following our modeling assumptions. The population is divided into two equal groups: a high-risk group and a low-risk group. The time-independent features \(X_{i}\) are generated according to multivariate normal distributions with a different mean for each group. The corresponding coefficients \(\xi_{1}\) and \(\xi_{2}\) are generated as sparse vectors so that 30% of the features are active
Figure 4: **Simulated cohort of \(n=500\) samples for \(K=2\) groups (high-risk group in red curves and low-risk group in blue curves). Left: trajectories of five longitudinal markers of two individuals randomly selected in each group. Right: Kaplan-Meier survival curves for each group.**
(by active we mean that the coefficient is different from zero). We then sample the longitudinal markers from a generalized linear mixed model of the form of (6) and generate the survival times from a hazard function given by a Cox model of the form of (7). The baseline hazard follows a Gompertz distribution (Gompertz, 1825), and the representation mappings \(\Psi_{m}\) take the form of a linear predictor (Chi and Ibrahim, 2006) and random effects (Hatfield et al., 2011). The corresponding coefficients \(\gamma\) are also chosen to be sparse so that only 30% of the longitudinal markers are active in the survival model. The measurement times of the longitudinal markers for each subject are simulated from a uniform distribution whose maximum is the subject's survival time. Figure 4 shows some examples of simulated longitudinal markers and the Kaplan-Meier survival curves.
To illustrate our regularization strategy, we show in Figure 5 the time-independent parameter \(\xi\) and the joint representation parameters \((\gamma_{k})_{k\in\{1,2\}}\), obtained after running our learning procedure. We see in the sub-figure **(a)** that the support of \(\xi\) is completely recovered thanks to the elastic-net penalty. Furthermore, in the sub-figures **(b)** and **(c)**, the sparse group lasso effect can be seen in the fact that the only coefficients that are not zero correspond to active longitudinal features (represented by a green area), while for inactive longitudinal features, all coefficients of the group are zero. Our learning strategy results in a sparse and interpretable model.
### Comparison study
We compare FLASH with JMbayes and LCMM on two simulated and two real-world datasets. The first simulated dataset is the one from the previous subsection, and we present the other ones below. A summary of the datasets is given in Table 1.
#### 5.2.1 JoineRML simulation
We use the classical R package joineRML (Hickey et al., 2018) to simulate multivariate longitudinal and time-to-event data from a joint model. The multivariate longitudinal features are generated for all possible measurement times using a multivariate Gaussian linear mixed model. Failure times are simulated from proportional hazards time-to-event models. We sample two time-independent features and two longitudinal features for 250 individuals.
| **Dataset** | \(n\) | \(L\) | \(p\) | \(P\) |
| --- | --- | --- | --- | --- |
| FLASH\_simu | 500 | 5 | 10 | 224 |
| joineRML\_simu | 250 | 2 | 2 | 204 |
| PBCseq | 304 | 7 | 3 | 251 |
| Aids | 467 | 1 | 4 | 147 |
| NASA | 200 | 16 | 16 | 566 |
Table 1: Datasets characteristics: the number of samples \(n\), the number of longitudinal features \(L\), number of time-independent features \(p\), and the overall number of parameters in FLASH model \(P\). The names FLASH_simu and joineRML_simu correspond to the datasets simulated from the well-specified simulation study in Section 5.1 and the joineRML package respectively.
Figure 5: Simulations results. (a): the support of both the true coefficient \(\xi\) in green and its estimated version \(\hat{\xi}\) in red. (b) and (c): in red the support of the estimated coefficient \(\hat{\gamma}_{k}\) for \(k\in\{1,2\}\), the dashed pink lines separate the features corresponding to each longitudinal marker \(\ell\), and active longitudinal markers are represented by a green area.
**PBCseq dataset**
This dataset contains the follow-up of 312 patients with primary biliary cirrhosis, a rare autoimmune liver disease. Several longitudinal features are measured over time (for example serum bilirubin, serum cholesterol, albumin), along with information on gender, age, and drug used recorded once at the beginning of the study. Time-to-event is also recorded with a censoring rate of 55%.
**Aids dataset**
This dataset compares the efficacy and safety of two drugs for 467 patients diagnosed with HIV who were either intolerant or resistant to zidovudine therapy. Information on gender, age, drug used, AIDS infection status, and level of intolerance to zidovudine is collected at the start of the study. The longitudinal feature of interest here is the number of CD4 cells (a type of white blood cell), a laboratory measurement used to track the progression of HIV disease. Time-to-event is also recorded, with a censoring rate of 40%.
We compare the performance of FLASH with the two competing models LCMM and JMbayes, using the C-index metric presented in Section 4. We can see in Figure 6 that FLASH outperforms its competitors in terms of both C-index and running times on all datasets. The good performance of FLASH in terms of running times can be explained by the fact that it does not need to perform computationally intensive Monte Carlo techniques like JMbayes, while it is easier to satisfy the convergence criterion of our EM algorithm than that of lcmm.
### Application to NASA dataset
We conclude this section with a challenging high-dimensional dataset. This dataset describes the degradation of 200 aircraft engines, where 17 multivariate longitudinal features are measured for each different aircraft engine until its failure. There are also three operational settings that significantly affect engine performance. Note that we only apply FLASH to this dataset because the other models did not converge after running for one day, highlighting the fact that they do not scale to high-dimensional settings.
We illustrate in Figure 7 the results obtained by FLASH. In the left panel, we can see the effect of regularization where the coefficients learned by the model are sparse and some longitudinal markers are entirely discarded. In particular, three longitudinal markers are excluded for the first group \(k=1\) but not for the group \(k=2\) while the last two markers are never selected. In the right panel, we show the evolution in time of the predictive marker for each subject. We can see that, as time passes, more data is observed and the subjects are better separated into two groups of different risks.
Figure 6: **C-index (top figure) and runtime (bottom figures) comparison on the four datasets considered. The box plots of C-index and runtime are obtained with 50 independent experiments.**
## 6 Discussion
In this paper, a generalized joint model for high-dimensional multivariate longitudinal data and censored durations (FLASH) has been introduced, with an efficient estimation methodology based on an extension of the EM algorithm. This algorithm allows the use of regularization strategies in order to perform feature selection and results in an interpretable model. We evaluated the performance of the estimation procedure in an extensive Monte Carlo simulation study, which showed that our method successfully recovers the most significant features. The proposed methodology was then applied to four different datasets, on which FLASH outperforms competing methods, both in terms of C-index and runtime, in a so-called "real-time" prediction setting. In addition, we show on a NASA dataset that our model scales to high-dimensional settings and automatically identifies the most important longitudinal markers and time-independent features, allowing important interpretations for the application at hand. Potential future work consists in extending the implementation to support more than two latent groups, developing strategies to automatically select the number of latent groups with model selection tools, and generalizing our EM algorithm to support count or discrete longitudinal features.
## Acknowledgements
_Conflict of Interest_: None declared. The authors thank Linus Bleistein and Massil Hihat for fruitful discussions. This work was supported by the French National Cancer Institut (INCa) [grant number 2016-1-PL SHS-03-1].
|
2305.19499 | Deep into The Domain Shift: Transfer Learning through Dependence
Regularization | Classical Domain Adaptation methods acquire transferability by regularizing
the overall distributional discrepancies between features in the source domain
(labeled) and features in the target domain (unlabeled). They often do not
differentiate whether the domain differences come from the marginals or the
dependence structures. In many business and financial applications, the
labeling function usually has different sensitivities to the changes in the
marginals versus changes in the dependence structures. Measuring the overall
distributional differences will not be discriminative enough in acquiring
transferability. Without the needed structural resolution, the learned transfer
is less optimal. This paper proposes a new domain adaptation approach in which
one can measure the differences in the internal dependence structure separately
from those in the marginals. By optimizing the relative weights among them, the
new regularization strategy greatly relaxes the rigidness of the existing
approaches. It allows a learning machine to pay special attention to places
where the differences matter the most. Experiments on three real-world datasets
show that the improvements are quite notable and robust compared to various
benchmark domain adaptation models. | Shumin Ma, Zhiri Yuan, Qi Wu, Yiyan Huang, Xixu Hu, Cheuk Hang Leung, Dongdong Wang, Zhixiang Huang | 2023-05-31T02:16:53Z | http://arxiv.org/abs/2305.19499v1 | # Deep into The Domain Shift: Transfer Learning through Dependence Regularization
###### Abstract
Classical Domain Adaptation methods acquire transferability by regularizing the overall distributional discrepancies between features in the source domain (labeled) and features in the target domain (unlabeled). They often do not differentiate whether the domain differences come from the marginals or the dependence structures. In many business and financial applications, the labeling function usually has different sensitivities to the changes in the marginals versus changes in the dependence structures. Measuring the overall distributional differences will not be discriminative enough in acquiring transferability. Without the needed structural resolution, the learned transfer is less optimal. This paper proposes a new domain adaptation approach in which one can measure the differences in the internal dependence structure separately from those in the marginals. By optimizing the relative weights among them, the new regularization strategy greatly relaxes the rigidness of the existing approaches. It allows a learning machine to pay special attention to places where the differences matter the most. Experiments on three real-world datasets show that the improvements are quite notable and robust compared to various benchmark domain adaptation models.
domain adaptation, regularization, domain divergence, copula.
## I Introduction
Unsupervised domain adaptation emerges when one estimates a prediction function in a given target domain without any labeled samples by exploiting the knowledge available from a source domain where labels are known. The critical step in the transfer is to extract feature representations that are invariant across domains. A large body of work learns the domain-invariant feature representations by minimizing various metrics on the feature distributions between domains: Proxy-\(\mathcal{A}\) distance [1], total variation distance [2, 3, 4], maximum mean discrepancy (MMD, [5, 6, 7]), Wasserstein distance [8, 9], etc. It is worth noting that in most of the literature, the domain invariance is measured on the overall feature distributions between the source and the target domains.
However, the overall feature distribution difference encodes both the marginals' distinctions and the dependence difference into one metric value, making it hard to identify whether the domain difference comes from the marginals or the dependence structure. As a motivating example, we use Figure 1 to clarify this point. In Figure 1, there are three random vectors (\(\mathbf{X}\), \(\mathbf{Y}\) and \(\mathbf{Z}\)) that follow three different Gaussian distributions (\(P^{\mathbf{X}}\), \(P^{\mathbf{Y}}\) and \(P^{\mathbf{Z}}\), with the details shown in the caption) respectively. It can be observed that \(P^{\mathbf{X}}\) and \(P^{\mathbf{Y}}\) only differ in the 2nd marginal distribution, where \(P^{\mathbf{X}}_{2}\) is \(N(1,1)\) and \(P^{\mathbf{Y}}_{2}\) is \(N(0,1)\). Namely, the overall distribution difference is completely caused by the marginal distinction. However, for \(P^{\mathbf{Y}}\) and \(P^{\mathbf{Z}}\), the marginals are the same while the covariance matrices differ. Namely, the overall distribution difference between the latter two distributions is driven by the dependence difference. In general, any two distributions can differ in the marginal distributions and the dependence structure simultaneously. One can check that the KL divergence between \(P^{\mathbf{X}}\) and \(P^{\mathbf{Y}}\) is the same as that between \(P^{\mathbf{Z}}\) and \(P^{\mathbf{Y}}\), which is \(1/2\) in this example. That is to say, a single divergence value cannot distinguish between marginal difference and dependence difference. Furthermore, the divergence of the overall feature distributions sums the marginals' and dependence differences together in a relatively fixed manner. Such rigidness is undesirable when the prediction function has different sensitivities to changes in the marginals versus changes in the dependence structure. Especially in many financial and business applications, the prediction function may depend heavily on the dependence structure of the features. For example, during a market crash, stocks show remarkably synchronous co-movement, implying a stronger dependence than in regular times. Such observations motivate us to relax the binding of the marginals' and dependence differences into one metric value and to pay attention to the difference term that matters the most in the transfer.
We propose a new domain adaptation model to regularize the domain differences in the dependence structure via the copula distance, separately from the marginal divergence. The idea is inspired by Sklar's Theorem [10] which states that any multivariate distribution can be decomposed as the product of marginal distributions and a copula function, and vice versa. It explicitly shows that the copula function, together with the marginal distributions, is sufficient to recover the original multivariate distribution. The efficacy and versatility of our approach are demonstrated with real-world classification and regression problems.
The contributions of this paper are summarized as follows.
(1) We propose a novel deep domain adaptation framework that allows more flexibility to combine the marginal difference and the dependence difference into a regularizer. (2) We explore the structural properties of the copula distance that guarantee the algorithm convergence of our approach. (3) Our proposed model proves its efficiency on two novel datasets (a large-scale retail credit dataset and an intra-day equity price dataset) and one standard UCI dataset.
## II Related work
Due to the ability of deep neural nets to learn rich feature representations, deep domain adaptation models have focused on using these networks to learn invariant representations, i.e., intermediate features whose distributions are the same in the source and the target domains, while at the same time achieving a small prediction error on the source domain. The hope is that the learned representation, together with the hypothesis learned from the source domain, can generalize to the target domain. There are many ways to measure the domain invariance [11]. [12] uses \(L_{2}\) norm to directly align features of different domains. [6] proposes a Deep Adaptation Network (DAN) architecture that embeds the feature representations in a reproducing kernel Hilbert space (RKHS) and reduces the domain discrepancy through multi-kernel MMD. [1] proposes a domain adversarial neural network (DANN) to learn the domain-invariant features with a min-max formulation. [13] characterizes a fundamental tradeoff between learning invariant representations and achieving a small joint error on both domains when the marginal label distributions differ from the source to the target. Furthermore, [14] tackles the knowledge transfer problem under the generalized covariate shift condition by Bregman divergence. [15] proposes a novel neural embedding matching method by enforcing consistent class-wise cross-domain instance distributions in the embedding space. [16] proposes a two-stage progressive training strategy to learn invariant, discriminative, and domain-agnostic subspace. Another line of work proposes to reduce the domain discrepancy through minimizing the optimal transport loss between the source and target distributions [8, 9]. For example, [9] minimizes the empirical Wasserstein distance between the source and target samples. However, these methods focus on the overall distribution discrepancy and often do not differentiate whether the domain differences come from the marginals or the dependence structure.
To explicitly encode the dependence difference in the domain adaptation framework, [17] proposes a correlation alignment (CORAL) model to measure the dis-similarity by the Frobenius norm of the covariance matrices from the two domains. [18] further combines CORAL with deep neural networks and verifies its effectiveness through extensive experiments on the standard benchmark datasets. [19] matches distributions by aligning the RKHS covariance matrices across domains. [20] integrates the MMD and CORAL into a unified framework and exploits the higher-order statistics for domain alignment. Our approach is more general in that it separates the marginals' divergence and the dependence difference and integrates them into one regularizer. It facilitates us to detect the changes in the marginals and the dependence structure simultaneously.
Our work is closely related to copulas. Copulas have been successfully used in many deep learning methods. [21] performs the missing value imputation by developing a semi-parametric algorithm to estimate copula parameters from incomplete mixed data. [22] and [23] propose to summarize and measure the pairwise correlations between variables, which is shown to well capture the various dependence patterns. By assuming the underlying features have a specific structure, [24] adopts non-parametric vine copula for semi-supervised domain adaptation problems. [25] uses copula to generate dependent data in a segmented way. [26] incorporates copula into a doubly nonparametric sparse nonnegative matrix factorization framework. [27] well documents the literature that takes advantage of copulas to model the correlation in multivariate data in smart grid. We extend the idea of copulas and construct a copula-based divergence measure to quantify the dependence difference. Our proposed measure incorporates more statistical information than the commonly-used covariance matrices that only capture the second-order statistics.
## III Copula distance
Suppose that a \(d\)-dimensional random variable \(\mathbf{X}=[X_{1},\ldots,X_{d}]\) is characterized by the cumulative distribution function (CDF) \(P\) and its density function is denoted by \(p\). Sklar's theorem [10] states that there exists a copula \(C\) such that \(P(x_{1},\ldots,x_{d})=C(P_{1}(x_{1}),\ldots,P_{d}(x_{d}))\), where \(P_{i}(\cdot)\) is the marginal CDF. Furthermore, any continuous density function \(p(x_{1},\ldots,x_{d})\) can be written in terms of univariate marginal density functions \(\{p_{i}(x_{i})\}_{i=1}^{d}\) and a unique copula density function \(c:[0,1]^{d}\rightarrow\mathbb{R}\) which characterizes the dependence structure:
\[p(x_{1},\ldots,x_{d})=c(u_{1},\ldots,u_{d})\times\prod_{i=1}^{d}p_{i}(x_{i}), \tag{1}\]
where \(u_{i}:=P_{i}(x_{i}),\,\forall\,1\leq i\leq d\).
One can use the copula function to extract a clean quantification of the dependence strength between any components [\(X_{i}\), \(X_{j}\)] in the random vector \(\mathbf{X}\). For example, the _mutual information_ between \(X_{i}\) and \(X_{j}\), a well-known dependence measure in information theory, is equivalent to the negative entropy of the copula function, namely,
\[\mathcal{H}_{KL}(P_{ij},P_{i}P_{j})=\int_{0}^{1}\int_{0}^{1}c_{ij}(u_{i},u_{j} )\log c_{ij}(u_{i},u_{j})\mathrm{d}u_{i}\mathrm{d}u_{j}.\]
Here, \(\mathcal{H}_{KL}\) denotes the Kullback-Leibler (KL) divergence, \(P_{ij}\) is the joint distribution of \([X_{i},X_{j}]\), and \(P_{i}\), \(P_{j}\) represent the marginal distributions. \(c_{ij}(u_{i},u_{j})\) is short for the density function \(c(1,\ldots,1,u_{i},1,\ldots,1,u_{j},1,\ldots,1)\) where the \(i\)-th and the \(j\)-th arguments are \(u_{i}\) and \(u_{j}\) respectively, and the other arguments are all \(1\). It should be noted that mutual information is a special case of a more general dependence measurement framework [22] that computes the distance \(\mathcal{H}(P_{ij},P_{i}P_{j})\) with any divergence measure \(\mathcal{H}\). We list a few examples below where \(\mathcal{H}\) takes \(\chi^{2}\) distance [28], Hellinger distance [28], and \(\alpha\)-divergence [29] (the detailed derivation can be found in the Appendix):
* \(\mathcal{H}_{\chi^{2}}(P_{ij},P_{i}P_{j})=\int_{0}^{1}\int_{0}^{1}(c_{ij}^{2}( u_{i},u_{j})-1)\mathrm{d}u_{i}\mathrm{d}u_{j}\).
* \(\mathcal{H}_{\mathcal{H}}(P_{ij},P_{i}P_{j})=\int_{0}^{1}\int_{0}^{1}[\sqrt{c_ {ij}(u_{i},u_{j})}-1]^{2}\mathrm{d}u_{i}\mathrm{d}u_{j}\).
* \(\mathcal{H}_{\alpha}(P_{ij},P_{i}P_{j})=\frac{1}{1-\alpha^{2}}\int_{0}^{1}\int_{0}^{1}[1-c_{ij}(u_{i},u_{j})^{-\frac{\alpha+1}{2}}]c_{ij}(u_{i},u_{j})\mathrm{d}u_{i}\mathrm{d}u_{j}\).
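As a quick numerical check of the identity between mutual information and the negative copula entropy stated above, one can sample from a bivariate Gaussian with correlation \(\rho\), average the log of its Gaussian copula density, and compare with the closed form \(-\frac{1}{2}\log(1-\rho^{2})\); the example below, including the choice \(\rho=0.6\), is ours and not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_logpdf(u, v, rho):
    """log c(u, v) for a bivariate Gaussian copula with correlation rho."""
    x, y = norm.ppf(u), norm.ppf(v)
    return (-0.5 * np.log(1 - rho ** 2)
            - (rho ** 2 * x ** 2 - 2 * rho * x * y + rho ** 2 * y ** 2)
              / (2 * (1 - rho ** 2)))

rho = 0.6
rng = np.random.default_rng(6)
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200_000)
u, v = norm.cdf(xy[:, 0]), norm.cdf(xy[:, 1])

mi_monte_carlo = gaussian_copula_logpdf(u, v, rho).mean()   # E[log c(U, V)]
mi_closed_form = -0.5 * np.log(1 - rho ** 2)
print(round(mi_monte_carlo, 4), round(mi_closed_form, 4))   # both close to 0.2231
```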
For most of the divergence measures \(\mathcal{H}\), it can be proved that \(\mathcal{H}(P_{ij},P_{i}P_{j})\) is a function of the copula densities (see Appendix). For a given random vector \(\mathbf{X}\), if \(\mathcal{H}(P_{ij},P_{i}P_{j})\) provides a clean measure of any of its component pairs [\(X_{i}\), \(X_{j}\)], one can run through the indexes, give them different weights \(\beta_{ij}\), and sum over all the weighted \(\mathcal{H}\)s. The resulting quantity would allow one to look into the complete internal structure of \(\mathbf{X}\) along the direction of any user-defined attention angle using any measure \(\mathcal{H}\). This observation motivates us to use such a measure to quantify the difference in internal dependencies between two random vectors \(\mathbf{X}\) and \(\mathbf{Y}\). Below is its precise definition.
**Definition 1**.: _(**Copula distance**) Let \(\mathbf{X}=[X_{1},\ldots,X_{d}]\in\mathbb{R}^{d}\) and \(\mathbf{Y}=[Y_{1},\ldots,Y_{d}]\in\mathbb{R}^{d}\) be two random vectors. Let \(P_{ij}^{\mathbf{X}}\) and \(P_{ij}^{\mathbf{Y}}\) be the cumulative joint distributions of any of their component pairs [\(X_{i}\), \(X_{j}\)] and [\(Y_{i}\), \(Y_{j}\)], respectively. Let \(\mathcal{H}(\cdot,\cdot)\) be a measure of distribution difference. We define the copula distance between \([X_{i},X_{j}]\) and \([Y_{i},Y_{j}]\) (\(\forall 1\leq i<j\leq d\)) as:_
\[CD_{\mathcal{H}}([X_{i},X_{j}],[Y_{i},Y_{j}])=[\mathcal{H}(P_{ij}^{\mathbf{X} },P_{i}^{\mathbf{X}}P_{j}^{\mathbf{X}})-\mathcal{H}(P_{ij}^{\mathbf{Y}},P_{i} ^{\mathbf{Y}}P_{j}^{\mathbf{Y}})].\]
_The copula distance between random vectors \(\mathbf{X}\) and \(\mathbf{Y}\) w.r.t. the positive weights \(\boldsymbol{\beta}=\{\beta_{ij}\}_{1\leq i<j\leq d}\) is defined as:_
\[CD_{\mathcal{H}}(\mathbf{X},\mathbf{Y};\boldsymbol{\beta})=\sum_{1\leq i<j\leq d }\beta_{ij}CD_{\mathcal{H}}([X_{i},X_{j}],[Y_{i},Y_{j}]). \tag{2}\]
As explained before, although the copula distance may seem unrelated to copulas at first glance, it is actually a function of the copula densities. In what follows, we will use a set of \(d\) terms \(\{\mathcal{H}(P_{i}^{\mathbf{X}},P_{i}^{\mathbf{Y}})\}_{i=1}^{d}\) to measure the marginals' distinctions and the copula distance \(CD_{\mathcal{H}}(\mathbf{X},\mathbf{Y};\boldsymbol{\beta})\) to measure the dependence difference of any two random vectors \(\mathbf{X}\in\mathbb{R}^{d}\) and \(\mathbf{Y}\in\mathbb{R}^{d}\). To explain the concepts in more detail, we use the distributions in Figure 1. We record the marginals' distinctions and the dependence difference between each pair of the three distributions under KL divergence (KL), Jensen-Shannon divergence (JS) and MMD in Table I.
When the copula density takes a specific functional form, it can be proved that the copula distance has two nice properties: boundedness and monotonicity. These two properties guarantee the convergence of our proposed algorithms in the later sections. There are multiple parametric copula density functions (Gaussian copula, \(t\) copula, Gumbel copula, etc.) that can be effectively incorporated into our model framework. In this work, we take the copula density in the form of the Gaussian copula. The Gaussian copula has been widely used in the financial literature because it conveniently captures the dependence embedded in a random vector [30, 31, 32]. A random vector \(\mathbf{X}\in\mathbb{R}^{d}\) is said to have a Gaussian copula with parameter \(\Sigma\in\mathbb{R}^{d\times d}\) if the copula density function is \(c(u_{1},\ldots,u_{d})=|\Sigma|^{-\frac{1}{2}}\exp(-\frac{1}{2}\mathbf{x}^{T}( \Sigma^{-1}-\mathbf{I})\mathbf{x})\), where \(\mathbf{x}:=[\Phi^{-1}(u_{1}),\ldots,\Phi^{-1}(u_{d})]^{T}\) with \(\Phi\) being the cumulative distribution function of the standard normal distribution. We state the boundedness and monotonicity under the Gaussian copula here and defer the proof to the Appendix.
**Proposition 1**.: _(**Boundedness**) The copula distance defined in Eq. (2) is bounded when the divergence metric \(\mathcal{H}\) is taken to be MMD distance, Wasserstein-2 distance or some of the \(\phi\)-divergence (including Jensen-Shannon distance, Hellinger distance and total variation distance)._
**Proposition 2**.: _(**Monotonicity**) Let \(\Sigma^{\mathbf{X}}\) and \(\Sigma^{\mathbf{Y}}\) be the Gaussian copula parameters for the random vectors \(\mathbf{X}\in\mathbb{R}^{d}\) and \(\mathbf{Y}\in\mathbb{R}^{d}\), respectively. Given all of the other entries in \(\Sigma^{\mathbf{X}}\) and \(\Sigma^{\mathbf{Y}}\) fixed, the copula distance \(CD_{\mathcal{H}}([X_{i},X_{j}],[Y_{i},Y_{j}])\) is monotonically increasing with \(|(\Sigma^{\mathbf{X}}_{ij})^{2}-(\Sigma^{\mathbf{Y}}_{ij})^{2}|\), \(\forall\,1\leq i<j\leq d\). The monotonicity is satisfied by general probability distribution divergence measures, including MMD distance, Wasserstein-2 distance, and most of the commonly-used \(\phi\)-divergence (including KL divergence, \(\chi^{2}\) distance, Hellinger distance, etc.)._
## IV Unsupervised domain adaptation
**Notations.** In unsupervised domain adaptation, we are given a _source_ domain \(\mathcal{D}_{s}=\{(\mathbf{x}_{n}^{s},y_{n}^{s})\}_{n=1}^{N_{s}}\) with \(N_{s}\) labeled
examples, and a _target_ domain \(\mathcal{D}_{t}=\{\mathbf{x}_{n}^{t}\}_{n=1}^{N_{t}}\) with \(N_{t}\) unlabeled examples. It is assumed that the two domains are characterized by different probability distributions, while they share the same feature space. In a classification task, the goal is to learn a transferable classifier to minimize the classification error on the target domain using all the given data.
Deep domain adaptation methods begin with a feature extractor that can be implemented by a neural network. The feature extractor is supposed to learn the domain-invariant feature representations from both domains. Specifically, the feature extractor learns a function \(F(\mathbf{x};\theta_{f}):\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) that maps an instance to an \(m\)-dimensional representation with the network parameters \(\theta_{f}\) (as illustrated in Figure 2). For simplicity, we denote the feature representation of a source instance \(\mathbf{x}_{n}^{s}\) as \(\mathbf{F}_{n}^{s}:=F(\mathbf{x}_{n}^{s};\theta_{f})\), and that of a target instance \(\mathbf{x}_{n}^{t}\) as \(\mathbf{F}_{n}^{t}:=F(\mathbf{x}_{n}^{t};\theta_{f})\). We define the source feature set \(\mathcal{F}^{s}:=\{\mathbf{F}_{n}^{s}\}_{n=1}^{N_{s}}\) and the target feature set \(\mathcal{F}^{t}:=\{\mathbf{F}_{n}^{t}\}_{n=1}^{N_{t}}\).
When domain adaptation is applied to a classification problem, the feature extractor is followed by a discriminator trained with samples from the source domain \(\mathcal{D}_{s}\). Given the feature representations \(\mathcal{F}^{s}\) computed by the feature extractor on the source domain, together with the labels \(\{y_{n}^{s}\}_{n=1}^{N_{s}}\) (\(y_{n}^{s}\in\{1,2,\ldots,l\}\)), we can train a discriminator \(D(\ \cdot\ ;\theta_{d}):\mathbb{R}^{m}\rightarrow\mathbb{R}^{l}\) that is characterized by the parameters \(\theta_{d}\). The discriminator loss function is defined as the cross-entropy between the predicted probability distribution and the one-hot encoding of the class labels:
\[\mathcal{L}_{cls}(\theta_{f},\theta_{d}):=-\frac{1}{N_{s}}\sum_{n=1}^{N_{s}} \sum_{i=1}^{l}\mathbf{1}(y_{n}^{s}=i)\cdot\log D\big{(}F(\mathbf{x}_{n}^{s}; \theta_{f});\theta_{d}\big{)}_{i}, \tag{3}\]
where \(\mathbf{1}(\cdot)\) is the indicator function and \(D(\ \cdot\ ;\theta_{d})_{i}\) represents the \(i\)-th element in the predicted distribution \(D(\ \cdot\ ;\theta_{d})\).
Besides the discriminator that is trained to learn discriminative features, the feature extractor is also followed by a discrepancy term that measures the difference between the source features \(\mathcal{F}^{s}\) and the target features \(\mathcal{F}^{t}\) in order to learn domain-invariant feature representations. To be specific, for a given discrepancy measure \(\mathcal{H}\), the empirical discrepancy between the source feature distribution and the target feature distribution is given by \(\mathcal{H}(\mathcal{F}^{s},\mathcal{F}^{t})\). In the literature, there have been multiple choices of the discrepancy measure \(\mathcal{H}\), such as MMD distance [6], Wasserstein distance [9], JS distance [33], Proxy-\(\mathcal{A}\) distance [34], etc. We refer to the latter measures as _adversarial distances_ because they are implemented with a domain classifier to quantify the invariance between \(\mathcal{F}^{s}\) and \(\mathcal{F}^{t}\). We illustrate the commonly-chosen discrepancy measures in domain adaptation models in Figure 3(a)-(b).
In summary, the detailed objective function to train a deep domain adaptation network is:
\[\min_{\theta_{f},\theta_{d}}\big{\{}\mathcal{L}_{cls}(\theta_{f},\theta_{d})+ \lambda\mathcal{H}(\mathcal{F}^{s},\mathcal{F}^{t})\big{\}}, \tag{4}\]
Fig. 2: The network flow in deep domain adaptation models. All samples, whether from the source domain \(\{\mathbf{x}_{n}^{s}\}_{n=1}^{N_{s}}\) (blue) or the target domain \(\{\mathbf{x}_{n}^{t}\}_{n=1}^{N_{t}}\) (green), are fed into a feature extractor \(F(\cdot;\theta_{f})\) to extract features that are both discriminative and domain-invariant. The discriminative features are achieved by training a discriminator \(D(\cdot;\theta_{d})\) to minimize the loss function \(\mathcal{L}_{cls}(\theta_{f},\theta_{d})\), while the domain invariance is measured by \(\mathcal{H}(\mathcal{F}^{s},\mathcal{F}^{t})\).
where the coefficient \(\lambda\) controls the tradeoff between discriminative and transferable feature learning.
### _Copula-based domain adaptation networks_
In this work, we propose a copula-based domain adaptation network (CDAN) to allow more flexibility and emphasis on the dependence structure. To be specific, on the source domain, we split the distribution of the source features \(\mathcal{F}^{s}\subseteq\mathbb{R}^{N_{s}\times m}\) into \(m\) marginal distributions and the copula between the marginals. We do the same on the target domain. This allows us to evaluate the feature differences between the source domain and the target domain as the sum of two terms: (1) the sum of marginal feature differences \(\{\mathcal{H}(\mathcal{F}^{s}_{i},\mathcal{F}^{t}_{i})\}_{i=1}^{m}\), and (2) the copula distance between the source features and the target features \(CD_{\mathcal{H}}(\mathcal{F}^{s},\mathcal{F}^{t};\mathbf{\beta})\). Here, for \(*\in\{s,t\}\), \(\mathcal{F}^{*}_{i}:=\{\mathbf{F}^{*}_{n,i}\}_{n=1}^{N_{*}}\subseteq\mathbb{R} ^{N_{*}\times 1}\) with \(\mathbf{F}^{*}_{n,i}\) being the \(i\)-th dimensional value of \(\mathbf{F}^{*}_{n}\). In the following, we call (1) the _marginal divergence_ (MD) and (2) the _copula distance_ (CD). By adding the marginal divergence (with hyperparameters \(\{\alpha_{i}\}_{i=1}^{m}\)) and the copula distance (with hyperparameters \(\mathbf{\beta}\)) as regularization terms, we arrive at the objective function to train a CDAN, namely:
\[\min_{\theta_{f},\theta_{d}}\big{\{}\mathcal{L}_{cls}(\theta_{f},\theta_{d})+ \sum_{i=1}^{m}\alpha_{i}\mathcal{H}_{1}(\mathcal{F}^{s}_{i},\mathcal{F}^{t}_{ i})+CD_{\mathcal{H}_{2}}(\mathcal{F}^{s},\mathcal{F}^{t};\mathbf{\beta})\big{\}}. \tag{5}\]
The detailed divergence framework is illustrated in Figure 3(c). Notice that in Eq. (5), we distinguish the divergence metric \(\mathcal{H}_{1}\) used to compute the marginal divergence from the metric \(\mathcal{H}_{2}\) used for the copula distance. It is worth noting that our model can well accommodate the commonly-used divergence measures in the literature.
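As one possible instantiation of the marginal-divergence term in Eq. (5), the sketch below takes \(\mathcal{H}_{1}\) to be an RBF-kernel MMD computed dimension by dimension and summed with weights \(\alpha_{i}\); the kernel bandwidth, the weights, and the synthetic features are illustrative assumptions rather than choices made in the paper.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of MMD^2 between 1-D samples x and y with an RBF kernel."""
    x, y = x.reshape(-1, 1), y.reshape(-1, 1)
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def marginal_divergence(feat_s, feat_t, alphas, sigma=1.0):
    """sum_i alpha_i * H_1(F_i^s, F_i^t) with H_1 taken as the RBF-kernel MMD."""
    return sum(a * rbf_mmd2(feat_s[:, i], feat_t[:, i], sigma)
               for i, a in enumerate(alphas))

torch.manual_seed(0)
Fs = torch.randn(128, 4)                                   # source features, m = 4
Ft = torch.randn(128, 4) + torch.tensor([0.0, 1.0, 0.0, 0.0])  # shift in dimension 2
print(marginal_divergence(Fs, Ft, alphas=[1.0, 1.0, 1.0, 1.0]).item())
```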
Our proposed model has three advantages. First, rather than encoding the divergence of each marginal feature and the dependence difference in a single value as in Eq. (4), we split the divergence of the joint feature distributions. We can thus identify to what extent the marginal feature differences and the copula distance contribute to the target risk, respectively. Moreover, using hyperparameters \(\{\alpha_{i}\}_{i=1}^{m}\) and \(\mathbf{\beta}\) to separately control the marginal divergence and the copula distance allows us to dynamically adjust the hyperparameters in a data-driven manner. It implicitly shows that there can be a tradeoff between the marginal divergence and the copula distance. Finally, different from using one distance metric to measure both the marginal feature differences and the dependence difference, our proposed model provides a more convenient and elaborate way to detect changes in the marginal distributions and in the dependence structure.
It is straightforward to generalize our CDAN model to the regression tasks. Same as in the classification setting, a domain adaptation model for a regression task combines the feature extractor \(F(\cdot;\theta_{f})\) together with a regressor network \(D(\cdot;\theta_{d})\) to form the basis of a supervised learning network. We denote the predicted value for a sample \(\mathbf{x}^{s}_{n}\) as \(\widehat{y}^{s}_{n}:=D(F(\mathbf{x}^{s}_{n};\theta_{f});\theta_{d})\). The regressor loss function is defined as the mean squared error between the predicted values \(\{\widehat{y}^{s}_{n}\}_{n=1}^{N_{s}}\) and the ground-truth values \(\{y^{s}_{n}\}_{n=1}^{N_{s}}\) on the source domain: \(\mathcal{L}_{rgr}(\theta_{f},\theta_{d}):=\sum_{n=1}^{N_{s}}(\widehat{y}^{s}_ {n}-y^{s}_{n})^{2}/N_{s}=\sum_{n=1}^{N_{s}}\left(D(F(\mathbf{x}^{s}_{n};\theta _{f});\theta_{d})-y^{s}_{n}\right)^{2}/N_{s}\). Adding the marginal divergence and the copula distance as regularizer, we obtain the objective function to train a CDAN for a regression task:
\[\min_{\theta_{f},\theta_{d}}\big{\{}\mathcal{L}_{rgr}(\theta_{f},\theta_{d})+ \sum_{i=1}^{m}\alpha_{i}\mathcal{H}_{1}(\mathcal{F}^{s}_{i},\mathcal{F}^{t}_{ i})+CD_{\mathcal{H}_{2}}(\mathcal{F}^{s},\mathcal{F}^{t};\mathbf{\beta})\big{\}}.\]
### _Algorithm_
The complete process of CDAN algorithm is presented in Algorithm 1. In particular, we provide a detailed description on how to learn the Gaussian copula parameter \(\Sigma\) and how to update the model parameters \(\theta_{f}\) and \(\theta_{d}\).
Fig. 3: There can be various choices of \(\mathcal{H}\) to measure the feature difference in Eq. (4), such as (a) MMD distance and (b) Wasserstein distance, JS distance, Proxy-\(\mathcal{A}\) distance, etc. Our proposed model (c) looks into the detailed feature structures and regularizes the source error by dynamically adjusting the marginal divergence and the copula distance.
**Learning the Gaussian copula parameter \(\Sigma\).** The work [35] proposes a moment-matching approach to learn the Gaussian copula parameter \(\Sigma\) through Kendall's tau. Denote \(\rho_{\tau}(X_{i},X_{j})\) as Kendall's tau between random variables \(X_{i}\) and \(X_{j}\), that is, \(\rho_{\tau}(X_{i},X_{j}):=\mathbb{E}[\text{sign}\big((X_{i}-\widetilde{X}_{i})(X_{j}-\widetilde{X}_{j})\big)]\), where \([\widetilde{X}_{i},\widetilde{X}_{j}]\) is an independent copy of \([X_{i},X_{j}]\). Then it can be proved that \(\Sigma_{ij}=\sin\big(\frac{\pi}{2}\rho_{\tau}(X_{i},X_{j})\big)\). However, computing Kendall's tau by definition incurs a complexity of \(O(N^{2})\) when the sample size is \(N\), making it expensive to train deep neural networks. Moreover, gradient vanishing occurs when training the neural network because of the _sign_ function.
```
Input: Source data \(\mathcal{D}_{s}=\{(\mathbf{x}_{n}^{s},y_{n}^{s})\}_{n=1}^{N_{s}}\), target data \(\mathcal{D}_{t}=\{\mathbf{x}_{n}^{t}\}_{n=1}^{N_{t}}\), maximum training epoch \(S\), divergence metrics \(\mathcal{H}_{1}\), \(\mathcal{H}_{2}\), parameters \(\{\alpha_{i}\}_{i=1}^{m}\), \(\boldsymbol{\beta}\).
Output: Optimal model parameters \(\theta_{f},\theta_{d}\).
for epoch = 1 to \(S\) do
    \(\mathcal{F}^{s}\leftarrow\{F(\mathbf{x}_{n}^{s};\theta_{f})\}_{n=1}^{N_{s}}\);
    \(\mathcal{F}^{t}\leftarrow\{F(\mathbf{x}_{n}^{t};\theta_{f})\}_{n=1}^{N_{t}}\);
    \(MD(\theta_{f})\leftarrow\sum_{i=1}^{m}\alpha_{i}\mathcal{H}_{1}(\mathcal{F}_{i}^{s},\mathcal{F}_{i}^{t})\);
    \(CD(\theta_{f})\leftarrow CD_{\mathcal{H}_{2}}(\mathcal{F}^{s},\mathcal{F}^{t};\boldsymbol{\beta})\) as calculated in Eq. (2);
    Get the source error \(\mathcal{L}_{cls}(\theta_{f},\theta_{d})\) with Eq. (3);
    \(Loss\leftarrow\mathcal{L}_{cls}(\theta_{f},\theta_{d})+MD(\theta_{f})+CD(\theta_{f})\);
    \(Loss.backward()\);
    Update \(\theta_{f}\) and \(\theta_{d}\) by one optimizer step;
end
```
**Algorithm 1** CDAN algorithm
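As a concrete illustration of Algorithm 1, the following is a minimal PyTorch-style sketch of the training loop. The network sizes, the single-bandwidth Gaussian-kernel MMD used for \(\mathcal{H}_{1}\), and the use of sample correlations as a stand-in for the Gaussian copula parameters in the KL-based copula distance are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def rbf_mmd2(x, y, sigma=1.0):
    # Squared MMD (V-statistic) between two 1-D samples with a Gaussian kernel.
    k = lambda a, b: torch.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def corr_matrix(f):
    # Sample correlation matrix of the columns of f, used here as a
    # differentiable stand-in for the Gaussian copula parameters.
    z = (f - f.mean(0)) / (f.std(0) + 1e-6)
    return (z.T @ z) / (f.shape[0] - 1)

def copula_distance_kl(fs, ft, beta=1.0):
    # KL-based copula distance: sum over feature pairs (i < j) of
    # beta * |log(1 - s_ij^2) - log(1 - t_ij^2)| / 2.
    cs, ct = corr_matrix(fs), corr_matrix(ft)
    m = fs.shape[1]
    iu = torch.triu_indices(m, m, offset=1)
    ds = torch.log1p(-cs[iu[0], iu[1]] ** 2 + 1e-6)
    dt = torch.log1p(-ct[iu[0], iu[1]] ** 2 + 1e-6)
    return beta * (ds - dt).abs().sum() / 2

# Toy data: labeled source samples and unlabeled (shifted) target samples.
xs, ys = torch.randn(256, 2), torch.randint(0, 2, (256,))
xt = torch.randn(256, 2) + 0.5

feature = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 4))
classifier = nn.Linear(4, 2)
opt = torch.optim.Adam(list(feature.parameters()) + list(classifier.parameters()), lr=0.01)
alpha = 0.1

for epoch in range(5):
    fs, ft = feature(xs), feature(xt)                  # source / target features
    cls_loss = nn.functional.cross_entropy(classifier(fs), ys)
    md = sum(rbf_mmd2(fs[:, i], ft[:, i]) for i in range(fs.shape[1]))
    cd = copula_distance_kl(fs, ft, beta=0.1)
    loss = cls_loss + alpha * md + cd
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```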
To address the gradient vanishing issue, we propose to replace the \(sign\) function with the \(\tanh\) function with parameter \(a\), namely, \(\rho_{\tau}(X_{i},X_{j})\approx\rho(X_{i},X_{j};a):=\mathbb{E}[\tanh\big(a(X_{i}-\widetilde{X}_{i})(X_{j}-\widetilde{X}_{j})\big)]\). We prove that \(\lim_{a\rightarrow\infty}\rho(X_{i},X_{j};a)=\rho_{\tau}(X_{i},X_{j})\) (the proof is given in the Appendix).
**Proposition 3**.: _For two random variables \(X_{1}\) and \(X_{2}\), it holds that \(\lim\limits_{a\rightarrow\infty}\rho(X_{1},X_{2};a)=\rho_{\tau}(X_{1},X_{2})\)._
To further reduce the computational complexity, we adopt an estimate of Kendall's tau that can be computed with linear complexity. More specifically, given \(\{[x_{n,1},x_{n,2}]\}_{n=1}^{N}\subseteq\mathbb{R}^{N\times 2}\) as \(N\) realizations of \([X_{i},X_{j}]\), an unbiased estimator for \(\rho(X_{i},X_{j};a)\) is \(\widehat{\rho}(X_{i},X_{j};a)=\frac{2}{N}\sum_{n=1}^{N/2}\tanh\big(a(x_{2n-1,1}-x_{2n,1})(x_{2n-1,2}-x_{2n,2})\big)\). This reduces the computational cost of (approximate) Kendall's tau from \(O(N^{2})\) to \(O(N)\).
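A short sketch of this estimator and of the resulting copula parameter (a PyTorch implementation is assumed; the pairing of consecutive samples and the sharpness value \(a\) are illustrative choices):

```python
import torch

def soft_kendall_tau(x, y, a=50.0):
    # O(N), differentiable approximation of Kendall's tau: sign(.) is replaced
    # by tanh(a * .) and the average is taken over N/2 disjoint sample pairs.
    n = (x.shape[0] // 2) * 2
    dx = x[0:n:2] - x[1:n:2]
    dy = y[0:n:2] - y[1:n:2]
    return torch.tanh(a * dx * dy).mean()

def gaussian_copula_param(x, y, a=50.0):
    # Moment-matching estimate Sigma_ij = sin(pi/2 * tau).
    return torch.sin(torch.pi / 2 * soft_kendall_tau(x, y, a))

torch.manual_seed(0)
x = torch.randn(1000)
y = x + 0.1 * torch.randn(1000)
print(gaussian_copula_param(x, y))   # strongly dependent samples -> close to 1
```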
**Learning the model parameters \(\theta_{f}\) and \(\theta_{d}\).** For convenience of illustration, we carry out the calculation in the classification setting; the regression case is analogous and omitted. From Eq. (5), we have:
\[\bigtriangledown_{\theta_{d}} =\partial_{\theta_{d}}\mathcal{L}_{cls}(\theta_{f},\theta_{d})\] \[=-\sum_{n=1}^{N_{s}}\sum_{i=1}^{l}\mathbf{1}(y_{n}^{s}=i)\cdot \partial_{\theta_{d}}D(\mathbf{F}_{n}^{s};\theta_{d})_{i}/\big{(}N_{s}D( \mathbf{F}_{n}^{s};\theta_{d})_{i}\big{)},\] \[\bigtriangledown_{\theta_{f}} =\sum_{i=1}^{m}\alpha_{i}\partial_{\theta_{f}}\mathcal{H}_{1}( \mathcal{F}_{i}^{s},\mathcal{F}_{i}^{t})+\partial_{\theta_{f}}CD_{\mathcal{H}_ {2}}+\partial_{\theta_{f}}\mathcal{L}_{cls}(\theta_{f},\theta_{d}),\]
where \(CD_{\mathcal{H}_{2}}\) is the abbreviation for \(CD_{\mathcal{H}_{2}}(\mathcal{F}^{s},\mathcal{F}^{t};\boldsymbol{\beta})\).
By the chain rule,
\[\partial_{\theta_{f}}\mathcal{L}_{cls}(\theta_{f},\theta_{d})\] \[=-\sum_{n=1}^{N_{s}}\sum_{i=1}^{l}\frac{\mathbf{1}(y_{n}^{s}=i) \cdot\partial_{\mathbf{F}_{n}^{s}}D(\mathbf{F}_{n}^{s};\theta_{d})_{i}\cdot \partial_{\theta_{f}}F(\mathbf{x}_{n}^{s};\theta_{f})}{N_{s}D(\mathbf{F}_{n}^{ s};\theta_{d})_{i}}.\]
The derivatives \(\partial_{\theta_{f}}\mathcal{H}_{1}(\mathcal{F}_{i}^{s},\mathcal{F}_{i}^{t})\) and \(\partial_{\theta_{f}}CD_{\mathcal{H}_{2}}(\mathcal{F}^{s},\mathcal{F}^{t}; \boldsymbol{\beta})\) depend on the choice of \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\). In the experiment, we will take \(\mathcal{H}_{1}\) as the MMD distance and \(\mathcal{H}_{2}\) as the KL divergence. Specifically, if \(\mathcal{H}_{1}\) is MMD distance with the characteristic kernel function \(k\), the unbiased estimate of squared MMD distance between \(\mathcal{F}_{i}^{s}\) and \(\mathcal{F}_{i}^{t}\) is given as [6]:
\[\mathcal{H}_{\text{MMD}}^{2}(\mathcal{F}_{i}^{s},\mathcal{F}_{i}^{t}) :=\sum_{n,n^{\prime}=1}^{N_{s}}\frac{k(\mathbf{F}_{n,i}^{s}, \mathbf{F}_{n^{\prime},i}^{s})}{N_{s}^{2}}+\sum_{n,n^{\prime}=1}^{N_{t}}\frac{k( \mathbf{F}_{n,i}^{t},\mathbf{F}_{n^{\prime},i}^{t})}{N_{t}^{2}}\] \[\quad-\sum_{n=1}^{N_{s}}\sum_{n^{\prime}=1}^{N_{t}}\frac{2k( \mathbf{F}_{n,i}^{s},\mathbf{F}_{n^{\prime},i}^{t})}{N_{s}N_{t}}.\]
Thus,
\[\partial_{\theta_{f}}\mathcal{H}_{\text{MMD}}(\mathcal{F}_{i}^{s}, \mathcal{F}_{i}^{t})\] \[= \Big{[}\sum_{n,n^{\prime}=1}^{N_{s}}\frac{\partial_{\theta_{f}}k( \mathbf{F}_{n,i}^{s},\mathbf{F}_{n^{\prime},i}^{s})}{N_{s}^{2}}+\sum_{n,n^{ \prime}=1}^{N_{t}}\frac{\partial_{\theta_{f}}k(\mathbf{F}_{n,i}^{t},\mathbf{F}_{n^{ \prime},i}^{t})}{N_{t}^{2}}\] \[\quad-\sum_{n=1}^{N_{s}}\sum_{n^{\prime}=1}^{N_{t}}\frac{2 \partial_{\theta_{f}}k(\mathbf{F}_{n,i}^{s},\mathbf{F}_{n^{\prime},i}^{t})}{N_{s}N _{t}}\Big{]}\big{/}\big{(}2\mathcal{H}_{\text{MMD}}(\mathcal{F}^{s},\mathcal{F}^{t })\big{)}.\]
When \(\mathcal{H}_{2}\) is the KL divergence, then
\[\partial_{\theta_{f}}CD_{\mathcal{H}_{KL}}=\sum_{i<j}\frac{\beta_{ij}\partial_{ \theta_{f}}|\log(1-(\Sigma_{ij}^{s})^{2})/(1-(\Sigma_{ij}^{t})^{2})|}{2},\]
where \(\Sigma_{ij}^{s}\) (resp. \(\Sigma_{ij}^{t}\)) is the Gaussian copula parameter of \(\{[\mathbf{F}_{n,i}^{s},\mathbf{F}_{n,j}^{s}]\}_{n=1}^{N_{s}}\) (resp. \(\{[\mathbf{F}_{n,i}^{t},\mathbf{F}_{n,j}^{t}]\}_{n=1}^{N_{t}}\)).
### _Toy problem: Two inter-twinning moons_
The source domain considered here is the classical binary problem with two inter-twinning moons, each class corresponding to one moon. Specifically, for the blue moon in Figure 4, the points fall roughly on the upper half circle of \(y=\sqrt{1-x^{2}}\), with points for the red moon falling on the lower half circle of \(y=0.5-\sqrt{1-(1-x)^{2}}\). We then consider 4 different target domains by stretching the circle into ellipses where the length of the major axis can be 2, 3, 4 and 5 times that of the minor axis. Such stretching from source domain to target domain strongly affects the relationship between the vertical and horizontal coordinates of each point, thus changing the internal dependence structure of the data. For each domain, we generate 1024 instances (512 of each class).
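A small NumPy sketch of this data-generating process (the noise level, the uniform sampling of the half circles, and the choice of stretching the horizontal axis are illustrative assumptions):

```python
import numpy as np

def make_moons_domain(n_per_class=512, stretch=1.0, noise=0.05, seed=0):
    # stretch = 1 gives the source domain; stretch in {2, 3, 4, 5} gives the
    # four target domains obtained by stretching the circles into ellipses.
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, np.pi, size=n_per_class)
    # Blue moon: upper half circle y = sqrt(1 - x^2).
    blue = np.stack([stretch * np.cos(t), np.sin(t)], axis=1)
    # Red moon: lower half circle y = 0.5 - sqrt(1 - (1 - x)^2).
    red = np.stack([stretch * (1.0 - np.cos(t)), 0.5 - np.sin(t)], axis=1)
    x = np.concatenate([blue, red]) + noise * rng.standard_normal((2 * n_per_class, 2))
    y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)]).astype(int)
    return x, y

xs, ys = make_moons_domain(stretch=1.0)         # source domain
xt, _ = make_moons_domain(stretch=3.0, seed=1)  # one of the four target domains
print(xs.shape, xt.shape)                        # (1024, 2) (1024, 2)
```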
For each transfer task, we compare the CDAN model with DAN [6], CORAL [17] and the no-adaptation baseline (MLP). Each model is run 10 times per task, and we report the average accuracy in Table III. We remark that the larger the major axis, the more difficult the problem becomes, as all four models show weaker adaptation ability. CDAN provides the best performance in all four transfer tasks, indicating that it captures the dependence difference precisely.
### _Retail credit classification_
In this section, we apply our methods to improve the classification accuracy of a credit risk model on a novel real-world anonymous dataset. The dataset is kindly provided by one of the largest global technology firms that operates in both the e-commerce business and the lending business. It records the monthly credit status of roughly half a million customers from January 2016 to June 2020. The credit status shows whether a customer is in default. In addition to the credit status, customers' monthly shopping, purchasing, and loan history are also included in the dataset in detail. We collect 69 features for each customer in each month from the raw dataset and normalize them to the range \([0,1]\). A binary classification model is constructed to distinguish customers who default (labeled as 0) from those who have paid off all the debts on time (labeled as 1).
Domain adaptation is needed when one forecasts customers' credit risk with a classification model trained on past data, because significant distribution shifts exist between months, especially between the off-season and the peak season and between pre-COVID and post-COVID periods. We record the distribution shifts between consecutive months in Figure 5. Specifically, we collect 2 features that are highly important for the classification: each customer's monthly total purchase and monthly credit ratio (available credit amount / total credit limit). To illustrate the distribution shifts, we take the three points on 19Jun in Figure 5 as an example. The blue (blue-dashed, resp.) point records the MMD distance of the monthly purchase (credit ratio, resp.) distribution between May and June, and the black point records the copula distance between May and June. From Figure 5, we can identify three peak periods circled by a red box (19Jun, 19Nov, 20Feb) that show substantial marginal differences, verifying the necessity of transfer learning. Moreover, the copula distance remains high over the whole year, suggesting that the dependence difference requires special attention.
We first evaluate our methods on the transfer between the off-seasons and the peak seasons (specifically, the sales seasons June and November), and build two transfer tasks: 19May\(\rightarrow\)Jun, 19Oct\(\rightarrow\)Nov. We further investigate the COVID impact on transferring the classification
Fig. 4: Illustration of the four transfer tasks on the synthetic dataset. The two classes of the source samples are blue and red, and points that are more transparent represent the target samples.
Fig. 5: Distribution shift in the raw data.
models and include the evaluation on 6 more transfer tasks: 19Dec\(\rightarrow\)20Jan, 20Jan\(\rightarrow\)Feb, 20Feb\(\rightarrow\)Mar, 20Mar\(\rightarrow\)Apr, 20Apr\(\rightarrow\)May, 20May\(\rightarrow\)Jun. In each transfer task, the source domain consists of samples from the former month and the target domain samples are from the latter month. The sample size for each domain is in the magnitude of \(10^{5}\) customers.
We mainly follow standard evaluation protocol [36] for unsupervised domain adaptation and use all source samples with binary labels and all target samples without labels [6]. We compare our CDAN model to 4 classical domain adaptation models: DAN [6], CORAL [17], AFN [37], MCD [38] as well as the no-adaptation baseline (MLP), which is a fully connected neural network with multiple hidden layers. For our model CDAN, we set \(\mathcal{H}_{1}\) to be the MMD distance [6], and set \(\mathcal{H}_{2}\) to be the KL divergence [39]. We set \(\alpha_{i}=\alpha\left(\forall\,i\right)\) and \(\beta_{ij}=\beta\left(\forall\,i,j\right)\), and select the hyperparameter pair \(\left(\alpha,\beta\right)\) by grid search. The detailed implementation procedures are summarized in the Appendix.
In Table IV, we record the averages and standard errors of AUC over 100 randomized trials for each model in each task. CDAN outperforms the other models in 6 transfer tasks. Notably, the outperformance is significant: the increase in the AUC score is far larger than the standard deviation. As an illustration, we plot the density of the learned features' MD and CD in Figure 6 for each model. The density is estimated from the 100 trials for each model in a specific transfer task (20Jan\(\rightarrow\)Feb). We find that CDAN again shows superiority in contracting both the marginal and the dependence differences. This observation, together with the optimized hyperparameters, explains why CDAN outperforms the other domain adaptation models.
### _Intra-day equity price regression_
We collect the intraday 5-minute asset prices of 22 stocks selected from HEXX according to the market cap and daily turnover. The 22 stocks cover 8 industries (according to the Hang Seng Industry Classification System) that include Information Technology, Financials, Consumer Discretionary, etc. The data spans from Dec 1st, 2014 to Dec 31st, 2020, and consists of 71242 observations with information of the first half-hour in each trading day excluded. We divide the observations into two domains according to the Hang Seng Index (HSI) daily returns. Specifically, the target domain includes observations of the days when the daily return is less than the 0.1-quantile of the whole 6-year return series, and the source domain includes the remaining observations. The goal is to forecast the next 5-minute price of the 22 stocks with their last-hour price as the input.
To illustrate the distribution shift between the two domains, we record the quantiles and the quantile dependence of each domain in Table V. We denote \(X_{i}^{s}\) (\(X_{i}^{t}\)) as the price series for stock \(i\) in the source (target) domain and define its whole price series as \(X_{i}:=X_{i}^{s}\bigcup X_{i}^{t}\). Each \(X_{i}\) is normalized by its first price. For a given quantile level \(\alpha\) and \(*\in\{s,t\}\) indicating the domain, the average \(\alpha\)-quantile of domain \(*\) is defined by \(Q_{\alpha}^{*}:=\sum_{i=1}^{22}Q_{\alpha,i}^{*}/22\), where \(Q_{\alpha,i}^{*}\) is the \(\alpha\)-quantile of the price series \(X_{i}^{*}\). The quantile dependence is given by \(\tau_{\alpha}^{*}:=\sum_{1\leq i<j\leq 22}\tau_{\alpha,[i,j]}^{*}/231\), where \(\tau_{\alpha,[i,j]}^{*}:=100\times Pr(X_{j}^{*}\leq Q_{\alpha,j}|X_{i}^{*}\leq Q_{\alpha,i})\) and \(Q_{\alpha,i}\) is the \(\alpha\)-quantile of \(X_{i}\). We see that the marginal quantiles of the two domains do not differ much, but as \(\alpha\) increases, the marginal differences between the two domains get larger. Furthermore, the quantile dependence differs significantly irrespective of the \(\alpha\) value.
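A simplified NumPy sketch of these two statistics for a single price matrix with one column per stock (in the paper the quantiles entering the dependence measure come from the pooled series \(X_{i}\); here we use the quantiles of the matrix passed in, and the synthetic data are purely illustrative):

```python
import numpy as np

def avg_quantile(prices, alpha):
    # Average alpha-quantile over the stocks (columns).
    return np.quantile(prices, alpha, axis=0).mean()

def avg_quantile_dependence(prices, alpha):
    # Average over stock pairs (i < j) of 100 * Pr(X_j <= Q_j | X_i <= Q_i),
    # where Q_i is the alpha-quantile of stock i.
    q = np.quantile(prices, alpha, axis=0)
    below = prices <= q
    n = prices.shape[1]
    deps = [100.0 * below[below[:, i], j].mean()
            for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(deps))

# Synthetic, cross-correlated "prices" for 22 stocks over 1000 time steps.
rng = np.random.default_rng(0)
shocks = 0.7 * rng.standard_normal((1000, 1)) + 0.3 * rng.standard_normal((1000, 22))
prices = np.exp(np.cumsum(0.001 * shocks, axis=0))
print(avg_quantile(prices, 0.05), avg_quantile_dependence(prices, 0.05))
```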
Fig. 6: Distribution of the learned features’ marginal divergence (left) and copula distance (right) over 100 trials for each model.
We compare to the following 5 models: RNN, LSTM, DANN [1], CORAL [18] and DAN [6]. Specifically, RNN and LSTM serve as the no-adaptation benchmarks and only utilize the source samples for training. CORAL and DAN are the same as in Section V-B, except that they use an LSTM as a feature extractor in this section. Though AFN and MCD are SOTA models, they are not originally designed for sequential data and do not perform well in this case, so we do not report their results. For more implementation details, see the Appendix.
In Table VI, we summarize the experimental results of the 6 models. There are 7 performance metric columns in Table VI. The 1st column reports the mean and the standard deviation of the RMSE over 100 trials. The 2nd-5th columns record detailed information on the relative errors (RE) over 100 trials. Specifically, the 2nd column records the mean and the standard deviation, and the 3rd-5th columns record the 0.25-quantile, 0.5-quantile and 0.75-quantile of the RE over 100 trials. In addition, we record the maximal (6th column, LRE) and the minimal RE (7th column, SRE) among the 22 stocks. We find that CDAN achieves the best performance. Moreover, its Q2 RE is close to its mean RE, and the standard deviation of RE is quite small, showing that CDAN's performance is stable. We plot additional visualizations in Figure 7 for a better understanding of the results. In Figure 7(a), CDAN converges fast in terms of test RMSE and is effective in controlling the relative errors. Diving deeper into the training details, from Figure 7(b) we observe that the CD of CDAN decreases more significantly (to around \(10^{-2}\)) than that of DAN (to around \(10^{-1}\)). This shows that CDAN does capture the dependence difference, which contributes to its outperformance.
### _Wine quality regression_
The UCI wine quality dataset [40] contains records of red and white Vinho Verde wine samples from the north of Portugal, with sample sizes 1599 and 4898, respectively. Each record has 12 attributes, such as pH, alcohol and quality. The red wine and white wine samples differ in their feature distributions. Our goal is to predict the wine quality with two transfer tasks, from white wine to red wine (W\(\rightarrow\)R) and from red wine to white wine (R\(\rightarrow\)W).
We compare CDAN to 6 neural network baselines, namely MLP, AFN [37], MCD [38], DANN [1], CORAL [18], and DAN [6]. Each neural network model has 2 hidden layers and each hidden layer has 8 units. For each model, we run 100 trials and record the RMSE, R2 scores, and relative errors.
The results are summarized in Table VII. We conclude that the CDAN model outperforms the other benchmarks by achieving the highest R2 score, the smallest relative error and the smallest RMSE. Furthermore, we plot the R2 score distribution over the 100 runs for the two transfer tasks in Figure 8. We see that the R2 scores of the CDAN model are more concentrated than those of its competitors, showing that its outperformance is stable.
Fig. 7: The performance of forecasting the equity price in a particular (typical) trial. All values are in the base 10 logarithms.
### _Parameter sensitivity and ablation study_
To further look into the sensitivity of the parameters \(\alpha\) and \(\beta\), we compare the model performance on the retail credit dataset and the equity price dataset. For the retail dataset, we list the model performance under various combinations of \(\alpha\) and \(\beta\) in the 8 transfer tasks in Table VIII. Among the best performances in each transfer task, we find that the coefficient of MD is in most cases larger than that of CD, indicating that the marginal differences and the dependence difference weigh differently in measuring the overall domain divergence. For the equity price dataset, we test 9 candidate pairs of hyperparameters \((\alpha,\beta)\) and record corresponding model performances in Table IX. It shows that as the coefficients increase, the model performance tends to get worse. And the change in the CD parameter \(\beta\) can significantly affect the model performance. It thus confirms the motivation of learning deep features by jointly adapting marginal divergence and copula distance, since a good trade-off between them could enhance feature transferability.
To evaluate the efficiency of taking MD and CD separately into the regularizer, we run experiments on the wine quality dataset. We compare the results of either \(\alpha=0\) or \(\beta=0\) and summarize them in Table X. The ablation study shows that MD and CD are both essential in terms of a good model performance.
Fig. 8: R2 score distributions over 100 runs for the transfer task W\(\rightarrow\)R (left) and R\(\rightarrow\)W (right).
### _Comparison of different divergence measures_
As we have mentioned, there are multiple possible choices for the divergence measures \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\). In this section, we investigate the performance differences caused by these choices. Specifically, the candidate divergence measures for \(\mathcal{H}_{1}\) include the KL divergence, the W1 (Wasserstein-1) distance and MMD, and the candidates for \(\mathcal{H}_{2}\) include the KL divergence, the \(\chi^{2}\) (Pearson \(\chi^{2}\)) divergence and the W1 distance. In Table XI, we record the performance of CDAN with the various combinations of \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) on the UCI wine quality dataset. From the table, we see that the CDAN model with \(\mathcal{H}_{1}\) taken to be the MMD distance and \(\mathcal{H}_{2}\) the KL divergence performs best in terms of RMSE and RE. It should also be noted that the performance difference caused by the divergence measures can be as large as that between different models, so one should be prudent in choosing suitable divergence measures.
## VI Conclusion
This work proposes a new domain adaptation framework that allows a user to detect whether the domain difference in a transfer task comes from the marginal differences or from the dependence difference. Specifically, we quantify the dependence difference with the copula distance, a difference measure endowed with boundedness and monotonicity to guarantee algorithm convergence. By optimizing the relative weights of the marginal divergence and the copula distance, we acquire transferability across domains in a more flexible way. Experiments on real-world datasets demonstrate the efficacy and robustness of our approach compared to a variety of existing domain adaptation models.
## Acknowledgments
Shumin Ma acknowledges the support from: Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College (2022B1212010006), Guangdong Higher Education Upgrading Plan (2021-2025) (UIC R0400001-22) and UIC (UICR0700019-22). Qi Wu acknowledges the support from the Hong Kong Research Grants Council [General Research Fund 14206117, 11219420, and 11200219], CityU SRG-Fid fun 7005300, and the support from the CityU-JD Digits Laboratory in Financial Technology and Engineering, HK Institute of Data Science. The work described in this paper was partially supported by the InnoHK initiative, The Government of the HKSAR, and the Laboratory for AI-Powered Financial Technologies.
## Proof of Proposition 1 and 2
Proof.: We begin with the analysis over the explicit form of the bivariate copula distance \(CD_{\mathcal{H}}(\mathbf{X},\mathbf{Y})\) when \(\mathcal{H}\) is taken to be different divergence measures. Note that the copula distance between multivariate distributions is defined in terms of bi-variate sub-distributions. Thus, it is enough to prove the boundedness and monotonicity of \(CD_{\mathcal{H}}(\mathbf{X},\mathbf{Y})\) between any two bivariate random vectors \(\mathbf{X}\), \(\mathbf{Y}\in\mathbb{R}^{2}\). Suppose that the Gaussian copula parameters for \(\mathbf{X}\) and \(\mathbf{Y}\) are \(\Sigma^{\mathbf{X}}\) and \(\Sigma^{\mathbf{Y}}\), respectively. Their copula density functions are:
\[\begin{split} c^{\mathbf{X}}(u_{1},u_{2})&=|\Sigma^{\mathbf{X}}|^{-\frac{1}{2}}\exp\Big(-\frac{1}{2}\mathbf{x}^{T}\big((\Sigma^{\mathbf{X}})^{-1}-I\big)\mathbf{x}\Big),\\ c^{\mathbf{Y}}(u_{1},u_{2})&=|\Sigma^{\mathbf{Y}}|^{-\frac{1}{2}}\exp\Big(-\frac{1}{2}\mathbf{x}^{T}\big((\Sigma^{\mathbf{Y}})^{-1}-I\big)\mathbf{x}\Big),\end{split} \tag{6}\]
where \(\mathbf{x}:=[x_{1},x_{2}]^{T}=[\Phi^{-1}(u_{1}),\Phi^{-1}(u_{2})]^{T}\) with \(\Phi\) being the CDF of the standard normal distribution.
The first divergence class is \(\phi\)-divergence (see [41] for the detailed descriptions of the \(\phi\)-divergence family). Given a convex function \(\phi(x)\) such that \(\phi(1)=0\), the \(\phi\) divergence between two distributions \(P^{\mathbf{X}}\) and \(P^{\mathbf{Y}}\) is defined by \(\mathcal{H}_{\phi}(P^{\mathbf{X}},P^{\mathbf{Y}})=\int\phi(\frac{dP^{\mathbf{ X}}}{dP^{\mathbf{Y}}})dP^{\mathbf{Y}}\). With the following proposition, we prove that the copula distance between bivariate random vectors \(\mathbf{X}\) and \(\mathbf{Y}\), \(CD_{\mathcal{H}_{\phi}}(\mathbf{X},\mathbf{Y})\), can be fully characterized by the copula density functions \(c^{\mathbf{X}}\) and \(c^{\mathbf{Y}}\).
**Proposition 4**.: _For any bivariate random vector \(\mathbf{X}\in\mathbb{R}^{2}\), the \(\phi\)-divergence between the probability distribution \(P^{\mathbf{X}}\) and the product of marginal distributions \(P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}}\) is,_
\[\mathcal{H}_{\phi}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})=\int_{ 0}^{1}\int_{0}^{1}\phi\big{(}c^{\mathbf{X}}(u_{1},u_{2})\big{)}du_{1}du_{2}.\]
_For any two bivariate random vectors \(\mathbf{X}\), \(\mathbf{Y}\in\mathbb{R}^{2}\), the copula distance between \(\mathbf{X}\) and \(\mathbf{Y}\) when \(\mathcal{H}\) takes \(\phi\)-divergence is_
\[\begin{split}& CD_{\mathcal{H}_{\phi}}(\mathbf{X},\mathbf{Y})\\ =&|\int_{0}^{1}\int_{0}^{1}\phi\big{(}c^{\mathbf{X}}(u_ {1},u_{2})\big{)}-\phi\big{(}c^{\mathbf{Y}}(u_{1},u_{2})\big{)}du_{1}du_{2}|. \end{split}\]
Proof.: We denote the two marginal density functions for the bivariate random vector \(\mathbf{X}\) as \(p_{1}^{\mathbf{X}}(\cdot)\) and \(p_{2}^{\mathbf{X}}(\cdot)\). By the definition of copula density function, the \(\phi\)-divergence between \(P^{\mathbf{X}}\) and \(P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}}\) is,
\[\begin{split}&\mathcal{H}_{\phi}(P^{\mathbf{X}},P_{1}^{\mathbf{X}} P_{2}^{\mathbf{X}})\\ =&\int\phi\big{(}p_{1}^{\mathbf{X}}(x_{1})p_{2}^{ \mathbf{X}}(x_{2})c^{\mathbf{X}}(P_{1}^{\mathbf{X}}(x_{1}),P_{2}^{\mathbf{X}} (x_{2}))\big{)}dP_{1}^{\mathbf{X}}(x_{1})dP_{2}^{\mathbf{X}}(x_{2})\\ =&\int\phi\big{(}c^{\mathbf{X}}(P_{1}^{\mathbf{X}}(x _{1}),P_{2}^{\mathbf{X}}(x_{2}))\big{)}dP_{1}^{\mathbf{X}}(x_{1})dP_{2}^{ \mathbf{X}}(x_{2}).\end{split}\]
With change of variables \(u_{1}=P_{1}^{\mathbf{X}}(x_{1})\) and \(u_{2}=P_{2}^{\mathbf{X}}(x_{2})\), we finally have \(\mathcal{H}_{\phi}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})=\int_{ 0}^{1}\int_{0}^{1}\phi\big{(}c^{\mathbf{X}}(u_{1},u_{2})\big{)}du_{1}du_{2}\). For the bivariate random vector \(\mathbf{Y}\), we similarly have \(\mathcal{H}_{\phi}(P^{\mathbf{Y}},P_{1}^{\mathbf{Y}}P_{2}^{\mathbf{Y}})=\int_{ 0}^{1}\int_{0}^{1}\phi(c^{\mathbf{Y}}(u_{1},u_{2}))du_{1}du_{2}\). The copula distance between \(\mathbf{X}\) and \(\mathbf{Y}\) is defined as the absolute difference between \(\mathcal{H}_{\phi}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})\) and \(\mathcal{H}_{\phi}(P^{\mathbf{Y}},P_{1}^{\mathbf{Y}}P_{2}^{\mathbf{Y}})\). That completes the proof.
With Proposition 4, we can directly obtain the results in Section 3 in the main paper:
* When \(\phi(x)=x^{2}-1\), the resulting \(\phi\)-divergence is a \(\chi^{2}\) distance. Thus, \(\mathcal{H}_{\chi^{2}}(P_{ij},P_{i}P_{j})=\int_{0}^{1}\int_{0}^{1}(c_{ij}^{2}(u_ {i},u_{j})-1)du_{i}du_{j}\).
* When \(\phi(x)=(\sqrt{x}-1)^{2}\), it corresponds to Hellinger distance. So we have \(\mathcal{H}_{H}(P_{ij},P_{i}P_{j})=\int_{0}^{1}\int_{0}^{1}[\sqrt{c_{ij}(u_{i},u_{j} )}-1]^{2}\mathrm{d}u_{i}\mathrm{d}u_{j}\).
* When \(\phi(x)=\frac{x(1-x^{-(\alpha+1)/2})}{1-\alpha^{2}}\), it results in the \(\alpha\)-divergence. Thus, \(\mathcal{H}_{\alpha}(P_{ij},P_{i}P_{j})=\frac{1}{1-\alpha^{2}}\int_{0}^{1}\int_{0}^{1}[1-c_{ij}(u_{i},u_{j})^{-\frac{\alpha+1}{2}}]c_{ij}(u_{i},u_{j})\mathrm{d}u_{i}\mathrm{d}u_{j}\).
Proposition 4 states that the copula distance defined by the \(\phi\)-divergence is a function of the copula densities. Also, it proves that the \(\phi\)-divergence between the joint distribution and the product of marginals is purely a function of the copula densities. That is to say, when the divergence metric is taken to be a \(\phi\)-divergence, the calculation of the copula distance has nothing to do with the marginal distributions. Thus, in the following proofs, when calculating the copula distance between any two random vectors \(\mathbf{X}\) and \(\mathbf{Y}\), we will assume the marginals are all standard normal distributions, with mean 0 and variance 1. That will greatly simplify our calculation. We can just calculate the copula distance between two bivariate Gaussian vectors with copula densities \(c^{\mathbf{X}}(\mathbf{u})\) and \(c^{\mathbf{Y}}(\mathbf{u})\), respectively. Furthermore, given that \(\mathbf{X}\) is Gaussian with standard normal marginals, we know that its Gaussian copula parameter \(\Sigma^{\mathbf{X}}\) is exactly the correlation matrix ([42]). In the following proof, we will write \(\Sigma^{\mathbf{X}}:=\begin{pmatrix}1&\rho\\ \rho&1\end{pmatrix}\), with \(\rho\in[-1,1]\). Now we are ready to provide the explicit forms of the copula distance for various choices of the divergence measures.
**KL divergence.** When \(\phi(x)=x\log x\), the corresponding \(\phi\)-divergence is KL divergence. By definition, we have
\[\mathcal{H}_{\text{KL}}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{ \mathbf{X}})=\iint p^{\mathbf{X}}(x_{1},x_{2})\log\frac{p^{\mathbf{X}}(x_{1},x _{2})}{p_{1}^{\mathbf{X}}(x_{1})p_{2}^{\mathbf{X}}(x_{2})}dx_{1}dx_{2}\]
Given that \(p^{\mathbf{X}}(x_{1},x_{2})=p^{\mathbf{X}}(x_{1})p^{\mathbf{X}}(x_{2})c^{ \mathbf{X}}(u_{1},u_{2})\), with Eq. (6), we have
\[\mathcal{H}_{\text{KL}}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})\] \[= \iint\frac{p^{\mathbf{X}}(x_{1},x_{2})\big([x_{1},x_{2}]\big(\mathbf{I}-(\Sigma^{\mathbf{X}})^{-1}\big)[x_{1},x_{2}]^{T}-\log|\Sigma^{\mathbf{X}}|\big)}{2}dx_{1}dx_{2}\] \[= \frac{1}{2}\mathbb{E}_{p^{\mathbf{X}}}\big([x_{1},x_{2}]\big(\mathbf{I}-(\Sigma^{\mathbf{X}})^{-1}\big)[x_{1},x_{2}]^{T}-\log|\Sigma^{\mathbf{X}}|\big)\] \[= \frac{1}{2}(-\log|\Sigma^{\mathbf{X}}|+2-2)\] \[= -\frac{1}{2}\log|\Sigma^{\mathbf{X}}|.\]
The third equality comes from [43], where it proves that \(\mathbb{E}_{p^{\mathbf{X}}}([x_{1},x_{2}](\Sigma^{\mathbf{X}})^{-1}[x_{1},x_{ 2}]^{T})=2\). Finally, with the definition of the copula distance, we have:
\[CD_{\mathcal{H}_{\text{KL}}}(\mathbf{X},\mathbf{Y})=\frac{1}{2}|\log|\Sigma^{ \mathbf{X}}|-\log|\Sigma^{\mathbf{Y}}||.\]
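As a quick numerical illustration of this formula, take \(\rho^{\mathbf{X}}=0.5\) and \(\rho^{\mathbf{Y}}=0.9\), so that \(|\Sigma^{\mathbf{X}}|=0.75\) and \(|\Sigma^{\mathbf{Y}}|=0.19\); then (values rounded)

\[CD_{\mathcal{H}_{\text{KL}}}(\mathbf{X},\mathbf{Y})=\frac{1}{2}\big|\log 0.75-\log 0.19\big|\approx\frac{1}{2}\big|(-0.288)-(-1.661)\big|\approx 0.69.\]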
\(\chi^{2}\) **distance.** When \(\phi(x)=x^{2}-1\), the resulting \(\phi\)-divergence is a \(\chi^{2}\) distance. By definition, we have
\[\mathcal{H}_{\chi^{2}}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{ \mathbf{X}})\] \[= \iint(\frac{p^{\mathbf{X}}(x_{1},x_{2})}{p_{1}^{\mathbf{X}}(x_{1} )p_{2}^{\mathbf{X}}(x_{2})})^{2}p_{1}^{\mathbf{X}}(x_{1})p_{2}^{\mathbf{X}}(x_ {2})dx_{1}dx_{2}-1.\]
Since \(\frac{p^{\mathbf{X}}(x_{1},x_{2})}{p_{1}^{\mathbf{X}}(x_{1})p_{2}^{\mathbf{X}}( x_{2})}=c^{\mathbf{X}}(u_{1},u_{2})\) and \(p_{1}^{\mathbf{X}}(x)=p_{2}^{\mathbf{X}}(x)=\frac{1}{\sqrt{2\pi}}\exp{(-\frac{ \pi^{2}}{2})}\), we can further simplify the calculation as:
\[\mathcal{H}_{\chi^{2}}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})\] \[= \iint\frac{\exp\left([x_{1},x_{2}]\big(\frac{I}{2}-(\Sigma^{\mathbf{X}})^{-1}\big)[x_{1},x_{2}]^{T}\right)}{2\pi|\Sigma^{\mathbf{X}}|}dx_{1}dx_{2}-1\] \[= |\Sigma^{\mathbf{X}}|^{-1}-1.\]
The last equality comes from the following fact: \(2(\Sigma^{\mathbf{X}})^{-1}-I\) is positive definite with determinant \(1\). That gives: \(\iint\exp{\left([x_{1},x_{2}]\big{(}\frac{I}{2}-(\Sigma^{\mathbf{X}})^{-1} \big{)}[x_{1},x_{2}]^{T}\right)}dx_{1}dx_{2}=2\pi\). Finally, we have:
\[CD_{\mathcal{H}_{\chi^{2}}}(\mathbf{X},\mathbf{Y})=||\Sigma^{\mathbf{X}}|^{-1} -|\Sigma^{\mathbf{Y}}|^{-1}|.\]
It is not hard to derive analogous results for other \(\phi\)-divergences, so we omit them here and turn to the derivations for the Wasserstein-2 distance and the MMD distance.
**Wasserstein-2 distance.** Assume that the marginal distributions of \(\mathbf{X},\mathbf{Y}\) are standard normals. By directly applying the conclusion in Proposition 7 in [44], we have
\[\mathcal{H}_{\mathbf{W}}^{2}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{ \mathbf{X}})=4-2\text{Tr}\big{(}(\Sigma^{\mathbf{X}})^{\frac{1}{2}}\big{)}=4-2 \sqrt{2+2\sqrt{|\Sigma^{\mathbf{X}}|}}.\]
Thus,
\[CD_{\mathcal{H}_{\text{W}}}(\mathbf{X},\mathbf{Y})\] \[= \Big|\sqrt{4-2\sqrt{2+2\sqrt{|\Sigma^{\mathbf{X}}|}}}-\sqrt{4-2\sqrt{2+2\sqrt{|\Sigma^{\mathbf{Y}}|}}}\Big|.\]
**Gaussian MMD distance.** Assume that the marginal distributions of \(\mathbf{X},\mathbf{Y}\) are standard normals. For ease of calculation, we take the simplest kernel function \(k(\mathbf{X},\mathbf{Y})=e^{-||\mathbf{X}-\mathbf{Y}||_{2}^{2}}\). But we emphasize that, the following calculation applies to all Gaussian kernels. Using the kernel trick, the squared MMD distance can be computed as the expectation of kernel functions:
\[\mathcal{H}_{\text{MMD}}^{2}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}}) \tag{7}\] \[= \mathbb{E}_{\mathbf{X},\mathbf{X}}k(\mathbf{X},\mathbf{X})+\mathbb{E}_{\dot{\mathbf{X}},\dot{\mathbf{X}}}k(\dot{\mathbf{X}},\dot{\mathbf{X}})-2\mathbb{E}_{\mathbf{X},\dot{\mathbf{X}}}k(\mathbf{X},\dot{\mathbf{X}}),\]
where \(\dot{\mathbf{X}}\in\mathbb{R}^{2}\) is a random Gaussian vector with CDF \(P^{\tilde{\mathbf{X}}}(x_{1},x_{2})=P_{1}^{\mathbf{X}}(x_{1})P_{2}^{\mathbf{X}}( x_{2})\) and the Gaussian copula parameter \(\Sigma^{\tilde{\mathbf{X}}}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\).
From Eq. (7), we know that to calculate the squared MMD distance \(\mathcal{H}_{\text{MMD}}^{2}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})\), we need to calculate the three expectations on the right-hand side of this equation. We begin with the calculation of \(\mathbb{E}_{\mathbf{X},\dot{\mathbf{X}}}k(\mathbf{X},\dot{\mathbf{X}})\). By definition, writing \(\mathbf{x}=[x_{1},x_{2}]^{T}\) and \(\mathbf{y}=[y_{1},y_{2}]^{T}\), we have:
\[\mathbb{E}_{\mathbf{X},\dot{\mathbf{X}}}k(\mathbf{X},\dot{\mathbf{X}})\] \[= \iiint\frac{k(\mathbf{x},\mathbf{y})\exp\left(-\frac{1}{2}\mathbf{x}^{T}(\Sigma^{\mathbf{X}})^{-1}\mathbf{x}-\frac{1}{2}\mathbf{y}^{T}(\Sigma^{\dot{\mathbf{X}}})^{-1}\mathbf{y}\right)}{4\pi^{2}\sqrt{|\Sigma^{\mathbf{X}}\Sigma^{\dot{\mathbf{X}}}|}}dx_{1}dx_{2}dy_{1}dy_{2}\] \[= \iiint\frac{\exp\left(-\frac{1}{2}[x_{1},x_{2},y_{1},y_{2}]A[x_{1},x_{2},y_{1},y_{2}]^{T}\right)}{4\pi^{2}\sqrt{|\Sigma^{\mathbf{X}}\Sigma^{\dot{\mathbf{X}}}|}}dx_{1}dx_{2}dy_{1}dy_{2}=\frac{1}{\sqrt{|\Sigma^{\mathbf{X}}\Sigma^{\dot{\mathbf{X}}}|\det A}}.\]
Here, the matrix \(A:=\begin{pmatrix}(\Sigma^{\mathbf{X}})^{-1}+2I&-2I\\ -2I&(\Sigma^{\dot{\mathbf{X}}})^{-1}+2I\end{pmatrix}\in\mathbb{R}^{4\times 4}\) is positive semidefinite with determinant \(\frac{|2\Sigma^{\mathbf{X}}+2\Sigma^{\dot{\mathbf{X}}}+I_{2}|}{|\Sigma^{\mathbf{X}}\Sigma^{\dot{\mathbf{X}}}|}\), so that \(\mathbb{E}_{\mathbf{X},\dot{\mathbf{X}}}k(\mathbf{X},\dot{\mathbf{X}})=1/\sqrt{|2\Sigma^{\mathbf{X}}+2\Sigma^{\dot{\mathbf{X}}}+I_{2}|}=1/\sqrt{21+4|\Sigma^{\mathbf{X}}|}\). Similarly, we have:
\[\mathbb{E}_{\mathbf{X},\mathbf{X}}k(\mathbf{X},\mathbf{X}) =\frac{1}{\sqrt{|4\Sigma^{\mathbf{X}}+I|}}=\frac{1}{\sqrt{9+16|\Sigma^{\mathbf{X}}|}},\] \[\mathbb{E}_{\dot{\mathbf{X}},\dot{\mathbf{X}}}k(\dot{\mathbf{X}},\dot{\mathbf{X}}) =\frac{1}{\sqrt{|4\Sigma^{\dot{\mathbf{X}}}+I|}}=\frac{1}{5}.\]
Organizing the three terms together, we have the squared MMD distance:
\[\mathcal{H}^{2}_{\text{MMD}}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{ X}})=\frac{1}{\sqrt{9+16|\Sigma^{\mathbf{X}}|}}+\frac{1}{5}-\frac{2}{\sqrt{21+4| \Sigma^{\mathbf{X}}|}},\]
and the copula distance
\[CD_{\mathcal{H}_{\text{MMD}}}(\mathbf{X},\mathbf{Y}) =\Big{|}\sqrt{\frac{1}{\sqrt{9+16|\Sigma^{\mathbf{X}}|}}+\frac{1 }{5}-\frac{2}{\sqrt{21+4|\Sigma^{\mathbf{X}}|}}}\] \[-\sqrt{\frac{1}{\sqrt{9+16|\Sigma^{\mathbf{Y}}|}}+\frac{1}{5}- \frac{2}{\sqrt{21+4|\Sigma^{\mathbf{Y}}|}}}\Big{|}.\]
**Boundedness.** Given that \(\Sigma^{\mathbf{X}}\) and \(\Sigma^{\mathbf{Y}}\) for Gaussian random vectors are in essence the correlation matrix, we know that \(|\Sigma^{\mathbf{X}}|\leq 1\) and \(|\Sigma^{\mathbf{Y}}|\leq 1\). Thus, it is easy to verify that when \(\mathcal{H}\) is Wasserstein-2 distance or Gaussian MMD distance, the copula distance is bounded. Furthermore, we know that the divergence measures (including the total variation distance, Hellinger distance, Jensen-Shannon divergence, etc.) are bounded by definition. Consequently, the corresponding copula distance is bounded.
**Monotonicity.** We fix the Gaussian copula parameter \(\Sigma^{\mathbf{Y}}\) and express \(CD_{\mathcal{H}}(\mathbf{X},\mathbf{Y})\) as a function of \(\Sigma^{\mathbf{X}}_{12}=\rho\). A simple observation is that, if a function \(f(x)\) is monotonically increasing with respect to \(x\), then given \(y\) fixed, \(|f(x)-f(y)|\) is monotonically increasing with respect to \(|x-y|\). So if \(\mathcal{H}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})\) is increasing with respect to \(\rho^{2}\), we can conclude that the corresponding copula distance is monotonically increasing with \(|(\Sigma^{\mathbf{X}}_{12})^{2}-(\Sigma^{\mathbf{Y}}_{12})^{2}|\). We check them one by one.
* \(\mathcal{H}_{\text{KL}}(P_{12},P_{1}P_{2})=-\frac{1}{2}\log(1-\rho^{2})\) monotonically increases with \(\rho^{2}\).
* \(\mathcal{H}_{\chi^{2}}(P_{12},P_{1}P_{2})=\frac{1}{1-\rho^{2}}-1\) monotonically increases with \(\rho^{2}\).
* \(\mathcal{H}_{\text{W}}^{2}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})=4-2\sqrt{2+2\sqrt{1-\rho^{2}}}\) monotonically increases with \(\rho^{2}\).
* \(\mathcal{H}_{\text{MMD}}^{2}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})=\frac{1}{\sqrt{25-16\rho^{2}}}+\frac{1}{5}-\frac{2}{\sqrt{25-4\rho^{2}}}\).
From the above calculations, we conclude that \(\mathcal{H}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})\) is increasing with respect to \(\rho^{2}\) when \(\mathcal{H}\) is KL divergence, \(\chi^{2}\) distance and Wasserstein-2 distance. For Gaussian MMD distance, we consider the function \(f(x)=\frac{1}{\sqrt{25-16x}}-\frac{2}{\sqrt{25-4x}},x\in[0,1]\). The first derivative \(f^{\prime}(x)=8(25-16x)^{-3/2}-4(25-4x)^{-3/2}>0\), suggesting that \(\mathcal{H}_{\text{MMD}}(P^{\mathbf{X}},P_{1}^{\mathbf{X}}P_{2}^{\mathbf{X}})\) is increasing with respect to \(\rho^{2}\).
## Proof of Proposition 3
Proof.: Without loss of generality, we assume that the probability density function \(p(x_{1},x_{2})\) of \([X_{1},X_{2}]\) is continuous. Consider the area \(A_{\delta}=\{(x_{1},x_{2},\widetilde{x}_{1},\widetilde{x}_{2}):|x_{1}- \widetilde{x}_{1}|\leq\delta\text{ or }|x_{2}-\widetilde{x}_{2}|\leq\delta\} \subset\mathbb{R}^{4}\). Define \(c_{\delta}:=\iiint_{A_{\delta}}p(x_{1},x_{2})p(\widetilde{x}_{1},\widetilde{x} _{2})dx_{1}dx_{2}d\widetilde{x}_{1}d\widetilde{x}_{2}\). It always holds that,
\[\rho(X_{1},X_{2};a)=\mathbb{E}[\tanh\left(a(X_{1}-\widetilde{X}_{1})(X_{2}-\widetilde{X}_{2})\right)]\] \[= \Big(\iiint_{A_{\delta}}+\iiint_{\mathbb{R}^{4}-A_{\delta}}\Big)\tanh\left(a(x_{1}-\widetilde{x}_{1})(x_{2}-\widetilde{x}_{2})\right)\] \[\times p(x_{1},x_{2})p(\widetilde{x}_{1},\widetilde{x}_{2})d\widetilde{x}_{2}d\widetilde{x}_{1}dx_{2}dx_{1}.\]
Outside \(A_{\delta}\), we have \(|a(x_{1}-\widetilde{x}_{1})(x_{2}-\widetilde{x}_{2})|\geq a\delta^{2}\), and
\[\lim_{a\to\infty}\iiint_{\mathbb{R}^{4}-A_{\delta}}\tanh\left(a(x_{1}-\widetilde{x}_{1})(x_{2}-\widetilde{x}_{2})\right)\] \[\times p(x_{1},x_{2})p(\widetilde{x}_{1},\widetilde{x}_{2})dx_{1}dx_{2}d\widetilde{x}_{1}d\widetilde{x}_{2}\] \[= \lim_{a\to\infty}\iiint_{\mathbb{R}^{4}-A_{\delta}}\operatorname{sign}\bigl(a(x_{1}-\widetilde{x}_{1})(x_{2}-\widetilde{x}_{2})\bigr)\] \[\times p(x_{1},x_{2})p(\widetilde{x}_{1},\widetilde{x}_{2})dx_{1}dx_{2}d\widetilde{x}_{1}d\widetilde{x}_{2}\] (dominated convergence theorem) \[= \rho_{\tau}(X_{1},X_{2})-\lim_{a\to\infty}\iiint_{A_{\delta}}\operatorname{sign}\bigl(a(x_{1}-\widetilde{x}_{1})(x_{2}-\widetilde{x}_{2})\bigr)\] \[\times p(x_{1},x_{2})p(\widetilde{x}_{1},\widetilde{x}_{2})dx_{1}dx_{2}d\widetilde{x}_{1}d\widetilde{x}_{2}.\]
Notice that inside \(A_{\delta}\), \(|\tanh(x)|\leq 1\). Thus, \(\forall\delta\),
\[\Big|\lim_{a\to\infty}\rho(X_{1},X_{2};a)-\rho_{\tau}(X_{1},X_{2})\Big|\] \[\leq \lim_{a\to\infty}\iiint_{A_{\delta}}\bigl(|\operatorname{sign}\bigl(a(x_{1}-\widetilde{x}_{1})(x_{2}-\widetilde{x}_{2})\bigr)|+\] \[|\tanh\left(a(x_{1}-\widetilde{x}_{1})(x_{2}-\widetilde{x}_{2})\right)|\bigr)p(x_{1},x_{2})p(\widetilde{x}_{1},\widetilde{x}_{2})dx_{1}dx_{2}d\widetilde{x}_{1}d\widetilde{x}_{2}\] \[\leq 2\iiint_{A_{\delta}}p(x_{1},x_{2})p(\widetilde{x}_{1},\widetilde{x}_{2})dx_{1}dx_{2}d\widetilde{x}_{1}d\widetilde{x}_{2}=2c_{\delta}.\]
Indeed, it is easy to verify that \(c_{0}=0\), \(c_{\delta}\) is finite and is continuous with respect to \(\delta\). Thus, we let \(\delta\) approach zero and get the desired result that \(\lim_{a\to\infty}\rho(X_{1},X_{2};a)=\rho_{\tau}(X_{1},X_{2})\).
## Implementation Details
All experimental models were trained using the Adam optimizer implemented in PyTorch with an initial learning rate of 0.01. All activation functions were taken to be ReLU.
### _Toy problem_
The MLP model is a simple neural network with 2 hidden layers of 8 and 4 units, respectively.
The domain discrepancy is evaluated by the MMD distance of the 16-dimensional feature representations between the source and the target domains. The discriminator is the final output layer.
The AFN model contains 3 hidden layers with 128, 64 and 64 neurons respectively. The output tensors of the second hidden layer are aligned to a scale vector, which is pre-determined according to [37].
In MCD model, the neural network consists of 3 hidden layers with 128, 64 and 64 neurons respectively. For output tensors of the second hidden layer from the target domain, two additional networks are used to construct the regularization term of [38].
The CORAL model is essentially the same model as the DAN model, except that the domain discrepancy is evaluated by the Frobenius norm of the covariance matrices of the 16-dimensional feature representations.
In the CDAN model, the feature extractor is a neural network with 6 hidden layers of 128, 128, 128, 128, 64 and 8 neurons, respectively. The marginal divergence is calculated as the Gaussian MMD distance of each dimension of the 8-dimensional feature representations. The copula distance is taken with respect to the KL divergence. We run the model for 100 trials, and each trial costs about 2-3 minutes.
### _Intra-day equity price regression_
After separating the historical prices of the 22 stocks into two domains, we slice the data into windows of length 12 and package them into batches of size 1024. For each stock in either domain, we use the MinMaxScaler to normalize its price. For each model, we use an LSTM layer as the feature extractor and apply batch normalization to each Linear layer. We train each model for at most 100 epochs, with an early-stopping threshold of 20 epochs. We tune the hyperparameters by grid search, and we also tune the network architecture (number of layers, number of units, etc.).
The LSTM (RNN resp.) model consists of one LSTM (RNN resp.) layer of hidden size 64, and two Linear layers of size 64 and 32 respectively.
The DANN model consists of 3 neural networks, namely a feature extractor, a discriminator and a regressor. The feature extractor contains one LSTM layer of hidden size 64 and a Linear layer of size also 64. The discriminator is a binary classifier, which consists of three Linear layers of size 64, 32 and 16 respectively. The regressor consists of two Linear layers of size 64 and 32 respectively.
The CORAL model consists of one LSTM layer of hidden size 64, and 3 Linear layers of size 64, 32, 16 respectively. After extracting the features by LSTM, we calculate the regularization term according to [17].
The DAN model consists of one LSTM layer of hidden size 64, and 3 Linear layers of size 64, 32, 16 respectively. After extracting the features by LSTM, we calculate the MMD with a two-Gaussian-kernel function.
Our model CDAN consists of one LSTM layer of hidden size 64, and two Linear layers of size 64 and 32 respectively. After extracting the features by LSTM, we calculate the divergence between marginal distributions by a two-Gaussian-kernel MMD, and calculate the copula distance with respect to KL divergence. We run the model for 100 trials, and each trial costs about 10-20 minutes.
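A minimal PyTorch sketch of this architecture (the input dimension of 22 jointly observed stock prices, the use of the last time step as the feature, and the scalar-per-stock output head are our own illustrative assumptions):

```python
import torch
import torch.nn as nn

class CDANPriceModel(nn.Module):
    # Sketch: an LSTM feature extractor of hidden size 64, followed by two
    # Linear layers of size 64 and 32 and an output layer. The divergence
    # terms (MD and CD) are assumed to be computed on the returned features.
    def __init__(self, n_stocks=22, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_stocks, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_stocks),
        )

    def forward(self, x):                  # x: (batch, seq_len=12, n_stocks)
        out, _ = self.lstm(x)
        feat = out[:, -1, :]               # LSTM features at the last time step
        return self.head(feat), feat

model = CDANPriceModel()
pred, feat = model(torch.randn(8, 12, 22))
print(pred.shape, feat.shape)              # torch.Size([8, 22]) torch.Size([8, 64])
```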
|
2308.16756 | Large volume fibered knots in 3-manifolds | We prove that for hyperbolic fibered knots in any closed, connected, oriented 3-manifold the volume and genus are unrelated. As an application we answer a question of Hirose, Kalfagianni, and Kin about volumes of mapping tori that are double branched covers. | J. Robert Oakley | 2023-08-31T14:23:38Z | http://arxiv.org/abs/2308.16756v1 |

# Large volume fibered knots in 3-manifolds
###### Abstract.
We prove that for hyperbolic fibered knots in any closed, connected, oriented 3-manifold the volume and genus are unrelated. As an application we answer a question of Hirose, Kalfagianni, and Kin about volumes of mapping tori that are double branched covers.
## 1. Introduction
Let \(M\) denote a closed, connected, oriented 3-manifold. Alexander in [1] proved that every such \(M\) contains a fibered link. This result has been strengthened in many ways such as the link being a knot [13] and the monodromy being right-veering and pseudo-Anosov [1]. More recently, this construction has been used in [1] to show that for hyperbolic fibered knots in the 3-sphere, volume and genus are unrelated. In this paper we generalize the approach in [1] to show that for hyperbolic, fibered knots in closed, connected, oriented 3-manifolds the volume and genus are unrelated. In particular we prove the following theorem.
**Theorem 1.1**.: _Let \(M\) be a closed, connected, oriented 3-manifold. There exists some \(g_{0}>1\) such that for all \(g\geq g_{0}\) and \(V>0\) there exists a knot \(K\subseteq M\) such that \(M-K\) is fibered over the circle with genus \(g\) and \(M-K\) is hyperbolic with \(\text{vol}(M-K)>V.\)_
As an application of theorem 1.1 we answer a question asked by Hirose, Kalfagianni, and Kin in [13]. Let \(\mathfrak{D}_{g}(M)\subseteq\text{Mod}(S_{g})\) be the subset of the mapping class group of a closed surface of genus \(g\) consisting of elements whose mapping tori are 2-fold branched covers of \(M\) branched along a link. Hirose, Kalfagianni, and Kin asked the following question in [13].
**Question 1**.: _For \(g\) sufficiently large, does \(\mathfrak{D}_{g}(M)\) contain an infinite family of pseudo-Anosov mapping classes whose mapping tori have arbitrarily large volume?_
Combining theorem 1.1 and theorem 10 of [13] yields the following affirmative answer to question 1.
**Corollary 1.2**.: _For any closed, connected, oriented 3-manifold \(M\) there exists some \(g_{0}>1\) such that for any \(g>g_{0}\) the set \(\mathfrak{D}_{2g}(M)\) contains an infinite family of pseudo-Anosov elements whose mapping tori have arbitrarily large volume._
### Outline of the proof.
The strategy of the proof is to construct for every closed, connected, oriented 3-manifold \(M\) and for every sufficiently large genus \(g\) of the fiber, a family of fibered hyperbolic knots of genus \(g\) with linearly growing volume. We use three main tools to construct these families. The first is the theory of branched covers in dimensions 2 and 3. Using branched covers of the 3-sphere branched over a knot or link in braid position to obtain a fibered link in a 3-manifold goes back to Alexander. Here we use an improvement on Alexander's theorem due to Hilden and Montesinos [14, 15]. This refinement allows us to consider simple 3-fold branched covers. That the branched covers are degree 3 is crucial for control of the number of preimages of the braid axis which will become the fibered link. This control over the degree of the cover comes at the expense of the covers considered by Hilden and Montesinos being irregular. Fortunately these irregular covers have the property of being _simple_ (see definition 2.1). Simple branched covers are well studied in dimension 2 (see for example [14, 15, 16]). This allows us to control the monodromy of the fibered link.
In section 3 we use this control of the monodromy and the second tool, open book decompositions and stabilization, to ensure that our fibered link becomes a fibered knot with pseudo-Anosov monodromy (see section 3 for a brief discussion of open book decompositions). To that end we use the equivalence between open book decompositions and fibered links as well as the technique introduced by Colin and Honda in
[10] to transform our fibered 2-component link with possibly reducible monodromy, obtained from the branched cover construction described above, into a fibered knot with pseudo-Anosov monodromy.
Crucially, even after this transformation we maintain the control over the monodromy. This allows us to use our third tool, subsurface projections and Brock's work in [11] relating translation distance of a pseudo-Anosov in the pants graph and the volume of the associated mapping torus. The control over the monodromy that we have maintained throughout the construction of these fibered hyperbolic knots ensures that in addition to the monodromies being pseudo-Anosov, they factor as \((A)(S^{-1}F^{n}S)(F^{-n})\) where \(A\) is constant as \(n\) varies and \(F^{\pm n}\) are pseudo-Anosov on essential subsurfaces. Then we apply the work of Clay-Leininger-Mangahas [10] to see that the monodromy has linearly growing subsurface projections to the supports of \(F^{-n}\) and \(S^{-1}F^{n}S\). The Masur-Minsky distance formula for the pants graph [14] then implies that the monodromies have linearly growing translation distance in the pants graph. Finally, the work of Brock [11] implies that this linearly growing translation distance corresponds to linearly growing volume of the associated mapping tori.
### Acknowledgments
The author thanks his advisor Dave Futer for pointing him towards this problem as well as his guidance. The author thanks the NSF for its support via grant DMS-1907708.
## 2. Branched Cover Construction
Let \(M\) be a closed, connected, oriented 3-manifold. We will begin by constructing a fibered 2-component link in \(M.\) We need the following definition.
**Definition 2.1**.: _A branched covering \(p:M\longrightarrow N\) of degree \(d\) is called **simple** if each point in \(N\) has at least \(d-1\) preimages in \(M\). In particular, points away from the branch locus in \(N\) will have \(d\) preimages while points on the branch locus in \(N\) will have \(d-1\) preimages in \(M.\)_
Due to a theorem of Hilden and Montesinos, [12, 13], there exists a 3-fold simple branched covering \(p:M\longrightarrow S^{3}\) branched along a knot \(K.\) Since the branched covers constructed are simple, points on the branch locus \(K\subseteq S^{3}\) have two preimages in \(M.\) One preimage has branching index 1 while the other has branching index 2. By a theorem of Alexander [1] we can represent \(K\) as the closure of a braid \(\Pi\in\mathfrak{B}_{k}\) for some \(k.\)
Figure 1. The braid whose closure will be the branching locus of our branched cover. The braid \(\Pi\) is the braid word representing the knot \(K\) which we branch over. The braid \(\Phi\) is a pseudo-Anosov on its support. The word \(\Sigma\) totally consists of stabilizations and destabilizations. Note that \(\gamma_{n}\) and \(\delta_{n}\) and the disks they bound do not change as \(n\) increases.
Fix \(g>1\) such that \(2g+1>k.\) Let \(\sigma_{i}\) denote the positive half-twist between the \(i^{\text{th}}\) and \((i+1)^{\text{th}}\) strands. We will consider \(\Pi\in\mathfrak{B}_{k}\) under the natural inclusion into \(\mathfrak{B}_{2g+1}\) where \(\Pi\) is on the last \(k\) strands (see figure 1). In a variation on the construction of [1] we now define the following braids.
\[\Phi =(\sigma_{2g+2-k})^{-1}(\sigma_{2g+3-k})(\sigma_{2g+4-k})^{-1}( \sigma_{2g+5-k})\] \[\Sigma =(\sigma_{2g+1-k})(\sigma_{2g-k})^{-1}\cdots(\sigma_{2})^{\pm}( \sigma_{1})^{\mp}\] \[\beta_{n} =\Pi\Phi^{n}\Sigma\Phi^{-n}\]
Let \(\widehat{\beta_{n}}\) be the braid closure of \(\beta_{n}.\) Let \(\omega_{n}\) be its braid axis encircling the braid after the \(\Phi^{n}\) factor and before the \(\Sigma\) factor of \(\beta_{n}.\) Let \(\Lambda_{n}=\widehat{\beta_{n}}\cup\omega_{n}.\) Note that \(\omega_{n}\) bounds a disk \(\Omega_{n}\) in \(S^{3}\) meeting \(\widehat{\beta_{n}}\) in \(2g+1\) points. Hence, \(S^{3}-\Lambda_{n}\) is a punctured disk bundle over the circle with monodromy \(\beta_{n}.\)
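To make the braid words concrete, the following short sketch encodes \(\beta_{n}\) as a list of signed generator indices, where the integer \(\pm i\) stands for \(\sigma_{i}^{\pm 1}\) (this encoding, the placeholder word for \(\Pi\), and the particular values \(g=3\), \(k=5\) are our own illustrative choices):

```python
def inverse(word):
    # Inverse of a braid word: reverse it and invert each generator.
    return [-g for g in reversed(word)]

def beta(n, g, k, Pi):
    # beta_n = Pi * Phi^n * Sigma * Phi^{-n} on 2g+1 strands.
    s = 2 * g + 2 - k
    Phi = [-s, s + 1, -(s + 2), s + 3]
    # Sigma: alternating half-twists sigma_{2g+1-k} sigma_{2g-k}^{-1} ... down to sigma_1.
    Sigma = [i if (2 * g + 1 - k - i) % 2 == 0 else -i
             for i in range(2 * g + 1 - k, 0, -1)]
    return Pi + n * Phi + Sigma + n * inverse(Phi)

# Example with g = 3, k = 5 (so the braid has 2g+1 = 7 strands) and a
# placeholder word Pi supported on the last k strands.
Pi = [3, -4, 5, 6]
print(beta(2, 3, 5, Pi))
```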
**Lemma 2.2**.: _The braid closure \(\widehat{\beta_{n}}\) is the knot \(K.\)_
Proof.: Note that \(\widehat{\Pi}\) is \(K,\) and that \(\widehat{\beta_{n}}\) is stabilized along the first \(2g+1-k\) strands. Destabilizing smooths the crossings coming from \(\Sigma\) and deletes the first \(2g+1-k\) strands. Now the \(\Phi^{n}\) and \(\Phi^{-n}\) factors cancel. We are left with the \(k\)-strand braid closure \(\widehat{\Pi}\), which by definition is \(K.\)
We will now introduce some notation. Let \(\gamma_{n}\) and \(\delta_{n}\) be the simple closed curves encircling the \(2g+2-k,\)\(2g+3-k,\)\(2g+4-k,\)\(2g+5-k,\) and \(2g+6-k\) strands of \(\beta_{n}\) taken immediately before and after the \(\Sigma\) factor of \(\beta_{n}\) (see figure 1). Let \(\Gamma_{n}\) and \(\Delta_{n}\) denote the disks bounded by \(\gamma_{n}\) and \(\delta_{n}\) respectively. Note that \(\Gamma_{n}\) and \(\Delta_{n}\) each intersect \(\beta_{n}\) in \(5\) points. Let \(w_{n}\) and \(W_{n}\) denote \(p^{-1}(\omega_{n})\) and \(p^{-1}(\Omega_{n})\) respectively.
**Lemma 2.3**.: _The manifold \(N_{n}=M-w_{n}\) is an \(S_{g-1,2}-\)bundle over the circle._
Proof.: That \(N_{n}\) is fibered over the circle follows from the fact that \(S^{3}-\Lambda_{n}\) is a punctured disk bundle over the circle. Since \(p\) is a simple, \(3\)-fold cover and \(\Omega_{n}\) intersects \(\beta_{n}\) in \(2g+1\) points, a Riemann-Hurwitz formula calculation shows that \(W_{n}\) is homeomorphic to \(S_{g-1,2}.\)
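The Euler characteristic count behind this is short enough to record explicitly: the disk \(\Omega_{n}\) meets \(\widehat{\beta_{n}}\) in \(2g+1\) branch points, each of which has one preimage of index \(2\) and one of index \(1\), so

\[\chi(W_{n})=3\,\chi(\Omega_{n})-(2g+1)=3-(2g+1)=2-2g.\]

Since \(w_{n}=p^{-1}(\omega_{n})\) is a \(2\)-component link, \(W_{n}\) has two boundary circles, and solving \(2-2h-2=2-2g\) gives genus \(h=g-1\), i.e. \(W_{n}\cong S_{g-1,2}\).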
## 3. Hyperbolicity and Volume
To prove the main theorem we need to construct from \(N_{n}\) the desired knot complements, verify their hyperbolicity, and show that their volumes grow linearly with \(n\). In service of this we will shift to a \(2\)-dimensional perspective. We will focus on the fiber surface \(W_{n}\subseteq N_{n}\), which is fixed as the monodromy of the fibration varies. In particular we specify a marking of the disk \(\Omega=\Omega_{n}\subseteq S^{3}\). We let \(S^{3}=\mathbb{R}^{3}\cup\infty,\) and let \(\Omega\) be the unit disk in the plane \(\mathbb{R}^{2}\times 0.\) Moreover, by fixing a simple \(3\)-fold branched covering of the disk, we fix a marking on the fiber surface \(W=W_{n}\) as shown in figure 2 below. We also note that \(\gamma_{n},\delta_{n},\Gamma_{n},\) and \(\Delta_{n}\) do not depend on \(n\), so from now on we will set \(\gamma=\gamma_{n}\), \(\delta=\delta_{n}\), \(\Gamma=\Gamma_{n}\), and \(\Delta=\Delta_{n}.\) Let \(b_{n}=P\cdot F^{n}\cdot S\cdot F^{-n}\) be the monodromy of \(N_{n}\), where \(P\), \(F\), and \(S\) are the lifts of \(\Pi\), \(\Phi\), and \(\Sigma\) respectively.
Let \(c\), \(d\), and \(w_{n}\) denote \(p^{-1}(\gamma)\), \(p^{-1}(\delta)\), and \(p^{-1}(\omega_{n})\) respectively. Also, let \(C^{+}\) and \(D^{+}\) denote \(p^{-1}(\Gamma)\) and \(p^{-1}(\Delta)\) respectively.
**Lemma 3.1**.: \(C^{+}\) _and \(D^{+}\) are each homeomorphic to the disjoint union of a disk and \(S_{2,1}.\) Furthermore, \(C^{+}\cap D^{+}\) is the disjoint union of a \(S_{1,2}\) and a disk._
Proof.: If we restrict \(p\) to a single fiber of \(N_{n}\) we get a simple \(3\)-fold branched cover \(p^{\prime}:S_{g-1,2}\longrightarrow D^{2}\) branched over \(2g+1\) points. It follows from the work of Gabai and Kazez in [1] that \(3\)-fold simple branched covers of the disk are unique up to equivalence. By uniqueness of the cover we may assume it takes on the configuration shown in figure 2 in the disk fiber of \(S^{3}\) immediately following \(\Pi.\) That is, the distinguished branch point labeled \(b\) may be taken to lie on the \((2g+1)^{\mathrm{st}}\) strand of the braid in the fiber \(\Omega_{n}.\) In particular, every simple closed curve that bounds any \(5\) of the branch points excluding the distinguished point \(b\) on the far right of the disk is taken by a homeomorphism of the disk to the simple closed curve
Figure 3. After cutting the disk \(\Omega=\Omega_{n}\) along \(\alpha\) and the covering surface along \(p^{-1}(\alpha)\) we obtain two components of the covering surface. One component covers the cut disk trivially. The other component covers the cut disk by an order \(2\) involution that identifies the two boundary components.
Figure 2. The unique simple cover from \(S_{3,2}\) to a disk. The gray curves divide the covering surface into three "fundamental domains" for the cover. See the papers by Winarski and Fuller (section 3.2 of [21] and [22]) for more on a similar simple branched cover.
Observe that \(c\) and \(d\) will consist of two simple closed curves since \(\gamma\) and \(\delta\) each bound an odd number of branch points. One of the two preimages is completely contained in the upper fundamental domain of the cover and bounds a disk as in the left panel of figure 3. For example, every simple closed curve that bounds five branch points other than the distinguished red branch point in figure 2 is taken to the curve \(\eta\) by the change of coordinates principle. Again by a Riemann-Hurwitz formula calculation we conclude that the other component is homeomorphic to \(S_{2,1},\) a surface of genus two with one boundary component. Note that \(\Gamma\cap\Delta\) is a disk that intersects \(\beta_{n}\) in \(4\) points. Thus, \(C^{+}\cap D^{+}\) consists of \(2\) disconnected components. One is the intersection of two disks in the upper fundamental domain that must be connected by the properties of the cover and hence is a disk. The other component is the double branched cover of a disk, branched over \(4\) points all of which have branching index \(2\). By another Riemann-Hurwitz calculation this is a surface of genus \(1\) with \(2\) boundary components.
Observe that \(b_{n}\) factors as \((PS)\cdot(S^{-1}F^{n}S)\cdot F^{-n}.\) Let \(C\) and \(D\) denote the \(S_{2,1}\) component of \(C^{+}\) and \(D^{+}\) respectively.
**Lemma 3.2**.: \(F\) _is a partial pseudo-Anosov with \(\text{supp}(F)=D\) and \(\text{supp}(S^{-1}FS)=C\)._
Proof.: Recall that \(p^{-1}(\text{supp}(\Phi))=p^{-1}(\Gamma)=C^{+}.\) By lemma 3.1, \(C^{+}\) is the disjoint union of \(C\) and a disk. Moreover, the component of \(C^{+}\) that contains branching index one points is the disk component, again by lemma 3.1 (see also figure 3). We observe that \(C^{+}-C\) and \(D^{+}-D\) are both disks and hence any mapping class restricted to either one is trivial. Note that \(p\) restricted to the subsurface \(C\) that contains only index two points is a double branched cover of \(\Gamma.\) Since \(\Phi\) is a pseudo-Anosov on \(\Gamma\) it preserves two transverse singular foliations on \(\Gamma.\) The foliations lift through the branched cover to give singular transverse foliations on the surface \(C.\) Any possible \(1\)-prong singularities at punctures of \(\Omega_{n}\) are double covered in \(C\) and \(D\) and hence become \(2\)-prong singularities. Therefore, \(F\) is a partial pseudo-Anosov with \(\text{supp}(F)=D\) and \(\text{supp}(S^{-1}FS)=C.\) See section 14.1 of [12] for more on pseudo-Anosov mapping classes arising from branched covers.
We will need the following notation. Let \(S\) be a surface and let \(\alpha,\)\(\beta\subseteq S\) be essential simple closed curves on \(S.\) Let \(d_{S}(\alpha,\beta)\) denote distance in the curve graph of \(S\) between the vertices of the curve graph that represent the simple closed curves \(\alpha\) and \(\beta.\) Now, let \(Y\subseteq S\) be a subsurface of \(S.\) Given a curve \(\eta\subseteq S\) isotope \(\eta\) so that it intersects \(Y\) minimally. Define \(\pi_{Y}(\eta)\) to be the arcs of \(\eta\cap Y\) or the curve \(\eta\) in the case that \(\eta\subseteq Y.\) If \(\eta\) projects to a collection of arcs in \(Y,\) then we arbitrarily choose a single arc \(\eta^{\prime}\) of \(\pi_{Y}(\eta)\) and "close it up" into a curve in the following way. If \(\eta^{\prime}\) has endpoints on a single component of \(\partial Y\) then let \(\overline{\eta^{\prime}}\) denote the curve obtained by concatenating the endpoints of \(\eta^{\prime}\) with a sub arc of \(\partial Y.\) If \(\eta^{\prime}\) has endpoints on distinct components of \(\partial Y\) then a neighborhood \(P\subseteq Y\) of \(\eta^{\prime}\cup\partial Y\) is homeomorphic to a pair of pants. Let \(\overline{\eta^{\prime}}\) be the component of \(\partial P\) disjoint from \(\partial Y\) (see figure 4). Now let \(\alpha,\beta\subseteq S\) be two simple closed curves that project to arcs in \(Y.\) Let \(d_{Y}(\alpha,\beta)=d_{\mathcal{C}(Y)}(\overline{\alpha^{\prime}},\overline{ \beta^{\prime}}),\) if \(\beta\) projects to a closed curve then let \(d_{Y}(\alpha,\beta)=d_{\mathcal{C}(Y)}(\overline{\alpha^{\prime}},\beta),\) and if both \(\alpha\) and \(\beta\) project to closed curves then \(d_{Y}(\alpha,\beta)=d_{\mathcal{C}(Y)}(\alpha,\beta).\) Note that we made several choices in defining distance in a subsurface. For example we could have chosen a different (but disjoint) arc or closed up the arc differently. Hence, our definition is only coarsely well-defined up to a small additive error.
**Lemma 3.3**.: \(b_{n}\) _has linearly growing translation distance in the curve complexes of \(C\) and \(D.\) In particular there exists some \(n_{0}>0\) such that \(b_{n}\) is not periodic for all \(n\geq n_{0}.\)_
Proof.: By lemma 3.2, \(S^{-1}F^{n}S\) and \(F^{-n}\) are each pseudo-Anosov on their support, namely \(C\) and \(D\). By lemma 3.1, \(C\cap D\) is homeomorphic to a surface with genus \(1\) and \(2\) boundary components. Hence, Theorem 5.2 of [11] implies that \((S^{-1}F^{n}S)\cdot F^{-n}\) has linearly growing translation distance in the curve complexes of each of \(C\) and \(D\). Therefore, there exists some \(n_{0}\) such that for all \(n\geq n_{0}\), \(b_{n}\) has large translation distance in each of \(C\) and \(D,\) and hence is of infinite order.
While we have shown that our monodromies are eventually not periodic, we need monodromies that are eventually pseudo-Anosov. Since this may not be true for \(b_{n}\), we need to modify our manifolds \(N_{n}\) slightly. To that end, we introduce a more \(2\)-dimensional way of discussing fibered links in closed oriented \(3\)-manifolds. We will use the equivalent notion of an _open book decomposition_ of our closed, oriented \(3\)-manifold \(M.\) An open book decomposition of \(M\) is a pair \((S,\varphi)\) where \(S\) is the fiber surface (\(\partial S\neq\varnothing\)) and \(\varphi\) is the monodromy.
The mapping torus \(M_{\varphi}\) of \(\varphi\) is homeomorphic to a link complement in \(M.\) Then \(M\) is homeomorphic to the quotient of \(M_{\varphi}\) under the identification \((x,t)\sim(x,t^{\prime})\) where \(x\in\partial S\) and \(t,t^{\prime}\in[0,1].\) The quotient of \(\partial M_{\varphi}\) under the above identification is the fibered link in \(M\) which is called the _binding_ of \((S,\varphi).\) We now describe an operation on an open book decomposition \((S,\varphi)\) of \(M\) called a stabilization.
**Definition 3.4**.: _Let \((S,\varphi)\) be an open book decomposition of \(M,\) and let \((\gamma,\partial\gamma)\subseteq(S,\partial S)\) be a properly embedded arc. A **stabilization** along \(\gamma\) of the open book decomposition \((S,\varphi)\) is the open book decomposition \((S^{\prime},\varphi^{\prime})\) of \(M\), where \(S^{\prime}\) is obtained by attaching a 1-handle to \(S\) that connects the endpoints of \(\gamma\) on \(\partial S\), and \(\varphi^{\prime}=\varphi\circ T_{\gamma^{\prime}},\) where \(\gamma^{\prime}\) is the curve that agrees with \(\gamma\) in \(S\) and intersects the co-core of the 1-handle once (see figure 4) and \(T_{\gamma^{\prime}}\) denotes the Dehn twist along \(\gamma^{\prime}.\)_
The following proposition is essentially theorem 1.1 of [1]. In the same way as described when we defined distance in the curve graph of a subsurface, we associate to any arc \(\gamma\) a simple closed curve \(\overline{\gamma}.\)
**Proposition 3.5**.: _Let \((S_{g-1,2},\varphi)\) be an open book decomposition of a closed, oriented 3-manifold \(M\) such that \(\varphi\) is not periodic. Let \(\overline{\gamma}\) be the simple closed curve associated to \(\gamma\) as described above. If \(d_{S_{g-1,2}}(\overline{\gamma},\varphi(\overline{\gamma}))=N>16\) then stabilization along the arc \(\gamma\) yields an open book decomposition \((S_{g,1},\varphi\circ T_{\gamma^{\prime}})\) such that \(\varphi\circ T_{\gamma^{\prime}}\) is pseudo-Anosov. Recall that \(\gamma^{\prime}\) is the extension of \(\gamma\) to the stabilized surface and \(d_{S}(\cdot,\cdot)\) denotes the distance in curve graph of \(S.\)_
Proof.: We note here that while theorem 1.1 of [1] contains the hypothesis that the monodromy \(\varphi\) is right-veering, this is not needed to show that \(\varphi\circ T_{\gamma^{\prime}}\) is pseudo-Anosov. All that is required is that \(\varphi\) is not periodic. While the proposition is essentially theorem 1.1 of [1], the required distance between \(\overline{\gamma}\) and its image under \(\varphi\) is not made explicit there. For the sake of completeness, we sketch their proof and make the distance explicit.
Let \(\varphi^{\prime}=\varphi\circ T_{\gamma^{\prime}}.\) We argue that \(\varphi^{\prime}\) is pseudo-Anosov by showing that it is neither reducible nor periodic. We will first argue that \(\varphi^{\prime}(\delta)\neq\delta\) for any multicurve \(\delta\subseteq S_{g,1}.\)
First suppose that \(\delta\subseteq S_{g-1,2}\subseteq S_{g,1}.\) If \(i(\varphi(\delta),\gamma^{\prime})=n\) then there is a representative \(g\) of \(\varphi^{\prime}(\delta)\) that intersects the co-core \(a\) of the 1-handle \(n\) times. If \(i(\varphi^{\prime}(\delta),a)<n\) then there is a bigon consisting of a subarc of \(a\) and an arc of \(g.\) Then one checks that this implies there is a bigon consisting of an arc of \(\gamma\) and an arc of \(\varphi(\delta).\) See the third paragraph of the proof of case 1 of theorem 1.1 of [1] for this argument. This is a contradiction if \(n>0.\) If \(n=0\) then \(i(\varphi(\beta),\overline{\gamma})=0\) for any component \(\beta\) of \(\delta.\) Hence, \(d(\varphi(\beta),\overline{\gamma})=1\) for all such \(\beta.\) However, \(d(\overline{\gamma},\varphi(\overline{\gamma}))=N\) and therefore, \(d(\varphi(\overline{\gamma}),\varphi(\beta))\geq N-1\) for every \(\beta.\) Since \(n=0,\) we see that \(i(\varphi^{\prime}(\delta),\overline{\gamma})=0.\) This yields a contradiction so long as \(N>2.\)
Now suppose that \(\delta\nsubseteq S_{g-1,2}.\) Let \(i(\delta,a)=k>0.\) Let \(B=S_{g,1}-S_{g-1,2}\) denote the stabilization band. Normalize \(\delta\) so that it intersects \(\gamma^{\prime}\) and \(a\) transversely and efficiently. We then subdivide \(\delta\) into arcs
Figure 4. The surface obtained by stabilizing \(S_{2,2}.\) The curve \(\gamma^{\prime}\) is the curve we twist along while stabilizing. The curve \(\overline{\gamma}\) is the result of the ”closing up” of the arc \(\gamma\) which agrees with \(\gamma^{\prime}\) on \(S_{2,2}.\)
\(\delta_{1},\ldots,\delta_{k},\delta^{\prime}_{1},\ldots,\delta^{\prime}_{k}\) so that \(\delta_{i}\subseteq S_{g-1,2}\) and \(\delta^{\prime}_{i}\subseteq B.\) The \(\delta^{\prime}_{i}\) are all linear arcs in \(B.\) If \(i(\delta^{\prime}_{i},\gamma^{\prime})=0\) we call such an arc _vertical_. If \(i(\delta^{\prime}_{i},\gamma^{\prime})>0\) we say it has _positive slope_ or _negative slope_. See figure 5. We normalize and subdivide \(\varphi(\delta)\) in the same way. Up to isotopy we may assume that there is no triangle contained in \(S_{g-1,2}\) with boundary an arc of \(\gamma^{\prime},\) an arc of \(\delta,\) and an arc of \(\partial S_{g-1,2}.\) Now let \(m\) denote the number of arcs of \(\varphi(\delta)\cap B\) that are not vertical. Let \(n\) denote the number of intersections between \(\varphi(\delta)\) and \(\gamma^{\prime}\) that are outside of \(B\) so that \(i(\varphi(\delta),\gamma^{\prime})=m+n.\) Note that this definition of \(n\) agrees with the definition in the above paragraph if \(\varphi(\delta)\) does not intersect the band \(B.\) Colin and Honda show that \(i(\varphi^{\prime}(\delta),a)=k\pm m+n\) where the sign depends on whether the non-vertical arcs have positive or negative slope. If \(m\neq n\) then \(i(\varphi^{\prime}(\delta),\gamma^{\prime})\neq i(\delta,a)=k,\) so \(\varphi^{\prime}(\delta)\neq\delta.\) It remains to deal with the case that \(m=n.\)
Suppose that \(m=n.\) Then \(i(\varphi^{\prime}(\delta),\gamma^{\prime})=m+n=2m\leq 2k\) by the definitions of \(m\) and \(k.\) Suppose for the sake of contradiction that \(\varphi^{\prime}(\delta)=\delta.\) Then \(i(\delta,\gamma^{\prime})=i(\varphi^{\prime}(\delta),\gamma^{\prime}).\) Therefore \(i(\delta,\gamma^{\prime})\leq 2k,\) and hence there is some \(1\leq i\leq k\) such that \(i(\delta_{i},\gamma)\leq 2k/k=2.\) Now we note that
\[i(\overline{\delta_{i}},\overline{\gamma})\leq 2+4=6.\]
By Hempel's lemma (lemma 2.1 of [10])
\[d(\overline{\delta_{i}},\overline{\gamma})\leq\lfloor 2\log_{2}(6)+2\rfloor=7.\]
Observe that \(\overline{\delta_{i}}\) and \(\overline{\delta_{j}}\) cannot jointly fill the surface, so
\[d(\overline{\delta_{i}},\overline{\delta_{j}})\leq 2\:\forall i,j\]
Therefore,
\[d(\overline{\delta_{j}},\overline{\gamma})\leq 2+7=9\:\forall j.\]
Applying \(\varphi\) yields
\[d(\varphi(\overline{\delta_{j}}),\varphi(\overline{\gamma}))\leq 9\:\forall j.\]
Thus,
\[d(\varphi(\overline{\delta_{j}}),\overline{\gamma})\geq N-9.\]
Applying Hempel's lemma again,
\[i(\varphi(\overline{\delta_{j}}),\overline{\gamma})\geq 2^{(N-9)/2-1}.\]
Hence,
\[i(\varphi(\delta_{j}),\gamma)\geq 2^{(N-9)/2-1}-4.\]
Therefore,
\[i(\varphi(\delta),\gamma^{\prime})\geq(2^{(N-9)/2-1}-4)k>2k\]
which is a contradiction so long as \(N>16\). This proves that \(\varphi^{\prime}\) is not reducible.
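For the reader's convenience, the threshold \(N>16\) can be read off directly from the last chain of inequalities (this short computation is ours and only makes the arithmetic explicit):

\[(2^{(N-9)/2-1}-4)k>2k\iff 2^{(N-9)/2-1}>6\iff\frac{N-9}{2}-1>\log_{2}6\iff N>11+2\log_{2}6\approx 16.17,\]

so the contradiction is guaranteed for every integer \(N\geq 17,\) i.e., whenever \(N>16.\)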
It just remains to show that \(\varphi^{\prime}\) is not periodic. Colin and Honda show that \(\varphi^{\prime}\) is not periodic by considering one of the connected components, \(\eta,\) of \(\partial S_{g-1,2}\) as a curve in \(S_{g,1}.\) They argue that \(i((\varphi^{\prime})^{n}(\eta),a)\to\infty\) as \(n\to\infty.\)
Figure 5. On the left is the case that \(\delta\) has negative slope, and on the right is the case that \(\delta\) has positive slope. In both cases \(k=5,\)\(m=3,\) and \(n=4.\)
We now apply the above lemma to \(M-w_{n}\) to obtain the following:
**Proposition 3.6**.: _Let \((S_{g-1,2},b_{n})\) be the open book decomposition of \(M\) corresponding to the fibered link \(w_{n}\subseteq M.\) There exists an arc of stabilization \(\gamma\) and \(n_{1}\geq n_{0}>0\) such that \(b_{n}\circ T_{\gamma^{\prime}}\) is pseudo-Anosov for all \(n\geq n_{1},\) and hence \(M-\overline{w_{n}}\) is hyperbolic, where \(\overline{w_{n}}\) is the image of \(w_{n}\) after stabilization._
Proof.: We first explain how to choose an arc of stabilization \(\gamma\) such that \(d_{S_{g-1,2}}(\overline{\gamma},b_{n_{0}}(\overline{\gamma}))>16\) following [1]. First let \(\gamma_{0}\) be any properly embedded arc connecting the two boundary components of \(S_{g-1,2}.\) Fix a pseudo-Anosov \(f\in\operatorname{Mod}(S_{g-1,2})\) that does not share a stable or unstable lamination with \(b_{n_{0}}.\) Let \(\nu\) be the stable lamination of \(f.\) The sequence \(f^{i}(\overline{\gamma_{0}})\to\nu\) as \(i\to\infty.\) We claim that \(d_{S_{g-1,2}}(f^{i}(\overline{\gamma_{0}}),b_{n_{0}}(f^{i}(\overline{\gamma_{0}})))\to\infty.\) If \(d_{S_{g-1,2}}(f^{i}(\overline{\gamma_{0}}),b_{n_{0}}(f^{i}(\overline{\gamma_{0}})))\) does not approach \(\infty,\) then up to passing to a subsequence \(d_{S_{g-1,2}}(f^{i}(\overline{\gamma_{0}}),b_{n_{0}}(f^{i}(\overline{\gamma_{0}})))=N\) for some constant \(N.\) Let \(f^{i}(\overline{\gamma_{0}})=v_{i,0},v_{i,1},\ldots,v_{i,N-1},v_{i,N}=b_{n_{0}}(f^{i}(\overline{\gamma_{0}}))\) be a geodesic in the curve graph. Note that \(v_{i,0}\) and \(v_{i,1}\) are disjoint. Up to passing to a subsequence, \(v_{i,1}\) converges to a lamination \(\nu^{\prime}\) such that \(i(\nu,\nu^{\prime})=0.\) Similarly, \(v_{i,N-1}\) converges to a lamination \(\lambda\) such that \(i(\lambda,b_{n_{0}}(\nu))=0.\) Since \(\nu\) is minimal, \(\nu=\nu^{\prime}\) and \(b_{n_{0}}(\nu)=\lambda.\) Hence, we have \(v_{i,1}\to\nu\) and \(v_{i,N-1}\to b_{n_{0}}(\nu)\) as \(i\to\infty,\) but \(d_{S_{g-1,2}}(v_{i,1},v_{i,N-1})<N.\) After repeating this process finitely many times, we conclude that \(b_{n_{0}}(\nu)=\nu.\) This contradicts the choice of \(f.\) Therefore, there exists some \(m>0\) such that \(d_{S_{g-1,2}}(f^{m}(\overline{\gamma_{0}}),b_{n_{0}}(f^{m}(\overline{\gamma_{0}})))>16.\) Moreover, there exists some \(l_{0}>0\) so that for all \(l\geq l_{0}\), \(d_{S_{g-1,2}}(f^{l}(\overline{\gamma_{0}}),\partial C)>18.\) Let \(k=\max\{m,l_{0}\}.\) Let \(\gamma\) be the arc such that \(f^{k}(\overline{\gamma_{0}})=\overline{\gamma}.\)
Note that although the above only shows how to choose an arc of stabilization for \(b_{n_{0}}\), we may choose such an arc of stabilization uniformly for all open book decompositions \((S_{g-1,2},b_{n})\) where \(n\) is large enough. To see why this is true, consider the subsurface \(C.\) Observe that \(d_{C}(\overline{\gamma},b_{n}(\overline{\gamma}))\approx d_{C}(\overline{\gamma},S^{-1}F^{n}SF^{-n}(\overline{\gamma})).\) By lemma 3.3 there exists \(n_{1}\geq n_{0}>0\) such that \(d_{C}(\overline{\gamma},S^{-1}F^{n}SF^{-n}(\overline{\gamma}))\gg 0\) for all \(n\geq n_{1}.\) Hence, by the bounded geodesic image theorem (theorem 3.1 of [10]) the geodesic in \(\mathcal{C}(S_{g-1,2}),\) the curve graph of \(S_{g-1,2},\) between \(\overline{\gamma}\) and \(b_{n}(\overline{\gamma})\) must pass within distance \(1\) of \(\partial C\) for all \(n\geq n_{1}.\) Therefore we have the following inequality.
\[d_{S_{g-1,2}}(\overline{\gamma},\partial C)-1\leq d_{S_{g-1,2}}(\overline{ \gamma},b_{n}(\overline{\gamma}))\]
Now note that \(d_{S_{g-1,2}}(\overline{\gamma},\partial C)\) is unchanged as \(n\to\infty,\) and hence \(d_{S_{g-1,2}}(\overline{\gamma},b_{n}(\overline{\gamma}))>16\) for all \(n\geq n_{1}.\) Thus, by proposition 3.5 the family of open book decompositions \((S_{g,1},b_{n}\circ T_{\gamma^{\prime}})\) have pseudo-Anosov monodromies for all \(n\geq n_{1}.\)
**Remark 3.7**.: _Note that stabilization does not affect the conclusions of lemmas 3.2 and 3.3. We consider the subsurfaces \(C\) and \(D\) included into the stabilized surface and the mapping class \(F\) as a mapping class on the stabilized surface. Then, \(b_{n}\circ T_{\gamma^{\prime}}\) has linearly growing translation distance in the curve graphs of \(C\) and \(D,\) and \(F\) is a partial pseudo-Anosov on \(S_{g,1},\) the stabilized surface._
**Proposition 3.8**.: _In fixed genus \(g,\) as \(n\) tends to infinity, the knot complements \(M_{n}=M-\overline{w_{n}}\) are eventually hyperbolic, with volumes tending to infinity._
Proof.: As is shown in proposition 3.6, the monodromies \(b_{n}^{\prime}=b_{n}\circ T_{\gamma^{\prime}}\) are pseudo-Anosov for \(n\geq n_{1}.\) Hence, their mapping tori, \(M_{n},\) are hyperbolic for \(n\geq n_{1}.\) As is noted in remark 3.7 and lemma 3.3, \(b_{n}^{\prime}\) has linearly growing subsurface projections to \(C_{n}\) and \(D_{n}\). Now, by applying the Masur-Minsky distance formula for the pants graph (see theorem 6.12 of [10]) we see that \(b_{n}^{\prime}\) has linearly growing translation distance in the pants graph of \(S_{g,1}\). By the work of Brock [1], the mapping tori of \(b_{n}^{\prime}\) have linearly growing volume. These mapping tori are precisely the fibered knot complements \(M_{n}\).
For fixed \(g\) the family \(\overline{w_{n}}\) are the promised family of fibered knots in \(M\) satisfying the conclusion of Theorem 1.1.
|
2302.00147 | Influence of the magnetic field topology in the evolution of small-scale
two-fluid jets in the solar atmosphere | We perform a series of numerical simulations to recreate small-scale
two-fluid jets using the JOANNA code, considering the magnetohydrodynamics of
two fluids (ions + electrons and neutral particles). We first excite the jets
in a uniform magnetic field by using velocity pulse perturbations located at
$y_{0}=$1.3, 1.5, and 1.8 Mm, considering the base of the photosphere at $y=0$
Mm. Then, we repeat the excitation of the jets in a magnetic field that mimics
a flux tube. Mainly, the jets excited at the upper chromosphere ($y\sim1.8$ Mm)
reach lower heights than those excited at the lower chromosphere ($y\sim1.3$
Mm); this is due to the higher initial vertical location because of the lesser
amount of plasma dragging. In both scenarios, the dynamics of the neutral
particles and ions show similar behavior; however, we can still identify some
differences in the velocity drift, which in our simulations is of the order of
$10^{-3}$ km s$^{-1}$ at the tips of the jets once they reached their maximum
heights. Also, we estimate the heat generation due to the friction between ions
and neutrals ($Q^{in}_{i,n}$), which is of the order of $0.002-0.06$ W
m$^{-3}$; however, it is too small to contribute to the heating of the surroundings
of the solar corona. The jets in the two magnetic environments do not show
substantial differences other than a slight variation in the maximum heights
reached, particularly in the uniform magnetic field scenario. Finally, the
maximum heights reached by the three different jets are in the range of some
morphological parameters corresponding to macrospicules, Type I spicules, and
Type II spicules. | E. E. Díaz-Figueroa, G. Ares de Parga, J. J. González-Avilés | 2023-01-31T23:45:44Z | http://arxiv.org/abs/2302.00147v1 | Influence of the magnetic field topology in the evolution of small-scale two-fluid jets in the solar atmosphere
###### Abstract
We perform a series of numerical simulations to recreate small-scale two-fluid jets using the JOANNA code, considering the magnetohydrodynamics of two fluids (ions + electrons and neutral particles). We first excite the jets in a uniform magnetic field by using velocity pulse perturbations located at \(y_{0}=\)1.3, 1.5, and 1.8 Mm, considering the base of the photosphere at \(y=0\) Mm. Then, we repeat the excitation of the jets in a magnetic field that mimics a flux tube. Mainly, the jets excited at the upper chromosphere (\(y\sim 1.8\) Mm) reach lower heights than those excited at the lower chromosphere (\(y\sim 1.3\) Mm); this is due to the higher initial vertical location because of the lesser amount of plasma dragging. In both scenarios, the dynamics of the neutral particles and ions show similar behavior; however, we can still identify some differences in the velocity drift, which in our simulations is of the order of \(10^{-3}\) km s\({}^{-1}\) at the tips of the jets once they reached their maximum heights. Also, we estimate the heat generation due to the friction between ions and neutrals (\(Q_{i,n}^{in}\)), which is of the order of \(0.002-0.06\) W m\({}^{-3}\); however, it is too small to contribute to the heating of the surroundings of the solar corona. The jets in the two magnetic environments do not show substantial differences other than a slight variation in the maximum heights reached, particularly in the uniform magnetic field scenario. Finally, the maximum heights reached by the three different jets are in the range of some morphological parameters corresponding to macrospicules, Type I spicules, and Type II spicules.
Sun: chromosphere; Sun: atmosphere; methods: numerical; magnetohydrodynamics (MHD)
## 1 Introduction
Solar jet-type phenomena are ubiquitous in the solar atmosphere. Its importance has generated numerous investigations on its origin and evolution (see, e.g., 1,2, and references therein). Although there are many open questions about its nature, enormous progress has been made in understanding its dynamics, particularly in 2D, 2.5D, and 3D MHD numerical simulations. These models have swept a broad spectrum of complexity from ideal MHD to resistive MHD and two-fluid MHD (see, e.g., 3, 4, 5, 6, 7, 8, 9).
Jet-type phenomena have been proposed as promising candidates to generate large amounts of heat at the upper atmospheric layers of the Sun (10). Their frequency of occurrence is high compared to other solar phenomena, which gives the impression that jets are ongoing events that could represent a continuous source of energy (11). These collimated plasma jets are called spicules and are cataloged according to their different characteristics (12, 13, 14). The spicules were first described in Secchi (15), but they owe their name to Roberts (16). Additionally, Beckers (10) developed a theoretical and observational analysis to show that the chromosphere is mostly populated by these elongated structures that may supply plasma to the corona. The first spicules observed, also called Type I spicules, can reach heights in the range of 7000-13000 km (12), diameters of 500-2000 km (17), vertical speeds of 25 km
s\({}^{-1}\), lifetimes of 1-10 min, temperatures of 5000-15000 K and densities of 3\(\times 10^{-13}\) g cm\({}^{-3}\) that remain quasi-constant with height [12; 18]. On the other hand, the Type II spicules, which could be generated due to magnetic reconnection, reach heights of 1000 to 7000 km above the chromosphere, with vertical speeds between 40 km s\({}^{-1}\) and 300 km s\({}^{-1}\), with the bulk between 50 and 150 km s\({}^{-1}\), lifetimes from 10 to 150 s, with characteristic diameters less than 200 km, and temperatures of approximately \(10^{4}\) K [5; 13; 19; 20; 21; 22]. Besides, the macrospicules can reach heights of 7 to 70 Mm and speeds of 10 to 150 km s\({}^{-1}\), and they can have lifetimes of 3 to 45 min, according to observations and numerical simulations [14; 23; 24]. Finally, there are also other, more complex chromospheric ejections, such as surges, which are seen as darkenings in the blue/red wings of the line with line-of-sight (LOS) apparent velocities of a few to several tens of km s\({}^{-1}\) on areas with projected lengths of 10-50 Mm [25]. Surges can also consist of small-scale thread-like structures that appear to be related to shocks and Kelvin-Helmholtz instabilities [26; 27; 28].
The origin of the small-scale plasma jets found in the lower solar atmosphere is still a matter of debate. Therefore, there is still room for new models, such as the two-fluid approximation, which includes the dynamics of neutrals apart from the ions, and therefore is more realistic for studying the generation, evolution, and morphology of small-scale jets from the chromosphere to the solar corona. The two-fluid approximation is essential for modeling partially ionized plasmas in astrophysical scenarios in general [see, e.g., 29]. For example, in [30], the authors perform numerical simulations using the two-fluid equations in 2D Cartesian geometry to study the formation and evolution of solar spicules. They found that the simulated spicule consists of a dense, cold core dominated by neutrals. More recently, in [31], the authors study the formation and evolution of jets employing localized non-linear Gaussian pulses of ion and neutral pressures initially launched from the magnetic null point of a potential arcade in a partially ionized solar atmosphere. They found that the shock propagates upwards into the solar corona and lifts the cold and dense chromospheric plasma in the form of a collimated jet with an inverted-Y shape. These kinds of inverted-Y jets and their heating may explain the properties of some jets observed in the solar atmosphere. Additionally, there are recent investigations of two-fluid effects playing an essential role in the non-linear regime, particularly in the context of wave damping and plasma heating of the solar chromosphere [see, e.g., 32; 33; 34].
In this paper, with the use of the JOANNA code [35], we solve the two-fluid MHD equations numerically to simulate different chromospheric small-scale two-fluid jets excited at three different vertical locations (\(y_{0}=1.3,1.5,1.8\) Mm) to analyze the effect of two different magnetic configurations on the evolution and morphology of the jets. We organize the paper as follows. First, section 2 describes the two-fluid equations, the numerical methods, the perturbations, and the magnetic field configurations. Then, in section 3, we present the most significant results of the numerical simulations for the two different magnetic field configurations. Next, in section 4, we discuss the most relevant differences between the two simulation cases. Finally, in Section 5, we summarize the results and draw the conclusions.
## 2 Model and methods
### The system of two-fluid equations
We consider a stratified solar atmosphere composed of two fluids, i.e., ions+electrons and neutral particles. We write the system of the two-fluid equations as follows [30; 36; 37]:
\[\frac{\partial\rho_{i}}{\partial t}+\nabla\cdot(\rho_{i}\mathbf{V}_{i})=0, \tag{1}\] \[\frac{\partial\rho_{n}}{\partial t}+\nabla\cdot(\rho_{n}\mathbf{V}_{n})=0, \tag{2}\] \[\frac{\partial(\rho_{i}\mathbf{V}_{i})}{\partial t}+\nabla\cdot(\rho_{i}\mathbf{V}_{i}\mathbf{V}_{i}+p_{ie}\mathbf{I})-\frac{1}{\mu}(\nabla\times\mathbf{B})\times\mathbf{B}-\rho_{i}\mathbf{g}=-\mathbf{S}_{in}, \tag{3}\] \[\frac{\partial(\rho_{n}\mathbf{V}_{n})}{\partial t}+\nabla\cdot(\rho_{n}\mathbf{V}_{n}\mathbf{V}_{n}+p_{n}\mathbf{I})-\rho_{n}\mathbf{g}=\mathbf{S}_{ni}, \tag{4}\] \[\frac{\partial p_{i}}{\partial t}+\mathbf{V}_{i}\cdot\nabla p_{i}+\gamma p_{i}\nabla\cdot\mathbf{V}_{i}=(\gamma-1)Q_{i}^{in}, \tag{5}\] \[\frac{\partial p_{n}}{\partial t}+\mathbf{V}_{n}\cdot\nabla p_{n}+\gamma p_{n}\nabla\cdot\mathbf{V}_{n}=(\gamma-1)Q_{n}^{in}, \tag{6}\] \[\frac{\partial\mathbf{B}}{\partial t}=\nabla\times(\mathbf{V}_{i}\times\mathbf{B}), \tag{7}\] \[\nabla\cdot\mathbf{B}=0, \tag{8}\]
where \(\rho_{i,n}\), \(p_{i,n}\), \(\mu\), \(\mathbf{B}\), \(\mathbf{V}_{i,n}\), \(\mathbf{S}_{in}\), and \(Q_{i,n}^{in}\) represent the ion (\(i\)) and neutral (\(n\)) densities, the gas pressures, the magnetic permeability of the medium, the magnetic field, the ion and neutral velocities, the collisional momentum, and the heat generation due to collisions between species, respectively. Electrical resistivity is not included in the simulations. The collisional momentum between particles (ions and neutrals) in Eq. (3) is defined as \(\mathbf{S}_{in}=\nu_{in}\rho_{i}(\mathbf{V}_{i}-\mathbf{V}_{n})\), where the collision frequency between species is \(\nu_{in}=\alpha_{in}/\rho_{n}\), and \(\alpha_{in}=\frac{4}{3}\frac{\sigma_{in}}{m_{p}+m_{n}}\sqrt{\frac{8k_{B}}{\pi}\left(\frac{T_{i}}{m_{i}}+\frac{T_{n}}{m_{n}}\right)}\,\rho_{i}\rho_{n}\). Here, \(\sigma_{in}\) is the collisional cross-section between ions and neutrals and takes a value of \(0.75\times 10^{-18}\) m\({}^{2}\) [37]. Finally, the heat generation terms due to collisions between species are defined as
\[Q_{i}^{in} =\alpha_{in}\bigg{[}\frac{1}{2}|\mathbf{V}_{i}-\mathbf{V}_{n}|^{ 2}+\frac{3}{2}\frac{k_{B}}{m_{i}}(T_{n}-T_{i})\bigg{]}, \tag{9}\] \[Q_{n}^{in} =\alpha_{in}\bigg{[}\frac{1}{2}|\mathbf{V}_{i}-\mathbf{V}_{n}|^{ 2}+\frac{3}{2}\frac{k_{B}}{m_{i}}(T_{i}-T_{n})\bigg{]}; \tag{10}\]
Here \(\gamma=5/3\) is the adiabatic index and \(\mathbf{I}\) is the unit matrix. The gravitational acceleration acts only along the \(y\)-axis, \(\mathbf{g}=[0,-g]\), with \(g=274\) m s\({}^{-2}\). We define the gas pressures by using the ideal gas laws:
\[p_{i}=\frac{k_{B}}{m_{i}}\rho_{i}T_{i}, \tag{11}\] \[p_{n}=\frac{k_{B}}{m_{n}}\rho_{n}T_{n}; \tag{12}\]
where \(T_{i,n}\) represent the ion and neutral temperatures, respectively; \(m_{i}=m_{H}\mu_{i}\) and \(m_{n}=m_{H}\mu_{n}\), with \(m_{H}\) being the mass of hydrogen, the main constituent of the gas, so that \(m_{n}\simeq m_{i}=m_{H}=m_{p}\) (with \(m_{p}\) being the proton mass); \(k_{B}\) is the Boltzmann constant. The mean masses are \(\mu_{i}\approx 0.58\) and \(\mu_{n}\approx 1.21\).
### Model of the solar atmosphere
At the initial time of simulations, we assume that the solar atmosphere is in hydrostatic equilibrium, i.e., we set the ion and neutral velocities equal to zero (\(\mathbf{V_{i}=V_{n}=0}\)). Then,
considering the ideal gas laws for ions and neutrals, given by equations (11)-(12), and taking into account the \(y\)-components of the hydrostatic equation (\(-\nabla p_{i,n}+\rho_{i,n}\mathbf{g}=\mathbf{0}\)), we arrive to the following expressions for the equilibrium gas pressures:
\[p_{n}(y) = p_{0n}\exp\left(-\int_{y_{0}}^{y}\frac{dy^{\prime}}{\Lambda_{n}(y ^{\prime})}\right), \tag{13}\] \[p_{i}(y) = p_{0i}\exp\left(-\int_{y_{0}}^{y}\frac{dy^{\prime}}{\Lambda_{i}(y ^{\prime})}\right). \tag{14}\]
Here
\[\Lambda_{i}(y)=\frac{k_{B}T_{i}(y)}{m_{H}\mu_{i}g}\quad\text{and}\quad\Lambda_ {n}(y)=\frac{k_{B}T_{n}(y)}{m_{H}\mu_{n}g} \tag{15}\]
are the pressure scale heights, and \(p_{0_{i,n}}\) denote the gas pressures at the reference level \(y_{0}=10\) Mm. Specifically, this paper adopts the semi-empirical model of Table 26 of [38] for the temperature field. We consider that temperatures of ions and neutrals are initially equal (at \(t=0\) s), i.e., they are in thermal equilibrium, and we set \(T_{i}=T_{n}=T\). We also display the equilibrium profiles, including the mass densities for ions and neutrals and the ionization fraction, \(\varrho_{i}/\varrho_{n}\) in Fig. 1.
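To make equations (13)-(15) concrete, the following minimal Python sketch numerically integrates the inverse pressure scale heights to obtain hydrostatic pressure profiles. The temperature profile `temperature`, the grid, and the reference pressures `p0i`, `p0n` are placeholder assumptions for illustration only; they do not reproduce the tabulated model of [38] used in the simulations.

```python
import numpy as np

# Physical constants (SI units)
k_B = 1.380649e-23        # Boltzmann constant [J/K]
m_H = 1.6726e-27          # hydrogen mass [kg]
g   = 274.0               # solar gravitational acceleration [m/s^2]
mu_i, mu_n = 0.58, 1.21   # mean masses for ions and neutrals

def temperature(y):
    """Placeholder temperature profile [K]: ~1e4 K chromosphere,
    ~1e6 K corona above y ~ 2.1 Mm, with a smooth transition."""
    return 1e4 + (1e6 - 1e4) * 0.5 * (1.0 + np.tanh((y - 2.1e6) / 1e5))

y = np.linspace(0.0, 30e6, 3000)    # height grid, 0-30 Mm [m]
T = temperature(y)

# Pressure scale heights, Eq. (15)
Lam_i = k_B * T / (m_H * mu_i * g)
Lam_n = k_B * T / (m_H * mu_n * g)

# Hydrostatic pressures, Eqs. (13)-(14): p(y) = p0 * exp(-int_{y0}^{y} dy'/Lambda)
y0 = 10e6                            # reference level at 10 Mm
p0i, p0n = 1e-2, 1e-3                # arbitrary reference pressures [Pa]
dy = y[1] - y[0]
int_i = np.cumsum(1.0 / Lam_i) * dy
int_n = np.cumsum(1.0 / Lam_n) * dy
i0 = np.argmin(np.abs(y - y0))       # shift so the integral vanishes at y0
p_i = p0i * np.exp(-(int_i - int_i[i0]))
p_n = p0n * np.exp(-(int_n - int_n[i0]))

print("p_i, p_n at the photospheric base [Pa]:", p_i[0], p_n[0])
```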
### Magnetic field configurations
We perform the simulations for two magnetic fields: i) constant vertical field and ii) flux tube. For the constant magnetic field, we use the following:
\[\mathbf{B}=[0,B_{0}], \tag{16}\]
where \(B_{0}=30\) G, as displayed in the top-left panel of Fig. 2. For the flux tube, we use the following normalized expressions in Cartesian coordinates, which recreate a flux tube in a particularly simple geometric way
\[B_{x}=0.075\bar{x}B_{0}\text{sech}^{2}(\bar{y}-3), \tag{17}\]
\[B_{y}=B_{0}(0.3-0.075\text{tanh}(\bar{y}-3)). \tag{18}\]
where \(\bar{x}=x/L\), \(\bar{y}=y/L\) and \(L=10^{6}\) m. Here \(B_{0}=100\) G represents the magnitude of the field in the lower part of the flux tube, i.e., in the photosphere. This magnetic field satisfies the divergence-free condition \(\nabla\cdot\mathbf{B}=0\), and it is analogous to an inverted magnetic bottle that can
accelerate the charged particles at the footpoints of the flux tubes where the jets are emerging. Despite the fact that \(\nabla\times\mathbf{B}\neq 0\) and \(\frac{(\nabla\times\mathbf{B})\times\mathbf{B}}{\mu_{0}}\neq 0\), there should be no significant effects on the evolution of the jet, since if we calculate the curl of \(\mathbf{B}\) for this two-dimensional Cartesian field, we have that \(\nabla\times\mathbf{B}=\left(\frac{\partial B_{y}}{\partial x}-\frac{\partial B_{x}}{\partial y}\right)\hat{z}=-\frac{\partial B_{x}}{\partial y}\,\hat{z}\) (since \(B_{y}\) does not depend on \(x\)), thus
\[\nabla\times\mathbf{B}=-1.5\times 10^{-13}\hat{x}B_{0}tanh(\hat{y}-3)sech^{2}( \hat{y}-3)\hat{\mathbf{z}}. \tag{19}\]
For \(\frac{1}{\mu_{0}}(\nabla\times\mathbf{B})\times\mathbf{B}=\frac{1}{\mu_{0}} \Big{(}B_{y}\frac{\partial\mathbf{B}_{x}}{\partial y}-B_{y}\frac{\partial \mathbf{B}_{y}}{\partial x}\Big{)}\hat{\mathbf{x}}+\frac{1}{\mu_{0}}\Big{(} -B_{x}\frac{\partial\mathbf{B}_{x}}{\partial y}+B_{x}\frac{\partial\mathbf{B} _{y}}{\partial x}\Big{)}\hat{\mathbf{y}}\), then
\[\frac{1}{\mu_{0}}(\nabla\times\mathbf{B})\times\mathbf{B} = (3.5\times 10^{-8}B_{0}^{2}\bar{x}\,\mathrm{sech}^{2}(\bar{y}-3)\tanh(\bar{y}-3) \tag{20}\] \[+8.95\times 10^{-9}B_{0}^{2}\bar{x}\,\mathrm{sech}^{2}(\bar{y}-3)\tanh^{2}(\bar{y}-3))\hat{\mathbf{x}}+\] \[(-8.95\times 10^{-15}B_{0}^{2}\bar{x}^{2}\mathrm{sech}^{4}(\bar{y}-3)\tanh(\bar{y}-3))\hat{\mathbf{y}}.\]
If we set \(B_{0}=100\) G \(=0.01\) T and \(\mu_{0}=1.256\times 10^{-6}\) N A\({}^{-2}\), then for Eq. (19) the maximum value is around \(10^{-15}\) T m\({}^{-1}\), while for Eq. (20) the maximum value of the Lorentz force in \(x\) is of the order of \(10^{-12}\) N m\({}^{-3}\), and in the \(y\) direction it is of the order of \(10^{-17}\) N m\({}^{-3}\). So the current density and the Lorentz force are negligible compared to the gravity force, which is balanced by the pressure of the plasma. Let us see this explicitly by calculating \(\rho_{i}\mathbf{g}\). If we take the value of the ion density (\(\rho_{i}\approx 6\times 10^{-10}\) kg m\({}^{-3}\)) and the gravitational acceleration (\(\mathbf{g}=-274\) m s\({}^{-2}\)\(\hat{\mathbf{y}}\)) at 1.3 Mm, where we place the velocity pulse that generates one of the jets under study, we have \(\rho_{i}\mathbf{g}=-1.6\times 10^{-7}\) N m\({}^{-3}\)\(\hat{\mathbf{y}}\), i.e., about ten orders of magnitude greater than the Lorentz force acting in the \(y\) direction. Therefore, since the gravitational force is many orders of magnitude larger than the Lorentz force (\(\rho_{i}\mathbf{g}\gg\frac{1}{\mu}(\nabla\times\mathbf{B})\times\mathbf{B}\)), we can consider that the magnetic flux tube model is very close to equilibrium, i.e., practically force-free, at \(t=0\). Then, the nature of the spicule is subject primarily to the velocity pulse and not to the currents and forces provided by the flux tube.
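As a quick sanity check of Eqs. (17)-(18), the short Python sketch below evaluates the flux-tube field on a grid and verifies that the divergence-free condition holds up to finite-difference truncation error. The grid resolution is an arbitrary choice made only for this illustration.

```python
import numpy as np

L, B0 = 1e6, 100e-4                      # length scale [m]; 100 G expressed in tesla
x = np.linspace(-5e6, 5e6, 400)          # [m]
y = np.linspace(0.0, 30e6, 1200)         # [m]
X, Y = np.meshgrid(x, y, indexing="ij")
xb, yb = X / L, Y / L                    # normalized coordinates of Eqs. (17)-(18)

# Flux-tube components, Eqs. (17)-(18)
Bx = 0.075 * xb * B0 / np.cosh(yb - 3.0) ** 2
By = B0 * (0.3 - 0.075 * np.tanh(yb - 3.0))

# Finite-difference divergence; analytically it vanishes identically
divB = np.gradient(Bx, x, axis=0) + np.gradient(By, y, axis=1)
print("max |div B| [T/m]:", np.abs(divB).max())
```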
The magnetic field is intense at \((x,y)=(0,0)\) Mm, where the field lines open upwards, and it reaches a quasi-constant value after \(y=3\) Mm. We show the magnetic field lines of the flux tube in the top-right panel of Fig. 2. In addition, the plasma \(\beta\) parameter estimates the ratio of the total (ion plus neutral) gas pressure to the magnetic pressure, and is defined as follows:
\[\beta(x,y)=\frac{p_{i}(y)+p_{n}(y)}{B^{2}/2}. \tag{21}\]
Here, the pressures \(p_{i,n}(y)\) are given by equations (13)-(14), and \(B^{2}=(B_{x}^{2}+B_{y}^{2})\). We display the spatial profiles of plasma \(\beta\) for both magnetic field configurations on the bottom panels of Fig. 2, where we observe that \(\beta>1\) in the lower atmosphere (the photosphere and the chromosphere) for both cases. In contrast, \(\beta<1\) in the solar corona (\(y>2.1\) Mm). Such behavior of plasma \(\beta\) is consistent with a dominant vertical magnetic field, as shown in [30].
### Perturbations
We perturb the hydrostatic equilibrium atmosphere, initially (at \(t=0\) s), by localized Gaussian pulses in ion and neutral vertical velocities, given as
\[v_{y_{i,n}}(x,y,t=0)=A_{v}\exp\biggl{(}-\frac{(x-x_{0})^{2}+(y-y_{0})^{2}}{w^{2}}\biggr{)}. \tag{22}\]
Here, \(A_{v}\) is the amplitude of the pulses, \((x_{0},y_{0})\) their position, and \(w\) their width. We locate the pulses at \(x_{0}=0\) Mm and \(y_{0}=1.3,1.5,1.8\) Mm for the three cases, and we hold fixed \(w=0.3\) Mm and \(A_{v}=100\) km s\({}^{-1}\); the latter value falls in the range of velocities of Type II spicules [19].
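As an illustration of Eq. (22), the short sketch below evaluates the initial vertical-velocity pulse on a grid for the three launch heights used here; the grid resolution is chosen arbitrarily for this example and does not correspond to the simulation grid.

```python
import numpy as np

A_v = 100.0      # pulse amplitude [km/s]
w   = 0.3        # pulse width [Mm]
x0  = 0.0        # horizontal position of the pulse [Mm]

x = np.linspace(-5.0, 5.0, 200)      # [Mm]
y = np.linspace(0.0, 30.0, 600)      # [Mm]
X, Y = np.meshgrid(x, y, indexing="ij")

def velocity_pulse(y0):
    """Initial vertical velocity of ions and neutrals, Eq. (22)."""
    return A_v * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / w ** 2)

for y0 in (1.3, 1.5, 1.8):           # launch heights of Jet_1, Jet_2, Jet_3
    vy = velocity_pulse(y0)
    print(f"y0 = {y0} Mm, peak v_y = {vy.max():.1f} km/s")
```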
### Numerical methods
To solve the two-fluid equations (1)-(8) numerically, we employ the JOANNA code [35]. In all simulations, we set the Courant-Friedrichs-Lewy (CFL) number equal to 0.9 and choose the third-order strong stability preserving Runge-Kutta (SSP-RK3) time integrator [39]. Additionally, we adopt the Harten-Lax-van Leer discontinuities (HLLD) approximate Riemann solver [40] in combination with a linear reconstruction and the minmod limiter. To numerically control the growth of errors in the solenoidal constraint given by equation (8), we use the extended generalized Lagrange multiplier method [41]. This method is robust in low plasma beta (\(\sim 10^{-3}-10^{-2}\)) regions, such as the solar corona; see, for example, the bottom panels of Fig. 2.
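For readers unfamiliar with these building blocks, the following schematic Python snippet shows a generic SSP-RK3 update [39] and the minmod limiter for a semi-discrete system \(du/dt=L(u)\). It is only an illustration of the time stepping and slope limiting applied to a toy advection problem, not the JOANNA implementation.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter used in linear (second-order) reconstructions."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def ssp_rk3_step(u, rhs, dt):
    """One third-order strong-stability-preserving Runge-Kutta step
    (Shu-Osher form) for du/dt = rhs(u)."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

# Toy example: linear advection du/dt = -c du/dx on a periodic grid
c, dx, cfl = 1.0, 0.01, 0.9
x = np.arange(0.0, 1.0, dx)
u = np.exp(-((x - 0.5) / 0.1) ** 2)

# Example of the limiter: limited slopes for a MUSCL-type reconstruction
slopes = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)

def rhs(u):
    # Simple first-order upwind derivative, kept minimal for the illustration
    return -c * (u - np.roll(u, 1)) / dx

dt = cfl * dx / c                     # CFL-limited time step
for _ in range(100):
    u = ssp_rk3_step(u, rhs, dt)
print("max(u) after 100 steps:", u.max())
```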
We carry out the simulations in the domain \(x\in[-5,5]\), \(y\in[0,30]\), in units of Mm, on a uniform grid covered by 200\(\times\)600 cells. Here, \(y=0\) Mm represents the bottom of the photosphere. Next, we impose outflow boundary conditions at the side edges specified by \(x=-5\) Mm and \(x=5\) Mm. Finally, we set all the plasma quantities to their equilibrium values at the bottom and top boundaries delimited by \(y=0\) Mm and \(y=30\) Mm.

Figure 2: _Top_: Magnetic field lines for the vertical straight magnetic field configuration (left) and for the flux tube configuration (right) at the initial time (\(t=0\) s). In this figure, the color bar represents the magnitude of the magnetic field \(|\mathbf{B}|\). _Bottom_: The plasma \(\beta\) corresponding to the vertical straight magnetic field (left) and the flux tube (right) at the initial time (\(t=0\) s).
## 3 Results of the numerical simulations
We perform a series of simulations for the case when the collisions between ions and neutrals are considered [30; 42]. In particular, we define the following two scenarios: (1) a uniform magnetic field and (2) a flux tube type configuration. For each of the two scenarios, we perform three different simulations corresponding to three different vertical locations (\(y_{0}=1.3,1.5,1.8\) Mm) of the velocity pulses at the initial time, \(t=0\) s, within the range that covers the chromosphere (\(0.6\leq y\leq 2.5\) Mm). These velocity perturbations give rise to the jets that are of interest for this analysis (we will hereafter call them \(Jet_{1}\), \(Jet_{2}\) and \(Jet_{3}\) for the respective values of \(y_{0}\) mentioned above). In the following subsections, we describe the results of the numerical simulations under the two magnetic scenarios already mentioned.
### Uniform magnetic field
Here we implement a uniform magnetic field to observe the jets' behavior in an environment where the magnetic field lines are straight and constant. In general, the magnetic conditions of the chromosphere are complex. However, in some simulations, such as the works [43; 44], there are a few bounded regions where some jets evolve within field lines that vary smoothly. Therefore, we wanted to explore these conditions in a more general context by using a magnetically uniform environment as a control test. The latter facilitates the comparison with a more complex magnetic field that mimics a flux tube, as we will describe in subsection 3.2.
We perform three simulations for the vertical magnetic field by launching a velocity pulse for ions and neutrals with an amplitude (\(A_{v}\)) of 100 km s\({}^{-1}\) at different vertical positions \(y_{0}=1.3,1.5,1.8\) Mm. We set a uniform magnetic field \(\mathbf{B}=[0,B_{0}]\), with \(B_{0}=30\) G, and we launch the pulses in a region where the condition \(\beta<1\) is satisfied (see the bottom-left panel of Fig. 2). The simulations were allowed to run up to a physical time of \(t_{f}=600\) s. On the left side of Fig. 3, we show the logarithm of the mass density for ions and neutrals, \(\rho_{i,n}\) [kg m\({}^{-3}\)]. From left to right: \(Jet_{1}\), \(Jet_{2}\), and \(Jet_{3}\), respectively. At the top, we display \(\rho_{i}\), while at the bottom, we show \(\rho_{n}\); both quantities represent the jets. Each snapshot shows the jets' maximum heights (\(h_{max}\)) for both fluids. The times at which they reached their maximum heights are \(t=300\) s, \(t=270\) s, and \(t=210\) s, respectively. For this simulation, \(S_{ni}\neq 0\) allows the collision between fluids and the exchange of momentum between the ions and neutrals. Both fluids that make up the jets reach equal heights, evidencing the coupling of the particles; the neutral component remains equally collimated even after the times at which the maximum heights are reached. As discussed in [37], when the charged particles act on the neutrals, the species show joint dynamics. The temperatures in the cores of the jets remain constant throughout the evolution (see Fig. 3). However, on the tips, we observe temperatures of up to 3.6 - 6.5\(\times 10^{5}\) K; since ions and neutrals are coupled, they exhibit collective behavior.
### Flux tube type configuration
As we have seen in some works [43; 45], the magnetic configuration over spicule-type jets evolves as it diverges from bottom to top. For example, in [30; 46], the authors use a general 2D expression for an open magnetic field, while in [47], the authors employ a general 3D expression of a flux tube. In our analysis and for this simulation stage, we use a simple analytic expression that recreates a 2D flux tube as described below.
The jets propagate in an embedded environment in a magnetic field configuration as described in Equations (17)-(18). Here \(B_{0}=100\) G. This magnitude decreases with a height reaching a constant value of \(0.6B_{0}\) after exceeding \(y=3\) Mm. Under this flux tube, a component
of the Lorentz force pulls in the \(-y\) direction for the ions emerging from areas where the magnetic field is intense. Since the field becomes weaker at higher values of \(y\), then the component of the acceleration in the \(y\) direction is given by [48],
\[a_{y}=\frac{dv_{\parallel}}{dt}=-\frac{1}{2}\frac{v_{\perp}^{2}}{B(y)}\frac{ \Delta B_{y}}{\Delta y}, \tag{23}\]
will become positive (here, \(\frac{\partial B_{y}}{\partial y}\) is a negative term); therefore, particles leaving the lower parts of the footpoints at 100 km s\({}^{-1}\), where the magnetic field is three times greater than in the zones \(y>3\) Mm, would undergo higher accelerations and therefore reach higher speeds (\(\approx 197\) km s\({}^{-1}\)) and thus, in principle, jets would reach greater heights. It is worth mentioning that the conditions of the particles in this plasma do not reflect the ideal single-particle behavior described by the theory, particularly regarding the previous comment. This fact can be seen in Section 4.1. In Fig. 4, \(Jet_{1}\) can exceed 16.5 Mm, while \(Jet_{2}\) can reach 13.5 Mm, and finally, \(Jet_{3}\) reaches a maximum height of 9.75 Mm. The jets reached their maximum heights at \(t=300\) s, \(t=250\) s, and \(t=190\) s, respectively, following the coupling between species throughout the evolution. Also, in the right panel of Fig. 4, we see that the temperatures of the cores do not change in the ascent phase. However, only at the tips and peripheries of the jets is an increase in temperature of around \(1-3.6\times 10^{5}\) K evident. We note an increase of temperature over the tips; in the two-fluid scenario, it could be produced by the velocity pulse that develops into a shock [see, e.g., 31].
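As a rough, single-particle illustration of Eq. (23), the sketch below evaluates the mirror acceleration \(a_{y}\) along the tube axis for an assumed perpendicular speed of 100 km s\({}^{-1}\); it ignores the collective plasma effects discussed above and is not part of the simulation setup.

```python
import numpy as np

L, B0 = 1e6, 100e-4           # length scale [m]; 100 G in tesla
v_perp = 100e3                # assumed perpendicular speed [m/s]

y = np.linspace(0.0, 10e6, 1000)    # height along the tube axis [m]
yb = y / L

# On-axis field magnitude (x = 0), Eq. (18)
By = B0 * (0.3 - 0.075 * np.tanh(yb - 3.0))
dBy_dy = np.gradient(By, y)

# Mirror acceleration, Eq. (23): a_y = -(v_perp^2 / 2B) dB/dy
a_y = -0.5 * v_perp ** 2 / By * dBy_dy

# dB/dy < 0, so a_y > 0: the force points away from the strong-field footpoint
print("max a_y [m/s^2]:", a_y.max())
```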
Figure 3: **Uniform magnetic field case**: (Left panel) Maximum heights reached by the jets generated at different \(y_{0}\). Temporal evolution of \(log(\rho_{j,n}(x,y))\). Top: ion density. Bottom: density of neutrals. From left to right: jets generated at \(y_{0}\)=1.3 (\(Jet_{1}\)), 1.5 (\(Jet_{2}\)) and 1.8 Mm (\(Jet_{3}\)), at t=300 s, t=270 s, and t= 210 s, respectively. (Right panel) Temperatures reached by the jets. Temporal evolution of \(log(T_{i,n}(x,y))\). Top: ion temperature. Bottom: temperature of neutrals. From left to right: \(Jet_{1}\), \(Jet_{2}\) and \(Jet_{3}\) at t=300 s, t=270 s, and t= 210 s, respectively.
## 4 Discussion
### Maximum height of the jets
We have conducted a series of simulations to recreate the evolution of small-scale solar jets generated at different heights above the photosphere and under two different magnetic field conditions. We use a uniform magnetic field and a flux tube-type field. Initially, the atmosphere was hydrostatically stratified, and we perturbed it with velocity pulses [49] to generate the jets. In Figs. 3 and 4, we show only the simulations carried out with \(S_{i,n}\neq 0\); however, to identify the influence of the neutrals on the coupling with the ions, we also performed simulations with \(S_{i,n}=0\) in Eqs. (3)-(4).
The maximum heights reached by the ion and neutral jets whose fluids interact with each other were \(y_{(i,n)max}=17.5,14.7,10.9\) Mm in the uniform field for \(Jet_{1}\), \(Jet_{2}\), and \(Jet_{3}\), respectively. The latter heights match those reached by macrospicules, Type I spicules, and Type II spicules for the respective jets. In the case of the flux tube, \(y_{(i,n)max}=16.75,13.5,9.75\) Mm (see Fig. 5), which differed by \(\Delta y_{(i,n)max}=0.75,1.2,1.15\) Mm; the jets that evolved within the uniform field reached greater heights. We can also see that \(Jet_{1}\), despite having been generated in a zone (\(y_{0}=1.3\) Mm) closer to the photosphere, reaches the highest maximum height compared to \(Jet_{2,3}\). The same is true for the flux tube configuration. The heights of these jets fall within the established heights for what are known as Type-I spicules (\(Jet_{2}\)), Type-II spicules (\(Jet_{3}\)), and macrospicules (\(Jet_{1}\)). The latter happens because \(Jet_{1}\) has had more mass above it that the pulse has been able to drag, as commented in [30], where they used a velocity pulse of \(A_{v}=40\) km s\({}^{-1}\) in the case of adiabatic MHD equations. On the other hand, when the collisions between ions and neutrals are turned off, the particles are no longer coupled, and the jets of the different fluids present independent dynamics.
Figure 4: _Flux tube case_: (Left panel) Maximum heights reached by the jets generated at different \(y_{0}\). Temporal evolution of \(log(\rho_{i,n}(x,y))\). Top: ion density. Bottom: density of neutrals. From left to right: jets generated at \(y_{0}\)=1.3 (\(Jet_{1}\)), 1.5 (\(Jet_{2}\)) and 1.8 Mm (\(Jet_{3}\)), at t=300 s, t=250 s, and t= 190 s, respectively. (Right panel) Temperatures reached by the jets. Temporal evolution of \(log(T_{i,n}(x,y))\). Top: ion temperature. Bottom: temperature of neutrals. From left to right: \(Jet_{1}\), \(Jet_{2}\) and \(Jet_{3}\) at t=300 s, t=250 s, and t= 190 s, respectively.
The charged and neutral particle jets in the two magnetic field conditions show differences between their respective maximum heights of up to \(\Delta y_{(i,n)max}=1.25\) Mm for \(Jets_{1,2,3}\) in the uniform magnetic field and up to \(\Delta y_{(i,n)max}=2\) Mm for \(Jets_{1,2,3}\) in the flux tube (see Fig. 5). The latter outlines the contribution of the neutrals to the ions through momentum transfer.
### Temperature of the jets
At \(t=0\) s (Fig. 1), the temperatures of ions and neutrals are, for \(0\leq y\leq 2\) Mm, \(T_{i}\simeq T_{n}\simeq 10^{4}\) K. The evolution of the jets was within the range of physical time of \(0\leq t_{f}\leq 600\) s. In this period, the \(Jets_{1,2,3}\) reached their own \(y_{max}\)'s, but the orders of magnitude in temperatures remained very similar. For example, in Figs. 3 and 4, we see that the temperature of the cores (along \(x=0\) Mm) did not vary significantly throughout their evolution; this is in agreement with the observations of [10], [50], [51]. However, on the tips of the jets, temperatures reached values (\(T\approx 6\times 10^{5}\) K) above those of their bodies (\(T\approx 10^{4}\) K). This result is due to the previous sweeping by the shock wave produced by the pulse. At the footpoints of the jets in Figs. 3-4, we see changes in temperature that appear at \(t>160\) s. In the bottom of Fig. 6 we can see that the \(Q_{i}\)'s are slightly greater than \(0\) (\(0.002-0.6\) W m\({}^{-3}\)) just in these zones (\(1.0\leq y\leq 3.5\) for the case of Section 3.1, and \(1.0\leq y\leq 1.5\) for the case of Section 3.2). This minimal heat generation in the lower part of the jets is associated with its proportionality to \((V_{n}-V_{i})^{2}\), which in turn is related to the points where the original disturbance was generated. So this heat is not being produced by any event inherent in the natural evolution of the jets. In the jets' lateral peripheries, we see an evident increase in temperature that keeps increasing from the beginning of the corona at \(y=2.25\) Mm for \(t>0\) s, up to their \(y_{max}\)'s. The conversion of kinetic energy into heat could produce this increase in temperature [52; 53; 54], but it is not related to the interaction between charged and neutral particles directly, but rather to moving particles entering a relatively static environment (in ideal terms for the simulation purposes) with temperatures exceeding \(10^{6}\) K.
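To give a sense of scale for the \(Q_{i}^{in}\) values quoted above, the following sketch evaluates Eq. (9) for representative densities, equal ion and neutral temperatures, and a small velocity drift; the drift value is an assumed input chosen within the range reported in Section 4.3, so the result is only an order-of-magnitude illustration.

```python
import numpy as np

k_B, m_H = 1.380649e-23, 1.6726e-27   # SI constants
sigma_in = 0.75e-18                   # ion-neutral collisional cross-section [m^2]

def alpha_in(rho_i, rho_n, T_i, T_n, m_i=m_H, m_n=m_H):
    """Friction coefficient alpha_in entering Eqs. (9)-(10)."""
    vth = np.sqrt(8.0 * k_B / np.pi * (T_i / m_i + T_n / m_n))
    return (4.0 / 3.0) * sigma_in / (m_i + m_n) * vth * rho_i * rho_n

def Q_i_in(rho_i, rho_n, T_i, T_n, dV):
    """Ion heating rate of Eq. (9) for a velocity drift dV [m/s]."""
    a = alpha_in(rho_i, rho_n, T_i, T_n)
    return a * (0.5 * dV ** 2 + 1.5 * k_B / m_H * (T_n - T_i))

# Representative values: equal temperatures, densities as quoted in Section 4.3,
# and a drift of a few m/s (assumed, within the reported ~10^-3 km/s range)
Q = Q_i_in(rho_i=6.3e-11, rho_n=1.08e-4, T_i=1e4, T_n=1e4, dV=4.0)
print(f"Q_i^in ~ {Q:.2f} W m^-3")
```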
In this paper, we employ a two-fluid model with a simple interaction, where radiation losses and ambipolar effects of the plasma are not considered. However, it mimics the temperature behavior in the regions of the jets mentioned above. Nevertheless, there are more realistic simulations; see, for example, [45; 55], where the authors analyzed the generation of spicule-jets using Cowling's conductivity employing generalized Ohm's Law, ambipolar diffusion, and multiple species with different ionization levels.

Figure 5: _(Left) Uniform magnetic field_: Maximum heights (\(y_{max}\)) of \(Jet_{1}\), \(Jet_{2}\) and \(Jet_{3}\) vs. vertical position of the initial velocity pulse in \(y_{0}\) for \(A_{v}=100\) km s\({}^{-1}\). Here, \(\bigcirc\) represents ion jets with \(\mathbf{S}_{in}\neq 0\); \(\ast\) represents neutral jets with \(\mathbf{S}_{in}\neq 0\); \(+\) represents ion jets with \(\mathbf{S}_{in}=0\); and \(+\) represents neutral jets with \(\mathbf{S}_{in}=0\). _(Right) Flux tube magnetic field_: characters represent the same as in the previous case.
### Collisions between ions and neutrals
Throughout the simulation (\(t=[0-600]\) s), both in 3.1 and 3.2 cases, the temperature of a relatively substantial fraction of ions and neutrals inside the jets, when they have reached their \(y_{max}\)'s, remains constant. We calculate the characteristic collision time between fluids [37] with \(T_{i}\simeq T_{n}\simeq 10^{4}\) K, \(\rho_{i}=6.3\times 10^{-11}\) kg m\({}^{-3}\) and \(\rho_{n}=1.08\times 10^{-4}\) kg m\({}^{-3}\), using
\[\tau=\frac{1}{\nu_{ni}+\nu_{in}}, \tag{24}\]
where \(\nu_{ni}\) is the collision frequency between ions and neutrals,
\[\nu_{ni}=\frac{\alpha_{in}}{\rho_{n}}, \tag{25}\]
giving a value of \(\nu_{ni}\simeq 384\) Hz; taking \(\nu_{in}=\nu_{ni}\), we then have \(\tau=1.3\) ms. As we mentioned at the beginning of this subsection, the characteristic time of the system involving the jet itself is around \(6\times 10^{2}\) s, so we can be sure that the collisions between particles and the exchange of momentum can keep the ions and neutrals coupled practically during the entire lifetime of the jet. The peripheries and tip of the jet, being in direct kinetic interaction with the coronal medium, reach higher temperatures of up to \(T_{i}\simeq T_{n}\simeq 3.62\times 10^{5}\) K, with \(\rho_{i}=4\times 10^{-12}\) kg m\({}^{-3}\) and \(\rho_{n}=5.4\times 10^{-5}\) kg m\({}^{-3}\). These regions' collision frequency and characteristic collision time are \(\nu_{ni}\simeq 147\) Hz and \(\tau=3.4\) ms. Even though the temperature is an order of magnitude higher in the outer layers of the jet, the mass densities there are about an order of magnitude lower than in the center of the same jet, so the characteristic collision time turns out to be higher in the peripheries. However, in both cases, this collision time turns out to be of the order of \(10^{-3}\) s.
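The estimate above can be reproduced in a few lines of Python. The sketch below evaluates \(\nu_{ni}=\alpha_{in}/\rho_{n}\) (Eq. 25) for the core values quoted in the text and the characteristic time of Eq. (24) under the assumption \(\nu_{in}=\nu_{ni}\) used here; it is a consistency check, not new input.

```python
import numpy as np

k_B, m_H = 1.380649e-23, 1.6726e-27   # SI constants
sigma_in = 0.75e-18                   # collisional cross-section [m^2]

def nu_ni(rho_i, rho_n, T_i, T_n):
    """Neutral-ion collision frequency nu_ni = alpha_in / rho_n, Eq. (25)."""
    alpha = (4.0 / 3.0) * sigma_in / (2.0 * m_H) * np.sqrt(
        8.0 * k_B / np.pi * (T_i + T_n) / m_H) * rho_i * rho_n
    return alpha / rho_n

# Core of the jet: values quoted in the text
nu = nu_ni(rho_i=6.3e-11, rho_n=1.08e-4, T_i=1e4, T_n=1e4)
tau = 1.0 / (2.0 * nu)     # Eq. (24) with nu_in = nu_ni (assumption used in the text)
print(f"nu_ni ~ {nu:.0f} Hz, tau ~ {tau * 1e3:.1f} ms")   # roughly 384 Hz and 1.3 ms
```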
As we can see in Fig. 6, the coupling of the particles is less effective at the tips of the jets than inside the jets, where the dragging of particles from the corona in relative rest (\(V_{i}\approx 0\) km s\({}^{-1}\)) generates small instabilities. The order of magnitude of the velocity drifts (\(V_{n}-V_{i}\)) is in a range (\(0-4.7\times 10^{-3}\) km s\({}^{-1}\)) for the uniform magnetic field, and (\(0-4.72\times 10^{-3}\) km s\({}^{-1}\)) for the flux tube. Also, we can see a peak in the velocity drift of \(0.14\times 10^{-3}\) km s\({}^{-1}\) in \(y=1.6\) Mm, near the footpoints of \(Jet_{3}\), in the uniform magnetic field, at \(t=210\) s; and another peak of \(0.12\times 10^{-3}\) km s\({}^{-1}\) in \(y=1.5\) Mm of \(Jet_{3}\), in the flux tube at \(t=190\) s. For both magnetic field cases, the highest velocity drift is found at the tip of \(jet_{1}\). The peaks in [\(y=15.5,14.1,11.1\) Mm] for the \(jet_{1},jet_{2}\) and \(jet_{3}\), respectively, in the uniform magnetic field, are of [\(0.35,2.2,4.7\)]\(\times 10^{-3}\) km s\({}^{-1}\), the peaks in [\(y=15.4,13.6,9.75\) Mm] for the \(jet_{1},jet_{2}\) and \(jet_{3}\) in the flux tube are ([\(0.38,1.2,4.72\)]\(\times 10^{-3}\) km s\({}^{-1}\)), not enough to have any effect on temperature.
## 5 Conclusions
In this paper, we have studied the dynamics of small-scale jets in two different magnetic field configurations, using the JOANNA code to solve the two-fluid MHD equations numerically, i.e., we consider the equations for the continuity of mass, momentum, and energy for ions and neutral particles, separately. Then, we excite the jets by launching velocity pulses at three different vertical locations \(y=1.3,1.5,1.8\) Mm within the chromosphere range (\(0.6\leq y\leq 2.5\) Mm), starting from an atmosphere model in hydrostatic equilibrium. We employ a simple model that does not consider radiation losses, ambipolar diffusion, or recombination between particles. Instead, the ions and neutrals interact merely through the friction due to collisions. For completeness, we also study the hypothetical scenario where the collisions between the
ions and neutrals are not present. The general result for this scenario indicates that momentum transfer in the jets is responsible for a relatively small increase in their maximum heights.
To summarize the results of this work, we point out the findings as follows:
* The \(Jets_{1,2,3}\) generated within the uniform magnetic field (11) with \(A_{v}=100\) km s\({}^{-1}\) showed the following relationship in their maximum heights, \(y_{max}(Jet_{3})<y_{max}(Jet_{2})<y_{max}(Jet_{1})\), and the same was seen for the case with the flux tube (12). This behavior had already been reported in [30], where a velocity pulse with \(A_{v}=40\) km s\({}^{-1}\) was used. This is because the jets that arise from zones closer to the photosphere carry a larger amount of plasma that can be dragged by the pulse that perturbs the hydrostatic equilibrium. In this collective behavior of the plasma, and under the solar conditions considered here, it was not possible to see any hint of what was predicted by the theory in section 3.2. It is important to emphasize that the jets generated in the uniform magnetic field reached greater heights, with \(\Delta y_{(i,n)max}=0.75,1.2,1.15\) Mm, compared with their counterparts in the flux tube. This result reveals a kind of braking due to the constriction of the magnetic lines in \(0\leq y\leq 5\) Mm. \(Jet_{1}\) reached heights that have been reported for macrospicules [14], while \(Jet_{2}\) and \(Jet_{3}\) reached heights typical of Type I and Type II
Figure 6: _Top:_ Difference between the neutral and ion speed (in km s\({}^{-1}\)) evaluated along \(x=0\) Mm, for \(Jet_{1}\) (red), \(Jet_{2}\) (green) and \(Jet_{3}\) (blue) in their respective \(y_{max}\)’s. (Left panel: the uniform magnetic field. Right panel: flux tube.) _Bottom:_ Heat generation by the interaction between fluids (\(Q_{i}^{i,n}\), in W m\({}^{-3}\)) evaluated along \(x=0\) Mm, for \(Jet_{1}\) (red), \(Jet_{2}\) (green) and \(Jet_{3}\) (blue) in their respective \(t_{(y_{max})}\)’s. (Left panel: the uniform magnetic field. Right panel: flux tube.)
spicules [13], respectively. The three jets do not show similarities with surges, which are larger, less frequent, and more explosive than spicules. However, a broader study is required to determine whether the \(y_{0}\) at which these jets are generated can be a crucial factor in categorizing the spicules reported in [13; 56; 57], or whether their creation mechanism, such as magnetic reconnection or another phenomenon, is what best places the jets within the family of spicules already described by current observations.
* The characteristic times of collision between particles were \(\tau=[1.3,3.4]\) ms, for the core and peripheries of the jet, respectively, which guaranteed from \(t>0\) s the coupling between ions and neutrals during the entire lifetime of the jets (\(t_{f}=600\) s), so that, we observe a joint dynamic between the fluids.
* In Figs. 3 and 4, we can see that the jets under the constant magnetic field are slightly thinner than those found in the flux tube configuration. The densities inside the jets remained around \(\rho_{i}\approx 4\times 10^{-12}\) kg m\({}^{-3}\) and \(\rho_{n}\approx 5.4\times 10^{-5}\) kg m\({}^{-3}\) during the evolution time.
* The velocity drifts between particles were measured at the tips of the jets when they reached their maximum heights, being negligible (\(0-4.72\times 10^{-3}\) km s\({}^{-1}\)) for any heat contribution that the friction between fluids could add to the coronal region. The filamentary regions above the jets with temperatures \(T_{i,n}>6.5\times 10^{5}\) K were generated by the shock wave that propagated towards the corona with velocities \(V>100\) km s\({}^{-1}\). The temperature of the peripheries of the jets exceeded \(3.62\times 10^{5}\) K, arising entirely from friction due to the collisional interaction between the coronal plasma and the jet particles.
Finally, the analysis presented in this paper complements, for example, the works of [30; 37]. In particular, this paper clearly states why the ions and neutrals behave as if they are coupled. In a future study, we plan to analyze the evolution of multiple jets considering the three-fluid resistive equations, which would bring us closer to a more realistic scenario describing the generation of jets in the solar chromosphere.
**Funding:** The work of JJGA is partially supported by the project CONACYT 319216 (2022), financed by "Consejo Nacional de Ciencia y Tecnología".
**Data Availability Statement:** Not applicable.
**Acknowledgments:** We thank the anonymous referees for constructive comments and suggestions that significantly improved the clarity of the paper. The authors would like to thank the joint support from the Consejo Nacional de Ciencia y Tecnología (CONACYT), Comisión de Operación y Fomento de Actividades Académicas del IPN (COFAA), Estímulo al Desempeño de los Investigadores del IPN (EDI) and Beca de Estímulo Institucional de Formación de Investigadores del IPN (BEIFI). They would also like to thank the facilities provided by IGUM-UNAM, Campus Morelia, via JJGA, for the computer resources where the simulations were developed. The authors also want to thank the developers of the JOANNA code, which was crucial to this work. JJGA is grateful for the Investigadores por México-CONACYT (CONACYT Fellow), CONACYT 319216 (2022), CONACYT LN 315829 (2021), and CONACYT-AEM 2017-01-292684 grants, which partially supported this work, along with the program "Investigadores por México," project 1045, which sponsors the Space Weather Service Mexico (SCIESMEX). We finally thank Kris Murawski for sharing the JOANNA code. Darek Wojcik developed this code with contributions from Piotr Woloszkiewicz and Luis Kadowak.
|
2310.00356 | Inference on volatility estimation with missing data: a functional data
approach | This paper aims to investigate nonparametric estimation of the volatility
component in a heteroscedastic scalar-on-function regression model when the
underlying discrete-time process is ergodic and affected by a missing at random
mechanism. First, we introduce a simplified estimator of the regression and
volatility operators based on observed data only. We study their asymptotic
properties, such as almost sure uniform consistency rate and asymptotic
distribution. Then, the simplified estimators are used to impute the missing
data in the original process in order to improve the estimation of the
regression and volatility components. The asymptotic properties of the imputed
estimators are also investigated. A numerical comparison between the estimators
is discussed through simulated data. Finally, a real-data analysis is conducted
to model the volatility of daily Brent crude oil returns using intraday,
1-minute frequency, natural gas returns. | Abdelbasset Djeniah, Mohamed Chaouch, Amina Angelika Bouchentouf | 2023-09-30T12:18:56Z | http://arxiv.org/abs/2310.00356v1 | # Inference on volatility estimation with missing data: A functional data approach
###### Abstract.
This paper aims to investigate nonparametric estimation of the volatility component in a heteroscedastic scalar-on-function regression model when the underlying discrete-time process is ergodic and affected by a missing at random mechanism. First, we introduce a simplified estimator of the regression and volatility operators based on observed data only. We study their asymptotic properties, such as almost sure uniform consistency rate and asymptotic distribution. Then, the simplified estimators are used to impute the missing data in the original process in order to improve the estimation of the regression and volatility components. The asymptotic properties of the imputed estimators are also investigated. A numerical comparison between the estimators is discussed through simulated data. Finally, a real-data analysis is conducted to model the volatility of daily Brent crude oil returns using intraday, 1-minute frequency, natural gas returns.
Key words and phrases:Ergodic processes, Functional time series, Missing at random, Imputation, Volatility 2010 Mathematics Subject Classification: 60F10, 62G07, 62F05
## 1. Introduction
In the last 15 years, capital markets have seen significant development, with a shift towards high-frequency and algorithmic trading. High-frequency and automated trading have long been regarded as a source of price shocks and rising volatility. Therefore, increasing interest has recently been given to modeling volatility with high-frequency financial data. Nowadays, with the progress in recording technology, we have access to data at a very fine time scale. High-frequency data refer to data collected at a very granular level and at frequent intervals of time, typically in sub-daily or intraday increments. They capture observations or measurements of various variables at a high frequency, often with a time resolution of seconds, minutes, or hours.
From a financial market analysis perspective, understanding and modeling volatility with high-frequency financial data and, if possible, predicting it would be of great interest to investors in making the right decisions. Moreover, financial firms that trade assets on a high-frequency time scale are not just interested in short-term forecasting of future values of financial assets, but also in measuring the uncertainty associated with such predictions through the volatility component. Two major classes of volatility models are introduced in the literature: Generalized Autoregressive Conditionally Heteroskedastic (GARCH) and Stochastic Volatility (SV) models. The univariate ARCH model was first introduced by Engle (1982), soon followed by its generalization to the GARCH model of Bollerslev (1987). The GARCH models and their later extensions were quickly found to be relevant for the conditional volatility of financial returns observed at a monthly and higher frequency, and thus to the study of the intertemporal relation between risk and expected return. Although early GARCH models have been and are still widely used, a viewpoint slowly emerged according to which these models may be too rigid for fitting return series, especially over a long span. An alternative to GARCH-type models is the class of SV models, which postulate that volatility is driven by its own stochastic process. The major difference from GARCH models is that, conditional on the information set available up to time \(t-1\), volatility at time \(t\) is not known but rather an unobserved random variable. SV models also have some advantages compared with GARCH models. For instance, SV models (see the handbook by Andersen et al. (2009)) offer a natural economic interpretation of volatility. They are easier to connect with continuous-time diffusion models with SV, and are often found to be more flexible in the modeling of financial returns.
On the other hand, nonparametric autoregressive models with ARCH-type errors were also introduced in Laib (2005) and Fan and Yao (1998) to relax the parametric assumptions on which the GARCH models depend. Despite their flexibility in modeling nonlinear patterns in the volatility component, nonparametric approaches suffer from the well-known curse of dimensionality. To overcome this issue, one can assume that the functional form of the volatility component can be semiparametrically specified, so that it has both parametric and nonparametric components. For instance, the additive volatility model assumes that the target volatility function can be written as a sum of functions of the covariates. The additive model can effectively reduce the dimensionality of the multivariate regression model and improve the convergence rate of the resulting volatility estimator. Alternatively, the single-index volatility model is another approach for modeling the conditional variance. For more details about the last two approaches, the reader is referred to Su et al. (2012).
It is worth noticing that the above-mentioned approaches assume that the time series is completely observed and that the predictor as well as the response variable are both observed at the same time frequency. In practice, despite modern technology, which allows data to be collected at a very fine time scale, financial time series can still contain missing values. For instance, there are some regular holidays, such as Thanksgiving day and Christmas, for which stock price data are missing. There are many other technical reasons, such as breakdowns in the devices recording the data or sudden computer shutdowns, that leave stretches of data missing. In the literature on financial data analysis, it is commonly assumed that the data are completely observed, which is not realistic. The problem of missing data arises whenever there is a disturbance in the sequence of observations of the series, and hence it is necessary to address it. Moreover, in many situations, asset prices might be recorded at different time frequencies. For instance, one may be interested in assessing the effect of intraday (1-minute frequency) natural gas returns on the daily (1-day frequency) oil price return volatility (see Section 5 for more details). Figure 1 shows an example where the response variable (here the daily Brent oil return) is real-valued whereas the predictor (the 1-minute frequency natural gas intraday curve) is a functional random variable. In such a situation, models such as GARCH and SV cannot be applied, and nonparametric functional models become a tool of great interest.
During the last two decades, functional data analysis attracted considerable attention in statistical research owing to its extensive applications in numerous practical areas such as engineering, geology, biology, medicine, chemistry, climatology, economics, and so on. The literature on the subject initially focused on parametric models (cf. Bosq (2000), Ramsay and Silverman (2002, 2005)), nonparametric models (Ferraty and Vieu (2006), Geenens (2011), Ling and Vieu (2018)), and semi-parametric models (see Goia and Vieu (2014) and Vieu (2018) among others). More recently, nonparametric functional models became more and more popular in solving econometric problems. For instance Ferraty and Quintela-del-Rio (2016) estimated two risk measures, namely
Figure 1. (left) Sample of three intraday (1-minute frequency) Natural Gas curves. (right) The stochastic process of daily Brent Oil return and the dots represent the corresponding three preselected days.
the value-at-risk and the expected shortfall, conditionally to a functional variable. Muller et al. (2011) introduced a functional volatility process for modeling volatility trajectories for high frequency observations in financial markets. Hormann et al. (2013) introduced a functional version of the ARCH model to model high-resolution tick data which can be described by continuous-time process. Wang et al. (2014) used functional principle components analysis to find the most relevant patterns in the Shanghai stock exchange 50 index. Caldeira and Torrent (2017) used nonparametric functional data analysis to forecast the US term structure of interest rates. Recently, Chaouch (2019) investigated volatility estimation in scalar-on-function heteroscedastic regression model when data are completely observed.
This paper aims to generalize the work in Chaouch (2019) to the case when the response variable is missing at random, extend results in Perez-Gonzalez et al. (2010) to the case when the predictor is an infinite-dimensional random variable. In contrast to Ling et al. (2015), this paper considers a heteroscedastic functional regression model where two components have to be estimated under missing at random assumption, namely the regression and conditional variance operators. First of all we define the initial estimators based on the available data. In other words missing data do not contribute in calculating the regression and volatility estimators. Then, we make use of the simplified estimators to impute missing data in the original process. Finally, we re-estimate the parameters in our model based on imputed data and compare them with the simplified estimator. The study of the asymptotic properties of the simplified as well as imputed estimators are deeply investigated in this paper. We established the pointwise along with the uniform consistency rate. The asymptotic distribution is also provided and confidence intervals are estimated.
The structure of the paper is as follows. Section 2 introduces the heteroscedastic scalar-on-function regression model as well as the main assumptions defining the framework of our study. In Section 3 we introduce the simplified as well as the nonparametric imputed estimator. Then we focus on the discussion of asymptotic properties of estimators, including the uniform almost-sure convergence (with rates) and the asymptotic distribution of the proposed estimators. To illustrate these asymptotic properties, both simulated and real data analysis are conducted in Section 4 and 5, respectively. In Section 6, we thoroughly discuss our results and offer various perspectives. Finally, technical proofs of the main asymptotic results are detailed in the Appendix.
### Notations
We denote by \(o_{a.s.}(v)\) a real random function \(z\) such that \(z(v)/v\) converges to zero almost surely as \(v\to 0\). Similarly, we denote by \(\mathcal{O}_{a.s.}(v)\) a real random function \(z\) such that \(z(v)/v\) is almost surely bounded. We denote by \(C\) a positive generic constant that may change value. We use the notation \(\xrightarrow{\mathcal{D}}\) to denote convergence in distribution.
## 2. Settings
In this section we introduce the heteroscedastic scalar-on-function regression model, define the missing at random assumption and present the ergodic assumption that we assume our functional data satisfy.
### Model
Let \((X_{t},Y_{t})_{t=1,\dots,n}\) be a sample of discrete-time ergodic random processes taking values in \(\mathcal{E}\times\mathbb{R}\) and distributed as \((X,Y)\). Here, \(Y\) represents the variable of interest, and \(X\) is a functional covariate taking values in an infinite-dimensional space \(\mathcal{E}\) equipped with a semi-metric \(d(\cdot,\cdot)\)1, which defines a topology to measure the proximity between two elements of \(\mathcal{E}\) and which is unrelated to the definition of \(X\) in order to avoid measurability problems. We suppose that the data generation process is described by the following heteroscedastic functional regression model:
Footnote 1: A semi-metric (sometimes called pseudo-metric) \(d(\cdot,\cdot)\) is a metric which allows \(d(x_{1},x_{2})=0\) for some \(x_{1}\neq x_{2}\).
\[Y_{t}=m(X_{t})+\sqrt{U(X_{t})}\;\varepsilon_{t},\qquad t=1,\dots,n, \tag{2.1}\]
where \(m(\cdot)=\mathbb{E}(Y|X=\cdot)\) is the regression operator and \(U(\cdot)=\operatorname{var}(Y|X=\cdot)\) is the conditional variance operator which are supposed to be unknown. Here, we assume that the sequence \(\varepsilon_{1},\varepsilon_{2},\dots\) of random variables forms a martingale difference sequence such that
\[\mathbb{E}(\varepsilon_{t}|\mathcal{G}_{t-1})=0\;\;\text{a.s.}\quad\text{and} \quad\operatorname{var}(\varepsilon_{t}|\mathcal{G}_{t-1})=1\;\;\text{a.s.}, \tag{2.2}\]
where \(\mathcal{G}_{t-1}\) is the \(\sigma\)-field generated by \(\{(X_{1},Y_{1}),\ldots,(X_{t-1},Y_{t-1}),X_{t}\}\). We denote by \(\mathcal{F}_{t-1}\) the \(\sigma\)-field generated by \(\{(X_{1},Y_{1}),\ldots,(X_{t-1},Y_{t-1})\}\).
It is worth noting that model (2.1), where the response is real-valued and the covariate is infinite dimensional, encompasses several interesting volatility models studied in the literature. In the following, we discuss some particular cases.
_Case 1 - Parametric autoregressive model with ARCH errors_: let \(\mathcal{E}=\mathbb{R}^{d}\), where \(d\geq 1\), and consider \(X_{t-1}\equiv(Y_{t-1},\ldots,Y_{t-d})^{\top}\), then model (2.1) becomes
\[Y_{t}=m(Y_{t-1},\ldots,Y_{t-d})+\sqrt{U(Y_{t-1},\ldots,Y_{t-d})}\ \varepsilon_{t}.\]
Moreover, when \(m(X_{t-1})\equiv m(X_{t-1};\boldsymbol{\alpha})=\sum_{j=1}^{q}\alpha_{j}Y_{t-j}\) and \(U(X_{t-1};\boldsymbol{\beta})=1+\sum_{j=1}^{d}\beta_{j}Y_{t-j}^{2}\), model (2.1) becomes the AR-ARCH model introduced in Borkovec (2001).
_Case 2 - Nonparametric autoregressive model with ARCH errors_: in order to avoid any misspecification of the parametric form assumed on the regression and volatility functions in the AR-ARCH model above, Fan and Yao (1998) introduced a local linear estimator of the conditional variance function in a time series regression model where \(Y\in\mathbb{R}\) and \(\mathcal{E}=\mathbb{R}\). Then, Laib (2005) investigated a local constant estimator of the parameters in model (2.1) when \(Y\in\mathbb{R}\) and \(\mathcal{E}=\mathbb{R}^{d}\).
_Case 3 - Stochastic volatility model_: another interesting reason to study model (2.1) is that it includes continuous-time stochastic models which are used to model diffusion processes. Several financial assets, say \(X\), are modeled using diffusion processes that are solutions of the following stochastic differential equation:
\[dX_{t}=\mu(X_{t})dt+\sigma(X_{t})dW_{t},\qquad t>0, \tag{2.3}\]
where \(W_{t}\) is a standard Brownian motion. The drift \(\mu(\cdot)\) and the diffusion \(\sigma^{2}(\cdot)\) are in general unknown functions. Several well-known models in financial econometrics can be written under the form (2.3) with a specific form of drift and diffusion functions. In practice, a diffusion process \(\{X_{t}\}\) cannot be observed continuously over time. It is rather observed at instants \(\{t=i\Delta|i=0,\ldots,n\}\), where \(\Delta>0\) is very small. For instance, the series could be observed hourly, daily, weekly or monthly. High-frequency financial data are usually daily or intraday series. Following the Euler discretization scheme, one gets a discretized version of (2.3). That is
\[X_{t+\Delta}-X_{t}=\mu(X_{t})\Delta+\sigma(X_{t})\Delta^{1/2}\varepsilon_{t}, \tag{2.4}\]
where \(\{\varepsilon_{t}\}\) is a sequence of independent and identically distributed standard normal random variables. Taking \(Y_{t}=X_{t+\Delta}-X_{t}\), \(\mu(X_{t})\Delta=:m(X_{t})\), and \(\sigma(X_{t})\Delta^{1/2}=:\sqrt{U(X_{t})}\), model (2.4) can be viewed as a special case of model (2.1).
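For illustration, the following sketch shows how a path of the discretized diffusion (2.4) can be simulated with the Euler scheme. The drift and diffusion functions, the initial value, the step size and the function name used below are hypothetical choices made only for the example; they are not taken from the text.

```python
import numpy as np

def euler_diffusion(mu, sigma, x0, delta, n_steps, seed=None):
    """Simulate X_{t+Delta} - X_t = mu(X_t) Delta + sigma(X_t) Delta^{1/2} eps_t (Euler scheme)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        eps = rng.standard_normal()                      # eps_t ~ N(0, 1), i.i.d.
        x[i + 1] = x[i] + mu(x[i]) * delta + sigma(x[i]) * np.sqrt(delta) * eps
    return x

# Illustrative drift/diffusion (not from the paper): mean reversion with level-dependent noise.
path = euler_diffusion(mu=lambda x: 0.5 * (1.0 - x),
                       sigma=lambda x: 0.2 * np.sqrt(abs(x)) + 0.05,
                       x0=1.0, delta=1.0 / 252.0, n_steps=1000, seed=0)
```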
When the response variable as well as the predictor are completely observed, Chaouch (2019) introduced nonparametric estimators of \(m(\cdot)\) and \(U(\cdot)\). That is, for any \(x\in\mathcal{E}\), we have
\[m_{n,c}(x)=\frac{\sum_{t=1}^{n}Y_{t}K\left\{\frac{d_{1}(x,X_{t})}{h_{n,1}} \right\}}{\sum_{t=1}^{n}K\left\{\frac{d_{1}(x,X_{t})}{h_{n,1}} \right\}}\quad\text{and}\quad U_{n,c}(x)=\frac{\sum_{t=1}^{n} \big{\{}Y_{t}-m_{n}(X_{t})\big{\}}^{2}W\left\{\frac{d_{2}(x,X_{t})}{h_{n,2}} \right\}}{\sum_{t=1}^{n}W\left\{\frac{d_{2}(x,X_{t})}{h_{n,2}} \right\}}, \tag{2.5}\]
where \(K\) and \(W\) are kernel functions. Sequences \(h_{n,1}=h_{1}\) and \(h_{n,2}=h_{2}\) consist of positive real numbers and decrease to zero as \(n\to\infty\). Here, we consider two different semi-metrics \(d_{1}\) and \(d_{2}\), respectively for the regression and the conditional variance estimators, to measure the similarity between two functional random variables \(X_{t}\) and \(X_{s}\) in \(\mathcal{E}\), for \(t\neq s\).
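As a computational illustration of the complete-data estimators in (2.5), the following Python sketch discretizes each curve on a common grid, uses the quadratic kernel for both \(K\) and \(W\), and takes the same \(L_{2}\) semi-metric for \(d_{1}\) and \(d_{2}\); the function names, the single shared semi-metric, and the fixed bandwidths are illustrative choices, not part of the paper's methodology.

```python
import numpy as np

def quad_kernel(u):
    """Asymmetric quadratic kernel 1.5 * (1 - u^2) on [0, 1], zero elsewhere."""
    u = np.asarray(u, dtype=float)
    return np.where((u >= 0.0) & (u <= 1.0), 1.5 * (1.0 - u ** 2), 0.0)

def l2_semimetric(x, X, grid):
    """L2 distance between a curve x (length p) and each row of X (n x p) on the grid."""
    dgrid = grid[1] - grid[0]
    return np.sqrt(np.sum((X - x) ** 2, axis=1) * dgrid)

def nw_regression(x, X, Y, grid, h):
    """Complete-data kernel estimator m_{n,c}(x) of eq. (2.5)."""
    w = quad_kernel(l2_semimetric(x, X, grid) / h)
    return np.sum(w * Y) / np.sum(w)   # assumes at least one curve lies within distance h of x

def nw_cond_variance(x, X, Y, grid, h1, h2):
    """Complete-data residual-based estimator U_{n,c}(x) of eq. (2.5)."""
    fitted = np.array([nw_regression(X[t], X, Y, grid, h1) for t in range(len(Y))])
    w = quad_kernel(l2_semimetric(x, X, grid) / h2)
    return np.sum(w * (Y - fitted) ** 2) / np.sum(w)
```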
In practice, the response variable may be missing at random for several reasons, such as data loss, non-response, or data entry errors. Ling et al. (2015) investigated the estimation of the regression operator \(m(\cdot)\) when the response variable is missing at random, the predictor is a completely observed functional random variable and the error term in the model has a constant variance. Crambes and Henchiri (2019) studied the estimation of the regression operator under a homoscedastic functional linear model. To the best of our knowledge, nothing has been done for the estimation of the variance operator when the response is missing at random. In the finite-dimensional
case (i.e. \(\mathcal{E}=\mathbb{R}^{d}\), where \(d\geq 1\)), Perez-Gonzalez et al. (2010) discussed nonparametric estimation of the conditional variance function when the response is missing at random and the fixed-design predictor is completely observed.
### Missing at random assumption
In practice it is very frequent that data are not completely observed. Here, we allow the response variable \(Y_{t}\) to be missing at random at any instant \(t=1,\ldots,n\), whereas the predictor \(X\) is completely observed. In order to check whether an observation is complete or missing, an indicator function \(\delta\) is introduced. Thus, \(\delta_{t}=1\) if \(Y_{t}\) is observed, and zero if \(Y_{t}\) is missing, for any \(t=1,\ldots,n.\) We suppose that the Bernoulli random variable \(\delta\) satisfies
\[\mathbb{P}(\delta=1|X=x,Y=y)=\mathbb{P}(\delta=1|X=x)=:\pi(x). \tag{2.6}\]
Here, \(\pi:\mathcal{E}\rightarrow[0,1]\) is the conditional probability of observing the response variable and is usually unknown. This assumption allows us to conclude that \(\delta\) and \(Y\) are conditionally independent given \(X.\) Note that assumption (2.6) says that the response variable does not provide additional information, on top of that given by the explanatory variable, for predicting whether an individual will present a missing response.
### Ergodicity condition
Let \(\{Z_{n},n\in\mathbb{Z}\}\) be a stationary sequence. Consider the backward field \(\mathcal{B}_{n}=\sigma(Z_{k};k\leq n)\) and the forward field \(\mathcal{A}_{m}=\sigma(Z_{k};k\geq m)\). The sequence is said to be strongly mixing if \(\sup_{A\in\mathcal{B}_{0},B\in\mathcal{A}_{n}}\left|\mathbb{P}(A\cap B)- \mathbb{P}(A)\mathbb{P}(B)\right|=\varphi(n)\to 0\quad\text{as}\quad n \rightarrow\infty.\) The sequence is called ergodic if \(\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{t=0}^{n-1}\left|\mathbb{P}\left(A \cap\tau^{-t}B\right)-\mathbb{P}(A)\mathbb{P}(B)\right|=0\), where \(\tau\) is the time-evolution or shift transformation. The strong mixing condition in the above definition is more stringent than what is ordinarily referred to as strong mixing in the vocabulary of measure-preserving dynamical systems, namely that \(\lim_{n\rightarrow\infty}\mathbb{P}(A\cap\tau^{-n}B)=\mathbb{P}(A)\mathbb{P}(B),\) for any two measurable sets \(A,B\) (see Rosenblatt (1972)). Hence, strong mixing implies ergodicity. However, the converse is not true: there exist ergodic sequences which are not strongly mixing. The ergodicity condition is therefore a natural condition, less restrictive than any type of mixing, under which the usual nonparametric estimators (density, regression, ...) are convergent. It can be viewed as a condition for obtaining a law of large numbers, since it is well known from the ergodic theorem that, for a stationary ergodic process \(Z\), we have \(\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{t=1}^{n}Z_{t}=\mathbb{E}(Z_{1}),\) almost surely (a.s.). We refer to the book of Krengel (1985) for an account of details and results on ergodic theory.
## 3. Main results
In this section we define the simplified as well as the imputed estimator of the regression and conditional variance operators. Then, we investigate their asymptotic properties.
### Simplified estimator
Observe that multiplying (2.1) by \(\delta_{t}\) one gets \(\delta_{t}Y_{t}=\delta_{t}m(X_{t})+\delta_{t}U^{1/2}(X_{t})\varepsilon_{t}.\) Taking the conditional expectation, given \(X_{t}=x,\) from both sides and making use of (2.6) one gets
\[\mathbb{E}\bigg{\{}\delta_{t}Y_{t}|X_{t}=x\bigg{\}} = \mathbb{E}\bigg{\{}\delta_{t}m(X_{t})|X_{t}=x\bigg{\}}+\mathbb{E} \bigg{\{}\delta_{t}U^{1/2}(X_{t})\varepsilon_{t}|X_{t}=x\bigg{\}}\] \[= m(x)\mathbb{E}(\delta_{t}|X_{t}=x).\]
Therefore, the regression operator when the response variable is missing at random can be written, for any \(x\in\mathcal{E},\) as
\[m(x)=\frac{\mathbb{E}\left(\delta_{t}Y_{t}|X_{t}=x\right)}{\mathbb{E}(\delta_ {t}|X_{t}=x)}. \tag{3.1}\]
Similarly, one can write \(\delta_{t}\left(Y_{t}-m(X_{t})\right)^{2}=\delta_{t}U(X_{t})\,\varepsilon_{t} ^{2}.\) Then, taking the conditional expectation given \(X_{t}=x,\) for both sides of the above equality one gets
\[\mathbb{E}\left(\delta_{t}\left(Y_{t}-m(X_{t})\right)^{2}|X_{t}=x\right)=U(x) \,\mathbb{E}(\delta_{t}|X_{t}=x).\]
Consequently, the variance operator is defined, for any \(x\in\mathcal{E}\), as
\[U(x)=\frac{\mathbb{E}\left(\delta_{t}\left(Y_{t}-m(X_{t})\right)^{2}\left|X_{t}=x \right.\right)}{\mathbb{E}(\delta_{t}|X_{t}=x)}. \tag{3.2}\]
Given a random sample \((X_{t},Y_{t},\delta_{t})_{t=1,\ldots,n}\) we define a simplified estimator of \(m(x)\) and \(U(x)\) defined in (3.1) and (3.2), respectively, as:
\[m_{n,0}(x)=\frac{\sum_{t=1}^{n}\delta_{t}\,Y_{t}K\left\{\frac{d_{1 }(x,X_{t})}{h_{n,1}}\right\}}{\sum_{t=1}^{n}\delta_{t}\,K\left\{ \frac{d_{1}(x,X_{t})}{h_{n,1}}\right\}}\quad\text{and}\quad U_{n,0}(x)=\frac{ \sum_{t=1}^{n}\delta_{t}\left(Y_{t}-m_{n,0}(X_{t})\right)^{2}W \left\{\frac{d_{2}(x,X_{t})}{h_{n,2}}\right\}}{\sum_{t=1}^{n} \delta_{t}\,W\left\{\frac{d_{2}(x,X_{t})}{h_{n,2}}\right\}}.\]
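A minimal sketch of the simplified estimators, reusing the `quad_kernel` and `l2_semimetric` helpers from the earlier snippet, is given below; missing responses are assumed to be stored as `NaN` and simply receive zero weight through \(\delta_{t}\). Bandwidth selection (e.g. by cross-validation) is not shown.

```python
def simplified_regression(x, X, Y, delta, grid, h1):
    """m_{n,0}(x): kernel average in which only observed responses (delta_t = 1) contribute."""
    w = delta * quad_kernel(l2_semimetric(x, X, grid) / h1)
    return np.sum(w * np.nan_to_num(Y)) / np.sum(w)

def simplified_cond_variance(x, X, Y, delta, grid, h1, h2):
    """U_{n,0}(x): kernel average of observed squared residuals around m_{n,0}."""
    m0 = np.array([simplified_regression(X[t], X, Y, delta, grid, h1) for t in range(len(Y))])
    resid2 = np.nan_to_num(Y - m0) ** 2
    w = delta * quad_kernel(l2_semimetric(x, X, grid) / h2)
    return np.sum(w * resid2) / np.sum(w)
```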
### Nonparametric imputed estimator
To build a nonparametric imputed estimator of the regression operator we first need to impute the missing values in the original response process using the initial estimator of the regression. That is \(\widehat{Y}_{t}=\delta_{t}Y_{t}+(1-\delta_{t})m_{n,0}(X_{t}).\) Based on the sample \((X_{t},\widehat{Y}_{t})_{t=1,\ldots,n}\), a nonparametric imputed estimator of \(m(x)\) is obtained by replacing, in (2.5), \(Y_{t}\) by \(\widehat{Y}_{t}.\) Thus, one gets, for any fixed \(x\in\mathcal{E}\),
\[m_{n,1}(x)=\frac{\sum_{t=1}^{n}\widehat{Y}_{t}K\left\{\frac{d_{1 }(x,X_{t})}{h_{n,1}}\right\}}{\sum_{t=1}^{n}K\left\{\frac{d_{1 }(x,X_{t})}{h_{n,1}}\right\}}. \tag{3.3}\]
On the other hand, an imputed estimator of the conditional variance can be obtained in two steps: first, the missing squared residuals are imputed nonparametrically such that \(\widehat{r}_{t}=\delta_{t}r_{t}+(1-\delta_{t})U_{n,0}(X_{t}),\) where \(r_{t}=(Y_{t}-m_{n,0}(X_{t}))^{2}\) if \(\delta_{t}=1\) and is not observed if \(\delta_{t}=0\). Then, based on the sample \((X_{t},\widehat{r}_{t})_{t=1,\ldots,n}\), we define the nonparametric imputed estimator of \(U(x)\), for any \(x\in\mathcal{E}\), as follows:
\[U_{n,1}(x)=\frac{\sum_{t=1}^{n}\widehat{r}_{t}W\left\{\frac{d_{ 2}(x,X_{t})}{h_{n,2}}\right\}}{\sum_{t=1}^{n}W\left\{\frac{d_{2}(x,X_{t})}{h_{ n,2}}\right\}}. \tag{3.4}\]
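The two-step construction of \(m_{n,1}(x)\) and \(U_{n,1}(x)\) can be sketched as follows, again reusing the helpers above; the function name and the convention that missing \(Y_{t}\) are stored as `NaN` are illustrative assumptions.

```python
def imputed_estimators(x, X, Y, delta, grid, h1, h2):
    """Impute missing Y_t and squared residuals with the simplified fits, then apply the
    complete-data kernel smoothers, following (3.3) and (3.4)."""
    n = len(Y)
    m0 = np.array([simplified_regression(X[t], X, Y, delta, grid, h1) for t in range(n)])
    u0 = np.array([simplified_cond_variance(X[t], X, Y, delta, grid, h1, h2) for t in range(n)])
    y_hat = np.where(delta == 1, np.nan_to_num(Y), m0)                   # \hat{Y}_t
    r_hat = np.where(delta == 1, (np.nan_to_num(Y) - m0) ** 2, u0)       # \hat{r}_t
    w1 = quad_kernel(l2_semimetric(x, X, grid) / h1)
    w2 = quad_kernel(l2_semimetric(x, X, grid) / h2)
    return np.sum(w1 * y_hat) / np.sum(w1), np.sum(w2 * r_hat) / np.sum(w2)  # m_{n,1}(x), U_{n,1}(x)
```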
For \(k\in\left\{1,2\right\},\) let \(F_{x,k}(u)=\mathbb{P}(d_{k}(x,X)\leq u)=\mathbb{P}(X\in B(x,u))\) and \(F_{x,k}^{\mathcal{F}_{t-1}}(u)=\mathbb{P}\left(d_{k}(x,X)\leq u\mid\mathcal{F }_{t-1}\right)=\mathbb{P}\left(X\in B(x,u)\mid\mathcal{F}_{t-1}\right)\) be, respectively, the marginal distribution and the conditional marginal distribution of \(X\) given the \(\sigma\)-field \(\mathcal{F}_{t-1}\).
Our main results consist in establishing the almost sure uniform consistency with rate of the simplified and the imputed estimator of the regression operator and the conditional variance. For this purpose, let us denote by \(\mathcal{C}\) a class of elements in the functional space \(\mathcal{E}\) and consider, for any \(\eta>0\),
\[N\left(\eta,\mathcal{C},d_{\mathcal{C}}\right)=\min\left\{n:\ \text{there exist }c_{1},\ldots,c_{n}\in\mathcal{C}\ \text{such that, for all }x\in\mathcal{C},\ \text{there exists }k\in\{1,\ldots,n\}\ \text{with }d_{\mathcal{C}}\left(x,c_{k}\right)<\eta\right\},\]
a number which measures how rich the class \(\mathcal{C}\) is. Our results are established under the assumptions listed below. In the sequel, we use \(\mathcal{K}\) to denote either the kernel \(K\) or \(W.\)
* \(\mathcal{K}\) is a nonnegative bounded kernel of class \(\mathcal{C}^{1}\) over its support \([0,1]\) with \(\mathcal{K}(1)>0\). The derivative \(\mathcal{K}^{\prime}\) exists on \([0,1)\) and satisfies the condition \(\mathcal{K}^{\prime}(v)<0\) for all \(v\in[0,1)\) and \[\left|\int_{0}^{1}\left(\mathcal{K}^{j}\right)^{\prime}(v)dv\right|<\infty\text{ for }j=1,2.\]
* \(\mathcal{K}\) is a Hölder function of order \(\gamma\) with a constant \(a_{0}.\)
* There exist constants \(a_{1}\) and \(a_{2}\) such that \(0<a_{1}\leq\mathcal{K}(v)\leq a_{2}<\infty\) for all \(v\in\mathcal{C}.\)
* For \(x\in\mathcal{E}\), there exists a sequence of nonnegative random variables \((f_{t,1})_{i\geq 1}\) almost surely bounded by a sequence of deterministic quantities \((b_{t}(x))_{t\geq 1}\), a sequence of random functions \((\psi_{t,x})_{t\geq 1}\), a deterministic nonnegative bounded function \(f_{1}\) and a nonnegative real function \(\phi(.)\) that tend to zero, as its argument tends to \(0\),
* \(F_{x,k}(u)=\phi_{k}(u)f_{1}(x)+o(\phi_{k}(u))\) as \(u\to 0\) and \(k\in\{1,2\}\), where \(o(\phi_{k}(u))\) is uniform in \(x\).
* For \(k\in\{1,2\}\) and \(\forall t\in\mathbb{N}\), \(F_{x,k}^{\mathcal{F}_{t-1}}(u)=\phi_{k}(u)f_{t,1}(x)+\psi_{t,x}(u)\) with \(\psi_{t,x}(u)=o_{a.s.}(\phi_{k}(u))\) as \(u\to 0\), \(\frac{\psi_{t,x}(u)}{\phi_{k}(u)}\) almost surely bounded for any \(x\in\mathcal{C}\), and \(n^{-1}\sum_{t=1}^{n}\psi_{t,x}(u)=o_{a.s.}\left(\phi_{k}(u)\right)\) as \(n\to\infty\) and \(u\to 0\), where \(o_{a.s.}\left(\phi_{k}(u)\right)\) is uniform in \(x\).
* For \(k\in\{1,2\}\), there exists a nondecreasing bounded function \(\tau_{0,k}\) such that, uniformly in \(s\in[0,1]\), \(\frac{\phi_{k}(hu)}{\phi_{k}(h)}=\tau_{0,k}(u)+o(1)\), as \(h\downarrow 0\), and we have \[\int_{0}^{1}\left(\mathcal{K}^{j}(u)\right)^{\prime}\tau_{0,k}(u)dt<\infty\text { for }j\geq 1.\]
* \(n^{-1}\sum_{t=1}^{n}b_{t}(x)\to D(x)<\infty\) as \(n\to\infty\), and \(0<\underset{x\in\mathcal{C}}{\sup}D(x)<\infty\).
* \(0<\theta_{0}\leq\underset{x\in\mathcal{C}}{\inf}f_{1}(x)\leq\underset{x\in \mathcal{C}}{\sup}f_{1}(x)<\infty\) for some nonnegative real number \(\theta_{0}\).
* \(\underset{x\in\mathcal{C}}{\inf}\pi(x)>\theta_{1}\) for some positive real number \(\theta_{1}\in[0,1]\).
* There exists \(\rho>1\) such that \(\mathbb{E}\{U(X_{t})^{\rho^{2}/2(\rho-1)}\}<\infty\) and \(\underset{1\leq t\leq n}{\max}\mathbb{E}(|\varepsilon_{t}|^{\rho}|\mathcal{G }_{t-1})<\infty\).
* \(\mathbb{E}\{(\varepsilon_{t}^{2}-1)^{2}|X_{t}\}=\mathbb{E}\{(\varepsilon_{t}^{2}-1)^{2}|\mathcal{G}_{t-1}\}=\omega(X_{t})\) is continuous in a neighborhood of \(x\) as \(h\to 0\), that is \[\sup_{\{u:d(x,u)\leq h\}}|\omega(u)-\omega(x)|=o(1).\]
* For some \(\kappa>0\), \(\underset{1\leq t\leq n}{\max}\mathbb{E}\{(\varepsilon_{t}^{2}-1)^{2+\kappa}| \mathcal{G}_{t-1}\}<\infty\).
* \(\mathbb{E}\{|U(X_{t})|^{\rho^{2}/(\rho-1)}\}<\infty\) and \(\underset{1\leq t\leq n}{\max}\mathbb{E}(|\varepsilon_{t}^{2}-1|^{\rho}| \mathcal{G}_{t-1})<\infty\).
* For any \(t\geq 1\), \(\mathbb{E}(\delta_{t}|\mathcal{G}_{t-1})=\mathbb{E}(\delta_{t}|X_{t})=\pi(X_{t})\) is continuous in a neighborhood of \(x\) as \(h\to 0\), that is \[\sup_{\{u:d(x,u)\leq h\}}|\pi(u)-\pi(x)|=o(1),\text{ where }o(1)\text{ is uniform in }x.\]
* For any \((u,v)\in\mathcal{E}^{2}\), \(|m(u)-m(v)|<c_{1}d_{1}^{\alpha}(u,v)\), for some constant \(c_{1}>0\) and \(\alpha>0\),
* For any \((u,v)\in\mathcal{E}^{2}\), \(|U(u)-U(v)|<c_{2}d_{2}^{\beta}(u,v)\), for some constant \(c_{2}>0\) and \(\beta>0\).
* \(U^{\kappa+2}(\cdot)\) is continuous in a neighborhood of \(x\) as \(h\to 0\), that is \[\sup_{\{u:d(x,u)\leq h\}}|U^{\kappa+2}(u)-U^{\kappa+2}(x)|=o(1).\]
Assumption (A1)(i) is related to the choice of the kernel \(\mathcal{K}\), which is very usual in nonparametric functional estimation. Notice that a Parzen symmetric kernel is not adequate in this context since the random process \(d(x,X_{t})\) is positive; therefore we consider \(\mathcal{K}\) with support \([0,\,1]\). This is a natural generalization of the assumption usually made on the kernel in the multivariate case, where \(\mathcal{K}\) is supposed to be a spherically symmetric density function. The assumptions \(\mathcal{K}(1)>0\) and \(\mathcal{K}^{\prime}<0\) guarantee that \(M_{1,W,2}>0\) for all limit functions \(\tau_{0}.\) The condition \(\mathcal{K}(1)>0\) is needed to define the moments \(M_{j,W,2}\), which are, in this case, determined by the value \(\mathcal{K}(1).\) Assumption (A1)(ii) is a Hölder-type condition that requires a certain smoothness of the kernels, and assumption (A1)(iii) requires the kernel \(\mathcal{K}\) to be bounded away from \(0\), which is relatively usual in nonparametric functional data estimation.
Conditions (A2)(i)-(ii) reflect the ergodicity property assumed on the discrete-time functional process. It plays an important role in studying the asymptotic properties of the estimator. The
functions \(f_{t,1}\) and \(f_{1}\) play the same role as the conditional and unconditional densities in the finite-dimensional case, whereas \(\phi(u)\) characterizes the impact of the radius \(u\) on the small ball probability as \(u\) goes to \(0\). Several examples of processes satisfying these conditions are given in Laib and Louani (2010). Conditions (A2)(iii) and (A2)(v) are basically established to meet the ergodic theorem. Assumptions (A3)(i), (A3)(iii) and (A3)(iv) are more technical and require boundedness of higher-order moments of the errors and the conditional variance. (A3)(ii) and (A3)(v) assume the continuity of the operators \(\pi(\cdot)\) and \(\omega(\cdot).\) Finally, assumption (A4) imposes some smoothness on the regression and conditional variance operators.
### Asymptotic properties of the simplified estimator
In this subsection we investigate asymptotic properties of the simplified estimators. That includes a uniform consistency rate, asymptotic distribution and confidence intervals.
#### 3.3.1. Uniform consistency
**Theorem 3.1**.: _Suppose that assumptions (A1), (A2), (A3)(i),(v), (A4)(i) hold true and the following conditions_
\[\lim_{n\to\infty}n\phi_{1}\left(h_{1}\right)=\infty\;\;\text{and}\;\;\lim_{n \to\infty}\left\{n\phi_{1}\left(h_{1}\right)\right\}^{-1}\log n=0 \tag{3.5}\]
_are satisfied. In addition, for a sequence of positive real numbers \(\lambda_{n}\) tending to zero, as \(n\to\infty\) and \(\eta=\eta_{n}=o(h_{1})\), assume that, we have_
\[\lim_{n\to\infty}\frac{\log N\left(\eta,\mathcal{C},d_{\mathcal{C}}\right)}{n \lambda_{n}^{2}\phi_{1}\left(h_{1}\right)\ell_{n}^{-2}}=0\;\;\text{and}\;\; \sum_{n\geq 1}\exp\left[-\lambda_{n}^{2}\ell_{n}^{-2}\mathcal{O}\left\{n\phi_{1 }\left(h_{1}\right)\right\}\right]<\infty\text{,} \tag{3.6}\]
_where \(\ell_{n}\) is a sequence of positive numbers that tends to infinity as \(n\to\infty\) defined by_
\(\ell_{n}=\left(\frac{\log n}{\lambda_{n}\phi_{1}\left(h_{1}\right)^{\left(\rho -1\right)/\rho}}\right)^{1/\left(\rho-1\right)},\rho\) _is given in (A3)(i). Then, one gets_
\[\sup_{x\in\mathcal{C}}|m_{n,0}(x)-m(x)|=\mathcal{O}_{a.s.}\left(h_{1}^{\alpha }\right)+\mathcal{O}_{a.s.}\left(\lambda_{n}\right),\]
_where \(\alpha\) is given in (A4)(i)._
_Remark 3.1_.: The uniform consistency rate for the regression operator is identical to the one derived in Chaouch (2019) when the response variable is completely observed. In addition, the same rate has been achieved in Laib and Louani (2010) for the homoscedastic scalar-on-function regression model. Furthermore, note that if \(\lambda_{n}=\mathcal{O}\left(\sqrt{\log n/(n\phi_{1}(h_{1}))}\right)\), condition (3.6) is met, and hence the uniform consistency rate for \(m_{n,0}(x)\) is \(\mathcal{O}(h_{1}^{\alpha})+\mathcal{O}\left(\sqrt{\log n/(n\phi_{1}(h_{1}))}\right)\) which is the same rate derived in Ferraty and Vieu (2006) when the data are independent.
**Theorem 3.2**.: _Assume that assumptions (A1), (A2), (A3)(i),(iv)-(v), (A4)(i)-(ii), conditions (3.5)-(3.6), and the following conditions are satisfied_
\[\lim_{n\to\infty}n\phi_{2}\left(h_{2}\right)=\infty\;\;\text{and}\;\;\lim_{n \to\infty}\left\{n\phi_{2}\left(h_{2}\right)\right\}^{-1}\log n=0. \tag{3.7}\]
_Moreover, for a sequence of positive real numbers \(\lambda_{n}^{\prime}\) tending to zero as \(n\to\infty\), suppose that we have_
\[\lim_{n\to\infty}\frac{\log N\left(\eta,\mathcal{C},d_{\mathcal{C}}\right)}{n \left(\lambda_{n}^{\prime}\right)^{2}\phi_{2}\left(h_{2}\right)\left(\ell_{n}^ {\prime}\right)^{-2}}=0\;\;\;\text{and}\;\;\;\sum_{n\geq 1}\exp\left[- \left(\lambda_{n}^{\prime}\right)^{2}\left(\ell_{n}^{\prime}\right)^{-2} \mathcal{O}\left\{n\phi_{2}\left(h_{2}\right)\right\}\right]<\infty, \tag{3.8}\]
_where \(\ell_{n}^{\prime}\) is a sequence of positive numbers that goes to \(\infty\), as \(n\to\infty\) defined by_
\(\ell_{n}^{\prime}=\left(\frac{\log n}{\lambda_{n}^{\prime}\phi_{2}\left(h_{2} \right)^{\left(\rho-1\right)/\rho}}\right)^{1/\left(\rho-1\right)},\rho\) _is given in (A3)(i). Then, one has_
\[\sup_{x\in\mathcal{C}}|U_{n,0}(x)-U(x)|=\mathcal{O}_{a.s.}\left(h_{1}^{2\alpha }+h_{2}^{\beta}\right)+\mathcal{O}_{a.s.}\left(\lambda_{n}^{\prime}+\lambda_{n }^{2}\right),\]
_where \(\beta\) is given in (A4)(ii)._
_Remark 3.2_.: The initial estimator \(U_{n,0}(x)\) has the same uniform almost sure consistency rate obtained in Chaouch (2019), where the response variable is completely observed. Moreover, note that by considering \(h_{1}=h_{2}=h\), \(d_{1}(\cdot,\cdot)=d_{2}(\cdot,\cdot)\), \(\alpha=\beta\), \(\phi_{1}(\cdot)=\phi_{2}(\cdot)=\phi(\cdot)\), and \(\lambda_{n}=\lambda^{\prime}_{n}=\mathcal{O}\left(\sqrt{\log n/(n\phi(h))}\right)\), condition (3.7) is satisfied. Therefore, the uniform convergence rate of \(U_{n,0}(x)\) is \(\mathcal{O}(h^{\beta})+\mathcal{O}\left(\sqrt{\log n/(n\phi(h))}\right)\). Furthermore, in the finite-dimensional case (i.e. \(\mathcal{E}=\mathbb{R}^{d}\)), the rate becomes \(\mathcal{O}(h^{\beta})+\mathcal{O}\left(\sqrt{\log n/(nh^{d})}\right)\), which matches the rate obtained in Laib (2005) for a nonlinear autoregressive model with ARCH errors.
#### 3.3.2. Asymptotic distribution
**Theorem 3.3**.: _Assume that assumptions (A1)-(A4) and conditions (3.5)-(3.8) hold true then_
\[\sqrt{n\phi_{2}\left(h_{2}\right)}\left\{U_{n,0}(x)-U(x)\right\}\overset{ \mathcal{D}}{\longrightarrow}\mathcal{N}\left(0,\sigma_{0}^{2}(x)\right),\]
_where \(\mathcal{N}(\cdot,\cdot)\) denotes the normal distribution and \(\sigma_{0}^{2}(x)=\frac{M_{2,W,2}U^{2}(x)\omega(x)}{M_{1,W,2}^{2}\pi(x)f_{1}( x)}\)_
_with_
\[M_{j,W,2}=W^{j}(1)-\int_{0}^{1}\left(W^{j}\right)^{\prime}(u)\tau_{0,2}(u)du, \;\;for\;\;j=1,2. \tag{3.9}\]
Theorem 3.3 extends the asymptotic distribution established in Theorem 3 of Chaouch (2019), where data are completely observed. Note that the asymptotic conditional variance depends on the conditional probability of observing the data, \(\pi(x)\). The higher (resp. smaller) the MAR rate, i.e. the smaller (resp. higher) \(\pi(x)\), the higher (resp. smaller) \(\sigma_{0}^{2}(x)\) will be, and therefore the less (resp. more) efficient the initial estimator. If the data are completely observed (i.e. \(\pi(x)=1,\forall x\in\mathcal{E}\)), then \(\sigma_{0}^{2}(x)\) coincides with the asymptotic conditional variance obtained in Chaouch (2019).
#### 3.3.3. Asymptotic confidence intervals
Our purpose here is to build asymptotic confidence intervals for \(U(x)\) for any fixed curve \(x\in\mathcal{E}\) based on a normal approximation. Note that the asymptotic variance in Theorem 3.3 contains the unknown quantities which are \(\pi(\cdot)\), \(\omega(\cdot)\), \(U(\cdot)\), \(M_{1,W,2}\), \(M_{2,W,2}\) and \(\tau_{0,2}(u).\) Each of these parameters is replaced by its empirical version. We replace \(U(x)\) by \(U_{n,0}(x)\) and \(\omega(x)\) and \(\pi(x)\) are estimated as follows:
\[\omega_{n}(x)=\frac{\sum_{t=1}^{n}(\varepsilon_{n,t}^{2}-1)^{2}H \left(\frac{d_{3}(x,X_{t})}{h_{3}}\right)}{\sum_{t=1}^{n}H \left(\frac{d_{3}(x,X_{t})}{h_{3}}\right)},\]
where \(\varepsilon_{n,t}=[Y_{t}-m_{n,0}(X_{t})]/[U_{n,0}(X_{t})]^{1/2}\) and
\[\pi_{n}(x)=\frac{\sum_{t=1}^{n}\delta_{t}\tilde{H}\left(\frac{d_{4}(x,X_{t})}{ h_{4}}\right)}{\sum_{t=1}^{n}\tilde{H}\left(\frac{d_{4}(x,X_{t})}{h_{4}} \right)},\]
where \(H\) and \(\tilde{H}\) are kernel functions, \(d_{3}(\cdot,\cdot)\) and \(d_{4}(\cdot,\cdot)\) are semi-metrics, and \(h_{3}\) and \(h_{4}\) are bandwidths used to estimate \(\omega(x)\) and \(\pi(x)\). Moreover, by assumptions (A2)(i) and (A2)(iv), we can define the estimator of \(\tau_{0,2}\) as \(\widehat{\tau}_{0,2}(u)=\widehat{F}_{x,2}(uh_{2})/\widehat{F}_{x,2}(h_{2})\), where \(\widehat{F}_{x,2}(u)=n^{-1}\sum_{t=1}^{n}\mathds{1}_{\{d_{2}(x,X_{t})\leq u\}}\). Further, we replace \(\tau_{0,2}\) by its estimator \(\widehat{\tau}_{0,2}(u)\) in (3.9) to obtain a plug-in estimator \(\widehat{M}_{j,W,2}\) of \(M_{j,W,2}\), for \(j\in\{1,2\}\).
**Corollary 3.1**.: _Under the conditions of Theorem 3.3, we have_
\[\left\{\frac{\widehat{M}_{1,W,2}}{\sqrt{\widehat{M}_{2,W,2}}}\sqrt{\frac{n \widehat{F}_{x,2}(h)\pi_{n}(x)}{\omega_{n}(x)U_{n,0}^{2}(x)}}\right\}\left\{U_{n,0}(x)-U(x)\right\}\stackrel{{\mathcal{D}}}{{\longrightarrow}} \mathcal{N}\left(0,1\right),\ \ \text{as}\ \ n\to\infty. \tag{3.10}\]
This corollary plays a key role in estimating confidence intervals of \(U(x)\). More precisely, equation (3.10) leads to obtain the following asymptotic \(100(1-\nu)\%\) confidence interval for the conditional variance \(U(x)\)
\[\text{CI}_{\nu}^{S}=U_{n,0}(x)\left\{1\pm q_{\nu/2}\,\frac{\sqrt{\widehat{M}_{ 2,W,2}}}{\widehat{M}_{1,W,2}}\sqrt{\frac{\omega_{n}(x)}{n\widehat{F}_{x,2}(h) \pi_{n}(x)}}\right\}, \tag{3.11}\]
where \(q_{\nu/2}\) is the upper \(\nu/2\) quantile of the standard normal distribution.
The confidence interval displayed in (3.11) reveals that the higher the missing rate (\(\pi(x)\to 0\)), the wider the confidence interval and the less accurate the interval estimation.
### Asymptotic properties of the nonparametric imputed estimator
In this subsection we investigate asymptotic properties of the nonparametric imputed estimator of the regression and conditional variance operators.
#### 3.4.1. Uniform consistency
**Theorem 3.4**.: _Under the same conditions of Theorem 3.1, we have_
\[\sup_{x\in\mathbb{C}}|m_{n,1}(x)-m(x)|=\mathcal{O}_{a.s.}\left(h_{1}^{\alpha} \right)+\mathcal{O}_{a.s.}\left(\lambda_{n}\right),\]
_where \(\alpha\) is given in (A4)(i)._
**Theorem 3.5**.: _Under the same conditions of Theorem 3.2, we obtain_
\[\sup_{x\in\mathbb{C}}|U_{n,1}(x)-U(x)|=\mathcal{O}_{a.s.}\left(h_{1}^{2\alpha} +h_{2}^{\beta}\right)+\mathcal{O}_{a.s.}\left(\lambda_{n}^{\prime}+\lambda_{n} ^{2}\right),\]
_where \(\beta\) is given in (A4)(ii)._
_Remark 3.3_.: The almost sure uniform convergence rates of the estimators \(m_{n,1}(x)\) and \(U_{n,1}(x)\) remain the same as those obtained in Theorems 3.1 and 3.2, respectively.
#### 3.4.2. Asymptotic distribution
**Theorem 3.6**.: _Suppose that conditions of Theorem 3.3 are satisfied. Then_
\[\sqrt{n\phi_{2}\left(h_{2}\right)}\left\{U_{n,1}(x)-U(x)\right\}\stackrel{{ \mathcal{D}}}{{\longrightarrow}}\mathcal{N}\left(0,\sigma^{{}^{\prime}2}(x) \right),\]
_where \(\sigma^{{}^{\prime}2}(x)=\frac{M_{2,W,2}U^{2}(x)\omega(x)\pi(x)}{M_{1,W,2}^{2}f_{1}(x)}=\sigma_{0}^{2}(x)+\frac{M_{2,W,2}U^{2}(x)(\pi^{2}(x)-1)\omega(x)}{\pi(x)M_{1,W,2}^{2}f_{1}(x)}.\)_
_Note that when \(\pi(x)\to 1\) (low MAR rate), both \(\sigma^{{}^{\prime}2}(x)\) and \(\sigma_{0}^{2}(x)\) converge to the asymptotic conditional variance obtained in Chaouch (2019) in the case of complete data. On the other hand, for a high MAR rate (\(\pi(x)\to 0\)), one observes that \(\sigma_{0}^{2}(x)\to\infty\) while \(\sigma^{{}^{\prime}2}(x)\to 0.\) In other words, the nonparametric imputed estimator is more efficient than the simplified one when the missing rate is high._
#### 3.4.3. Asymptotic confidence intervals
Similar to the simplified estimator, we use the following corollary to build asymptotic confidence intervals for \(U(x)\).
**Corollary 3.2**.: _Under the conditions of Theorem 3.6, we have_
\[\left\{\frac{\widehat{M}_{1,W,2}}{\sqrt{\widehat{M}_{2,W,2}}}\sqrt{\frac{n \widehat{F}_{x,2}(h)}{\omega_{n}(x)\pi_{n}(x)U_{n,1}^{2}(x)}}\right\}\left\{U_ {n,1}(x)-U(x)\right\}\stackrel{{\mathcal{D}}}{{\longrightarrow }}\mathcal{N}\left(0,1\right),\ \ \text{as}\ \ n\to\infty. \tag{3.12}\]
Then, an asymptotic \(100(1-\nu)\%\) confidence interval for \(U(x)\) is
\[\text{CI}_{\nu}^{NI}=U_{n,1}(x)\left\{1\pm q_{\nu/2}\frac{\sqrt{\widehat{M}_{2,W,2 }}}{\widehat{M}_{1,W,2}}\sqrt{\frac{\omega_{n}(x)\pi_{n}(x)}{n\widehat{F}_{x,2 }(h)}}\right\}, \tag{3.13}\]
where \(q_{\nu/2}\) is the upper \(\nu/2\) quantile of the standard normal distribution.
Comparing the simplified-based and the nonparametric-imputation-based confidence intervals given in equations (3.11) and (3.13), we can observe that the higher the missing rate (\(\pi(x)\to 0\)), the larger \(\text{CI}_{\nu}^{S}\) will be compared to \(\text{CI}_{\nu}^{NI}.\) In other words, missing data imputation allows us to obtain a more accurate interval estimation of \(U(x).\)
## 4. Finite sample properties
In this section we carry out a simulation study to assess the quality of the proposed estimation methods. Let us consider \((X_{t},Y_{t},\delta_{t})_{t=1,\cdots,n}\) to be a strictly stationary process such that, for any \(t=1,\ldots,n\), the functional covariate \(\{X_{t}(\lambda):\lambda\in[-1,1]\}\) is sampled at \(100\) equally spaced points in \([-1,1]\), and generated as follows:
\[X_{t}(\lambda)=A(2-\cos(\pi\lambda\omega))+(1-A)\cos(\pi\lambda\omega),\]
where \(\omega\sim\mathcal{N}(0,1)\), \(A\sim\text{Bernoulli}\bigg{(}\frac{1}{2}\bigg{)}.\) A sample of \(n=100\) simulated curves is displayed in Figure 2.
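A short sketch of how such a sample of curves can be generated is given below; the grid of 100 points on \([-1,1]\) follows the text, while the function name and the random seed are illustrative.

```python
import numpy as np

def simulate_curves(n, n_points=100, seed=None):
    """Simulate n curves X_t(lambda) = A (2 - cos(pi lambda omega)) + (1 - A) cos(pi lambda omega)."""
    rng = np.random.default_rng(seed)
    lam = np.linspace(-1.0, 1.0, n_points)
    omega = rng.standard_normal(n)               # omega ~ N(0, 1), one draw per curve
    A = rng.binomial(1, 0.5, n)                  # A ~ Bernoulli(1/2)
    base = np.cos(np.pi * np.outer(omega, lam))  # cos(pi lambda omega), row t for curve t
    X = A[:, None] * (2.0 - base) + (1 - A)[:, None] * base
    return lam, X

lam, X = simulate_curves(n=100, seed=42)
```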
The response variable is generated according to the following heteroscedastic functional regression model:
\[Y_{t}=m(X_{t})+\sqrt{U(X_{t})}\varepsilon_{t},\]
where the regression and variance operators are defined, at any fixed point \(x\), as
\[m(x)=\int_{-1}^{1}\lambda x(\lambda)d\lambda,\quad U(x)=\int_{-1}^{1}|\lambda |x^{2}(\lambda)d\lambda. \tag{4.1}\]
Regarding the errors \(\varepsilon_{t}\) we suppose that they are generated according to one of the following models:
**Model 1:**: The \(\varepsilon_{t}\)'s are i.i.d, distributed according to \(\mathcal{N}(0,1).\)
**Model 2:**: \(\varepsilon_{t}=\frac{1}{2}\varepsilon_{t-1}+\xi_{t},\) where \(\xi_{t}\sim\mathcal{N}(0,1).\)
**Model 3:**: \(\varepsilon_{t}=-\frac{1}{2}\varepsilon_{t-1}+\xi_{t},\) where \(\xi_{t}\sim\mathcal{N}(0,1).\)
**Model 4:**: \(\varepsilon_{t}=\frac{1}{2}\varepsilon_{t-1}+\xi_{t},\) where \(\xi_{t}\sim\texttt{Bernoulli}\bigg{(}\frac{1}{2}\bigg{)}.\)
Figure 2. A sample of simulated curves \(X_{t}(\lambda).\)
We consider here four different models corresponding to three different dependence structures. Indeed, Model 1 corresponds to the case where data are independent and identically distributed. Models 2 and 3 cover the case where the process is \(\alpha\)-mixing. Finally, Model 4 is an example of an ergodic but non-mixing process. Figure 3 shows the generated response variable for each of the four models above.
We suppose that missing at random observations in the response variable \(Y\) are generated according to the following conditional probability distribution:
\[\pi(x)=\mathbb{P}(\delta=1|X=x)=\text{expit}\bigg{(}2\eta\int_{-1}^{1}x^{2}( \lambda)d\lambda\bigg{)},\]
where \(\text{expit}(u)=\dfrac{e^{u}}{1+e^{u}}\) and \(\eta\in\{0.2,0.8\}\).
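Continuing the previous sketch (and reusing `lam` and `X` from it), the snippet below generates the response through model (4.1), the four error models, and the MAR indicator; the Riemann-sum approximation of the integrals and the initialization of the AR(1) errors are simplifying assumptions made for the example.

```python
def true_operators(lam, X):
    """Riemann-sum approximations of m(x) and U(x) in (4.1) on the sampling grid."""
    dlam = lam[1] - lam[0]
    m = np.sum(lam * X, axis=1) * dlam
    U = np.sum(np.abs(lam) * X ** 2, axis=1) * dlam
    return m, U

def simulate_response(lam, X, model=1, eta=0.8, seed=None):
    """Generate Y_t = m(X_t) + sqrt(U(X_t)) eps_t (errors from Models 1-4) and delta_t (MAR)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m, U = true_operators(lam, X)
    xi = rng.standard_normal(n) if model != 4 else rng.binomial(1, 0.5, n).astype(float)
    phi = {1: 0.0, 2: 0.5, 3: -0.5, 4: 0.5}[model]   # AR(1) coefficient of the error process
    eps = np.empty(n)
    eps[0] = xi[0]
    for t in range(1, n):
        eps[t] = phi * eps[t - 1] + xi[t]
    Y = m + np.sqrt(U) * eps
    dlam = lam[1] - lam[0]
    pi_x = 1.0 / (1.0 + np.exp(-2.0 * eta * np.sum(X ** 2, axis=1) * dlam))  # expit(2 eta int x^2)
    delta = rng.binomial(1, pi_x)                    # delta_t = 1 means Y_t is observed
    return Y, delta

Y, delta = simulate_response(lam, X, model=2, eta=0.2, seed=1)
```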
Observe that, depending on the value of \(\eta\), one gets a different MAR rate. The higher the value of \(\eta\), the higher \(\pi(x)\), and therefore the smaller the missing data rate. Indeed, the MAR rate is \(20\%\) when \(\eta=0.8\) and \(60\%\) when \(\eta=0.2\). Figure 4 shows an example of the process \(Y_{t}\) affected by the MAR mechanism for \(\eta=0.2\) and \(0.8\), respectively.
Regarding the tuning parameters used to calculate the simplified estimator, we consider the quadratic kernel \(K(u)=\frac{3}{2}(1-u^{2})\mathbb{I}_{[0,1]}(u).\) The bandwidth is chosen by a cross-validation criterion based on the \(\kappa\)-nearest neighbors, as detailed in Ferraty and Vieu (2006). Because of the smoothness of the curves \(X_{t}(\lambda)\), we take as semi-metric, for both the regression and the conditional variance functions, the usual \(L_{2}\)-norm of the first derivatives of the curves, defined as follows:
\[d(X_{t},X_{s})=\bigg{[}\int_{-1}^{1}\bigg{\{}X_{t}^{(1)}(\lambda)-X_{s}^{(1)} (\lambda)\bigg{\}}^{2}d\lambda\bigg{]}^{1/2},\quad\forall t\neq s.\]
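A rough discretized version of this semi-metric, with the derivatives approximated by finite differences on the sampling grid rather than by a smoothing basis, could look as follows; this is only a sketch of the idea.

```python
def deriv_semimetric(xa, xb, lam):
    """L2 distance between the first derivatives of two curves sampled on the grid lam."""
    da = np.gradient(xa, lam)        # finite-difference derivative of the first curve
    db = np.gradient(xb, lam)
    dlam = lam[1] - lam[0]
    return np.sqrt(np.sum((da - db) ** 2) * dlam)
```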
Our purpose is to estimate the conditional variance at a fixed curve
\[x_{0}(\lambda)=\cos(\pi\lambda/4)\qquad\text{for}\quad\lambda\in[-1,1].\]
Figure 3. The generated process \(Y_{t}\) for Model 1 (a), Model 2 (b), Model 3 (c) and Model 4 (d).
To assess the consistency of the estimator, we consider \(B=500\) replications and for each replication we estimate the conditional variance and evaluate the square error. That is, at each replication \(b\in\{1,\ldots,B\}\), we calculate
\[\text{SE}_{b}=\left(\mathcal{U}_{n,b}(x_{0})-U(x_{0})\right)^{2},\]
where \(\mathcal{U}_{n,b}(x_{0})\) denotes either the complete, simplified, or nonparametric imputation conditional variance estimator obtained at replication \(b\).
Tables 1 and 2 display some summary statistics of the square errors obtained for each model with missing at random rates of 20% and 60%, respectively. We can see that the estimator obtained after missing data imputation provides better results. Moreover, one can observe that the higher the MAR rate, the lower the quality of estimation. Finally, one can notice that the dependence structure in the data plays an important role in determining the quality of estimation. Indeed, small errors are obtained in Model 1 (corresponding to the i.i.d. case), and the stronger the dependence structure in the data, the higher the estimation errors. The absolute errors obtained for Models 2 and 3 (corresponding to \(\alpha\)-mixing processes) are higher than in the i.i.d. case. However, the highest errors are obtained for the non-mixing but ergodic process given in Model 4.
## 5. Application to high-frequency financial data
In this section we are interested in estimating and forecasting the volatility of the daily log return of Brent crude oil closing price given the intraday (1-minute frequency) curve of natural gas closing price.
The relationship between Brent crude oil and natural gas prices has been a topic of interest to researchers and market practitioners for many years. In particular, the volatility of oil prices has been a major concern for investors and market participants. A number of studies have investigated the impact of natural gas prices on the volatility of oil prices, with the aim of developing models that can better capture this relationship. Liu et al. (2013) used a Vector AutoRegressive (VAR) model to examine the relationship between crude oil and natural gas prices in North America. Another study by Chen et al. (2019) used a BEKK-GARCH model to investigate the volatility spillovers between crude oil and natural gas prices in the United States. Moreover, Aloui et al. (2013) used a copula-GARCH model to examine the relationship between Brent oil and natural gas prices. The authors found that the volatility of Brent oil prices was indeed affected by natural gas prices, and that their model outperformed other traditional GARCH models in terms of forecasting accuracy. They also found that the relationship between the two prices varies depending on the market regime, and suggested that this could be important for risk management purposes.
### Data preliminary analysis
The data cover trading days from February 14, 2020 to February 14, 2023. Figure 5 displays a 1-day frequency time series of Brent crude oil and natural gas closing prices. One can see that overall there is a correlation between the two prices. Figure 6 shows that the most likely prices for the Brent oil and the natural gas do not in general exceed $80 and $5, respectively. While Brent crude oil closing price, say \(P_{t}^{o}\), is observed at a daily frequency from February 14, 2020 to February 14, 2023, the natural gas closing price, say \(P_{m}^{g}\), is observed every minute over the same period. Returns of Brent crude oil are calculated as follows: \(r_{t}^{o}:=\log(P_{t}^{o}/P_{t-1}^{o})\), where \(P_{t}^{o}\) is the daily closing price at day \(t\) of the Brent crude oil. Similarly, the 1-minute frequency returns of natural gas are obtained according to the following formula: \(r_{m}^{g}:=\log(P_{m}^{g}/P_{m-1}^{g})\), where \(P_{m}^{g}\) is the price of natural gas observed at a minute \(m\).
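The daily and 1-minute log returns can be obtained from the price series with a one-line transformation; the helper below is an illustrative sketch.

```python
import numpy as np

def log_returns(prices):
    """r_t = log(P_t / P_{t-1}) computed from a 1-D array of closing prices."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))
```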
### Random sample construction
Our sample here is denoted as follows: \((X_{t},Y_{t})_{t=1,\cdots,1096}\), where the sample size \(n=1096\) is the total number of trading days from February 14, 2020 to February 14, 2023. Note that \(Y_{t}=r_{t}^{o}\) and the functional-valued process is built as follows:
\[X_{t}(m)=r_{t}^{g}\left(m+(t-1)\times 1439\right),\qquad\text{for }t=1,\ldots,n, \text{ and }\forall m\in[1,1439].\]
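In practice, this construction amounts to cutting the concatenated 1-minute return series into blocks of 1439 minutes, one block per trading day; the following sketch assumes the series is stored as a flat array with no gaps.

```python
def build_intraday_curves(intraday_returns, minutes_per_day=1439):
    """Reshape the concatenated 1-minute return series into one curve X_t per trading day."""
    r = np.asarray(intraday_returns, dtype=float)
    n_days = len(r) // minutes_per_day
    return r[: n_days * minutes_per_day].reshape(n_days, minutes_per_day)
```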
Figure 5. Closing Price for Brent Crude Oil and Natural Gas.
Figure 7(a) displays a sample of three intraday (1-minute frequency) curves of the natural gas price. Figure 7(b) shows all intraday (1-minute frequency) curves from February 14, 2020 to February 14, 2023. One can observe that the price of natural gas reached its highest values following the beginning of the Russo-Ukrainian war in February 2022.
Observe that the data are initially completely observed. Therefore, we artificially generate missing observations in order to validate our methodology. We assume that the missing at random mechanism is generated according to the following probability distribution:
\[\pi(x)=\mathbb{P}(\delta=1\mid X=x)=\text{expit}\left(\int_{1}^{1439}x^{2}(m) dm\right),\]
This choice of probability distribution leads to 12% of missing observations in the oil price return process.
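A possible simulation of this missing-at-random mechanism is sketched below, continuing the previous snippets; the integral is approximated by a Riemann sum over the 1439 intraday points, and the realized missing rate depends on the scale of the curves.

```python
import numpy as np

rng = np.random.default_rng(0)

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

# pi(X_t) = expit( \int_1^1439 X_t^2(m) dm ), approximated by a Riemann sum.
pi_x = expit((X ** 2).sum(axis=1))

# delta_t = 1 means Y_t is observed; delta_t = 0 means it is missing.
delta = rng.binomial(1, pi_x)
print("empirical missing rate:", 1 - delta.mean())
```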
### Daily Brent oil return volatility estimation and forecasting
Our purpose is to estimate and forecast the daily volatility of the Brent oil price log-return using as predictor the intraday (1-minute frequency) log return of natural gas. We split the original sample into training (In-Sample) and testing (Out-of-Sample) subsamples. The training subsample covers the period from February 14, 2020 to June 30, 2022, while the remaining period from July 1, 2022 to February 13, 2023 will be
Figure 6. Joint density estimation of daily Brent oil and Natural Gas prices.
Figure 7. (a) Sample of three intraday (1-minute frequency) Natural Gas price curves. (b) All historical intraday (1-minute frequency) Natural Gas price curves.
used to evaluate forecasts of the daily volatility of the Brent oil price return. For the tuning parameters, we consider the quadratic kernel, and the bandwidth is chosen by a cross-validation technique. For the semi-metric, because the curves of natural gas returns are not smooth, we use the PCA-semi-metric, say \(d_{4}^{\text{PCA}}(\cdot,\cdot)\), based on the projection on the four eigenfunctions, \(v_{1}(\cdot),\ldots,v_{4}(\cdot)\), associated with the four largest eigenvalues of the empirical covariance operator of the functional predictor \(X\):
\[d_{4}^{\text{PCA}}\left(X_{t},X_{s}\right)=\sqrt{\sum_{k=1}^{4}\left(\int_{1} ^{1439}\left(X_{t}(m)-X_{s}(m)\right)v_{k}(m)dm\right)^{2}}. \tag{5.1}\]
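A schematic implementation of this semi-metric, together with a kernel-smoothed estimate of the Nadaraya-Watson type built only from the observed responses, is sketched below; it continues the previous snippets. The exact estimators \(m_{n,0}\) and \(U_{n,0}\) used in the paper are the ones defined in the earlier sections; here, for simplicity, the same quadratic kernel \(K(u)\propto(1-u^{2})\) on \([0,1]\) and the same bandwidth are used for both steps, and the bandwidth is set to a quantile of the distances as a placeholder for the cross-validated choice.

```python
import numpy as np

def pca_semi_metric(X, q=4):
    """Pairwise d_q^PCA distances between the intraday curves stored in the rows of X."""
    n, m = X.shape
    cov = X.T @ X / n                        # empirical covariance operator on the time grid
    eigvals, eigvecs = np.linalg.eigh(cov)
    V = eigvecs[:, ::-1][:, :q]              # eigenfunctions of the q largest eigenvalues
    scores = X @ V                           # projections  \int X_t(m) v_k(m) dm  (unit grid step)
    diff = scores[:, None, :] - scores[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1)) # n x n distance matrix

def quadratic_kernel(u):
    return np.where((u >= 0) & (u <= 1), 1.0 - u ** 2, 0.0)

def simplified_estimates(D, Y, delta, h):
    """Kernel regression and conditional-variance estimates using observed pairs only."""
    K = quadratic_kernel(D / h) * delta[None, :]        # weights K(d(x_s, X_t)/h) * delta_t
    m_hat = (K @ (delta * Y)) / np.maximum(K.sum(axis=1), 1e-12)
    resid2 = (Y - m_hat) ** 2
    U_hat = (K @ (delta * resid2)) / np.maximum(K.sum(axis=1), 1e-12)
    return m_hat, U_hat

D = pca_semi_metric(X, q=4)
m_hat, U_hat = simplified_estimates(D, Y, delta, h=np.quantile(D[D > 0], 0.1))
```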
Note that the term "Volatility" refers to a latent variable, which cannot be observed directly but can be approximated from other observable variables. To evaluate the estimation and forecast of the volatility, one considers the so-called _realized volatility_ (see Merton (1980)), computed from the 1-hour frequency Brent oil returns over the same period. The realized volatility is thus considered as an approximation of the true value of the volatility against which the performance of our estimators can be assessed. Given the 1-hour frequency returns, one calculates the realized volatility on a specific day \(t\) as follows:
\[RV_{t}=\sum_{h=1}^{24}r_{t,h}^{2},\]
where \(r_{t,h}\) is the value of the log return of the Brent observed at hour \(h\) on the day \(t\).
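In code, this proxy can be computed as below; the input file of 1-hour Brent closing prices is a hypothetical placeholder, in the same spirit as the earlier snippets.

```python
import numpy as np
import pandas as pd

# Hypothetical file of 1-hour Brent closing prices over the same period.
brent_h = pd.read_csv("brent_1h_close.csv", parse_dates=["timestamp"]).set_index("timestamp")
r_hourly = np.log(brent_h["close"]).diff().dropna()

# RV_t = sum_{h=1}^{24} r_{t,h}^2: square the hourly log returns and sum them within each day.
RV = (r_hourly ** 2).groupby(r_hourly.index.date).sum()
```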
Figure 8 shows the In-Sample and Out-of-Sample (green-shaded area) sets for the realized and estimated volatility of the Brent returns. Figure 8(a) shows that the most volatile period for the Brent is the year 2020. This is due to the combination of two major events, the 2020 Russia-Saudi Arabia conflict and the COVID-19 pandemic, which together led to a significant drop in oil prices: a supply-demand conflict between Russia and Saudi Arabia on one side, and the supply-demand disruption caused by the COVID-19 pandemic on the other, the latter clearly affecting oil supply and demand because of the lockdowns around the world. The confinements and shutdowns of economic activity lowered demand, resulting in the collapse of oil prices. In addition, the OPEC+ members, meeting in Vienna, were unable to reach an agreement to reduce oil production in response to the COVID-19 pandemic, and immediately afterwards Saudi Arabia and Russia began a price war that significantly lowered the price of oil. By contrast, the period of lowest volatility lies between 2021 and 2022, as displayed in Figure 8(a); this is because in January 2021 oil prices started to rise owing to demand outside Europe and supply reductions made by OPEC countries. It can also be noted that volatility increased steeply at the beginning of 2022, reaching a new peak after 2020 and indicating another major high-risk event, namely the war between Russia and Ukraine and its impact on the global economy after the COVID-19 epidemic. The Out-of-Sample set (green-shaded area) shows that the forecast of the realized volatility of the Brent return stabilizes, indicating low volatility levels from July 1, 2022 to February 13, 2023. Figure 8(b) suggests that the estimated volatility of the Brent return fits the true values of volatility well. Figures 8(c) and 8(d) present the estimated volatility of the Brent return at a 12% MAR rate. It is clear that the four graphs retain the same pattern. As a criterion for measuring the estimators' accuracy in estimating (resp. forecasting) the In-Sample (resp. Out-of-Sample) volatility, we calculate the daily absolute error defined as
\[AE_{t}:=|\mathcal{U}_{t}(X_{t})-RV_{t}|,\]
where \(\mathcal{U}_{t}(X_{t})\) denotes the volatility estimation/forecast obtained either with complete data, missing data or imputed data.
Table 3 provides the distribution (through the first, second and third quartiles as well as the mean) of the absolute error for each estimator at a 12% MAR rate. The results show that, for both the In-Sample and the Out-of-Sample sets, the volatility of Brent returns is accurately estimated (resp. forecasted) when the data are complete. Moreover, one observes that the volatility estimation (resp. forecast) obtained after missing data imputation is more accurate than the one obtained with the simplified estimator.
## 6. Discussion
In this paper we introduced nonparametric estimation of regression and variance operators when the data generating process is assumed to be a nonlinear heteroscedastic functional regression model and observations are affected by a missing at random mechanism. The simplified estimator (calculated from observed data only) is then used to impute the missing data in the original process. Finally, a nonparametric imputed estimator is calculated making use of both observed and imputed observations. We investigated several asymptotic properties of both simplified and nonparametric imputed estimators. That includes pointwise and uniform almost sure consistency rate, identification of the asymptotic distribution and estimation of asymptotic confidence intervals. To assess and compare the performance of volatility estimators, we conducted a numerical analysis. First, we assessed the quality of estimators using simulated data. Then, an application to high-frequency
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline Estimators & \multicolumn{4}{l}{In-Sample (IS)\(\times 10^{-4}\)} & \multicolumn{4}{l}{Out-of-Sample (OoS)\(\times 10^{-4}\)} \\ \cline{2-9} & \(Q_{25\%}\) & \(Q_{50\%}\) & \(Q_{75\%}\) & Mean & \(Q_{25\%}\) & \(Q_{50\%}\) & \(Q_{75\%}\) & Mean \\ \hline Complete & 0.0496 & 0.4954 & 7.3966 & 7.5302 & 1.41353 & 6.12136 & 7.75438 & 6.29141 \\ Simplified & 0.0401 & 0.8101 & 7.9516 & 7.5359 & 1.59297 & 6.24067 & 7.87776 & 6.36422 \\ NP Imp. & 0.0416 & 0.7807 & 7.6034 & 7.5697 & 1.27913 & 6.13609 & 7.68484 & 6.30530 \\ \hline \end{tabular}
\end{table}
Table 3. Summary Statistics of the AE obtained for each estimator when MAR=12%.
Figure 8. (a) Realized daily Volatility Brent log-returns. (b) Daily Volatility based on complete data. (c) Daily Volatility based on Simplified estimator. (d) Daily volatility based on nonparametric Imputed estimator. Green-shaded area represents the out-of-sample period.
financial data was discussed. The daily volatility of the Brent Oil price log-return has been estimated and forecasted using the intraday (one-minute) frequency log-return of natural gas as a predictor. The results showed that the nonparametric imputed estimator demonstrates superior performance compared to the simplified estimator.
This work can be extended from different perspectives. First, one can think of reducing the dimensionality of the predictor by using a single functional index model (SFIM) to estimate the volatility. The SFIM has shown its efficiency in improving the consistency of the regression operator estimator; however, to the best of our knowledge, no such result is available for the conditional variance. Moreover, in several situations in finance or economics, the volatility does not depend on only one functional predictor. Other real-valued covariates describing economic or geopolitical circumstances may be useful to explain highly volatile periods. In such a framework, it would be worthwhile to extend the obtained results to the semi-functional partial linear regression model (see Perez and Vieu (2006)) in the case where the errors are not homoscedastic.
## Proof of main results
In order to prove the main results of this paper, we introduce additional notations and necessary Lemmas. Let \(j\in\{1,2\}.\) Let
\[m_{n,0}^{[j]}(x) = \frac{1}{n\mathbb{E}(K_{1}(x))}\sum_{t=1}^{n}Y_{t}^{j-1}\delta_{ t}K_{t}(x),\] \[\overline{m}_{n,0}^{[j]}(x) = \frac{1}{n\mathbb{E}(K_{1}(x))}\sum_{t=1}^{n}\mathbb{E}[Y_{t}^{j -1}\delta_{t}K_{t}(x)|\mathcal{F}_{t-1}], \tag{6.1}\]
where \(K_{t}(x)=K\left(\frac{d_{1}(x,X_{t})}{h_{1}}\right)\),
and
\[V_{n,0}(x)=\frac{1}{n\mathbb{E}\left[K_{1}(x)\right]}\sum_{t=1}^{n}\delta_{t} \sqrt{U(X_{t})}\varepsilon_{t}K_{t}(x). \tag{6.2}\]
**Lemma 6.1** (Laib and Louani (2016)).: _Let \((M_{n})_{n\geq 1}\) be a sequence of martingale differences with respect to the sequence of \(\sigma\)-fields \(\mathcal{F}_{n}=\{\sigma(M_{1},\ldots,M_{n}):n\geq 1\}\), where \(\sigma(M_{1},\ldots,M_{n})\) is the \(\sigma\)-field generated by the random variables \(M_{1},\ldots,M_{n}\). Set \(S_{n}=\sum_{t=1}^{n}M_{t}\). Assume that there exist for any \(t\in\mathbb{N}\) some nonnegative constants \(C\) and \(d_{t}\) such that \(|M_{t}|\leq C\) almost surely, and \(\mathbb{E}(M_{t}^{2}|\mathcal{F}_{t-1})\leq d_{t}^{2}\) almost surely._
_Then, for any \(\varepsilon>0\),_
\[\mathbb{P}(|S_{n}|>\varepsilon)\leq 2\exp\left\{\frac{-\varepsilon^{2}}{(4D_{n}+ 2C\varepsilon)}\right\},\]
_where \(D_{n}=\sum_{t=1}^{n}d_{t}^{2}\)._
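For orientation, in the simplest case of i.i.d. centered variables bounded by \(C\) with common variance \(\sigma^{2}\), one may take \(d_{t}^{2}=\sigma^{2}\), so that \(D_{n}=n\sigma^{2}\) and the bound reduces to the Bernstein-type inequality \(\mathbb{P}(|S_{n}|>\varepsilon)\leq 2\exp\left\{-\varepsilon^{2}/(4n\sigma^{2}+2C\varepsilon)\right\}\). In the proofs below, the lemma is applied to martingale differences built from the kernel weights, with \(D_{n}\) controlled through their conditional second moments.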
**Lemma 6.2** (Laib and Louani (2010)).: _Suppose that assumptions (A1)(i), (A2)(i)-(ii) and (A2)(iv) hold true. For \(k\in\{1,2,3\}\), let \(\mathcal{K}_{t,k}(x)=\mathcal{K}\left(\frac{d_{k}(x,X_{t})}{h_{n,k}}\right)\) and \(M_{j,\mathcal{K},k}=\mathcal{K}^{j}(1)-\int_{0}^{1}(\mathcal{K}^{j})^{\prime} \tau_{0,k}(u)du\), where \(1\leq j\leq 2+\kappa\), with \(\kappa>0\), we have_
\((i)\)**:**:**: \((\phi_{k}(h_{n,k}))^{-1}\mathbb{E}(\mathcal{K}^{j}_{t,k}(x)|\mathcal{F}_{t-1} )=M_{j,\mathcal{K},k}f_{t,1}(x)+\mathcal{O}_{a.s.}\left(\psi_{t,x}(h_{n,k})/ \phi_{k}(h_{n,k})\right)\)_;_
\((ii)\)**:**: \((\phi_{k}(h_{n,k}))^{-1}\mathbb{E}(\mathcal{K}^{j}_{1,k}(x))=M_{j,\mathcal{K},k}f_{1}(x)+o(1)\)_._
**Lemma 6.3**.: _Suppose that assumptions (A1), (A2)(i)-(vi), (A3)(v), and condition (3.6) are satisfied. Then,_
\[\lim_{n\to\infty}\sup_{x\in\mathcal{C}}\left|m_{n,0}^{[1]}(x)-\pi(x)\right|=0 \quad a.s. \tag{6.3}\]
Proof.: Note that
\[\sup_{x\in\mathcal{C}}|m_{n,0}^{[1]}(x)|\leq\sup_{x\in\mathcal{C}}|m_{n,0}^{[1]}(x) -\overline{m}_{n,0}^{[1]}(x)|+\sup_{x\in\mathcal{C}}|\overline{m}_{n,0}^{[1]}(x )|. \tag{6.4}\]
Then, the first term in right-hand side of (6.4) could be decomposed as follows. For \(\eta>0\) and \(B(c_{k},\eta):=\{x\in\mathcal{C}:d_{\mathcal{C}}(x,c_{k})<\eta\}\), we have
\[\sup_{x\in\mathcal{C}}\left|m_{n,0}^{[1]}(x)-\overline{m}_{n,0}^{ [1]}(x)\right| \leq \max_{1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{C}})}\sup_{x\in B (c_{k},\eta)}\left|m_{n,0}^{[1]}(x)-\overline{m}_{n,0}^{[1]}(x)\right|\] \[\leq \max_{1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{C}})}\sup_{x\in B (c_{k},\eta)}\left|m_{n,0}^{[1]}(x)-m_{n,0}^{[1]}(c_{k})\right|+\max_{1\leq k \leq N(\eta,\mathcal{C},d_{\mathcal{C}})}\left|m_{n,0}^{[1]}(c_{k})-\overline {m}_{n,0}^{[1]}(c_{k})\right|\] \[\quad+\max_{1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{C}})}\sup_ {x\in B(c_{k},\eta)}\left|\overline{m}_{n,0}^{[1]}(x)-\overline{m}_{n,0}^{[1]} (c_{k})\right|\] \[=: \mathcal{H}_{1}+\mathcal{H}_{2}+\mathcal{H}_{3}.\]
We start by studying the term \(\mathcal{H}_{1}.\) Observe that, for any \(x\in B(c_{k},\,\eta)\) and making use of the definition of \(m_{n,0}^{[1]}(x)\) in (6.1), one gets
\[m_{n,0}^{[1]}(x)-m_{n,0}^{[1]}(c_{k}) =\frac{1}{n\mathbb{E}(K_{1}(x))\mathbb{E}(K_{1}(c_{k}))}\overset{ n}{\underset{t=1}{\sum}}\delta_{t}\left[K_{t}(x)\mathbb{E}(K_{1}(c_{k}))-K_{t}(c_{k}) \mathbb{E}(K_{1}(x)\right]\] \[=\frac{1}{n\mathbb{E}(K_{1}(x))}\overset{ n}{\underset{t=1}{\sum}}\delta_{t}\left[K_{t}(x)-K_{t}(c_{k})\right]\] \[\quad+\frac{1}{n\mathbb{E}(K_{1}(x))\mathbb{E}(K_{1}(c_{k}))} \overset{ n}{\underset{t=1}{\sum}}\delta_{t}K_{t}(c_{k})\left[\mathbb{E}(K_{1}(c_{k}))- \mathbb{E}(K_{1}(x))\right]\] \[=\mathcal{H}_{1,1}+\mathcal{H}_{1,2}.\]
Using condition (A1)(ii) and the fact that \(|\delta_{t}|<1\) almost surely, one can bound the term \(\mathcal{H}_{1,1}\) from above as follows:
\[|\mathcal{H}_{1,1}| \leq\frac{1}{n\left|\mathbb{E}(K_{1}(x))\right|}\overset{ n}{\underset{t=1}{\sum}}\left|\delta_{t}\right|\left|K_{t}(x)-K_{t}(c_{k})\right|\] \[\leq\frac{1}{n\left|\mathbb{E}(K_{1}(x))\right|}\overset{ n}{\underset{t=1}{\sum}}\left|K_{t}(x)-K_{t}(c_{k})\right|\] \[\leq\frac{1}{n\left|\mathbb{E}(K_{1}(x))\right|}\left(a_{0}n \left|\frac{\eta}{h_{1}}\right|^{\gamma}\right)\] \[\leq\frac{\eta^{\gamma}a_{0}}{h_{1}^{\gamma}|\mathbb{E}(K_{1}(x) )|}.\]
On the other hand, by condition (A1)(iii) and Lemma 6.2, we have \(\frac{K_{t}(x)}{|\mathbb{E}(K_{1}(x))|}\leq\frac{a_{2}}{a_{1}}=:a_{3},\ \forall x\in \mathcal{E}.\) Hence
\[|\mathcal{H}_{1,2}|\leq\frac{1}{n\mathbb{E}(K_{1}(x))}\overset{ n}{\underset{t=1}{\sum}}\left|\delta_{t}\right|\left|\frac{K_{t}(c_{k})}{ \mathbb{E}(K_{1}(c_{k}))}\right|\left|\mathbb{E}(K_{1}(c_{k}))-\mathbb{E}(K_{1 }(x))\right|.\]
Using condition (A1)(ii) and the almost sure boundedness of \(\delta\), we get \(|\mathcal{H}_{1,2}|\leq\frac{a_{3}\eta^{\gamma}a_{0}}{h_{1}^{\gamma}|\mathbb{E} (K_{1}(x))|}.\) Therefore, for any \(1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{C}})\), we have \(|m_{n,0}^{[1]}(x)-m_{n,0}^{[1]}(c_{k})|\leq\frac{(1+a_{3})a_{0}\eta^{\gamma}}{h_ {1}^{\gamma}a_{1}}.\) Then, by taking \(\eta=\eta_{n}=o(h_{1})\), one gets
\[\mathcal{H}_{1}=\max_{1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{C}})}\sup_{x \in B(c_{k},\eta)}\left|m_{n,0}^{[1]}(x)-m_{n,0}^{[1]}(c_{k})\right|=o(1). \tag{6.6}\]
Similar to \(\mathcal{H}_{1}\), one can show that when (A3)(v) holds true, the term \(\mathcal{H}_{3}\) in (6.5) is negligible. That is
\[\mathcal{H}_{3}=\max_{1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{C}})}\sup_{x\in B (c_{k},\eta)}\left|\overline{m}_{n,0}^{[1]}(x)-\overline{m}_{n,0}^{[1]}(c_{k}) \right|=o(1). \tag{6.7}\]
Now we turn our attention to the study of the term \(\mathcal{H}_{2}.\) Note that
\[m_{n,0}^{[1]}(c_{k})-\overline{m}_{n,0}^{[1]}(c_{k})=\frac{1}{n\mathbb{E}(K_{1 }(c_{k}))}\sum_{t=1}^{n}L_{n,t}(c_{k}),\]
where \(L_{n,t}(c_{k})=\delta_{t}K_{t}(c_{k})-\mathbb{E}(\delta_{t}K_{t}(c_{k})| \mathcal{F}_{t-1})\) is a martingale difference with respect to the \(\sigma\)-field \(\mathcal{F}_{t-1}\), for \(t=1,2,\ldots,n.\) Then, we can use Lemma 6.1 to handle the convergence of \(m_{n,0}^{[1]}(c_{k})-\overline{m}_{n,0}^{[1]}(c_{k})\). To be able to use this Lemma, we need first to check its conditions.
First, since \(|\delta_{t}|<1\) almost surely and by using assumptions (A1)(iii), it follows that
\[|L_{n,t}(c_{k})| \leq 2\left|\delta_{t}K_{t}(c_{k})\right|\] \[\leq 2a_{2}=:C.\]
In addition, making use of \(C_{r}\)-inequality, assumptions (A1)(i), (A2)(iv), (A3)(v), Lemma 6.2(i) and the fact that \(f_{t,1}\) is bounded by a deterministic quantity \(b_{t}(x)\), one gets
\[\mathbb{E}(L_{n,t}^{2}(c_{k})|\mathcal{F}_{t-1}) \leq 2\mathbb{E}(\delta_{t}K_{t}^{2}(c_{k})|\mathcal{F}_{t-1})\] \[\leq 2(\pi(x)+o(1))\mathbb{E}(K_{t}^{2}(c_{k})|\mathcal{F}_{t-1})\] \[\leq(2\pi(x)+o(1))\phi_{1}(h_{1})(M_{2,K_{1}b_{t}}(x)+1)=:d_{t}^{ 2}.\]
Then, by assumption (A2)(v), we can write \(n^{-1}D_{n}=(2\pi(x)+o(1))\phi_{1}(h_{1})[M_{2,K,1}D(x)+o_{a.s.}(1)]\), which means that \(D_{n}=\mathcal{O}(n\phi_{1}(h_{1}))\). Thus, by applying Lemma 6.1, we have
\[\mathbb{P}\left(\max_{1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{ C}})}\left|m_{n,0}^{[1]}(c_{k})-\overline{m}_{n,0}^{[1]}(c_{k})\right|> \overline{\lambda}_{n}\right) \leq\sum_{k=1}^{N(\eta,\mathcal{C},d_{\mathcal{C}})}\mathbb{P} \left(\left|m_{n,0}^{[1]}(c_{k})-\overline{m}_{n,0}^{[1]}(c_{k})\right|> \overline{\lambda}_{n}\right)\] \[\leq\sum_{k=1}^{N(\eta,\mathcal{C},d_{\mathcal{C}})}\mathbb{P} \left(\left|\sum_{t=1}^{n}L_{n,t}(c_{k})\right|>\overline{\lambda}_{n}n \mathbb{E}[K_{1}(c_{k})]\right)\] \[\leq 2N(\eta,\mathcal{C},d_{\mathcal{C}})\exp\left(-\frac{( \overline{\lambda}_{n}n\mathbb{E}[K_{1}(c_{k})])^{2}}{4D_{n}+2C\overline{ \lambda}_{n}n\mathbb{E}[K_{1}(c_{k})]}\right)\] \[\leq 2N(\eta,\mathcal{C},d_{\mathcal{C}})\exp\left(-\frac{ \mathcal{O}(n\phi_{1}(h_{1}))^{2}\overline{\lambda}_{n}^{2}}{\mathcal{O}(n \phi_{1}(h_{1}))+\overline{\lambda}_{n}\mathcal{O}(n\phi_{1}(h_{1}))}\right)\] \[\leq 2N(\eta,\mathcal{C},d_{\mathcal{C}})\exp\left(-\frac{\mathcal{ O}(n\phi_{1}(h_{1}))\overline{\lambda}_{n}^{2}}{1+\overline{\lambda}_{n}}\right)\] \[\leq 2\exp\left(-\mathcal{O}(n\phi_{1}(h_{1}))\overline{\lambda}_{ n}^{2}\left[1-\frac{\log N(\eta,\mathcal{C},d_{\mathcal{C}})}{\mathcal{O}(n \phi_{1}(h_{1}))\overline{\lambda}_{n}^{2}}\right]\right),\]
where \(\overline{\lambda}_{n}=\lambda_{n}\ell_{n}^{-1}\), \(\lambda_{n}\) and \(\ell_{n}\) are defined in Theorem 3.1. Hence, using condition (3.6), one gets
\[\sum_{n\geq 1}\mathbb{P}\left(\max_{1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{ C}})}\left|m_{n,0}^{[1]}(c_{k})-\overline{m}_{n,0}^{[1]}(c_{k})\right|> \overline{\lambda}_{n}\right)<\infty.\]
Then, using Borel-Cantelli Lemma we have
\[\max_{1\leq k\leq N(\eta,\mathcal{C},d_{\mathcal{C}})}\left|m_{n,0}^{[1]}(c_{k} )-\overline{m}_{n,0}^{[1]}(c_{k})\right|=\mathcal{O}_{a.s.}(\overline{\lambda}_ {n})=o_{a.s.}(1). \tag{6.8}\]
Combining (6.5) with (6.6), (6.7) and (6.8), one deduces that the first term in the right-hand side of inequality (6.4) converges to zero as \(n\) goes to infinity. Regarding the second term \(\sup_{x\in\mathcal{C}}|\overline{m}_{n,0}^{[1]}(x)|\)
note that assumption(A3)(v), along with a double conditioning with respect to \(\mathcal{G}_{t-1}\), allows to obtain
\[\overline{m}_{n,0}^{[1]}(x)-\pi(x) =\frac{1}{n\mathbb{E}(K_{1}(x))}\sum_{t=1}^{n}\mathbb{E}\left\{ \mathbb{E}\left[\delta_{t}K_{t}(x)\mid\mathcal{G}_{t-1}\right]\mid\mathcal{F}_ {t-1}\right\}-\pi(x)\] \[=\frac{1}{n\mathbb{E}(K_{1}(x))}\sum_{t=1}^{n}\mathbb{E}[(\pi(x) +o(1))K_{t}(x)\mid\mathcal{F}_{t-1}]-\pi(x)\] \[=(\pi(x)+o(1))\frac{1}{n\mathbb{E}(K_{1}(x))}\sum_{t=1}^{n} \mathbb{E}\left(K_{t}(x)\mid\mathcal{F}_{t-1}\right)-\pi(x)\] \[=\pi(x)\left[\frac{1}{n\mathbb{E}(K_{1}(x))}\sum_{t=1}^{n} \mathbb{E}\left(K_{t}(x)\mid\mathcal{F}_{t-1}\right)-1\right]+o(1),\]
where \(o(1)\) is uniformly in \(x\). Finally, Lemma 7 in Laib and Louani (2011) allows to conclude the proof of this Lemma.
**Lemma 6.4**.: _Suppose that assumptions (A1), (A2)(i)-(ii),(iv)-(vi), (A3)(i),(v), and conditions (3.5) and (3.6) are satisfied. Then, we have_
\[\sup_{x\in\mathcal{C}}|V_{n,0}(x)|=\mathcal{O}_{a.s.}\left(\lambda_{n}\right) \ \text{as}\ \ \ n\to\infty.\]
Proof.: Let
\[V_{n,0}(x)=\frac{1}{n\mathbb{E}\left[K_{1}(x)\right]}\sum_{t=1}^{n}\mathcal{L }_{t}K_{t}(x)=\left(V_{n,0}(x)-V_{n,0}^{\top}(x)\right)+\widetilde{V}_{n,0}(x) +V_{n,0}^{-}(x), \tag{6.9}\]
where
\[V_{n,0}^{\top}(x) =\frac{1}{n\mathbb{E}\left[K_{1}(x)\right]}\sum_{t=1}^{n} \mathcal{L}_{t}\text{1}_{(|\mathcal{L}_{t}|\leq\ell_{n})}K_{t}(x),\quad \widetilde{V}_{n,0}(x)=\frac{1}{n\mathbb{E}\left[K_{1}(x)\right]}\sum_{t=1}^{ n}\mathbb{E}(\mathcal{L}_{t}\text{1}_{(|\mathcal{L}_{t}|\leq\ell_{n})}\mid \mathcal{F}_{t-1})K_{t}(x),\] \[V_{n,0}^{-}(x) =\frac{1}{n\mathbb{E}\left[K_{1}(x)\right]}\sum_{t=1}^{n}\left( \mathcal{L}_{t}\text{1}_{(|\mathcal{L}_{t}|\leq\ell_{n})}-\mathbb{E}( \mathcal{L}_{t}\text{1}_{(|\mathcal{L}_{t}|\leq\ell_{n})}\mid\mathcal{F}_{t-1 })\right)K_{t}(x),\]
such that \(\mathcal{L}_{t}=\delta_{t}\sqrt{U(X_{t})}\varepsilon_{t}\) and \(\ell_{n}\) is as defined in Theorem 3.1.
Lemma 6.8 allows to conclude that the first term in (6.9) equals zero almost surely as \(n\to\infty.\) Moreover, making use of Lemma 6.5 and Lemma 6.6 below, the second term is \(\mathcal{O}\left(\{\ell_{n}^{\rho-1}\phi_{1}(h_{1})^{\rho-1/\rho}\}^{-1}\right)\) and the third term is \(\mathcal{O}_{a.s.}\left(\lambda_{n}\right),\) respectively. Finally, since \((\lambda_{n}\ell_{n}^{\rho-1}\phi_{1}(h_{1})^{\rho-1/\rho})^{-1}=o(1)\) for n large enough, the proof of this lemma is achieved.
**Lemma 6.5**.: _Suppose that assumptions (A1)(i), (A2)(i)-(ii),(iv),(vi), (A3)(i),(v) and condition (3.5) are satisfied. Then_
\[\sup_{x\in\mathcal{C}}|\widetilde{V}_{n,0}(x)|=\mathcal{O}\left(\{\ell_{n}^{ \rho-1}\phi_{1}(h_{1})^{\rho-1/\rho}\}^{-1}\right).\]
The proof of this Lemma is detailed in the Appendix.
**Lemma 6.6**.: _Under assumptions (A1), (A2)(i)-(ii),(iv)-(vi), (A3)(i),(v) and condition (3.6), we have, as \(n\to\infty,\)_
\[\sup_{x\in\mathcal{C}}|V_{n,0}^{-}(x)|=\mathcal{O}_{a.s.}(\lambda_{n}).\]
Proof.: The proof is similar to the proof of Lemma C in Chaouch (2019).
**Proof of Theorem 3.1.** By Model (2.1), we have
\[m_{n,0}(x)-m(x)=\frac{1}{m_{n,0}^{[1]}}\left[\frac{1}{n\mathbb{E}(K_{1}(x))} \sum_{t=1}^{n}\delta_{t}\{m(X_{t})-m(x)+\sqrt{U(X_{t})}\varepsilon_{t}\}K_{t}( x)\right].\]
Then,
\[\sup_{x\in\mathcal{C}}\left|m_{n,0}(x)-m(x)\right|\leq\sup_{u\in B(x,h_{1})}\left|m (u)-m(x)\right|+\left\{\underset{x\in\mathcal{C}}{\inf}m_{n,0}^{[1]}(x)\right\}^ {-1}\times\underset{x\in\mathcal{C}}{\sup}|V_{n,0}(x)|. \tag{6.10}\]
Further, observe that
\[\underset{x\in\mathcal{C}}{\inf}\left|m_{n,0}^{[1]}(x)\right|>\underset{x\in \mathcal{C}}{\inf}\left|\pi(x)\right|-\underset{x\in\mathcal{C}}{\sup}\left|m_ {n,0}^{[1]}(x)-\pi(x)\right|.\]
Thus, making use of Lemma 6.3 and assumption (A2)(vii), we get
\[\underset{x\in\mathcal{C}}{\inf}\left|m_{n,0}^{[1]}(x)\right|>\theta_{1}\;\; a.s., \tag{6.11}\]
where \(\theta_{1}\) is defined in assumption (A2)(vii).
Finally, using Lemma 6.4, combined with assumption (A4)(i), equation (6.10) and equation (6.11), one concludes the proof of this theorem.
**Proof of Theorem 3.2.** Let
\[U_{n,0}(x)-U(x)=\left(\vartheta_{n,0}^{[1]}(x)+\vartheta_{n,0}^{[2]}(x)+ \vartheta_{n,0}^{[3]}(x)+\vartheta_{n,0}^{[4]}(x)\right)/U_{n,0}^{[1]}(x),\]
where
\[\vartheta_{n,0}^{[1]}(x) = \frac{1}{n\mathbb{E}\left[W_{1}(x)\right]}\sum_{t=1}^{n}\delta_{t}\left(m\left(X_{t}\right)-m_{n,0}\left(X_{t}\right)\right)^{2}W_{t}(x), \tag{6.12}\] \[\vartheta_{n,0}^{[2]}(x) = \frac{2}{n\mathbb{E}\left[W_{1}(x)\right]}\sum_{t=1}^{n}\delta_{t}\left(m\left(X_{t}\right)-m_{n,0}\left(X_{t}\right)\right)\sqrt{U\left(X_{t}\right)}\varepsilon_{t}W_{t}(x), \tag{6.13}\] \[\vartheta_{n,0}^{[3]}(x) = \frac{1}{n\mathbb{E}\left[W_{1}(x)\right]}\sum_{t=1}^{n}\delta_{t}U\left(X_{t}\right)W_{t}(x)\left(\varepsilon_{t}^{2}-1\right), \tag{6.14}\] \[\vartheta_{n,0}^{[4]}(x) = \frac{1}{n\mathbb{E}\left[W_{1}(x)\right]}\sum_{t=1}^{n}\delta_{t}\left(U\left(X_{t}\right)-U(x)\right)W_{t}(x),\] \[U_{n,0}^{[1]}(x) = \frac{1}{n\mathbb{E}\left[W_{1}(x)\right]}\sum_{t=1}^{n}\delta_{t}W_{t}(x),\]
with \(W_{t}(x)=W\left(\frac{d_{2}(x,X)}{h_{2}}\right).\) Our main objective is to establish the uniform consistency rate, with respect to \(x,\) of \(U_{n,0}(x).\) For this purpose, let us consider
\[\underset{x\in\mathcal{C}}{\sup}\left|U_{n,0}(x)-U(x)\right|\leq\left\{ \underset{x\in\mathcal{C}}{\sup}|\vartheta_{n,0}^{[1]}(x)|+\underset{x\in \mathcal{C}}{\sup}|\vartheta_{n,0}^{[2]}(x)|+\underset{x\in\mathcal{C}}{\sup} |\vartheta_{n,0}^{[3]}(x)|+\underset{x\in\mathcal{C}}{\sup}|\vartheta_{n,0}^{[ 4]}(x)|\right\}\times\left\{\underset{x\in\mathcal{C}}{\inf}|U_{n,0}^{[1]}(x)| \right\}^{-1}. \tag{6.15}\]
Note that \(U_{n,0}^{[1]}(x)\) has similar form as \(m_{n,0}^{[1]}(x)\) when \(K\) is replaced by \(W\). Thus, similar to the proof of (6.3) and (6.11), one can show that, under assumptions (A1), (A2), (A3)(v) and condition (3.8), one gets
\[\underset{n\rightarrow\infty}{\lim}\underset{x\in\mathcal{C}}{\sup}\left|U_{n, 0}^{[1]}(x)-\pi(x)\right|=0\quad a.s., \tag{6.16}\]
and
\[\underset{x\in\mathcal{C}}{\inf}\left|U_{n,0}^{[1]}(x)\right|>\theta_{1}\quad a.s. \tag{6.17}\]
_Study of the term \(\vartheta_{n,0}^{[1]}(x)\)._ We have
\[\left|\vartheta_{n,0}^{[1]}(x)\right|\leq\frac{1}{n\mathbb{E}\left[W_{1}(x) \right]}\sum_{t=1}^{n}\delta_{t}\left|m\left(X_{t}\right)-m_{n,0}\left(X_{t} \right)\right|^{2}W_{t}(x)\leq\underset{x\in\mathcal{C}}{\sup}\left|m_{n,0}(x )-m(x)\right|^{2}\times\frac{1}{n\mathbb{E}\left[W_{1}(x)\right]}\sum_{t=1}^{n }\delta_{t}W_{t}(x).\]
Then, in view of Theorem 3.1 and equation (6.16), we get
\[\sup_{x\in\mathcal{C}}\left|\vartheta_{n,0}^{[1]}(x)\right|=\mathcal{O}_{a.s.}(h_ {1}^{2\alpha})+\mathcal{O}_{a.s.}\left(\lambda_{n}^{2}\right). \tag{6.18}\]
_Study of the term \(\vartheta_{n,0}^{[2]}(x).\)_ Observe that
\[\vartheta_{n,0}^{[2]}(x)\leq 2\sup_{x\in\mathcal{C}}\left|m_{n,0}(x)-m(x) \right|\times\overline{V}_{n,0}(x),\]
where
\[\overline{V}_{n,0}(x)=\frac{1}{n\mathbb{E}\left[W_{1}(x)\right]}\sum_{t=1}^{n }\delta_{t}\sqrt{U(X_{t})}\varepsilon_{t}W_{t}(x).\]
In addition, note that \(\overline{V}_{n,0}(x)\) has similar form as \(V_{n,0}(x)\) when \(K\) is replaced by \(W\). Therefore, by Lemma 6.4, it follows that
\[\sup_{x\in\mathcal{C}}\left|\overline{V}_{n,0}(x)\right|=\mathcal{O}_{a.s.} \left(\lambda_{n}^{\prime}\right), \tag{6.19}\]
where \(\lambda_{n}^{\prime}\) is defined in Theorem 3.2.
Making use of Theorem 3.1 and equation (6.19), we find
\[\sup_{x\in\mathcal{C}}\left|\vartheta_{n,0}^{[2]}(x)\right|=\mathcal{O}_{a.s. }\left\{\left(h_{1}^{\alpha}+\lambda_{n}\right)\lambda_{n}^{\prime}\right\}. \tag{6.20}\]
_Study of the term \(\vartheta_{n,0}^{[3]}(x).\)_ Observe that \(\vartheta_{n,0}^{[3]}(x)\) has the same form as \(\overline{V}_{n,0}(x)\) when \(\varepsilon\) and \(\sqrt{U}\) are replaced by \(\varepsilon^{2}-1\) and \(U,\) respectively. Similar to the proof of Lemma 6.4, under assumptions (A1), (A2)(i)-(ii), (iv)-(vi), (A3)(iv)-(v) and condition (3.8), we obtain
\[\sup_{x\in\mathcal{C}}\left|\vartheta_{n,0}^{[3]}(x)\right|=\mathcal{O}_{a.s. }\left(\lambda_{n}^{\prime}\right). \tag{6.21}\]
_Study of the term \(\vartheta_{n,0}^{[4]}(x).\)_ One can easily show, using assumption (A4)(ii) and equation (6.16), that
\[\sup_{x\in\mathcal{C}}\left|\vartheta_{n,0}^{[4]}(x)\right|=\mathcal{O}_{a.s. }\left(h_{2}^{\beta}\right). \tag{6.22}\]
Finally, using equation (6.15) combined with the results obtained in (6.17)-(6.18) and (6.20)-(6.22), we conclude the proof of the theorem. The following Lemma gives the asymptotic normality of \(\vartheta_{n,0}^{[3]}(x)\) which is needed to prove Theorem 3.3.
**Lemma 6.7**.: _Suppose that assumptions (A1)(i), (A2)(i)-(iv), (vi), (A3)(ii)-(iii), (v), (A4)(ii)-(iii) and condition (3.7) are satisfied. Then, as \(n\rightarrow\infty\), we have,_
\[\sqrt{n\phi_{2}(h_{2})}\vartheta_{n,0}^{[3]}(x)\overset{\mathcal{D}}{ \longrightarrow}\mathcal{N}\left(0,\sigma^{{}^{\prime}2}(x)\right).\]
_where \(\sigma^{{}^{\prime}2}(x)\) is defined in Theorem 3.6._
Proof.: Define \(\xi_{n,t}=\left\{\sqrt{\phi_{2}\left(h_{2}\right)/n}\right\}\left[\delta_{t} U\left(X_{t}\right)W_{t}(x)\left(\varepsilon_{t}^{2}-1\right)/\mathbb{E} \left\{W_{1}(x)\right\}\right],\)\(t=1,...,n.\) Further, observe that
\[\sqrt{n\phi_{2}\left(h_{2}\right)}\vartheta_{n,0}^{[3]}(x)=\sum_{i=1}^{n} \xi_{n,t}, \tag{6.23}\]
where, for any \(x\in\mathcal{E},\) the summands in (6.23) form a triangular array of stationary martingale differences with respect to the \(\sigma\)-field \(\mathcal{F}_{t-1}\). Then, similar to the proof of Lemma 4 in Laib and Louani (2010), we apply the Central Limit Theorem for discrete-time arrays of real-valued martingales to provide the asymptotic normality of \(\vartheta_{n,0}^{[3]}(x)\) (see Hall and Heyde (1980)). For that, we have to establish the following statements:
1. \(\lim_{n\rightarrow\infty}\sum_{t=1}^{n}\mathbb{E}\left(\xi_{n,t}^{2}\mid \mathcal{F}_{t-1}\right)\overset{\mathbb{P}}{=}\sigma^{{}^{\prime}2}(x),\)
2. \(n\mathbb{E}\left\{\xi_{n,t}^{2}\text{1}_{(|\xi_{n,t}|>\zeta)}\right\}=o(1)\) holds for any \(\zeta>0.\)
**Proof of part (i).** Notice that
\[\sum_{t=1}^{n}\mathbb{E}\left(\xi_{n,t}^{2}\mid\mathcal{F}_{t-1}\right)= \frac{\phi_{2}\left(h_{2}\right)}{n\left[\mathbb{E}\left(W_{1}(x) \right)\right]^{2}}\sum_{t=1}^{n}\mathbb{E}\left[\delta_{t}\left\{U\left(X_{t} \right)-U(x)\right\}^{2}W_{t}^{2}(x)\left(\varepsilon_{t}^{2}-1\right)^{2} \mid\mathcal{F}_{t-1}\right]\] \[+\frac{\phi_{2}\left(h_{2}\right)}{n\left[\mathbb{E}\left(W_{1}(x )\right)\right]^{2}}\sum_{t=1}^{n}\mathbb{E}\left[\delta_{t}U^{2}(x)W_{t}^{2}( x)\left(\varepsilon_{t}^{2}-1\right)^{2}\mid\mathcal{F}_{t-1}\right]\] \[+\frac{2\phi_{2}\left(h_{2}\right)}{n\left[\mathbb{E}\left(W_{1}( x)\right)\right]^{2}}\sum_{t=1}^{n}\mathbb{E}\left[\delta_{t}\left\{U\left(X_{t} \right)-U(x)\right\}U(x)W_{t}^{2}(x)\left(\varepsilon_{t}^{2}-1\right)^{2} \mid\mathcal{F}_{t-1}\right]\] \[\equiv\Upsilon_{n,1}+\Upsilon_{n,2}+\Upsilon_{n,3}.\]
_Regarding the term \(\Upsilon_{n,1}.\)_ Due to (2.6), \(\delta_{t}\) and \(\varepsilon_{t}\) are independent. Then, by using Lemma 6.2 and assumptions (A1)(i), (A2)(iii)-(iv), (vi), (A3)(ii), (v), and (A4)(ii), we can show that:
\[\left|\Upsilon_{n,1}\right| =\mathcal{O}_{a.s.}(h_{2}^{2\beta})\times\left\{\frac{\phi_{2} \left(h_{2}\right)}{n\left[\mathbb{E}\left\{W_{1}(x)\right\}\right]^{2}}\sum_ {t=1}^{n}\mathbb{E}\left[\mathbb{E}\left\{\delta_{t}W_{t}^{2}(x)\left( \varepsilon_{t}^{2}-1\right)^{2}\mid\mathcal{G}_{t-1}\right\}\mid\mathcal{F}_ {t-1}\right]\right\}\] \[\leq\mathcal{O}_{a.s.}(h_{2}^{2\beta})\times\left\{\pi(x)+\sup_{u \in B(x,h)}\left|\pi(u)-\pi(x)\right|\right\}\left\{\frac{M_{2,W,2}}{M_{1,W,2} ^{2}}\frac{1}{f_{1}(x)}+o_{a.s.}(1)\right\}\] \[\times\left\{\omega(x)+\sup_{u\in B(x,h)}\left|\omega(u)-\omega(x )\right|\right\}\longrightarrow 0\,\,\text{as}\,\,\,n\rightarrow\infty.\]
_Regarding the term \(\Upsilon_{n,2}.\)_ By considering the independence of \(\delta\) and \(\varepsilon_{t},\) using Lemma 6.2 and assumptions (A1)(i), (A2)(iii)-(iv),(vi), (A3)(ii),(v), we obtain
\[\left|\Upsilon_{n,2}\right| =U^{2}(x)\times\left\{\frac{\phi_{2}\left(h_{2}\right)}{n\left[ \mathbb{E}\left\{W_{1}(x)\right\}\right]^{2}}\sum_{t=1}^{n}\mathbb{E}\left[ \mathbb{E}\left\{\delta_{t}W_{t}^{2}(x)\left(\varepsilon_{t}^{2}-1\right)^{2} \mid\mathcal{G}_{t-1}\right\}\mid\mathcal{F}_{t-1}\right]\right\}\] \[\leq U^{2}(x)\times\left\{\pi(x)+\sup_{u\in B(x,h)}\left|\pi(u)- \pi(x)\right|\right\}\left\{\frac{M_{2,W,2}}{M_{1,W,2}^{2}}\frac{1}{f_{1}(x)} +o_{a.s.}(1)\right\}\left\{\omega(x)+\sup_{u\in B(x,h)}\left|\omega(u)-\omega(x )\right|\right\}\] \[\longrightarrow U^{2}(x)\pi(x)\left\{\frac{M_{2,W,2}}{M_{1,W,2}^{ 2}}\frac{1}{f_{1}(x)}\right\}\omega(x)\,\,\text{as}\,\,\,n\rightarrow\infty.\]
_Regarding the term \(\Upsilon_{n,3}.\)_ Similarly as for \(\Upsilon_{n,1},\) using assumptions (A1)(i), (A2)(iii)-(iv),(vi), (A3)(ii),(v), and (A4)(ii) together with Lemma 6.2, we get
\[\left|\Upsilon_{n,3}\right| =\mathcal{O}_{a.s.}(h_{2}^{\beta})\times U(x)\left\{\frac{\phi_{ 2}\left(h_{2}\right)}{n\left[\mathbb{E}\left\{W_{1}(x)\right\}\right]^{2}}\sum_ {t=1}^{n}\mathbb{E}\left[\mathbb{E}\left\{\delta_{t}W_{t}^{2}(x)\left( \varepsilon_{t}^{2}-1\right)^{2}\mid\mathcal{G}_{t-1}\right\}\mid\mathcal{F}_ {t-1}\right]\right\}\] \[\leq\mathcal{O}_{a.s.}(h_{2}^{\beta})\times U(x)\left\{\pi(x)+ \sup_{u\in B(x,h)}\left|\pi(u)-\pi(x)\right|\right\}\left\{\frac{M_{2,W,2}}{M_ {1,W,2}^{2}}\frac{1}{f_{1}(x)}+o_{a.s.}(1)\right\}\] \[\times\left\{\omega(x)+\sup_{u\in B(x,h)}\left|\omega(u)-\omega(x )\right|\right\}\longrightarrow 0\,\,\text{as}\,\,\,n\rightarrow\infty.\]
Finally, we get
\[\lim_{n\longrightarrow\infty}\sum_{t=1}^{n}\mathbb{E}\left(\xi_{n,t}^{2}\mid \mathcal{F}_{t-1}\right)=\lim_{n\longrightarrow\infty}\left(\Upsilon_{n,1}+ \Upsilon_{n,2}+\Upsilon_{n,3}\right)=\frac{M_{2,W,2}U^{2}(x)\pi(x)\omega(x)}{M_ {1,W,2}^{2}f_{1}(x)}=\sigma^{{}^{\prime}2}(x)\,\,\,\text{a.s.},\]
whenever \(f_{1}(x)>0.\)
**Proof of part (ii).** Consider \(a>1\) and \(b>1\) such that \(\frac{1}{a}+\frac{1}{b}=1,\) by using Holder's and Markov's inequalities one can have, for all \(\zeta>0,\)
\[\mathbb{E}\left\{\xi_{n,t}^{2}\text{1}_{(|\xi_{n,t}|>\zeta)}\right\}\leq\frac{ \mathbb{E}|\xi_{n,t}|^{2a}}{\zeta^{2a/b}}.\]
Taking \(2a=\kappa+2\) (where \(\kappa\) is given in assumption (A3)(iii)) and let \(C_{0}\) be a positive constant. Under assumptions (A1)(i), (A2)(iii)-(iv),(vi), (A3)(iii), (v), (A4)(iii) and by using Lemma 6.2, we
obtain
\[n\mathbb{E}\left\{\xi_{n,t}^{2}\text{1l}_{(|\xi_{n,t}|>\zeta)}\right\} \leq C_{0}\left\{\phi_{2}\left(h_{2}\right)/n\right\}^{(2+\kappa)/ 2}\times\frac{n}{\left[\mathbb{E}\left(W_{1}(x)\right)\right]^{2+\kappa} \mathbb{E}}\left\{\delta_{t}U^{2+\kappa}\left(X_{t}\right)W_{t}^{2+\kappa}(x) \left(\varepsilon_{t}^{2}-1\right)^{2+\kappa}\right\}\] \[\leq C_{0}\left\{n\phi_{2}\left(h_{2}\right)\right\}^{-\kappa/2} \frac{M_{2+\kappa,W,2}f_{1}(x)+o(1)}{M_{1,W,2}^{2+\kappa}f_{1}^{2+\kappa}(x)+o (1)}\left\{U^{2+\kappa}(x)+o(1)\right\}\left\{\pi(x)+o(1)\right\}\] \[=\mathcal{O}\left[\left\{n\phi_{2}\left(h_{2}\right)\right\}^{- \kappa/2}\right].\]
Finally, since \(n\phi_{2}(h_{2})\longrightarrow\infty\) as \(n\longrightarrow\infty\), we get
\[n\mathbb{E}\left\{\xi_{n,t}^{2}\text{1l}_{(|\xi_{n,t}|>\zeta)}\right\}=o(1).\]
Therefore, Lemma 6.7 holds.
**Proof of Theorem 3.3.** Note that statements in (6.18), (6.20) and (6.22), allow to say that \(\vartheta_{n,0}^{[1]}(x)\), \(\vartheta_{n,0}^{[2]}(x)\), \(\vartheta_{n,0}^{[4]}(x)\) are negligible as \(n\) goes to \(\infty\). Moreover, by using result given in (6.16), we can conclude that \(U_{n,0}^{[1]}(x)\) converges almost surely to \(\pi(x)\) as \(n\rightarrow\infty\). Hence, the asymptotic normality of \(U_{n,0}(x)\) is achieved by the application of Lemma 6.7 along with Slutsky's theorem.
**Proof of Corollary 3.1.** Let us consider
\[\frac{\widehat{M}_{1,W,2}}{\sqrt{\widehat{M}_{2,W,2}}}\sqrt{\frac {n\widehat{F}_{x,2}(h)\pi_{n}(x)}{\omega_{n}(x)U_{n,0}^{2}(x)}}(U_{n,0}(x)-U(x)) \\ =\frac{\widehat{M}_{1,W,2}\sqrt{M_{2,W,2}}}{\sqrt{\widehat{M}_{2,W,2}}M_{1,W,2}}\sqrt{\frac{n\widehat{F}_{x,2}(h)\pi_{n}(x)U^{2}(x)\omega(x)}{ \omega_{n}(x)\pi(x)U_{n,0}^{2}(x)n\phi_{2}(h_{2})f_{1}(x)}}\frac{M_{1,W,2}}{ \sqrt{M_{2,W,2}}}\sqrt{\frac{n\phi_{2}(h_{2})f_{1}(x)\pi(x)}{U^{2}(x)\omega(x) }}(U_{n,0}(x)-U(x)).\]
By Theorem 3.3, we have
\[\frac{M_{1,W,2}}{\sqrt{M_{2,W,2}}}\sqrt{\frac{n\phi_{2}(h_{2})f_{1}(x)\pi(x)} {U^{2}(x)\omega(x)}}(U_{n,0}(x)-U(x))\stackrel{{\mathcal{D}}}{{ \longrightarrow}}\mathcal{N}\left(0,1\right).\]
Then, by (A1)(i), (A2)(i),(iv), and following the same steps as the proof of Corollary 1 in Laib and Louani (2010), we get
\[\widehat{M}_{1,W,2}\stackrel{{\mathbb{P}}}{{\rightarrow}}M_{1,W,2},\quad\widehat{M}_{2,W,2}\stackrel{{\mathbb{P}}}{{\rightarrow}}M_ {2,W,2},\quad\frac{\widehat{F}_{x,2}(h)}{\phi_{2}(h_{2})f_{1}(x)}\stackrel{{ \mathbb{P}}}{{\rightarrow}}1\quad\text{as}\quad n\rightarrow\infty. \tag{6.24}\]
In addition, from Theorem 3.2, we have \(U_{n,0}(x)\stackrel{{\mathbb{P}}}{{\rightarrow}}U(x)\) and by equation (5.25) in Ling et al. (2015)(p. 86), we have
\[\pi_{n}(x)\stackrel{{\mathbb{P}}}{{\rightarrow}}\pi(x)\quad \text{as}\quad n\rightarrow\infty. \tag{6.25}\]
Then, it remains to prove that
\[\omega_{n}(x)\stackrel{{\mathbb{P}}}{{\rightarrow}}\omega(x) \quad\text{as}\quad n\rightarrow\infty. \tag{6.26}\]
Observe that
\[\omega_{n}(x) = \frac{1}{G_{n}(x)}\frac{1}{n\mathbb{E}(H_{1}(x))}\sum_{t=1}^{n}\left(\frac{(Y_{t}-m_{n,0}(X_{t}))^{2}-U_{n,0}(X_{t})}{U_{n,0}(X_{t})}\right)^{2}H_{t}(x),\]
where \(G_{n}(x)=\left(n\mathbb{E}(H_{1}(x))\right)^{-1}\sum_{t=1}^{n}H_{t}(x)\).
Making use of Lemma 3 in Laib and Louani (2011), we get
\[G_{n}(x)\stackrel{{\mathbb{P}}}{{\rightarrow}}1\quad\text{as} \quad n\rightarrow\infty. \tag{6.27}\]
Further, let \(\mathfrak{L}_{t}\) be defined as
\[\mathfrak{L}_{t}=\left(\frac{(Y_{t}-m_{n,0}(X_{t}))^{2}-U_{n,0}(X_{t})}{U_{n,0}(X _{t})}\right)^{2}H_{t}(x),\quad\text{for}\quad t\in\{1,\ldots,n\}.\]
Then
\[\frac{1}{n\mathbb{E}(H_{1}(x))}\sum_{t=1}^{n}\mathfrak{L}_{t}=\mathfrak{T}_{1,n}(x)+\mathfrak{T}_{2,n}(x),\]
where
\[\mathfrak{T}_{1,n}(x)=\frac{1}{n\mathbb{E}(H_{1}(x))}\sum_{t=1}^{n}\left\{\mathfrak{L}_{t}-\mathbb{E}(\mathfrak{L}_{t}|\mathcal{F}_{t-1})\right\},\quad\mathfrak{T}_{2,n}(x)=\frac{1}{n\mathbb{E}(H_{1}(x))}\sum_{t=1}^{n}\mathbb{E}(\mathfrak{L}_{t}|\mathcal{F}_{t-1}).\]
Let us turn our attention to the study of the second term \(\mathfrak{T}_{2,n}(x)\). By employing a double conditioning with respect to the \(\sigma\)-field \(\mathcal{G}_{t-1}\) and the \(C_{r}\)-inequality, we derive
\[\mathfrak{T}_{2,n}(x)=\frac{1}{n\mathbb{E}(H_{1}(x))}\sum_{t=1}^{n}\mathbb{E}\left(H_{t}(x)\,\mathbb{E}\left(\left\{\frac{(Y_{t}-m_{n,0}(X_{t}))^{2}-U_{n,0}(X_{t})}{U_{n,0}(X_{t})}\right\}^{2}\left|\mathcal{G}_{t-1}\right)\right|\mathcal{F}_{t-1}\right). \tag{6.28}\]
Now, it is worth noting that Equation (2.1) provides:
\[\mathbb{E}\left[\left\{\frac{(Y_{t}-m_{n,0}(X_{t}))^{2}-U_{n,0}(X_{t})}{U_{n, 0}(X_{t})}\right\}^{2}\left|\mathcal{G}_{t-1}\right]=\mathbb{E}\left[( \mathcal{A}+\mathcal{B})^{2}\left|\mathcal{G}_{t-1}\right],\]
where \(\mathcal{A}=\frac{\{m(X_{t})-m_{n,0}(X_{t})\}^{2}+\{U(X_{t})-U_{n,0}(X_{t})\} \varepsilon_{t}^{2}+2\{m(X_{t})-m_{n,0}(X_{t})\}\sqrt{U(X_{t})}\varepsilon_{t}} {U_{n,0}(X_{t})}\), and \(\mathcal{B}=\varepsilon_{t}^{2}-1\)
Using assumption (A3)(i), the first part of condition (2.2), and Theorems 3.1 and 3.2, we have
\[\mathbb{E}\left(\mathcal{A}^{2}|\mathcal{G}_{t-1}\right)=o_{\mathbb{P}}(1).\]
Similarly, condition (2.2), assumption (A3)(i), and Theorems 3.1 and 3.2 imply
\[\mathbb{E}(\mathcal{A}\mathcal{B}|\mathcal{G}_{t-1})=o_{\mathbb{P}}(1).\]
Finally, assumption (A3)(ii) ensures that, as \(n\to\infty\), we get
\[\mathbb{E}\left[\left\{\frac{(Y_{t}-m_{n,0}(X_{t}))^{2}-U_{n,0}(X_{t})}{U_{n,0} (X_{t})}\right\}^{2}\left|\mathcal{G}_{t-1}\right]=\omega(x)+o(1). \tag{6.29}\]
From (6.28) with (6.29), we obtain that, as \(n\to\infty\),
\[\mathfrak{T}_{2,n}(x)=(\omega(x)+o(1))\frac{1}{n\mathbb{E}(H_{1}(x))}\sum_{t=1 }^{n}\mathbb{E}\left(H_{t}(x)|\mathcal{F}_{t-1}\right).\]
Then, by using Lemma 7 in Laib and Louani (2011), under assumptions (A1)(i) and (A2)(i)-(vi), it follows that
\[\frac{1}{n\mathbb{E}(H_{1}(x))}\sum_{t=1}^{n}\mathbb{E}\left(H_{t}(x)| \mathcal{F}_{t-1}\right)\longrightarrow 1\text{ \ a.s. \ as \ \ }n\to\infty.\]
Hence, we obtain
\[\mathfrak{T}_{2,n}(x)\stackrel{{\mathbb{P}}}{{\longrightarrow}} \omega(x)\ \ \text{ as }\ \ n\to\infty. \tag{6.4}\]
Now, we need to show that \(\mathfrak{T}_{1,n}(x)\) goes to zero in probability as \(n\to\infty\). Using Markov's, Burkholder's, and Jensen's inequalities, one obtains, for any \(\eta>0\),
\[\mathbb{P}\left\{|\mathfrak{T}_{1,n}(x)|>\eta\right\}\leq\frac{C\mathbb{E}( \mathfrak{L}_{1}^{2})}{n\eta^{2}\left(\mathbb{E}(H_{1}(x))\right)^{2}},\]
where \(C\) is a generic positive constant. Using the \(C_{r}\)-inequality, we have
\[\mathbb{E}(\mathfrak{L}_{t}^{2}|X_{t})\leq 8\left(\frac{1}{U_{n,0}^{4}(X_{t})}\mathbb{E}\left\{\mathbb{E}\left[\left(m(X_{t})-m_{n,0}(X_{t})+\sqrt{U(X_{t})}\varepsilon_{t}\right)^{8}\mid X_{t}\right]\right\}+\mathbb{E}\left((-1)^{4}\mid X_{t}\right)\right).\]
By using Binomial Theorem, we have
\[\begin{array}{rl}\left((m(X_{t})-m_{n,0}(X_{t}))+\sqrt{U(X_{t})}\varepsilon_{t}\right)^{8}&=\sum_{k=0}^{8}\binom{8}{k}(m(X_{t})-m_{n,0}(X_{t}))^{k}(\sqrt{U(X_{t})}\varepsilon_{t})^{8-k}\\ &\qquad=(m(X_{t})-m_{n,0}(X_{t}))^{8}+U^{4}(X_{t})\varepsilon_{t}^{8}\\ &\qquad+8\left((m(X_{t})-m_{n,0}(X_{t}))^{7}(\sqrt{U(X_{t})}\varepsilon_{t})\right)\\ &\qquad+28\left((m(X_{t})-m_{n,0}(X_{t}))^{6}U(X_{t})\varepsilon_{t}^{2}\right)\\ &\qquad+56\left((m(X_{t})-m_{n,0}(X_{t}))^{5}\sqrt{U(X_{t})}U(X_{t})\varepsilon_{t}^{3}\right)\\ &\qquad+70\left((m(X_{t})-m_{n,0}(X_{t}))^{4}U^{2}(X_{t})\varepsilon_{t}^{4}\right)\\ &\qquad+56\left((m(X_{t})-m_{n,0}(X_{t}))^{3}\sqrt{U(X_{t})}U^{2}(X_{t})\varepsilon_{t}^{5}\right)\\ &\qquad+28\left((m(X_{t})-m_{n,0}(X_{t}))^{2}U^{3}(X_{t})\varepsilon_{t}^{6}\right)\\ &\qquad+8\left((m(X_{t})-m_{n,0}(X_{t}))\sqrt{U(X_{t})}U^{3}(X_{t})\varepsilon_{t}^{7}\right).\end{array}\]
Making use of Theorems 3.1 and 3.2, and assumptions (A3)(i),(iv), (A4)(ii), we can prove that
\[\mathbb{E}\left[\left(m(X_{t})-m_{n,0}(X_{t})+\sqrt{U(X_{t})}\varepsilon_{t}\right)^{8}\mid X_{t}\right]<\infty.\]
Therefore, By using Theorem 3.2 and assumption (A4)(ii), one gets
\[\mathbb{E}(\mathbb{E}(\mathfrak{L}_{1}^{2}|X_{1}))<\infty.\]
Thus, by Lemma 6.2, we have
\[\mathbb{P}\left\{\left|\mathfrak{T}_{1,n}(x)\right|>\eta\right\}\leq\frac{C\mathbb{E}(H_{1}^{2}(x))}{n\eta^{2}\mathbb{E}^{2}(H_{1}(x))}\leq\frac{C(M_{2,H,3}f_{1}(x)+o(1))}{n\eta^{2}(M_{1,H,3}^{2}f_{1}^{2}(x)+o(1))},\]
which goes to zero as \(n\to\infty\). The proof of equation (6.26) is complete.
Finally, we obtain
\[\frac{\widehat{M}_{1,W,2}}{\sqrt{\widehat{M}_{2,W,2}}}\frac{\sqrt{M_{2,W,2}}} {M_{1,W,2}}\sqrt{\frac{n\widehat{F}_{x,2}(h)\pi_{n}(x)U^{2}(x)\omega(x)}{ \omega_{n}(x)U_{n,0}^{2}(x)\pi(x)n\phi_{2}(h_{2})f_{1}(x)}}\stackrel{{ \mathbb{P}}}{{\to}}1\quad\text{as}\quad n\to\infty.\]
Hence, the proof of Corollary 3.1 is achieved.
**Proof of Theorem 3.4.** By using Model (2.1), we have
\[\begin{array}{rl}m_{n,1}(x)-m(x)&=\frac{1}{G_{n}^{\prime}(x)} \left[\frac{1}{n\mathbb{E}(K_{1}(x))}\sum_{t=1}^{n}[\delta_{t}\{m(X_{t})+\sqrt {U(X_{t})}\varepsilon_{t}\}+(1-\delta_{t})m_{n,0}(X_{t})-m(x)]K_{t}(x)\right] \\ &=\frac{1}{G_{n}^{\prime}(x)}\left[\mathcal{I}_{n,1}(x)+\mathcal{I}_{n,2}(x) +V_{n,0}(x)\right],\end{array}\]
where \(V_{n,0}(x)\) is defined in (6.2) and
\[\begin{array}{rl}\mathcal{I}_{n,1}(x)&=\frac{1}{n\mathbb{E} \left[K_{1}(x)\right]}\sum_{t=1}^{n}(1-\delta_{t})\left[m_{n,0}\left(X_{t} \right)-m\left(X_{t}\right)\right]K_{t}(x),\\ \mathcal{I}_{n,2}(x)&=\frac{1}{n\mathbb{E}\left[K_{1}(x)\right]} \sum_{t=1}^{n}\left(m\left(X_{t}\right)-m(x)\right)K_{t}(x),\\ &\qquad G_{n}^{\prime}(x)=\frac{1}{n\mathbb{E}\left[K_{1}(x)\right]} \sum_{t=1}^{n}K_{t}(x).\end{array}\]
Then, we find
\[\sup_{x\in\mathcal{C}}|m_{n,1}(x)-m(x)|\leq\frac{\sup_{x\in\mathcal{C}}|\mathcal{I} _{n,1}(x)|+\sup_{x\in\mathcal{C}}|\mathcal{I}_{n,2}(x)|+\sup_{x\in\mathcal{C}}| V_{n,0}(x)|}{\inf_{x\in\mathcal{C}}\big{|}G^{\prime}_{n}(x)\big{|}}. \tag{6.30}\]
Laib and Louani (2011) (in p. 371) have proved that, as \(n\to\infty\),
\[\inf_{x\in\mathcal{C}}\big{|}G^{\prime}_{n}(x)\big{|}>1\quad\text{a.s.} \tag{6.31}\]
Next, by using assumption (A4)(i) and the almost sure convergence of \(G^{\prime}_{n}(x)\) to \(1\) uniformly in \(x\) (see Lemma 7 in Laib and Louani (2011)), it follows that
\[\sup_{x\in\mathcal{C}}|\mathcal{I}_{n,2}(x)|=\mathcal{O}_{a.s.}(h_{1}^{\alpha}). \tag{6.32}\]
Furthermore, by the use of Theorem 3.1, the almost sure boundedness of \(\delta\) by \(1\) and the almost sure uniform convergence of \(G^{\prime}_{n}(x)\) to \(1\), we get
\[\sup_{x\in\mathcal{C}}|\mathcal{I}_{n,1}(x)|=\mathcal{O}_{a.s.}(h_{1}^{\alpha })+\mathcal{O}_{a.s.}\left(\lambda_{n}\right). \tag{6.33}\]
Finally, using equation (6.30) combining with Lemma 6.4 and results (6.31), (6.32), (6.33), we obtain
\[\sup_{x\in\mathcal{C}}|m_{n,1}(x)-m(x)|=\mathcal{O}_{a.s.}(h_{1}^{\alpha})+ \mathcal{O}_{a.s.}\left(\lambda_{n}\right).\]
**Proof of Theorem 3.5.** Let
\[\begin{split} U_{n,1}(x)-U(x)&=\frac{1}{\sum_{t=1}^{n}W_{t}(x)}\sum_{t=1}^{n}[\delta_{t}(Y_{t}-m_{n,0}(X_{t}))^{2}+(1-\delta_{t})U_{n,0}(X_{t})]W_{t}(x)-U(x)\\ &=\frac{1}{G^{\prime\prime}_{n}(x)}\{\vartheta^{[1]}_{n,0}(x)+\vartheta^{[2]}_{n,0}(x)+\vartheta^{[3]}_{n,0}(x)+\Theta_{n,1}(x)+\Theta_{n,2}(x)\},\end{split} \tag{6.34}\]
where \(\vartheta^{[1]}_{n,0}(x),\vartheta^{[2]}_{n,0}(x),\vartheta^{[3]}_{n,0}(x)\) are defined in (6.12), (6.13), (6.14), respectively and
\[\Theta_{n,1}(x) = \frac{1}{n\mathbb{E}(W_{1}(x))}\sum_{t=1}^{n}(1-\delta_{t})[U_{n, 0}(X_{t})-U(X_{t})]W_{t}(x),\] \[\Theta_{n,2}(x) = \frac{1}{n\mathbb{E}(W_{1}(x))}\sum_{t=1}^{n}[U(X_{t})-U(x)]W_{t}( x),\] \[G^{\prime\prime}_{n}(x) = \frac{1}{n\mathbb{E}\left[W_{1}(x)\right]}\sum_{t=1}^{n}W_{t}(x).\]
Then,
\[\sup_{x\in\mathcal{C}}|U_{n,1}(x)-U(x)|\leq\{\sup_{x\in\mathcal{C}}\Big{|}\vartheta^{[1]}_{n,0}(x)\Big{|}+\sup_{x\in\mathcal{C}}\Big{|}\vartheta^{[2]}_{n,0}(x)\Big{|}+\sup_{x\in\mathcal{C}}\Big{|}\vartheta^{[3]}_{n,0}(x)\Big{|}+\sup_{x\in\mathcal{C}}|\Theta_{n,1}(x)|+\sup_{x\in\mathcal{C}}|\Theta_{n,2}(x)|\}\times\{\inf_{x\in\mathcal{C}}\big{|}G^{\prime\prime}_{n}(x)\big{|}\}^{-1}. \tag{6.35}\]
According to Lemma 7 in Laib and Louani (2011), we can deduce that:
\[\lim_{n\to\infty}\sup_{x\in\mathcal{C}}|G^{\prime\prime}_{n}(x)-1|=0\quad\text {a.s.} \tag{6.36}\]
Similar as for (6.31), one has
\[\inf_{x\in\mathcal{C}}|G^{\prime\prime}_{n}(x)|>1\quad\text{a.s.}\quad\text{ as}\quad n\to\infty \tag{6.37}\]
Making use of Theorem 3.2, equation (6.36) and the almost sure boundedness of \(\delta\), one gets
\[\sup_{x\in\mathcal{C}}|\Theta_{n,1}(x)|=\mathcal{O}_{a.s.}(h_{1}^{2\alpha}+h_{2 }^{\beta})+\mathcal{O}_{a.s.}(\lambda_{n}^{\prime}+\lambda_{n}^{2}). \tag{6.38}\]
In addition, by using assumption (A4)(ii) and equation (6.36), we obtain
\[\sup_{x\in\mathcal{C}}|\Theta_{n,2}(x)|=\mathcal{O}_{a.s.}(h_{2}^{\beta}). \tag{6.39}\]
Finally, decomposition (6.35), combined with equations (6.18), (6.20), (6.21), (6.37),(6.38), (6.39) allows to conclude that
\[\sup_{x\in\mathcal{C}}\lvert U_{n,1}(x)-U(x)\rvert=\mathcal{O}_{a.s.}(h_{1}^{2 \alpha}+h_{2}^{\beta})+\mathcal{O}_{a.s.}(\lambda_{n}^{\prime}+\lambda_{n}^{2 }).\]
**Proof of Theorem 3.6.** Using decomposition in (6.34), equations (6.18), (6.20) and (6.38), (6.39) allow to conclude that \(\vartheta_{n,0}^{[1]}(x)\), \(\vartheta_{n,0}^{[2]}(x)\), \(\Theta_{n,1}(x)\) and \(\Theta_{n,2}(x)\) are negligible as \(n\to\infty.\) Furthermore, according to equation (6.36), \(G_{n}^{\prime\prime}(x)\) converges almost surely to \(1\). Finally, we can conclude that the asymptotic distribution of the nonparametric imputed conditional variance is determined by the asymptotic variance of the term \(\vartheta_{n,0}^{[3]}(x)\) which is specified in Lemma 6.7.
**Proof of Corollary 3.2.** First, we note that
\[\frac{\widehat{M}_{1,W,2}}{\sqrt{\widehat{M}_{2,W,2}}}\sqrt{\frac {n\widehat{F}_{x,2}(h)}{\omega_{n}(x)\pi_{n}(x)U_{n,1}^{2}(x)}}(U_{n,1}(x)-U( x))\] \[=\frac{\widehat{M}_{1,W,2}\sqrt{M_{2,W,2}}}{\sqrt{\widehat{M}_{ 2,W,2}}M_{1,W,2}}\sqrt{\frac{n\widehat{F}_{x,2}(h)\omega(x)\pi(x)U^{2}(x)}{ \omega_{n}(x)\pi_{n}(x)U_{n,1}^{2}(x)n\phi_{2}(h_{2})f_{1}(x)}}\frac{M_{1,W,2} }{\sqrt{M_{2,W,2}}}\sqrt{\frac{n\phi_{2}(h_{2})f_{1}(x)}{\omega(x)\pi(x)U^{2} (x)}}(U_{n,1}(x)-U(x)).\]
By Theorem 3.6, we find
\[\frac{M_{1,W,2}}{\sqrt{M_{2,W,2}}}\sqrt{\frac{n\phi_{2}(h_{2})f_{1}(x)}{ \omega(x)\pi(x)U^{2}(x)}}(U_{n,1}(x)-U(x))\xrightarrow{\mathcal{D}}\mathcal{ N}\left(0,1\right).\]
Then, in view of Theorem 3.5 and equations (6.24), (6.25) and (6.26), we obtain
\[\frac{\widehat{M}_{1,W,2}}{\sqrt{\widehat{M}_{2,W,2}}}\frac{\sqrt{M_{2,W,2}} }{M_{1,W,2}}\sqrt{\frac{n\widehat{F}_{x,2}(h)\omega(x)\pi(x)U^{2}(x)}{\omega_ {n}(x)\pi_{n}(x)U_{n,1}^{2}(x)n\phi_{2}(h_{2})f_{1}(x)}}\xrightarrow{\mathbb{ P}}1\quad\text{as}\quad n\to\infty.\]
Therefore, the proof of Corollary 3.2 is completed.
## Appendix
**Lemma 6.8**.: _Assume that \((X_{t},\varepsilon_{t})\) is a strictly stationary ergodic process and suppose that assumptions (A3)(i),(v) hold. Then, for each \(\omega\) outside a null set \(D\), there exists a positive integer \(n_{0}(\omega)\) such that \(V_{n,0}(x)=V_{n,0}^{\top}(x)\) for \(n\geq n_{0}(\omega)\) and \(x\in\mathcal{E}\)._
Proof.: Recall that \(\mathcal{L}_{t}=\delta_{t}\sqrt{U(X_{t})}\varepsilon_{t}.\) Then, for every \(\eta>0\) and by using Markov's inequality, one has
\[\mathbb{P}\left(\left|\frac{\mathcal{L}_{t}}{\ell_{n}}\right|> \eta\right)=\mathbb{P}\left(\left|\mathcal{L}_{t}\right|>\eta\ell_{n}\right) \leq\frac{\mathbb{E}(\left|\mathcal{L}_{t}\right|)}{\eta\ell_{n}}\] \[\leq\frac{\mathbb{E}(\delta_{t}|\sqrt{U(X_{t})}\varepsilon_{t}|)} {\eta\ell_{n}}.\]
Let \(a>1\) and \(b>1\) be real numbers such that \(1/a+1/b=1\). By using the Holder's inequality, it follows that
\[\mathbb{P}\left(\left|\frac{\mathcal{L}_{t}}{\ell_{n}}\right|>\eta\right)\leq \frac{1}{\eta\ell_{n}}\mathbb{E}^{1/a}\left(\delta_{t}\left|\sqrt{U(X_{t})} \right|^{a}\right)\times\mathbb{E}^{1/b}(\left|\varepsilon_{t}\right|^{b})\quad.\]
By a double conditioning with respect to the \(\sigma\)-field \(\mathcal{G}_{t-1}\) and by taking \(a=\rho/(\rho-1)\) and \(b=\rho\), we get
\[\mathbb{P}\left(\left|\frac{\mathcal{L}_{t}}{\ell_{n}}\right|>\eta\right)\leq \frac{1}{\eta\ell_{n}}\mathbb{E}^{(\rho-1)/\rho}\left(\delta_{t}\left|\sqrt{U(X_ {t})}\right|^{\rho/(\rho-1)}\right)\times\mathbb{E}^{1/\rho}[\mathbb{E}(\left| \varepsilon_{t}\right|^{\rho}\left|\mathcal{G}_{t-1}\right)].\]
Making use the second part of assumption (A3)(i), one finds
\[\mathbb{P}\left(\left|\frac{\mathcal{L}_{t}}{\ell_{n}}\right|>\eta\right)\leq \frac{C}{\eta\ell_{n}}\mathbb{E}^{(\rho-1)/\rho}\left(\delta_{t}\left|\sqrt{U(X_ {t})}\right|^{\rho/(\rho-1)}\right).\]
Note that another use of the Holder inequality allows us to bound the quantity \(\mathbb{E}^{(\rho-1)/\rho}\left(\delta_{t}\left|\sqrt{U(X_{t})}\right|^{\rho/( \rho-1)}\right)\) as follows:
\[\left|\mathbb{E}^{(\rho-1)/\rho}\left(\delta_{t}\left|\sqrt{U(X_{t})}\right|^{ \rho/(\rho-1)}\right)\right|\leq\mathbb{E}^{(\rho-1)/b\rho}\left(\left|\sqrt{U (X_{t})}\right|^{b\rho/(\rho-1)}\right)\times\mathbb{E}^{(\rho-1)/a\rho}\left( \left|\delta_{t}\right|^{a}\right).\]
By a double conditioning with respect to the \(\sigma\)-field \(\mathcal{G}_{t-1}\) and taking the same values of a and b with the use of assumption (A3)(v), it follows that
\[\left|\mathbb{E}^{(\rho-1)/\rho}\left(\delta_{t}\left|\sqrt{U(X_{ t})}\right|^{\rho/(\rho-1)}\right)\right| \leq\mathbb{E}^{(\rho-1)/b\rho}\left(\left|\sqrt{U(X_{t})}\right| ^{b\rho/(\rho-1)}\right)\times\mathbb{E}^{(\rho-1)/a\rho}[\mathbb{E}\left( \delta_{t}|\mathcal{G}_{t-1}\right)]\] \[\leq\mathbb{E}^{(\rho-1)/b\rho}\left(\left|\sqrt{U(X_{t})}\right| ^{b\rho/(\rho-1)}\right)\times(\pi(x)+o(1))^{(\rho-1)/a\rho}.\]
By putting \(M=(\pi(x)+o(1))^{(\rho-1)/a\rho}\), one has
\[\left|\mathbb{E}^{(\rho-1)/\rho}\left(\delta_{t}\left|\sqrt{U(X_{t})}\right|^{\rho/(\rho-1)}\right)\right|\leq M\mathbb{E}^{(\rho-1)/\rho^{2}}\left(\left|\sqrt{U(X_{t})}\right|^{\rho^{2}/(\rho-1)}\right),\]
which is finite by the first part of assumption (A3)(i).
Thus,
\[\mathbb{P}\left(\left|\frac{\mathcal{L}_{t}}{\ell_{n}}\right|>\eta\right)\leq \frac{\hat{C}}{\eta\ell_{n}}.\]
Since \(0<\ell_{n}\uparrow\infty\) and \(\eta>0\), we get
\[\mathcal{L}_{t}/\ell_{n}\longrightarrow 0\quad\text{a.s.}\]
Therefore, for \(\omega\in D^{c}\) with \(\mathbb{P}(D^{c})=1\) and some positive integer \(n_{0}(\omega)\), it follows that
\[\left|\mathcal{L}_{t}(\omega)\right|<\ell_{n},\qquad t=1\ldots n,\quad n\geq n _{0}(\omega).\]
Finally, \(V_{n,0}(x)=V_{n,0}^{\top}(x)\).
**Proof of Lemma 6.5**
Let \(a>1\) and \(b>1\) be real numbers such that \(1/a+1/b=1\). By using assumption A3(v) and the Holder's and Markov's inequalities, we can get
\[\left|\mathbb{E}\left\{\mathcal{L}_{t}\text{1}\hskip-1.422638pt \text{l}_{(\left|\mathcal{L}_{t}\right|\leq\ell_{n})}\mid\mathcal{F}_{t-1}\right\}\right| \leq\mathbb{E}^{1/a}\left(\left|\mathcal{L}_{t}\right|^{a}\mid \mathcal{F}_{t-1}\right)\times\mathbb{E}^{1/b}\left\{\text{1}\hskip-1.422638pt \text{l}_{(\left|\mathcal{L}_{t}\right|\leq\ell_{n})}\mid\mathcal{F}_{t-1}\right\}\] \[\leq\mathbb{E}^{1/a}\left(\left|\mathcal{L}_{t}\right|^{a}\mid \mathcal{F}_{t-1}\right)\times\frac{\mathbb{E}^{1/b}\left(\left|\mathcal{L}_{ t}\right|\mid\mathcal{F}_{t-1}\right)}{\ell_{n}^{1/b}}\] \[\leq(\pi(x)+o(1))\frac{\mathbb{E}\left(\left|\sqrt{U\left(X_{t} \right)}\right|^{a}\left|\varepsilon_{t}\right|^{a}\mid\mathcal{F}_{t-1}\right) }{\ell_{n}^{a/b}}.\]
Then, by using Lemma 6.2, assumptions (A1), (A2)(i), (iv), (vi) and (A3)(i), along with the first part of condition (3.5) and by following similar steps as the proof of Lemma B in Chaouch (2019) with specific values for a and b (\(a=\rho\) and \(b=\rho/(\rho-1)\)), we obtain
\[\sup_{x\in\mathcal{C}}\left|\widetilde{V}_{n,0}(x)\right|\leq\frac{C}{\ell_{n}^{\rho-1}\phi_{1}\left(h_{1}\right)^{(\rho-1)/\rho}}\text{ almost surely as }\,n\to\infty.\] |
2309.12631 | Learning the eigenstructure of quantum dynamics using classical shadows | Learning dynamics from repeated observation of the time evolution of an open
quantum system, namely, the problem of quantum process tomography is an
important task. This task is difficult in general, but, with some additional
constraints could be tractable. This motivates us to look at the problem of
Lindblad operator discovery from observations. We point out that for moderate
size Hilbert spaces, low Kraus rank of the channel, and short time steps, the
eigenvalues of the Choi matrix corresponding to the channel have a special
structure. We use the least-square method for the estimation of a channel
where, for fixed inputs, we estimate the outputs by classical shadows. The
resultant noisy estimate of the channel can then be denoised by diagonalizing
the nominal Choi matrix, truncating some eigenvalues, and altering it to a
genuine Choi matrix. This processed Choi matrix is then compared to the
original one. We see that as the number of samples increases, our
reconstruction becomes more accurate. We also use tools from random matrix
theory to understand the effect of estimation noise in the eigenspectrum of the
estimated Choi matrix. | Atithi Acharya, Siddhartha Saha, Shagesh Sridharan, Yanis Bahroun, Anirvan M. Sengupta | 2023-09-22T05:56:58Z | http://arxiv.org/abs/2309.12631v1 | # Learning the eigenstructure of quantum dynamics using classical shadows
###### Abstract
Learning dynamics from repeated observation of the time evolution of an open quantum system, namely, the problem of quantum process tomography is an important task. This task is difficult in general, but, with some additional constraints could be tractable. This motivates us to look at the problem of Lindblad operator discovery from observations. We point out that for moderate size Hilbert spaces, low Kraus rank of the channel and short time steps, the eigenvalues of the Choi matrix corresponding to the channel have a special structure. We use the least-square method for the estimation of a channel where, for fixed inputs, we estimate the outputs by classical shadows. The resultant noisy estimate of the channel can then be denoised by diagonalizing the nominal Choi matrix, truncating some eigenvalues, and altering it to a genuine Choi matrix. This processed Choi matrix is then compared to the original one. We see that as the number of samples increases, our reconstruction becomes more accurate. We also use tools from random matrix theory to understand the effect of estimation noise in the eigenspectrum of the estimated Choi matrix.
## I Introduction
Learning dynamics from observations is an important task in many fields. For open quantum systems, this problem is called Quantum Process Tomography (QPT). In standard QPT, the process is an unknown Completely Positive Trace-Preserving (CPTP) map on operators associated with a \(d\)-dimensional Hilbert space. The CPTP map requires \(d^{4}-d^{2}\) real numbers to be completely characterized [21]. For \(n\)-qubit sytems, \(d=2^{n}\), so the number of parameters is \(O(2^{4n})\). Thus, even for a 10-qubit system, QPT formally requires estimating about \(10^{12}\) parameters well, necessitating a large number of observations. However, if we have prior information that the map is close to identity, with the nontrivial action due to a small number of Lindblad operators [21], we might make some progress. This paper explores the conditions under which such progress is possible.
For quantum state tomography (QST), shadow tomography [1] aims at predicting a power law number of observations in the number of qubits, \(n\), from \(O(n)\) copies of the density matrix \(\rho\). The authors in [13] have constructed such a description of low sample complexity via the so-called classical shadows. [2] extended this method to generalized measurements.
Recently, the work done in [16] and [15] used Choi-Jamiolkowski correspondence between channels and states to apply the classical shadows technique to QPT. However, since classical shadows do not produce the state, and therefore in the QPT context do not give you the channel, it is not clear how to perform general dynamical prediction over longer time scales. This motivates us to look at the problem of Lindblad generator discovery [5; 12] while using classical shadow tomography for state estimation.
A loosely related subject in machine learning is the Lie generator [19; 22], which has seen a resurgence of interest [3; 10]. In subsequent studies, such as [3; 10], the spectral gap is crucial to identify the number of truly active generators. We also observe a comparable significance of the spectral gap in our work.
Recently, least-squares methods have been used for QST with classical shadows [20]. Our approach utilizes the least-squares method for the estimation of a channel where, for fixed inputs, we estimate the outputs by classical shadows. The resultant noisy estimate of the channel can then be diagonalized/factorized. Under appropriate circumstances, a spectral gap allows us to truncate the factorized version and essentially denoise the estimate.
## II Quantum channel as a linear map and its factorization
Let the quantum channel be defined as a map \(\mathcal{E}:\mathbb{C}^{d\times d}\rightarrow\mathbb{C}^{d\times d}\) with the input and output density matrix related by \(\rho^{out}=\mathcal{E}(\rho^{in})\). Explicitly, in terms of components, we have:
\[\rho_{ij}^{out}=\sum_{kl}\mathcal{E}_{ijkl}\rho_{kl}^{in}. \tag{1}\]
Jamiolkowski-Choi correspondence [6; 14] creates a density matrix, the Choi matrix, on \(\mathbb{C}^{d}\otimes\mathbb{C}^{d}\):
\[\mathbf{C}_{\mathcal{E}}=\sum_{ij}\ket{i}\bra{j}\otimes\mathcal{E}(\ket{i} \bra{j}). \tag{2}\]
Note that the Choi matrix elements are
\[(\mathbf{C}_{\mathcal{E}})_{(ik),(jl)}=\bra{k}\bra{i}\mathbf{C}_{\mathcal{E}} \ket{l}\ket{j}=\bra{k}\mathcal{E}(\ket{i}\bra{j})\ket{l}=\mathcal{E}_{klij}.\]
The Choi matrix is positive semidefinite and has an eigendecomposition of the form \(\mathbf{C}_{\mathcal{E}}=\sum_{\alpha}\lambda_{\alpha}\mu_{\alpha}\mu_{\alpha}^{\dagger}\) with \(\mu_{\alpha}\in\mathbb{C}^{d}\otimes\mathbb{C}^{d}\) and \(\lambda_{\alpha}\geq 0\) for all \(\alpha\). Defining \(M_{ik}^{\alpha}=\sqrt{\lambda_{\alpha}}(\mu_{\alpha})_{ki}\), we get a factorization of the channel \(\mathcal{E}\) [12]:
\[\mathcal{E}_{ijkl}=\sum_{\alpha}M_{ik}^{\alpha}(M_{jl}^{\alpha})^{*}. \tag{3}\]
The number of nonzero eigenvalues of the Choi matrix is the Kraus rank of the channel. This factorization is the basis of the operator-sum representation [21]: \(\rho^{out}=\sum_{\alpha}M^{\alpha}\rho^{in}M^{\alpha\dagger}\). For systems with low Kraus rank, we can use this factorization to denoise estimated channels.
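To make this factorization concrete, the following numpy sketch (illustrative only; the function names are ours, not from the paper) builds the Choi matrix of Eq. (2) from a set of Kraus operators and then recovers a Kraus representation from its eigendecomposition, with the Kraus rank given by the number of eigenvalues above a numerical tolerance.

```python
import numpy as np

def choi_from_kraus(kraus, d):
    """Choi matrix C = sum_ij |i><j| (x) E(|i><j|) for E(rho) = sum_a M_a rho M_a^dag."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E_ij = np.zeros((d, d), dtype=complex)
            E_ij[i, j] = 1.0
            out = sum(M @ E_ij @ M.conj().T for M in kraus)
            C += np.kron(E_ij, out)
    return C

def kraus_from_choi(C, d, tol=1e-10):
    """Recover Kraus operators from the eigendecomposition of the Choi matrix (cf. Eq. (3))."""
    evals, evecs = np.linalg.eigh(C)
    return [np.sqrt(lam) * v.reshape(d, d).T          # reshape eigenvector into a d x d Kraus operator
            for lam, v in zip(evals, evecs.T) if lam > tol]

# quick self-check on a random Kraus-rank-2 channel (illustrative, not one of the paper's examples)
rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
L = np.linalg.cholesky(A.conj().T @ A + B.conj().T @ B)
N = np.linalg.inv(L.conj().T)                          # enforce sum_a M_a^dag M_a = I
kraus_true = [A @ N, B @ N]

C = choi_from_kraus(kraus_true, d)
kraus_rec = kraus_from_choi(C, d)

X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = X @ X.conj().T / np.trace(X @ X.conj().T)        # random test state
out_true = sum(M @ rho @ M.conj().T for M in kraus_true)
out_rec = sum(M @ rho @ M.conj().T for M in kraus_rec)
print(len(kraus_rec), np.allclose(out_true, out_rec))  # expect: 2 True
```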
We proceed by defining a straightforward loss function:
\[\mathcal{L}=\sum_{t=1}^{T}\sum_{ij}\left(\rho^{out}_{ij}(t)-\sum_{kl}\mathcal{E}_{ijkl}\rho^{in}_{kl}(t)\right)^{2} \tag{4}\]
where \(t\) serves as a sample index. By setting the derivative, \(\frac{\partial\mathcal{L}}{\partial\mathcal{E}_{ijcd}}\), to zero, we derive the optimal channel estimate:
\[\sum_{t=1}^{T}\rho^{out}_{ij}(t)\rho^{in}_{cd}(t)=\sum_{kl}\mathcal{\hat{E}}_{ ijkl}\sum_{t=1}^{T}\rho^{in}_{kl}(t)\rho^{in}_{cd}(t) \tag{5}\]
This representation elucidates the terms as non-centered covariance expressions. For example, we define the first covariance expression that captures the overlap between the input and output states, i.e. \(C^{out,in}\), and the second covariance expression that captures the overlap between the different input states, i.e. \(C^{in,in}\) as
\[C^{out,in}_{ij,cd}=\frac{1}{T}\sum_{t=1}^{T}\rho^{out}_{ij}(t) \rho^{in}_{cd}(t) \tag{6a}\] \[C^{in,in}_{kl,cd}=\frac{1}{T}\sum_{t=1}^{T}\rho^{in}_{kl}(t)\rho^{in}_{cd}(t) \tag{6b}\]
We can further expand the first term in terms of the second, \(C^{out,in}_{ij,cd}=\sum_{kl}\hat{\mathcal{E}}_{ij,kl}C^{in,in}_{kl,cd}\).
To simplify the notation, we group the indices as follows: \(ij\rightarrow\alpha\), \(cd\rightarrow\beta\), and \(kl\rightarrow\gamma\). This results in:
\[C^{out,in}_{\alpha,\beta}=\sum_{\gamma}\mathcal{\hat{E}}_{\alpha\gamma}C^{in, in}_{\gamma\beta} \tag{7}\]
Interpreting this in the form of a matrix equation aids in determining the quantum channel estimate: \(\mathbf{C}^{out,in}=\hat{\mathcal{E}}\mathbf{C}^{in,in}\). Under the assumption that \(\mathbf{C}^{in,in}\) is invertible, we arrive at the following factorization:
\[\mathcal{\hat{E}}=\mathbf{C}^{out,in}(\mathbf{C}^{in,in})^{-1}. \tag{8}\]
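A minimal numpy sketch of this estimator (our own illustration; exact channel outputs stand in for the classical-shadow estimates discussed in Sec. V) is given below. It forms the non-centred covariances of Eqs. (6a)-(6b) from \(d^{2}\) generically informationally complete inputs and recovers the channel via Eq. (8).

```python
import numpy as np

rng = np.random.default_rng(1)
d, p = 3, 0.2

# a simple CPTP dephasing-type channel to play the role of the unknown map (illustrative choice)
kraus = [np.sqrt(1 - p) * np.eye(d, dtype=complex)]
kraus += [np.sqrt(p) * np.outer(np.eye(d)[k], np.eye(d)[k]).astype(complex) for k in range(d)]

def channel(rho):
    return sum(M @ rho @ M.conj().T for M in kraus)

def random_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

# d^2 random pure input states are generically informationally complete
rhos_in = [random_pure(d) for _ in range(d * d)]
rhos_out = [channel(r) for r in rhos_in]      # in practice these would be shadow estimates

# non-centred covariances of Eqs. (6a)-(6b); rho is flattened so that index (i,j) -> i*d + j
V_in = np.stack([r.reshape(-1) for r in rhos_in], axis=1)     # shape (d^2, T)
V_out = np.stack([r.reshape(-1) for r in rhos_out], axis=1)
T = V_in.shape[1]
C_out_in = V_out @ V_in.T / T
C_in_in = V_in @ V_in.T / T

E_hat = C_out_in @ np.linalg.inv(C_in_in)     # Eq. (8)

# the estimated superoperator should act like the channel on a fresh state
rho_test = random_pure(d)
print(np.allclose((E_hat @ rho_test.reshape(-1)).reshape(d, d), channel(rho_test)))  # True
```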
## III Eigenstructure of low rank channels
For open quantum systems, the dynamics of a quantum system is described by Lindblad master equation [9; 17],
\[\dot{\rho}(t)=-i[\mathcal{H},\rho(t)]+\sum_{k=1}^{N}\left(L_{k}\rho(t)L_{k}^{\dagger}-\frac{1}{2}L_{k}^{\dagger}L_{k}\rho(t)-\frac{1}{2}\rho(t)L_{k}^{\dagger}L_{k}\right) \tag{9}\]
We summarize further details in Sec. VI.1. We note that for the channel \(\rho^{out}=\mathcal{E}(\rho^{in})=\sum_{\alpha}M_{\alpha}\rho^{in}M_{\alpha}^{\dagger}\), we have the following:
\[\mathcal{E}_{ijkl}=\sum_{\alpha}M_{ik}^{\alpha}(M_{jl}^{\alpha})^{*} \tag{10}\]
To relate this to the discrete-time channel description, we note that the appropriate factorized structure in the four-index tensor \(\mathcal{E}_{ijkl}\) is manifested when the indices \((i,k)\) and \((j,l)\) are clubbed together. Eigendecomposition of \(\mathbf{C}_{\mathcal{E}(i,k),(j,l)}\) gives us the Kraus operators \(M_{\alpha}\) and the Kraus rank denotes the number of nonzero eigenvalues of the Choi matrix. For a discrete-time version of the Lindblad master equation we consider the following channel:
\[\rho^{out}=\mathcal{E}(\rho^{in})=(1-p)UM\rho^{in}M^{\dagger}U^{\dagger}+p \sum_{\alpha}L_{\alpha}\rho^{in}L_{\alpha}^{\dagger} \tag{11}\]
where
\[M=\left(\frac{\mathbf{I}-p\sum_{\alpha}L_{\alpha}^{\dagger}L_{\alpha}}{1-p} \right)^{\frac{1}{2}} \tag{12}\]
and \(p=\lambda\Delta t\) (\(0<p<<1\)) in order to satisfy the CPTP conditions of quantum channels. In the Lindblad limit, we observe that:
\[\rho^{out}=(\mathbf{I}-i\Delta tH+\lambda\Delta tK)\rho^{in}(\mathbf{I}+\lambda\Delta tK+i\Delta tH)+\lambda\Delta t\sum_{\alpha}L_{\alpha}\rho^{in}L_{\alpha}^{\dagger} \tag{13}\]
from which one obtains:
\[\mathcal{E}_{ijkl}=\delta_{ik}\delta_{lj}-i\Delta tH_{ik}\delta_{lj}+i\Delta t\delta_{ik}H_{lj}+\lambda\Delta t(K_{ik}\delta_{lj}+\delta_{ik}K_{lj})+\lambda\Delta t\sum_{\alpha}(L^{\alpha})_{ik}(L^{\alpha})_{jl}^{*} \tag{14}\]
For a system with Kraus rank \(N+1<<d^{2}\), the eigendecomposition of \(\mathbf{C}_{\mathcal{E}(i,k),(j,l)}\), close to the Lindblad limit, will lead to the following eigenspectrum: one relatively large eigenvalue of order 1, whose eigenvector corresponds to the Kraus operator for Hamiltonian evolution and the overall effect of dissipation; some intermediate non-zero eigenvalues of the order \(\Delta t\), whose eigenvectors correspond to the Lindblad operators; and the rest of the eigenvalues will be zero, representing the kernel. Since the Kraus
Figure 1: Expressing the quantum channel (8) in terms of the covariance expressions \(C^{out,in}\) and the inverse of \(C^{in,in}\). This factorization is explained in detail in Sec. II.
operator containing the Hamiltonian evolution is given by \((\mathbf{I}-i\Delta tH+\lambda\Delta tK)\) one can obtain an estimate of the Hamiltonian \(H\) by looking at the antisymmetric part of the eigenvector corresponding to the top eigenvalue of \(\mathcal{E}_{(i,k),(j,l)}\). The Lindblad operators \(L_{\alpha}\) can be identified from the eigenvectors corresponding to the non-zero intermediate eigenvalues.
One can obtain an estimate of \(K=-\frac{1}{2}\sum_{\alpha}L_{\alpha}^{\dagger}L_{\alpha}\). Having obtained estimates of \(H\), \(L_{\alpha}\), and \(K\), one can then write down the GKSL generator \(G_{ijkl}\). We can further simulate (see Sec. V) and compare the estimated generator with the actual generator. The generator is given by:
\[G_{ijkl}=\lim_{\Delta t\to 0}\frac{\mathcal{E}_{ijkl}-\delta_{ik}\delta_{jl}}{\Delta t}=(-iH_{ik}\delta_{jl}+i\delta_{ik}H_{jl})+\lambda(K_{ik}\delta_{jl}+\delta_{ik}K_{jl})+\lambda\sum_{\alpha}(L^{\alpha})_{ik}(L^{\alpha})_{jl}^{*} \tag{15}\]
We now discuss how the structure of the eigenspace changes when we work with the least-squares estimated channel \(\hat{\mathcal{E}}\) instead of the true low-rank channel \(\mathcal{E}\). If we form the corresponding estimated Choi matrix with a large enough sample, we will find three different classes of eigenvalues: the largest one, closest to 1; \(N\) intermediate eigenvalues; and \(d^{2}-N-1\) small non-zero eigenvalues. The last group is a finite-sample effect, replacing the zero eigenvalues of the ideal channel. Ideally, this group needs to lie well below the intermediate group of eigenvalues.
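The three groups of eigenvalues can be checked directly on the noiseless channel of Eqs. (11)-(12). The sketch below is our own construction, loosely mirroring the Sec. V experiment, with a randomly drawn Hermitian \(H\) and unit-Frobenius-norm Lindblad operators: it builds the Kraus operators, forms the Choi matrix, and prints the top of the spectrum, showing one dominant eigenvalue, \(N\) eigenvalues of order \(\Delta t\), and the rest numerically zero.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, dt, lam = 8, 5, 0.1, 1.0
p = lam * dt

def func_of_hermitian(A, f):
    """Apply a scalar function to a Hermitian matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

# random Hermitian H and N random Lindblad operators with unit Frobenius norm
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (G + G.conj().T) / 2
Ls = []
for _ in range(N):
    L = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Ls.append(L / np.linalg.norm(L))

U = func_of_hermitian(H, lambda w: np.exp(-1j * dt * w))          # U = exp(-i dt H)
S = sum(L.conj().T @ L for L in Ls)
M = func_of_hermitian((np.eye(d) - p * S) / (1 - p), np.sqrt)     # Eq. (12)

kraus = [np.sqrt(1 - p) * U @ M] + [np.sqrt(p) * L for L in Ls]   # Kraus rank N + 1

# Choi matrix, using C[(i,k),(j,l)] = sum_a K_a[k,i] conj(K_a[l,j]) (equivalent to Eq. (2))
vecs = [K.T.reshape(-1) for K in kraus]
C = sum(np.outer(w, w.conj()) for w in vecs)

evals = np.sort(np.linalg.eigvalsh(C))[::-1]
print(evals[:N + 2].round(4))   # one dominant eigenvalue, N of order dt, then ~0
```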
## IV Estimation noise using random matrix theory
In order to understand the effect of noise on the eigenspectrum of the estimated channel \(\hat{\Phi}_{ik,jl}\) we bring in some tools from Random Matrix Theory. We can write the estimate of our channel as follows:
\[\hat{\Phi}_{ik,jl} =\Phi_{ik,jl}+X_{ik,jl} \tag{16}\] \[\hat{\Phi}_{I,J} =\Phi_{I,J}+X_{I,J} \tag{17}\]
Note that we use the notation \(\Phi\) to denote a quantum channel to help differentiate between the index notation \((ij,kl)\) and \((ik,jl)\). When using the input-output index notation i.e. \((ij,kl)\) we choose \(\mathcal{E}\) to represent a quantum channel, and with \((ik,jl)\), we alternatively use \(\Phi\). In the Eqn.(16), \(X_{I,J}\) denotes the noise in the estimation process. We focus on qudit systems of dimension \(d\) and suppose that the rank of our channel is \(k\). Thus, the actual \(\Phi_{I,J}\) has \(d^{2}-k\) zero eigenvalues in its spectrum. We consider the projection of \(X_{I,J}\) into this \(D=d^{2}-k\) dimensional kernel subspace - due to noise in estimation the estimated eigenvalues will not be exactly zero but will be distributed about the zero eigenvalue with a certain characteristic width. We think of \(X_{I,J}\) as a random matrix with each of the \(D^{2}\) elements \(X_{I,J}\sim N(0,\frac{a^{2}}{n})\) where \(a^{2}\) is a pre-factor for the variance and \(n\) denotes the total number of samples. Let the normalized frequency distribution of the eigenvalues of \(X_{I,J}\) be denoted by \(\rho(\lambda)\) with the variance being \(\sigma_{\lambda}^{2}\). We observe the following:
\[\mathbb{E}\Big[\sum_{\alpha=1}^{D}\lambda_{\alpha}^{2}\Big]=D\sigma_{\lambda}^{2} \tag{18}\]
\[\mathbb{E}\Big[\sum_{\alpha=1}^{D}\lambda_{\alpha}^{2}\Big]=\mathbb{E}[\mathrm{Tr}(X^{2})]=\mathbb{E}\Big[\sum_{I}\sum_{J}X_{I,J}X_{J,I}\Big]=D^{2}\frac{a^{2}}{n} \tag{19}\]
\[D\sigma_{\lambda}^{2}=D^{2}\frac{a^{2}}{n}\implies\sigma_{\lambda}^{2}=\frac{D}{n}a^{2} \tag{20}\]
This indicates that the estimated eigenvalues corresponding to the kernel subspace will be distributed about the actual zero eigenvalues with a characteristic width of the order \(\sigma_{\lambda}=\sqrt{\frac{D}{n}}a\), and hence these eigenvalues can mix with the intermediate eigenvalues of order \(\Delta t\) corresponding to the Lindblad operators. Thus, one needs a certain minimum number of samples \(n\) to actually differentiate between the estimates of the trivial eigenvalues and the non-trivial ones. One important thing to note is that the intermediate eigenvalues will be of the order \(||L_{\alpha}||_{F}^{2}\Delta t\) if \(L_{\alpha}\) is not normalized. In order to get a normalized Frobenius norm for the Lindblad operators, one can sample the elements of the matrices \(L_{\alpha}\) from a Gaussian distribution and then divide by a factor of \(\sqrt{d}\) so that the Frobenius norm is 1 in expectation. In that case, the gap between the zero eigenvalues and the intermediate ones will be of the order \(\Delta t\). To differentiate between the estimates of trivial and nontrivial eigenvalues in the spectrum one needs to ensure \(\sigma_{\lambda}=\sqrt{\frac{D}{n}}a<<\Delta t\), i.e. \(n>>\frac{Da^{2}}{\Delta t^{2}}\).
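The scaling \(\sigma_{\lambda}=\sqrt{D/n}\,a\) is easy to check numerically; the snippet below (a real-symmetric simplification of the Hermitian noise considered here) draws a \(D\times D\) matrix with i.i.d. \(N(0,a^{2}/n)\) entries and compares the empirical spread of its eigenvalues with the prediction.

```python
import numpy as np

rng = np.random.default_rng(3)
D, n, a = 60, 1000, 1.0                     # kernel dimension, sample size, noise prefactor

X = rng.normal(scale=a / np.sqrt(n), size=(D, D))
X = np.triu(X) + np.triu(X, 1).T            # symmetrize, keeping entry variance a^2 / n

lam = np.linalg.eigvalsh(X)
print(round(lam.std(), 4), round(np.sqrt(D / n) * a, 4))   # empirical width vs sqrt(D/n) * a
```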
## V Numerical experiments
_Measurement procedure_: We use ideas from state tomography to obtain an estimate of \(\mathcal{E}_{ijkl}\). We prepare multiple copies of a set of informationally complete quantum states for qudit-like systems and aim to estimate the output state after the action of a channel by using tools from shadow tomography. As input states we use \(d^{2}\) pure qudit states, each of dimension \(d\), such that the projectors of these input states form an informationally complete basis. After evolving each state through the channel we use the shadow tomography protocol to estimate the output state - we apply a Haar random unitary conjugation on the output state, then measure using the projectors of the computational basis for \(\mathbb{C}^{d}\), and form a shadow-like estimate by averaging over many measurement outcomes for each input state. The motivation behind following this protocol is that the inverse of the measurement channel is analytically tractable using the averaging properties of Haar random unitaries. A practical way to implement this is using SIC-POVMs, MUBs, or Clifford circuits, which have the 2-design property. Another measurement protocol is choosing a set of informationally complete POVMs and building shadow estimates based on the measurement outcomes, although in this case the inverse of the measurement channel might not be analytically tractable and has to be implemented numerically. Since the regression estimate of \(\mathcal{E}\) from the loss function discussed previously is given by \(\hat{\mathcal{E}}=\mathbf{C}^{out,in}(\mathbf{C}^{in,in})^{-1}\), we see that the estimation error in \(\hat{\mathcal{E}}\) appears due to noise in the estimation of \(\mathbf{C}^{out,in}\). As long as we use a set of informationally complete basis states as our input, \((\mathbf{C}^{in,in})^{-1}\) exists and is already known. Thus, the randomness/noise in our estimation procedure is manifested through the error in the estimation of the output quantum state.
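For a single qudit measured with global Haar-random unitaries, the inverse of the measurement channel has the well-known closed form \(\hat{\rho}=(d+1)\,U^{\dagger}|b\rangle\langle b|U-\mathbf{I}\). The sketch below is our illustration of this primitive for one output state (not the authors' code): it averages single-shot shadows and shows the estimation error shrinking with the number of samples.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4

def haar_unitary(d):
    Z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))    # fix column phases -> Haar measure

# stand-in for the (unknown) channel output that we want to estimate
v = rng.normal(size=d) + 1j * rng.normal(size=d)
v /= np.linalg.norm(v)
rho = 0.7 * np.outer(v, v.conj()) + 0.3 * np.eye(d) / d

def shadow_estimate(rho, n_shots):
    est = np.zeros((d, d), dtype=complex)
    for _ in range(n_shots):
        U = haar_unitary(d)
        probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
        b = rng.choice(d, p=probs / probs.sum())             # Born-rule outcome in the rotated basis
        w = U[b, :].conj()                                    # the vector U^dag |b>
        est += (d + 1) * np.outer(w, w.conj()) - np.eye(d)    # inverse of the measurement channel
    return est / n_shots

for n in (100, 1000, 10000):
    print(n, round(np.linalg.norm(shadow_estimate(rho, n) - rho), 3))   # error shrinks roughly as 1/sqrt(n)
```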
_Experiments_: As one of the simple examples we consider a system of 3 qubits acted upon by the following channel:
\[\mathcal{E}(\rho)=(1-p)U\rho U^{\dagger}+\sum_{\alpha=1}^{3}\frac{p}{3}L^{ \alpha}\rho L^{\alpha\dagger} \tag{21}\]
We choose \(U\) to correspond to evolution by a Hamiltonian of the form \(\sigma_{z}\otimes I\otimes I\) and \(L_{i}\) denotes \(\sigma_{x}\) acting only on the \(i\)th qubit. It is easy to see that \(U\) and \(L_{\alpha}\) are unitaries that are orthogonal under the trace inner product. Thus this is a channel of rank 4. Since \(p\) is very small we expect one large eigenvalue, three intermediate eigenvalues, and the rest of the eigenvalues to be close to zero in the estimated eigenspectrum. We demonstrate the results as a histogram as seen in Fig. 2.
For our experiments, we consider a qudit of dimension \(d=8\) and a channel of rank 6 composed of Hamiltonian evolution and 5 Lindblad operators. The Hamiltonian and Lindblad operators are generated randomly. In Fig. 3 we show plots of the eigenvalues obtained from the eigendecomposition of the original Choi matrix and the estimated Choi matrix. In Fig. 4 we show how the error in estimation of the processed Choi matrix behaves as we increase the sample size. The size of the sampling-noise-induced eigenvalues is expected to be \(O\left(\sqrt{\frac{d^{2}-N}{n}}\right)\), explaining the improvement of the reconstruction error with growing sample size.
Principal component analysis [11], in particular, and
Figure 3: Gaps between the highest eigenvalue, the intermediate non-trivial eigenvalues, and the noise-induced eigenvalues around zero. The system considered is a qudit of dimension \(d=8\) and a channel of rank 6 composed of Hamiltonian evolution and 5 random Lindblad operators. The evolution is for \(p=dt=0.1\), and we have chosen a sample size of 7000 for each of the 64 input states, so the total number of samples is \(64\times 7000\).
Figure 2: The histogram of eigenvalues for a system of 3 qubits acted upon by a channel of rank 4. With high probability, the system is evolved according to a Hamiltonian and, with the remaining probability, it is acted on by a Pauli channel, Eq. (21). Here we show that with 10000 shadows, we can recover the true rank of the channel from the shadow-based estimates. The highest eigenvalue is marked in green, intermediate eigenvalues are marked in blue, and the noise-induced eigenvalues around zero are marked in orange. We also explain the distribution of the low-lying eigenvalues using the Wigner semicircle law (marked in red). The simulation has been run on the IBM-Q QASM simulator.
truncated singular value decomposition (TSVD) based denoising [8], in general, are widely-used methods that have led to many theoretical discussions about recovery of a low-rank signal from noisy matrices. Tools from the theory of random Wishart matrices played a major role in this discussion [23; 4; 7; 18]. Similar sophisticated tools from random matrix theory could be developed for this problem to discover Lindblad operators, while dealing with sampling noise.
## VI Appendix
### The Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation
The Kraus operator summation can be used to write the evolution of \(\rho\) from \(t\) to \(t+\delta t\) as: \(\rho(t+\delta t)=\sum_{k}M_{k}(\delta t)\rho(t)M_{k}^{\dagger}(\delta t)\). We work in the limit of infinitesimal time, \(\delta t\to 0\), keeping only terms of first order in \(\delta t\), so that \(\rho(t+\delta t)=\rho(t)+\delta t\,\delta\rho\). This implies that the Kraus operator should be expanded as \(M_{k}=M_{k}^{(0)}+\sqrt{\delta t}M_{k}^{(1)}+\delta tM_{k}^{(2)}+\ldots\) Then there is one Kraus operator such that \(M_{0}=\mathbf{I}+\delta t(-i\mathcal{H}+K)+O\left(\delta t^{2}\right)\) with \(K\) hermitian (so that \(\rho(t+\delta t)\) is hermitian), while all others have the form: \(M_{k}=\sqrt{\delta t}L_{k}+O(\delta t)\), so that we ensure \(\rho(t+\delta t)=\rho(t)+\delta\rho\delta t:\)
\[\rho(t+\delta t)=M_{0}\rho(t)M_{0}^{\dagger}+\sum_{k>0}M_{k}\rho M _{k}^{\dagger} \tag{22}\] \[=[\mathbf{I}+\delta t(-i\mathcal{H}+K)]\rho[\mathbf{I}+\delta t(i \mathcal{H}+K)]+\delta t\sum_{k}L_{k}\rho L_{k}^{\dagger}\] (23) \[=\rho-i\delta t[\mathcal{H},\rho]+\delta t(K\rho+\rho K)+\delta t \sum_{k}L_{k}\rho L_{k}^{\dagger} \tag{24}\]
where operator \(K\) and the other operators \(L_{k}\) are related to each other since they have to respect the Kraus sum normalization condition,
\[K=-\frac{1}{2}\sum_{k>0}L_{k}^{\dagger}L_{k} \tag{25}\]
Figure 5: We consider a Choi matrix parametrized by a locally-purified density operator (LPDO) consisting of 4 qubits and a Kraus dimension (\(\nu=2\)). Thus we observe \(2^{4}-1=15\) intermediate eigenvalues, 1 eigenvalue that is needed for trace preservation, and the remaining 240 trivial eigenvalues. We compute the spectrum of the Choi matrix obtained after variationally learning the quantum process from measurement data on a random circuit ansatz of depth=10 on a 4-qubit system. This parametrization enables us to scale up process tomography tasks as discussed in [24].
Figure 4: (a) Plot of the scaled Frobenius error (\(\frac{||C_{estimate}-C_{actual}||_{F}^{2}}{||C_{actual}||_{F}^{2}}\)) in the estimation of the processed Choi matrix (C) versus sample size. The system considered is a qudit of dimension \(d=8\) and a channel of rank 6 composed of Hamiltonian evolution and 5 random Lindblad operators. The evolution is for \(p=dt=0.1\). (b) Plot of the scaled Frobenius error between the true generator (\(G_{actual}\)) and the estimated generator (\(G_{estimate}\)) in the estimation of the corresponding generators.
Finally we substitute \(K\) in the equation above and take the limit \(\delta t\to 0\): \(\rho(t+dt)=\rho(t)+dt\,\dot{\rho}\). We thus obtain the Lindblad master equation with \(\{A,B\}=AB+BA\)
\[\dot{\rho}(t)=-i[\mathcal{H},\rho(t)]+\sum_{k=1}^{N}\left(L_{k} \rho(t)L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho(t)\}\right) \tag{26}\]
### Near unitary channels
Starting with pure unitary evolution, we can write the quantum channels for small-time evolution as
\[U=e^{-i\Delta tH}\approx\mathbf{I}-i\Delta tH \tag{27}\] \[U_{ik}=\delta_{ik}-i\Delta tH_{ik}\quad\text{(matrix notation)}\] (28) \[U_{jl}^{*}=\delta_{jl}+i\Delta tH_{jl} \tag{29}\]
We have used \(U_{lj}^{\dagger}=U_{jl}^{*}\) and the Hermitian property of Hamiltonian \(H_{jl}^{\dagger}=H_{jl}\). This gives us the evolution of the input density matrix term for only a unitary channel as
\[\rho^{out}=\mathcal{E}(\rho^{in})=U\rho^{in}U^{\dagger} \tag{30}\] \[\implies\rho_{ij}^{out}=\sum_{kl}U_{ik}\rho_{kl}^{in}U_{lj}^{ \dagger}=\sum_{kl}U_{ik}U_{jl}^{*}\rho_{kl}^{in}\] (31) \[=(\delta_{ik}-i\Delta tH_{ik})(\delta_{jl}+i\Delta tH_{jl})\; \rho_{kl}^{in}\] (32) \[=(\delta_{ik}\delta_{jl}+i\Delta tH_{jl}\delta_{ik}-i\Delta tH_{ ik}\delta_{jl}+(\Delta t)^{2}H_{ik}H_{jl})\;\rho_{kl}^{in} \tag{33}\]
Thus for small-time evolution using the unitary channel, we obtain
\[\mathcal{E}_{ijkl}=(\delta_{ik}\delta_{jl}+i\Delta tH_{jl}\delta_{ik}-i \Delta tH_{ik}\delta_{jl}+(\Delta t)^{2}H_{ik}H_{jl})\]
For the generator of the channel, we obtain:
\[G_{ijkl}=\lim_{\Delta t\to 0}\frac{\mathcal{E}_{ijkl}-\delta_{ik}\delta_{jl}}{ \Delta t}=-iH_{ik}\delta_{jl}+iH_{jl}\delta_{ik} \tag{34}\]
We now move from perfect unitaries to near unitaries by adding extra terms. This analysis can be termed evolution with a _mixed-unitary channel_, simply understood as a convex combination of unitary channels. Note that, at the very least, such channels are unital, i.e. \(\mathcal{E}(\mathbf{I})=\mathbf{I}\) [25].
\[(1-p)U\rho U^{\dagger}+\sum_{\alpha}^{N}p_{\alpha}(L^{\alpha}U \rho U^{\dagger}(L^{\alpha})^{\dagger}) \tag{35}\]
The trace condition is satisfied by ensuring \(1-p+\sum_{\alpha}p_{\alpha}=1\). For example, depolarizing channel will correspond to uniform distribution i.e. \(p_{\alpha}=\frac{p}{N}\).
\[(1-p)(\delta_{ik}\delta_{jl}+i\Delta tH_{jl}\delta_{ik}-i\Delta tH _{ik}\delta_{jl} \tag{36}\] \[+(\Delta t)^{2}H_{ik}H_{jl})\;\rho_{kl}^{in}+\sum_{\alpha}p_{ \alpha}(L^{\alpha}U)_{ik}(L^{\alpha}U)_{jl}^{*}\rho_{kl} \tag{37}\]
If \(p=\lambda\Delta t\), \(p_{\alpha}=\lambda_{\alpha}\Delta t\) and keeping only the first order terms, we get
\[(1-\lambda\Delta t)(\delta_{ik}\delta_{jl}+i\Delta tH_{jl}\delta_ {ik}-i\Delta tH_{ik}\delta_{jl})\rho_{kl}^{in} \tag{38}\] \[+\Delta t\sum_{\alpha}\lambda_{\alpha}[(L^{\alpha})_{ik}(L^{ \alpha})_{jl}^{*}+\mathcal{O}(\Delta t)]\;\rho_{kl}\] (39) \[\approx[(1-\lambda\Delta t)\delta_{ik}\delta_{jl}+(-i\Delta tH_{ jl}\delta_{ik}+i\Delta tH_{ik}\delta_{jl})\] (40) \[+\Delta t\sum_{\alpha}\lambda_{\alpha}(L^{\alpha})_{ik}(L^{ \alpha})_{jl}^{*}]\;\rho_{kl} \tag{41}\]
We can thus define the generators as:
\[G_{ijkl}=\lim_{\Delta t\to 0}\frac{\mathcal{E}_{ijkl}-\delta_{ik} \delta_{jl}}{\Delta t} \tag{42}\] \[=-\lambda\delta_{ik}\delta_{jl}+(-iH_{jl}\delta_{ik}+iH_{ik}\delta _{jl})+\sum_{\alpha}\lambda_{\alpha}(L^{\alpha})_{ik}(L^{\alpha})_{jl}^{*} \tag{43}\]
There is an additional error arising from ignoring the higher-order terms, i.e. those of \(\mathcal{O}((\Delta t)^{2})\).
Now let's consider a channel of the following form:
\[\mathcal{E}(\rho)=(1-p)UM\rho M^{\dagger}U^{\dagger}+p\sum_{\alpha}L_{\alpha} \rho L_{\alpha}^{\dagger} \tag{44}\]
where \(L_{\alpha}\) are arbitrary Lindblad-like operators and \(M\) has been introduced to satisfy the trace normalization property of CPTP maps.
In order to satisfy the trace condition, we need:
\[(1-p)M^{\dagger}M+p\sum_{\alpha}L_{\alpha}^{\dagger}L_{\alpha}= \mathbf{I} \tag{45}\] \[M^{\dagger}M=\frac{\mathbf{I}-p\sum_{\alpha}L_{\alpha}^{\dagger}L_{ \alpha}}{1-p}\] (46) \[M=\left(\frac{\mathbf{I}-p\sum_{\alpha}L_{\alpha}^{\dagger}L_{\alpha} }{1-p}\right)^{\frac{1}{2}} \tag{47}\]
In order to consider the square root in the above equation, the numerator should be positive semi-definite, thus when implementing this procedure we ensure that we choose \(p\) and the Lindblad operators \(L_{\alpha}\) such that all the eigenvalues of \(\mathbf{I}-p\sum_{\alpha}L_{\alpha}^{\dagger}L_{\alpha}\) are non-negative.
Thus using this expression for \(M\) and looking at the action of the channel by expanding till the first order in \(\Delta t\) we obtain:
\[\rho^{out}=\mathcal{E}(\rho_{in}) \tag{48}\]
\[=(\mathbf{I}-i\Delta tH)(\mathbf{I}-\frac{1}{2}p\sum_{\alpha}L_{\alpha} ^{\dagger}L_{\alpha})\rho_{in}(\mathbf{I}-\frac{1}{2}p\sum_{\alpha}L_{\alpha}^{ \dagger}L_{\alpha})\] \[\times(\mathbf{I}+i\Delta tH)+p\sum_{\alpha}L_{\alpha}\rho_{in}L_{ \alpha}^{\dagger}\]
We obtain \((\mathbf{I}-i\Delta tH)(\mathbf{I}+pK)\rho_{in}(\mathbf{I}+pK)(\mathbf{I}+i\Delta tH)+p\sum_{ \alpha}L_{\alpha}\rho_{in}L_{\alpha}^{\dagger}\), and then equate it to
\[=(\mathbf{I}-i\Delta tH+\lambda\Delta tK)\rho_{in}(\mathbf{I}+\lambda\Delta t K +i\Delta tH) \tag{49}\] \[+\lambda\Delta t\sum_{\alpha}L_{\alpha}\rho_{in}L_{\alpha}^{ \dagger}. \tag{50}\]
In the above derivation we used \((\mathbf{I}-p\sum_{\alpha}L_{\alpha}^{\dagger}L_{\alpha})^{\frac{1}{2}}\approx\mathbf{I}- \frac{1}{2}p\sum_{\alpha}L_{\alpha}^{\dagger}L_{\alpha}\) since \(p\) is of the order \(\Delta t\) and we set \(K=-\frac{1}{2}\sum_{\alpha}L_{\alpha}^{\dagger}L_{\alpha}\)
Up to first order in \(\Delta t\) we obtain:
\[\rho_{ij}^{out}=[\delta_{ik}\delta_{jl}-i\Delta tH_{ik}\delta_{jl }+i\Delta t\delta_{ik}H_{jl}+\lambda\Delta t(K_{ik}\delta_{jl}\] \[+\delta_{ik}K_{jl})+\lambda\Delta t\sum_{\alpha}(L^{\alpha})_{ik} (L^{\alpha})_{jl}^{*}]\rho_{kl}^{in}\]
and the generator is given by:
\[G_{ijkl}=\lim_{\Delta t\to 0}\frac{\mathcal{E}_{ijkl}-\delta_{ik} \delta_{jl}}{\Delta t}=(-iH_{ik}\delta_{jl}+i\delta_{ik}H_{jl})\] \[+\lambda(K_{ik}\delta_{jl}+\delta_{ik}K_{jl})+\lambda\sum_{ \alpha}(L^{\alpha})_{ik}(L^{\alpha})_{jl}^{*}\]
The above equation for the generator is similar to the expression in the Lindblad master equation; the first term corresponds to the commutator term with Hamiltonian, the second term denotes the anti-commutator term with \(K\) and the last term corresponds to the action of the Lindblad operators. To estimate the generator from the measurement results, we note that for the channel:
\[\rho_{out}=\mathcal{E}(\rho_{in})=\sum_{\alpha}M_{\alpha}\rho_{in}M_{\alpha}^{ \dagger} \tag{51}\]
we have the following:
\[\mathcal{E}_{ijkl}=\sum_{\alpha}M_{ik}^{\alpha}M_{jl}^{\alpha^{*}} \tag{52}\]
### CP and TP projections
The projection, with respect to the Frobenius norm, of a matrix \(X\) onto the set of matrices representing trace-preserving maps is the solution to the following optimization problem:
\[\text{Proj}_{TP}[X]=\arg\min_{X^{\prime}}||X-X^{{}^{\prime}}||_{2}^{2} \tag{53}\]
\[\text{s.t.}\quad\mathrm{Tr}_{s}(X^{\prime})=\frac{1}{d}\mathbf{I}_{d} \tag{54}\]
The unique solution to the above optimization problem is given by the following closed form expression:
\[\text{Proj}_{TP}[X]=X+\frac{1}{d}\mathbf{I}_{d}\otimes(\frac{1}{d}\mathbf{I}_{d}-tr_{ s}(X)) \tag{55}\]
The projection of a matrix \(X\) onto the set of positive-semidefinite matrices is the solution to the following optimization problem:
\[\text{Proj}_{CP}[X]=\arg\min_{X^{\prime}}||X-X^{{}^{\prime}}||_{2}^{2} \tag{56}\]
\[\text{s.t.}\quad X^{\prime}\succcurlyeq 0 \tag{57}\]
The condition of positive semidefiniteness is that all eigenvalues be greater than or equal to zero. An obvious method, therefore, for enforcing the positive semidefiniteness of a matrix is to set all negative eigenvalues to zero. This turns out to be the unique solution to the above optimization problem.
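Both projections are a few lines of numpy. The sketch below uses our own helper names and the input \(\otimes\) output index ordering of Eq. (2), so that trace preservation reads \(\mathrm{Tr}_{out}(X)=\mathbf{I}_{d}/d\) for a trace-normalized Choi matrix; as one simple heuristic, which is not necessarily the authors' procedure, it alternates the two projections to turn a noisy Hermitian matrix into an approximately valid Choi state.

```python
import numpy as np

def proj_CP(X):
    """Project a Hermitian matrix onto the PSD cone by zeroing its negative eigenvalues."""
    w, V = np.linalg.eigh((X + X.conj().T) / 2)
    return (V * np.clip(w, 0, None)) @ V.conj().T

def tr_out(X, d):
    """Partial trace over the output factor (second slot in the input (x) output ordering)."""
    return X.reshape(d, d, d, d).trace(axis1=1, axis2=3)

def proj_TP(X, d):
    """Frobenius projection onto the affine set {Tr_out(X) = I/d} (trace-normalized Choi state)."""
    deficit = tr_out(X, d) - np.eye(d) / d
    return X - np.kron(deficit, np.eye(d)) / d

rng = np.random.default_rng(5)
d = 3
omega = np.eye(d).reshape(-1) / np.sqrt(d)
C_true = np.outer(omega, omega)                    # Choi state of the identity channel
noise = rng.normal(scale=0.05, size=(d * d, d * d))
C = C_true + (noise + noise.T) / 2                 # noisy Hermitian estimate

for _ in range(50):                                # simple alternating projections
    C = proj_TP(proj_CP(C), d)

# TP holds exactly after the last step; the residual negative eigenvalue shrinks with more sweeps
print(round(np.linalg.eigvalsh(C).min(), 6),
      round(np.linalg.norm(tr_out(C, d) - np.eye(d) / d), 6))
```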
### Matrix perturbation and Davis Kahan Theorem
We bring in ideas from the matrix perturbation theory in order to obtain certain probabilistic bounds. As usual, the estimated channel is given by:
\[\hat{\Phi}_{ik,jl}=\Phi_{ik,jl}+X_{ik,jl} \tag{58}\]
\[\hat{\Phi}_{I,J}=\Phi_{I,J}+X_{I,J} \tag{59}\]
The noise term \(X_{I,J}\) can be viewed as a perturbation to the original matrix. Let us consider the eigendecomposition of the original channel to be:
\[\Phi_{I,J}=E_{0}D_{0}E_{0}^{\dagger}+E_{1}D_{1}E_{1}^{\dagger} \tag{60}\]
and the eigendecomposition of the estimated channel to be:
\[\hat{\Phi}_{I,J}=\tilde{E_{0}}\tilde{D_{0}}\tilde{E_{0}}^{\dagger}+\tilde{E_{1} }\tilde{D_{1}}\tilde{E_{1}}^{\dagger} \tag{61}\]
Here \(E_{0}\) consists of an orthonormal basis that spans the eigenspace corresponding to \(D_{0}\) and \(E_{1}\) spans the orthogonal complement. Thus \(E_{0}E_{0}^{\dagger}+E_{1}E_{1}^{\dagger}=\mathbf{I}\) and similarly \(\tilde{E_{0}}\tilde{E_{0}}^{\dagger}+\tilde{E_{1}}\tilde{E_{1}}^{\dagger}=\mathbf{I}\). We would like to figure out how closely the subspace spanned by \(\tilde{E_{0}}\) approximates the subspace spanned by \(E_{0}\) and thus minimize \(||\tilde{E_{0}}\tilde{E_{0}}^{\dagger}-E_{0}E_{0}^{\dagger}||_{F}^{2}\) which is equivalent to minimizing \(||\tilde{E_{1}}^{\dagger}E_{0}||_{F}^{2}\). Using the Davis-Kahan theorem we obtain:
\[||\tilde{E_{1}}^{\dagger}E_{0}||_{F}^{2}\leq\frac{||\tilde{E_{1}}^{\dagger}XE_{ 0}||_{F}^{2}}{g^{2}} \tag{62}\]
Here \(g\) denotes a number such that if the eigenvalues corresponding to \(E_{0}\) are contained in the interval \([a,b]\), then the eigenvalues corresponding to \(\tilde{E_{1}}\) are excluded from the interval \((a-g,b+g)\). Note that \(||\tilde{E_{1}}^{\dagger}XE_{0}||_{F}^{2}\leq||X||_{F}^{2}\). Thus, if we want to bound \(||\tilde{E_{1}}^{\dagger}E_{0}||_{F}^{2}\) by \(\epsilon\), it suffices to bound \(||X||_{F}^{2}\) by \(g^{2}\epsilon\). Applying Markov's inequality to the random variable \(||X||_{F}^{2}\) we obtain:
\[\text{Pr}(||X||_{F}^{2}\leq g^{2}\epsilon)>1-\frac{\mathbb{E}[||X||_{F}^{2}]}{g^{2}\epsilon} \tag{63}\]
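A quick numerical sanity check of this type of bound (our own toy example, not from the paper): take a rank-\(k\) matrix with spectral gap \(g\), perturb it with a small symmetric noise matrix \(X\), and compare \(||\tilde{E}_{1}^{\dagger}E_{0}||_{F}\) against \(||X||_{F}/g\).

```python
import numpy as np

rng = np.random.default_rng(6)
D, k, g = 40, 3, 1.0                            # dimension, top-subspace size, clean spectral gap

Phi = np.diag([g] * k + [0.0] * (D - k))        # clean matrix: k eigenvalues at g, the rest at 0
X = rng.normal(scale=0.01, size=(D, D))
X = (X + X.T) / 2                               # small symmetric perturbation
Phi_hat = Phi + X

E0 = np.linalg.eigh(Phi)[1][:, -k:]             # top-k eigenvectors of the clean matrix
E1_tilde = np.linalg.eigh(Phi_hat)[1][:, :-k]   # complementary eigenvectors of the perturbed matrix

lhs = np.linalg.norm(E1_tilde.T @ E0)           # ||E1_tilde^dag E0||_F
rhs = np.linalg.norm(X) / g                     # Davis-Kahan-style bound ||X||_F / g
print(round(lhs, 4), "<=", round(rhs, 4))       # the bound should comfortably hold for small noise
```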
2309.04053 | Materials Design for Hypersonics | Hypersonic vehicles must withstand extreme conditions during flights that
exceed five times the speed of sound. These systems have the potential to
facilitate rapid access to space, bolster defense capabilities, and create a
new paradigm for transcontinental earth-to-earth travel. However, extreme
aerothermal environments create significant challenges for vehicle materials
and structures. This work addresses the critical need to develop resilient
refractory alloys, composites, and ceramics. We will highlight key design
principles for critical vehicle areas such as primary structures, thermal
protection, and propulsion systems; the role of theory and computation; and
strategies for advancing laboratory-scale materials to flight-ready components. | Adam B. Peters, Dajie Zhang, Samuel Chen, Catherine Ott, Corey Oses, Stefano Curtarolo, Ian McCue, Tresa Pollock, Suhas Eswarappa Prameela | 2023-09-08T00:52:29Z | http://arxiv.org/abs/2309.04053v2 | # Materials Design for Hypersonics
###### Abstract
Hypersonic vehicles must withstand extreme conditions during flights that exceed five times the speed of sound. This class of vehicles has the potential to facilitate rapid access to space, bolster defense capabilities, and create a new paradigm for transcontinental earth-to-earth travel. However, the extreme aerothermal environments resulting from high Mach number trajectories create significant challenges for vehicle materials and structures. As hypersonic systems advance, there is a critical need to develop novel materials that are resilient to a combination of thermal and mechanical loads, aggressive oxidizing environments, and rapid heating rates. This work aims to provide a succinct discussion of emerging design strategies for refractory alloys, composites, and ceramics used for hypersonic vehicles. We will highlight key design principles for critical vehicle areas such as primary structures, thermal protection, and propulsion systems; the role of theory and computation in elucidating structure-property-processing relationships; and strategies for advancing laboratory-scale materials to flight-ready components such as aerostructures and thermal protection systems.
keywords: Hypersonics \(|\) Materials Design \(|\) Extreme Environments \(|\) Thermal Protection Systems \(|\) High Entropy Alloys \(|\) Ultra High-Temperature Ceramics
## 1 Introduction - Background to Hypersonics
In the last decade, there has been a resurgence in hypersonic vehicle development driven by the desire to increase flight performance and reusability. Hypersonics refers to flight and aerodynamic phenomena that occur above Mach 5 (5 times the speed of sound). To frame hypersonic speeds, a non-stop flight from Los Angeles to Tokyo aboard a commercial airliner (Mach 0.8) takes roughly twelve hours, whereas onboard an emerging Mach 9 hypersonic vehicle it takes one. Although the first hypersonic flight was achieved \(\sim\)70 years ago, there has been increasing interest from a broader audience due to modern engineering advances that are poised to revolutionize defensive capabilities, sub-orbital travel, and rapid access to space [1; 2; 3] (Figure 1). New vehicle systems with ever-increasing capabilities and Mach numbers are being developed, including: boost-glide systems, reusable aircraft, space-launch vehicles, and missile technologies [1]. However, these remarkable leaps in Mach number and performance during atmospheric flight come with an array of formidable challenges in the domain of materials multi-property optimization, simulation, and design [4]. Vehicles are purpose-built with bespoke materials to operate at vastly different Mach numbers (5-25+), altitudes (spanning sea level to orbit), hypersonic flight times (ranging from seconds to hours), and trajectories.
When vehicle speeds increase past supersonic conditions and into the hypersonic regime, the physics of external aerodynamic flows become dominated by aerothermal heating rather than aerodynamic forces (Figure 1a). Aerodynamic compression and friction create high-enthalpy gas dynamics that impart additional physical phenomena from the energy exchange of a superheated atmosphere. This superheated atmosphere results in: high heat fluxes (3-7 orders of magnitude greater than the 1.4 kW/m\({}^{2}\) from the sun); extreme thermal gradients (changing from -170\({}^{\circ}\)C to 3000\({}^{\circ}\)C across distances of order 1 cm); high stagnation pressures (\(\sim\)10\({}^{5}\)-10\({}^{7}\) Pascals); and destructive plasma from gas ionization, which accelerates materials oxidation [1; 5; 6]. As operational Mach numbers increase, these formidable phenomena must be accommodated by materials in the principal subsystems of a hypersonic vehicle: aeroshell/primary structure, leading edges, control surfaces, acreage thermal protection, propulsion, and guidance systems. Developing hypersonic materials has become the focus of cutting-edge research and is presently a rate-limiting step for the resilience of structures during operation in extreme environments.
Recent work reports on materials development for the propulsion system [7; 8; 9], thermoelectric generators [10], radomes [11], structural materials [12], and thermal protection systems [13; 12]. Materials for hypersonics can be broadly classified into three types: refractory metals, composites, and ceramics. Each material offers distinct tradeoffs for a given sub-system and environmental application. Common metals and alloys in hypersonics, such as aluminum and nickel-base superalloys, are favorable for primary structural components and moderate thermal loads, while refractory metals, with higher operating temperatures, are employed for structures that see more demanding operating conditions (see Box 1). Refractory ceramics combine high-temperature capability with moderate thermal conductivity, but lack monolithic thermal shock resistance and tend to be used as a thermal barrier coating or thin structural materials [14]. By contrast, fiber-reinforced composite materials, such as carbon/carbon or ultra-high temperature ceramic matrix composites, incorporate carbon or ceramic fibers in dense matrices to improve high-temperature strength-to-weight ratios beyond metals [5], [14; 15; 16].
Advancement of these materials from laboratory-scale studies to flight is hindered by a lack of standardization in materials processing, limited reproducibility of materials data, and difficulties in testing representative thermal, oxidative, and mechanical flight conditions. High-fidelity models have historically been used to design flight trajectories to bound materials selection criteria. New materials design tools with integrated computation and predictive frameworks are emerging that can aid the design of complex materials and expand vehicle performance and reliability. We will explore how refractory metals, composites, and ceramics are designed and selected for hypersonic applications according to vehicle-specific design criteria.
## 2 Hypersonic Vehicle Configurations and Design Requirements
Material requirements for hypersonic flight are sensitively coupled to the vehicle design and flight envelope, which impose two principal environmental challenges: (1) thermal loads that depend on both geometry and location on the vehicle; and (2) strongly oxidizing conditions that drive changes in both material properties (oxidation) and geometry (ablation). As a result, aerostructures, wing leading edges, acreage thermal protection systems, and propulsion systems necessitate vastly different materials to accommodate these diverse thermo-chemo-mechanical loads. Depending on the flight conditions (Mach and altitude), flight time at a given Mach number and altitude (known as time on condition), and location on the vehicle, qualified materials may not exist for the desired application [12].
Aerothermal heating arises as the hypersonic vehicle pierces through the atmosphere. Fundamentally, the adiabatic dissipation of a vehicle's kinetic energy into the viscous gas environment is responsible for the extreme thermal conditions of flight [19]. In the vehicle's shock layer (the volume of gas between the body and the shock
Figure 1: (a) Computational fluid dynamics (CFD) simulation of the X-43 vehicle at a Mach 7 test condition with the engine operating. The solution includes internal (air-breathing scramjet engine) and external flow fields, including the interaction between the engine exhaust and vehicle aerodynamics. The image illustrates surface heat transfer on the vehicle (red is the highest heating) and flow field contours at the local Mach number. Structural components and the associated materials used for the design of the X-43 hypersonic vehicle are indicated: (b) alumina borosilicate insulation tile with an emissive coating used for acreage thermal protection [17]; (c) nose and leading-edge design integrating carbon composites and refractory tungsten alloy SD 18; (d) sharp leading edge cross-section showing the carbon composite with a refractory Ir coating [5]; (e) airframe of the vehicle composed of steel/aluminum skin and Al/Ti bulkheads. (f-n) Timeline of hypersonic vehicle development spanning hypersonic airplanes, space access, re-entry, boost-glide vehicles, and cruise missile applications, where colors indicate the hypersonic vehicle configuration: (f) the first to reach hypersonic speeds, Project Bumper-WAC ”Without Any Control” (1949), (g) the reusable X-15 research aircraft (1959), (h) Apollo re-entry capsules (1961-1972), (i) Space Shuttle (1972-2011), (j) NASA X-43 airplane (2001), (k) the HTV-2 boost-glide vehicle (2010-2012), (l) Boeing X-51 scramjet (2010-2013), (m) SpaceX Starship (slated for hypersonic re-entry in 2023), (n) a notional future hypersonic airplane. (Image sources: NASA (a,b,e,f,h,i,j), (c) – adapted from [18], (d) – adapted from [5], U.S. Air Force (g,l), DARPA (k), Creative Commons - Official SpaceX (m), U.S. Govt. images not subject to copyright)
wave), the stagnation temperature increases in proportion to the cube of the Mach number and the square root of the atmospheric density, and can reach values as high as 10,000\({}^{\circ}\)C [20]. Although much of the energy is swept away with the surrounding gas flow around the vehicle, energy transfer by convective or radiative heating generates high heat fluxes that necessitate materials capable of resisting high temperatures [7].
Material requirements are further exacerbated due to the dissociation of O\({}_{2}\) and N\({}_{2}\) into free radicals at gas-phase temperatures above 3000\({}^{\circ}\)C (typically at speeds greater than Mach 8). These conditions lead to highly reactive surface chemical interactions, causing materials degradation, microstructural evolution, phase formation, and property changes during flight [6; 12; 21]. Critical challenges for materials designers remain in both the leading-edge surfaces from direct aerothermal exposure (nose, cowl lips, and control surfaces, Figure 1a), and in the propulsion flow path where radiative cooling is not viable [12]. In the following section, we will highlight how the characteristics of the primary aerostructures, thermal protection systems, and propulsion systems influence material design and selection.
### Primary Aerostructure
Lightweight primary structures (e.g., aeroshells and airframes) may be formed into either lifting bodies or ballistic structural elements, where the leading-edge profile and flight trajectory govern the aerothermal load during flight. Unlike the traditional atmospheric re-entry vehicle designs in Figure 1h - which employ blunt features to increase drag and push the shock region away from the structure and transfer energy into the air - hypersonic vehicles require slender primary structures and sharp control surfaces to reduce drag and enable stable long-distance accuracy. However, the heating rate is inversely proportional to the square root of the tip radius and must be accommodated through various energy dissipation mechanisms.
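These scalings (heat flux growing with the cube of velocity and the square root of gas density, and inversely with the square root of the edge radius) can be captured in a simple relative-heating estimate. The sketch below is purely illustrative: the proportionality constant is deliberately omitted, so only ratios relative to a reference condition are computed.

```python
# relative stagnation-point heating scaling, q ~ sqrt(rho / r_n) * V**3 (proportionality constant omitted)
def relative_heating(rho, r_n, V, rho0=1.0, r0=1.0, V0=1.0):
    """Heat-flux ratio q/q0 between a condition (rho, r_n, V) and a reference (rho0, r0, V0)."""
    return ((rho / rho0) ** 0.5) * ((r0 / r_n) ** 0.5) * (V / V0) ** 3

# halving the nose radius at fixed speed and altitude raises the stagnation heating by ~sqrt(2)
print(round(relative_heating(rho=1.0, r_n=0.5, V=1.0), 3))   # ~1.414
# doubling speed at fixed altitude and radius raises it by a factor of 8
print(round(relative_heating(rho=1.0, r_n=1.0, V=2.0), 3))   # 8.0
```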
In modern vehicles, aeroshells are designed using solid or sandwich constructions with honeycomb, lattice, corrugated, or foam cores to minimize weight while maintaining rigidity and enabling advanced passive cooling strategies [5; 16; 18; 22; 23]. Robust carbon and ceramic composites remain materials of choice for modern leading-edge structures [5; 15; 16; 18; 22; 23], and enable peak temperature reduction through passive cooling by employing favorable composite weave patterns or thermally conductive materials to more effectively transport heat to the colder regions of the aeroshell main body [16; 22]. Such designs are commonly referred to as "hot structures" (Figure 1k, l) as compared to the insulated "cold structure" design adopted by the Space Shuttles and many other types of reentry vehicles or bodies that use thick outer surface thermal insulation (Figure 1i, m).
### Thermal Protection System
Thermal protection materials and system design has become an engineering field of its own because materials with unique property combinations can enable new flight capabilities. Thermal protection systems (TPS) are employed for thermal regulation of leading edges, nose, and propulsion features that experience the greatest heat flux, as well as acreage locations that protect the aeroshell's fuselage and control surfaces (i.e., rudders, elevons; Figure 1a). In modern vehicles, the aerostructure may have an integrated TPS for optimal heat transfer and dissipation. TPS materials are selected to best accommodate the local aerothermodynamic criteria according to their combinations of high-temperature strength, thermal conductivity, heat capacity, melting/oxidation temperature, and emissivity. Broadly, there are three fundamental types of TPS used to increase vehicle resilience to aerothermal heating [5; 12], [14; 15; 16].
Passive thermal protection systems are ideal for moderate transient heat flux scenarios, and may be composed of: (i) insulated cold structures (e.g., Space Shuttle tiles); (ii) heat sink surface structures that both absorb and radiate energy (e.g., skin of the X-15); or (iii) emissive "hot structures" that lower the thermal load through both environmental radiation and conduction into the vehicle (e.g., nose of the X-51). Examples of these appear in Figure 1g, i. Semi-passive systems are implemented for high heat fluxes that persist for long durations, and encompass: (i) reusable heat pipes that transfer and radiate thermal energy via evaporative cooling and capillary wicking (e.g., liquid lithium or potassium [24]); or (ii) single-use ablative materials that absorb energy via pyrolysis or charring of a reinforced polymer/resin (used in the first re-entry capsules, Figure 1h).
Active thermal protection systems, using the forced flow of a liquid or vapor, are employed for the most extreme heat fluxes and extended flight durations [25; 26; 27; 28; 29]. These systems include: (i) convective cooling architectures, which transfer heat into a working fluid (e.g., Shuttle main engine), (ii) film cooling, whereby a fluid is injected over a large area into the flow to form an insulating blanket (e.g., X-43 propulsion system), or (iii) transpiration cooling, where a fluid (e.g., H\({}_{2}\)O or He) is injected into the hot gas flow through porous structures. Examples of these appear in Figure 1a, i, j.
Nose and wing leading edges that are subjected to intense heat loading may employ heat pipes or actively cooled structures for thermal regulation [12; 30]. By contrast, acreage locations -- large fuselage regions on the vehicle -- experience lower heat fluxes and have historically been passively cooled using materials with low thermal conductivities and thermal expansion coefficients (see Box 1). Although heat sinks and thermal insulation are attractive from a risk management perspective, they suffer from excessive mass and low fracture toughness. Ablative materials have been used to great effect for shuttle re-entry but do not favor reusability. In contrast, hot structures, heat pipes, and active thermal management systems (enabled by additive manufacturing) have dominated research efforts for new materials and system designs [31; 32; 33; 34; 35; 36; 37; 38].
Each TPS will have a unique architecture and thermal profile, which results in its own set of material property requirements. An example of simulated aerodynamic heating for a passive (hot structure), semi-passive (heat pipe), and active (transpiration cooling) cooled leading edge is shown in Figure 2a, b at one flight condition. In this example, each leading-edge experiences the same heat flux and stagnation temperature because these are dictated by the component geometry and flight condition. However, the resulting temperature profile depends on the TPS mechanism. The passive leading-edge exhibits the highest peak temperature and thermal gradient because it relies solely on intrinsic material properties (conductivity, heat capacity, and emissivity). The semi-passive leading edge exhibits a small thermal gradient (but a similar peak temperature to the passive structure) because heat pipes increase thermal conductivity by 1-3 orders of magnitude [39]. Lastly, the actively cooled leading edge has the lowest peak temperature because transpiration reduces the incident heat flux [40; 41; 42; 43].
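A useful back-of-the-envelope check on passive "hot structure" designs is the radiative-equilibrium surface temperature, at which re-radiation balances the incident flux. The sketch below uses illustrative numbers only (the heat flux and emissivity are assumed values, not data from the paper) and also shows how halving the incident flux, e.g., via transpiration as in Figure 2b, lowers that equilibrium temperature.

```python
SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_equilibrium_T(q_in, emissivity=0.85):
    """Surface temperature at which re-radiation balances the incident heat flux (passive hot-structure limit)."""
    return (q_in / (emissivity * SIGMA)) ** 0.25

q = 1.0e6                    # illustrative leading-edge heat flux, W/m^2 (assumed value)
print(round(radiative_equilibrium_T(q)), "K  (passive)")
print(round(radiative_equilibrium_T(q / 2)), "K  (transpiration halves the incident flux)")
```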
### Air-Breathing Propulsion Systems
Similar to environmentally facing thermal protection systems, existing approaches to hypersonic propulsion systems can be significantly improved with refractory materials capable of operating in stressing aerothermal oxidizing and reducing environments [44]. Currently, the ram/scramjet engine is the standard form of propulsion for air-breathing hypersonic vehicles. Unlike rocket-propelled hypersonic vehicles (e.g., X-15 and the Space Shuttle), the oxidizer for the propellant is supplied by the surrounding air and mixed within a combustor using onboard fuel [45]. More advanced combined cycle multi-mode propulsion systems under development include: rocket-based combined cycle (RBCC); turbine-based combined cycle (TBCC); and turbo rocket combined cycle (TRCC) systems that are capable of transitioning between propulsion modes (such as rocket propulsion during the initial ascent phase and then transitioning to air-breathing scramjet engines at hypersonic speeds, Figure 1a).
Specific components in these propulsion systems, including inlet ducts, nozzles, and combustors, experience extreme temperature and mechanical stress without the ability to readily dissipate heat through radiative cooling. Propulsion system materials include a combination of refractory alloys, CMC's, C/C, and metal matrix composites (MMC) [46]. Textile-based CMCs may be formed into complex structures for internal coolant flow and mechanical stiffening, but their surface temperatures are limited to \(\sim\)1600\({}^{\circ}\)C. For velocities near Mach 6, passively cooled refractory materials can be used to operate at temperatures near that of the propulsive flow, but active cooling is required above Mach 6. Active cooling methods require materials that can accommodate high temperatures, pressure, and temperature gradients between the cooled fuel and combustion chamber [12].
Materials lifetimes in this environment are largely controlled by oxidation, which is highly dependent on flow conditions when water vapor is present in the propulsion flow - this oxidation mechanism is quite different from the ionized flow in leading-edge applications. Oxidation is exacerbated by thermal gradient-induced microcracking, and limitations in high-fidelity modeling make current materials property data insufficient for lifetime prediction [18; 12; 22]. These futuristic propulsion systems could enable low-cost and reusable air-breathing hypersonic vehicles for manned flights and civilian transportation, but materials development is necessary. More efficient materials will require accurate prediction of engine thermal balance, heat loads, shock conditions, and the oxidative character of the burning atmosphere [44].
## 3 Materials-class-specific considerations and design criteria
Materials selection is typically applied after structural components have been designed and trajectories have been determined [49]. Initial material screening can be carried out using thermo-mechanical simulations. For a given set of material properties, conditions (heat flux and stagnation temperature) are applied across a component to calculate the resulting thermal profile, which is then used as boundary conditions to calculate thermal stresses. This screening is useful in determining whether the peak temperature exceeds a material's melting point and/or the thermal stress exceeds the material's flow stress at the given temperature.
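A minimal sketch of this screening logic is given below. The property values are rough placeholders for illustration only (not vetted design data), the tip temperature is taken as the radiative-equilibrium value, and the thermal stress is approximated by a constrained-expansion estimate \(\sigma\approx E\alpha\Delta T/(1-\nu)\) over an assumed gradient length; real screening would use temperature-dependent properties and the thermo-mechanical simulations described above.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Placeholder property sets for illustration only -- substitute vetted, temperature-dependent data.
materials = {
    #           k [W/m K]  E [GPa]  alpha [1/K]   nu     T_max [K]  sigma_y [MPa]
    "TZM": dict(k=120,     E=320,   alpha=5.5e-6, nu=0.32, T_max=2890, sigma_y=400),
    "SiC": dict(k=80,      E=410,   alpha=4.5e-6, nu=0.16, T_max=3000, sigma_y=300),
    "C/C": dict(k=150,     E=95,    alpha=1.0e-6, nu=0.25, T_max=3800, sigma_y=150),
}

q_tip, emissivity, L = 2.0e6, 0.85, 0.01   # assumed tip heat flux [W/m^2], emissivity, gradient length [m]

for name, m in materials.items():
    T_tip = (q_tip / (emissivity * SIGMA)) ** 0.25           # radiative-equilibrium tip temperature [K]
    dT = q_tip * L / m["k"]                                    # crude 1-D conduction gradient over length L [K]
    sigma = m["E"] * 1e9 * m["alpha"] * dT / (1 - m["nu"])     # constrained thermal-expansion stress [Pa]
    ok = (T_tip < m["T_max"]) and (sigma < m["sigma_y"] * 1e6)
    print(f"{name:4s}  T_tip={T_tip:5.0f} K  dT={dT:5.0f} K  "
          f"sigma={sigma / 1e6:6.0f} MPa  {'pass' if ok else 'fail'}")
```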
An example of this screening is illustrated in Figure 2c-e for a sharp passive leading edge. As a hypothetical evaluation, steady-state thermal simulations were carried out on high-temperature structural materials for over 300 Mach and altitude combinations (i.e., assuming the material was exposed to these conditions indefinitely and allowed to equilibrate). For each simulation, following the work of [50], we extracted the peak temperature and estimated the tip stress from the thermal gradient in this region. The incident heat flux changes substantially around the tip, Figure 2b, causing a steep thermal gradient (as large
Figure 2: **Steady-state finite element (FE) simulations of aerodynamic heating of a leading edge, carried out for a range of structural materials and hypothetical hypersonic flight conditions.** (a) Illustration of the sharp leading-edge geometry used in these simulations with the following dimensions: 2.5 mm tip radius, 3-degree wedge angle, 5 cm span, 10 cm cord length. (b) Thermal profiles across a 2D TZM leading-edge, considering passive thermal management, a semi-passive Li heat pipe operating at 1500 K, and active cooling via transpiration where the incident heat flux is reduced by a factor of 2. (c) Ashby map highlighting operational tradeoffs for metal alloy, UHTC, refractory alloy, and carbon-base material classes. (d) Ashby-style plot of the FE simulation results from passive leading edges, where: the y-axis is the normalized mechanical stress resulting from a thermal expansion gradient, and the x-axis is the normalized peak temperature at the tip where the heat flux is highest. Only 4 materials are not viable from a temperature standpoint (ignoring oxidation), whereas 8 are not viable due to the expansion stress exceeding the yield strength of the material at that temperature. Constraints via oxidation will decrease the overall maximum operating temperature; there is limited availability on oxidation kinetics for these materials. (e) The culmination of (d) for different flight conditions is shown as a hypothetical hypersonic flight corridor, where each line represents the “survivability limit” for a monolithic material with this specific (sharp) wedge geometry; known flight conditions of the X-43A, X-15, and typical space re-entry are indicated for reference.
## 4 Conclusion
problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of finding the optimal solution of the problem of the problem of finding the optimal solution of the problem of finding the optimal
Sharp leading-edge tips can develop steep thermal gradients (such as 1000 K across 4 mm), which generate stresses of order 100 MPa due to non-uniform thermal expansion. If these stresses exceed the material's flow stress at the peak temperature, the tip will deform and affect the boundary layer, potentially causing a laminar-to-turbulent transition.
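As a rough back-of-the-envelope illustration of why such gradients matter, the stress in a fully constrained element scales as \(\sigma\approx E\alpha\Delta T/(1-\nu)\). The sketch below evaluates this bound with generic placeholder properties (not values for any specific alloy or ceramic); the fully constrained result is an upper bound of a few GPa, and even a small fraction of it reaches the ~100 MPa level quoted above.

```python
# Upper-bound estimate of thermal stress in a constrained, non-uniformly heated tip.
# Property values are illustrative placeholders for a generic refractory material.
E = 300e9       # Young's modulus [Pa]
alpha = 7e-6    # coefficient of thermal expansion [1/K]
nu = 0.2        # Poisson's ratio
dT = 1000.0     # temperature difference across the tip [K]

sigma_bound = E * alpha * dT / (1.0 - nu)   # fully constrained (biaxial) thermal stress [Pa]
print(f"fully constrained bound ~ {sigma_bound / 1e9:.1f} GPa")
# Even a few percent of this bound, for a partially constrained tip, is at the
# ~100 MPa level quoted in the text.
```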
These mesoscale models provide critical insights into guiding what materials should be used in the various structures (mentioned in the previous section) for a given set of flight conditions. For instance, Figure 2 highlights that traditional alloys (e.g., Ti-base, Ni-base and steels) have limited use as leading edges due to low melting points, high-thermal expansion coefficients and moderate thermal conductivities (Figure 2d). Monolithic ceramics suffer from high thermal stresses, but their strengths can be modified through secondary phases (see subsection 3.3) or employed as coatings. Refractory metals - owing to their high strength at temperature, thermal transport properties, and low thermal expansion coefficients - are ideal, but oxidation kinetics will constrain their maximum service temperature (see subsection 3.1).
### Metallic Materials for Hypersonics
Metallic materials are ubiquitously used in hypersonic vehicles - as nose and wing leading edges, control surfaces, and engine inlets - owing to their damage tolerance and manufacturability. These components need to withstand extremely high heat fluxes and thermal strains, which demand materials with high melting points that maintain strength at high temperatures.
Pure elements with high melting points (W, Re, Ta, Mo, Nb, V, Cr, Ti, Ni) form the basis of fielded high-temperature alloys. For instance, Ti was employed in hot aeroshell structures in the SR-71 [5; 15], the nose section of the X-43 contained a SD 180 tungsten heavy alloy [51], a Haynes Ni-base alloy was used in the Mach 7 X-43 variant [52], and both MoRe and Ni-base alloys have been tested for heat pipe structures [11]. Other refractory metals, such as the Ta-W-Hf alloys T111 and T222 (see Box 1), exhibit favorable creep resistance and are ideal for the extended containment of heated liquid alkali-metal working fluids (1000-1300\({}^{\circ}\)C) for heat pipe type leading edge designs [53]. T111/Li and niobium-based C-103/Na designs have been assessed to satisfy the requirements for Mach 8 and Mach 10 flights, respectively [24].
However, melting point alone is not a singularly meaningful parameter for structural design. For comparison, carbon-carbon will not melt at 1 atm but will sublime at 3727\({}^{\circ}\)C and oxidize to CO\({}_{(g)}\) at temperatures starting as low as \(\sim\)370\({}^{\circ}\)C. As in all flight applications, density, oxidation resistance and the ability to tolerate thermomechanical loading are important considerations (Figure 2c). Nickel alloys are capable of operating at high fractions of their melting points due to both coherent precipitation strengthening and self-passivation that persist to very high temperatures. Meanwhile, W and Mo can maintain over 50% of their Young's modulus at 2000\({}^{\circ}\)C but oxidize well below 1000\({}^{\circ}\)C, with more rapid oxidation at increasing temperature due to the high vapor pressures of their trioxides [5; 13; 14]. The combined property advantages in Ni- and Co-base alloys are often not found in state-of-the-art refractory alloys, inhibiting their true operational potential.
One promising class of metallic materials is the newly developed multi-principal element ("high entropy") alloys [54; 55; 56; 57; 58]. In particular, refractory multi-principal element (RMPE) alloys offer the opportunity to maintain high-temperature properties while, for example, decreasing density and oxidation kinetics. RMPE alloys, such as MoNbTaVW (\(\sigma_{y}=1246\) MPa, \(\rho=12.4\) g/cm\({}^{3}\)) [56] and NbMoTiVZr (\(\sigma_{y}=1785\) MPa, \(\rho=7.1\) g/cm\({}^{3}\)) [59], could provide significant benefits over legacy refractory alloys such as Ta-10W, which were chosen for re-crystallization behavior rather than strength (\(\sigma_{y}=471\) MPa, \(\rho=16.8\) g/cm\({}^{3}\)) [60]. Unfortunately, many thermophysical properties and operation-relevant properties, such as recrystallization temperature, have yet to be measured for many of these alloys. While multidisciplinary design approaches have successfully been implemented for the aerothermal and mechanical design of hypersonic vehicles [61], materials have yet to be factored into this dynamic design optimization loop.
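To make the quoted numbers easier to compare, the snippet below computes the specific yield strength (strength-to-density ratio) from the room-temperature values cited above; it is only a bookkeeping illustration and says nothing about behavior at elevated temperature.

```python
# Specific yield strength comparison using the values quoted in the text.
alloys = {
    "MoNbTaVW":  {"sigma_y_MPa": 1246, "rho_g_cm3": 12.4},
    "NbMoTiVZr": {"sigma_y_MPa": 1785, "rho_g_cm3": 7.1},
    "Ta-10W":    {"sigma_y_MPa": 471,  "rho_g_cm3": 16.8},
}

for name, p in alloys.items():
    specific = p["sigma_y_MPa"] / p["rho_g_cm3"]   # MPa per (g/cm^3)
    print(f"{name:10s}: {specific:6.1f} MPa per g/cm^3")
# On these numbers the RMPE alloys carry roughly 3-9x the load per unit mass of Ta-10W.
```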
Due to their limited oxidation resistance, alloys in hypersonic environments typically rely on a compatible coating. Coatings may be multilayered, functioning as both thermal and environmental barriers, with oxide-forming metallic layers and porous, low-conductivity ceramic overcoats [62]. Nickel alloys are designed to form alumina as a protective oxide and have well-developed aluminide coating systems due to their extensive use in aircraft engines [62]. However, coatings are much less developed for refractory alloys and typically contain metal silicides, which provide limited protection below 850\({}^{\circ}\)C and slough off above 1700\({}^{\circ}\)C due to aeroshearing [63; 64]. A wide range of potential coating failure modes need to be considered during design, and while there has been considerable progress on understanding the mechanics of coatings [57; 65], often, the material properties are missing.
Out of the numerous legacy refractory alloys developed by NASA, only a handful are manufactured and used today (such as C103, TZM, W25Re). In the past, cost and formability at ambient temperatures were the roadblocks in fielding superior refractory alloys. For instance, Nb
alloys with high fractions of W and Hf (greater than 15 wt.%) suffer from ductile-to-brittle transitions several hundred degrees above room temperature [66]. However, rapidly evolving additive manufacturing capabilities can now produce complex component designs - negating machining constraints [67; 68; 69; 70]. This provides new opportunities for the design of advanced cooling methodologies such as active TPS [67; 68; 69; 70]. These new manufacturing techniques can also be combined with high-throughput characterization and machine learning algorithms to rapidly discover and develop the next generation of refractory alloys for hypersonics [71; 30; 72].
### Carbon Composites for Hypersonics
Carbon-carbon composites (C/C) are historically considered the de facto materials for the fabrication of hypersonic aeroshells and leading edges owing to their excellent performance characteristics, including: low density (1.60-1.98 g/cm\({}^{3}\)), low coefficient of thermal expansion (-0.85 to 1.1 x 10\({}^{-6}\)/K), high modulus of elasticity (200 GPa), high thermal conductivities (\(\sim\)4-35 W/mK) and retained mechanical properties up to \(\sim\)2000\({}^{\circ}\)C in inert environments [5; 12; 16; 18; 22; 73; 74]. Low-density C/C is often preferred over metals for severe environment aerostructure elements. For instance, the horizontal Haynes control surfaces of the Mach 7 variant of X-43 were replaced with coated C/C for the Mach 10 variant [18].
C/C is manufactured using one of two processes to densify a 2-D "zero condition" C/C preform: (1) polymer infiltration and pyrolysis (PIP), in which a high carbon yield resin (phenolic or pitch) is infiltrated into the fibrous pre-form (\(\sim\)40-60 vol% fiber) before undergoing high-temperature graphitization of the resin matrix; or (2) chemical vapor infiltration (CVI), where densification occurs by the infiltration and decomposition of carbonaceous gases. Both manufacturing methods have repeated steps to consolidate the matrix, increase density, and increase strength. For PIP, 4-6 pyrolysis cycles under hot isostatic pressure (HIP) are preferred to inhibit the formation of closed pores during resin pyrolysis and achieve a high density (\(>98\%\)). On the other hand, surface carbon deposition via CVI inhibits further infiltration over time, so the surface deposit needs to be removed and the process restarted. Advanced C/C 6 (ACC-6) remains the state-of-the-art composite, having undergone six impregnation cycles to increase density and achieve high yield strengths at elevated temperatures.
Porosity reduction is a focal point of processing research because it is critical in limiting oxygen diffusion and aerothermal erosion during flight. Uncoated C/C formed by PIP and CVI has demonstrated in-plane tensile strengths on the order of 165 MPa (with strength increasing as a function of density) [75]. Furthermore, the thermomechanical properties of C/C are highly anisotropic and dependent on the processing method, residual porosity, and fiber architecture (fiber-woven fabrics or fiber tow windings may be oriented at an angle, such as 30\({}^{\circ}\), 60\({}^{\circ}\), or 90\({}^{\circ}\)). Given this complexity, high-fidelity modeling capabilities for anisotropic, and locally varied, volume elements are critical; standard materials-property data alone are insufficient for design, performance, and life prediction [12].
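The degree of anisotropy implied by the fiber architecture can be illustrated with simple bounding estimates. The sketch below uses Voigt (parallel) and Reuss (series) rule-of-mixtures bounds with generic, assumed fiber and matrix moduli; these are not the C/C property values quoted above.

```python
# Rule-of-mixtures bounds showing why fiber architecture drives anisotropy.
# Fiber/matrix moduli and fiber fraction are illustrative assumptions.
E_fiber, E_matrix = 230.0, 15.0   # Young's moduli [GPa]
v_f = 0.5                         # fiber volume fraction

E_longitudinal = v_f * E_fiber + (1 - v_f) * E_matrix          # Voigt bound (along fibers)
E_transverse = 1.0 / (v_f / E_fiber + (1 - v_f) / E_matrix)    # Reuss bound (across fibers)
print(f"along the fibers : ~{E_longitudinal:.0f} GPa")
print(f"across the fibers: ~{E_transverse:.0f} GPa")
# A factor of ~4x between directions, before porosity or weave effects are considered.
```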
Despite these promising properties, uncoated C/C erodes rapidly at elevated temperatures. The oxidation of carbonaceous composites begins at \(\sim\)370\({}^{\circ}\)C in air, with dramatic oxidation occurring beyond 500\({}^{\circ}\)C [13]. Present hypersonic materials design efforts aim to protect C/C from high-temperature oxidation, ablation, and erosion from prolonged and repeated aerothermal exposure. With increasingly more extreme hypersonic environments, two protection approaches are being developed: (1) deposition of high-temperature protective coatings, and (2) modification of the carbon-carbon matrix. Oxidation-resistant coating materials may be applied to the C/C surface (or fibers before infiltration) to limit diffusion and modify emissivity for passive thermal regulation.
The initial development of anti-oxidation coatings for C/C was focused on refractory compositions containing SiC, HfB\({}_{2}\), and ZrB\({}_{2}\) (HfB\({}_{2}\)/SiC and ZrB\({}_{2}\)/SiC blends) [76]. Other additives such as tetraethyl orthosilicate have been applied as silica-forming impregnates (e.g., on the Shuttle Orbiter), which seal microstructural defects to limit oxidation [77]. These systems can protect C/C from oxidation up to 1500-1600\({}^{\circ}\)C, but thin oxide coatings become ineffective at higher temperatures due to melting, evaporation/active oxidation of SiO\({}_{(g)}\), foaming, and/or visco-elastic erosion of the HfO\({}_{2}\) or ZrO\({}_{2}\) oxide scales containing high vapor pressure borosilicate phases. To satisfy the needs of long-term reusable hypersonic service environments, both matrix modification and deposition of coatings onto fibers and substrates are required, but challenges persist. Annular matrix cracking and thermal expansion mismatches between the C/C and coatings often lead to rupture during thermal cycling and oxidative erosion, both of which are issues for sustained high Mach number flight [78].
In an attempt to improve mechanical resilience and resistance to thermal shock, advancements in composite materials design have focused on multi-scale reinforcement strategies. Low dimensional micro/nanoscale reinforcements may include nanoparticles (0D), carbon nanotubes/fibers (CNTs/CNFs), whiskers (1D, e.g., Si\({}_{3}\)N\({}_{4}\), TaC, ZrC) or graphene (2D). Such additives serve to improve properties at the fiber-matrix interface via grain refinement, debonding, deformation, pullout, bridging, and crack-deflection mechanisms [76]. Higher dimensions, i.e., 2.5-D, 3-D, and 4-D, can be achieved
by adding short fibers to the resin matrix or weaving and winding fiber strands into hoop or braid-like structures [76].
The synergistic effects of complex multi-scale coating structural modification and fiber-matrix interface optimization need to be further developed to expand thermomechanical and erosion resistance properties for increasingly severe service environments. The incorporation of modern computational approaches, such as those developed in the national Materials Genome Initiative [79], and integrated design of the fiber-matrix interface with unique combinations of additives, may serve to improve materials performance. However, capabilities for high-fidelity modeling of complex architectures are still limited [12; 80]. Comprehensive performance data for oxidation/ablation-resistant modified C/Cs also remain largely absent from standard material-property databases, which are insufficient for design and for scaling to relevant components such as aeroshells and leading edges [12].
### Ultra-high Temperature and Refractory Ceramics for Hypersonics
Ultra-high temperature (UHT) and refractory ceramics are a developing class of materials for leading edges due to their stability at high temperatures [14]. UHTCs, encompassing carbides, nitrides, and borides of early transition metals (Zr, Hf, Ti, Nb, Ta), possess high melting points (\(\sim\)3000\({}^{\circ}\)C), tunable densities (4.5-12.5 g/cm\({}^{3}\)), high thermal conductivities (40-120 W/mK), moderate coefficients of thermal expansion (6.3 to 8.6 x 10\({}^{-6}\)/K), and high IR spectral emissivity for passive radiative cooling [80; 81]. The complexity of multiphase materials and their ability to survive extreme aerodynamic conditions is shown in Figure 3. Challenges with respect to oxidation and thermal shock can be mitigated through tailoring the architecture and lead to materials that rival metals in leading edge applications.
Among the UHTCs, ZrB\({}_{2}\)-SiC and HfB\({}_{2}\)-SiC have received the most attention because they uniquely combine high thermal conductivity, specific strength (\(\sigma>\) 460 MPa at T = 2500\({}^{\circ}\)C, \(\rho\) = 5.5 g/cm\({}^{3}\)[82]), and superior oxidation resistance up to \(\sim\)1650\({}^{\circ}\)C [78; 83; 80]. More recently, transition metal carbides have garnered interest as components for nozzle throats, divert/attitude control thrusters, and nozzle liners, where higher thermal and mechanical loads are encountered [14]. Replacing HfB\({}_{2}\) and ZrB\({}_{2}\) with HfC or ZrC can increase service temperatures above 2000\({}^{\circ}\)C. Still, the oxidation of refractory carbide and boride ceramics containing SiC remains a significant challenge for extended applications beyond \(\sim\)1600\({}^{\circ}\)C. Above these temperatures, active oxidation generates gaseous oxidative products (i.e., SiO\({}_{(g)}\) versus SiO\({}_{2(s)}\)), which no longer provide protection against oxygen [14; 80]. A certain level of porosity can improve resistance to thermal shock and thermal expansion mismatch. These pores help compensate for oxidation-induced volume expansion, leading to the formation of a fully dense and cohesive surface oxide scale, but processing conditions and microstructure must be carefully controlled. The structure-processing-property relationships for UHTCs are not well understood, and additional information is needed to isolate the fundamental factors that control the thermomechanical behavior of emerging compositions [80]. For example, ZrB\({}_{2}\)-MoSi\({}_{2}\) ceramics processed at elevated temperatures are noted to have improved oxidation resistance [84].
The best oxidation performance for monolithic ceramics is obtained using pressure-assisted ceramic powder processing techniques, including hot pressing, hot isostatic pressing, and spark plasma sintering; these methods facilitate porosity reduction and sintering while limiting coarsening mechanisms [80]. Small UHTC grain sizes resist grain boundary fracturing during oxidation and restrict molecular oxygen transport, minimizing the disruptive effects of high-temperature martensitic transformations (phase changes) of HfO\({}_{2}\) and ZrO\({}_{2}\) (Figure 4). Other recent approaches for improved aerothermal resilience after evaporation of the protective B\({}_{2}\)O\({}_{3}\) layer formed from diboride oxidation include additions of W, Mo, and Nb [85] and graphene nanoplatelet reinforcement. These additions are indicated to suppress crack formation, bursting, and oxide-layer growth by up to 60%, while improving heat dissipation, which is important for surface resiliency when exposed to plasma [86].
The high densities of UHTC materials, low thermal shock resistance, and low fracture toughness impose additional physical limitations for bulk ceramics [14]. Modern air-breathing hypersonic vehicles are extremely weight sensitive. The high materials density (\(\sim\)3-6x that of C/C) and poor thermal shock resistance (1/5 that of ACC-6, and half that of RCC and CVI C/SiC at 1100\({}^{\circ}\)C [14]) of monolithic ceramics become limiting factors for structural components and dense segmented leading-edge inserts (Figure 3a, b). As a result, the preferred instantiation of UHTCs is as emissive, anti-oxidative coatings on C\({}_{f}\) composites or refractory alloys. UHTC coatings can be improved by adopting graded or layered compositions, enhancing bond strength by structural integration, enhancing toughness and crack bridging via nanoscale and micron-scale carbide fibers, and including emissivity enhancing dopants. Compositional complexity can lead to further property improvements, where for example Ta-Hf-C has the highest recorded melting temperature [87].
Whether for monolithic ceramic bodies or barrier coatings, processing variables significantly affect materials properties and impose difficulties for obtaining standardized performance data. Despite a significant body of research
examining both practical densification mechanisms and compositional variation, limited information is reported on the kinetics of sintering and densification of bulk ceramics. Many studies have reported the thermomechanical properties of single ceramic compositions (strength, hardness, elastic constants, thermal conductivity, and fracture toughness), but the structure-processing-property relationships are not well understood from first principles [80; 89]. To expand the utilization of refractory ceramics and UHTCs for hypersonic and extreme environments, discovery/synthesis of new materials is needed, yet the probability of undiscovered compounds remains low [80].
Further advancement of materials design in these systems will likely parallel the development of structural metal alloys. Simultaneous improvement of oxidation resistance, creep at elevated temperatures, and transformation toughening will require secondary and ternary additions, as well as high-entropy compositions, that occupy uncharted areas of phase diagrams [90; 12; 80; 91]. Emerging work on high entropy, multi-component (ZrHfTi)C solid-solution materials is being conducted, where for example high Hf content is beneficial for forming an amorphous oxycarbide layer enhancing initial oxidation resistance, while an equiatomic ratio of metallic atoms increases high-temperature phase stability [87; 92]. Multi-scale computational modeling of UHTCs can aid in the development of high entropy materials by integrating ab-initio (fundamental chemistry and electronic properties), atomistic (thermomechanical properties), and continuum frameworks (mechanical properties, thermomechanical analysis of microstructure) [88; 93; 94; 95; 96].
#### 3.3.1 Ceramics-Matrix Composites
The poor thermal shock resistance and high densities of bulk UHTCs and refractory ceramics can be overcome by the incorporation of ceramic fibers (\(\sim\)35-60 vol\(\%\)) to create ceramic matrix composites (CMCs) [97; 98; 99]. On the other hand, the oxidation resistance of C/C can be improved by replacing the carbonaceous matrix with a ceramic matrix that forms a self-healing glassy oxide passivation layer. For the latter, SiC was determined to be a suitable substitute for the carbon matrix due to its high oxidation temperature, thermal shock stability, and creep resistance [22]. The most well-established carbon fiber-reinforced CMCs for hot structures are carbon fiber-reinforced silicon carbide (C/SiC) and carbon fiber-reinforced carbon-silicon carbide (C/C-SiC) composites [22]. While the active oxidation of C/C starts at \(\sim\)500\({}^{\circ}\)C in air and becomes more significant above 600\({}^{\circ}\)C, C/SiC and C/C-SiC can be stable up to 1600\({}^{\circ}\)C due to the formation of a thin self-passivating SiO\({}_{2}\) scale. Alternatively, environmentally stable oxide/oxide ceramic composites have been produced to combat the oxidation experienced by non-oxide materials. These "Ox/Ox" CMCs incorporate
Figure 3: **Hypersonic wing leading edge designs and associated materials microstructures processed under different conditions, showing materials oxidation.** (a) Schematic drawings of wing leading-edge conceptual design using a monolithic ceramic segmented edge. (b) Photographs of monolithic ZrB\({}_{2}\)/20 vol\(\%\) SiC leading edges shown before and after arc-jet testing in the H\({}_{2}\) arc-jet facility (see Figure 4) with failed ceramics due to oxidation and thermal shock. (c) Failed and successful coated-C/C X-43 leading edges following arc-jet testing for simulated flight conditions of Mach 10, 32 km altitude using 1475 W/cm\({}^{2}\), 130 seconds. (d) HfB\({}_{2}\)-SiC UHTC nose cone subjected to a total of 80 minutes of arc jet exposure at heat fluxes of 200 W/cm\({}^{2}\). The sample formed an oxide layer and a SiC depletion zone, which leaves behind a porous oxide surface (e). (f) depicts SEM cross sections of HfB\({}_{2}\)-SiC or HfB\({}_{2}\)-SiC-TaSi\({}_{2}\) materials formed via hot pressing and/or field-assisted sintering and with or without TaSi\({}_{2}\) additives. The images indicate how grain structure, oxide layer formation, and SiC depletion are dramatically impacted by processing conditions and the inclusion of tertiary phases. (g) SEM cross-section of a UHTCMC incorporating C\({}_{f}\) and high aspect ratio SiC. (Images from NASA and adapted from [14; 5; 88].)
polycrystalline alumina or aluminosilicate fibers (e.g., Nextel 610 or Nextel 720) into alumina, aluminosilicate, or SiOC matrices and were initially suggested for hypersonic thermal barrier materials. However, actual service applications are limited to temperatures \(\sim\)1000-1200\({}^{\circ}\)C due to degradation in tensile strength, stiffness, and creep [100].
The fracture behavior of damage-tolerant CMCs is dominated by the stiff reinforcing C\({}_{f}\) (or SiC\({}_{f}\)), where fiber-matrix debonding is associated with frictional effects and crack deflection within porous or multilayer interfaces. Fiber/matrix bonding using a coating with adapted interphases (e.g., CVD pyrolytic carbon, silicon, \(\beta\)-SiC, BN, alumina) serves to: (1) increase fiber-matrix bonding for enhanced mechanical properties (tensile strength \(\sim\)350 MPa for CVI C/SiC); (2) protect carbonaceous fibers from oxidative degradation at \(\sim\)450\({}^{\circ}\)C during crack formation when exposed to an oxidative atmosphere; and (3) mitigate the severity of CTE mismatch between C\({}_{f}\) and the SiC matrix in the absence of matrix damage [14; 22]. Due to the anisotropic nature of thermal expansion in composite materials and the CTE mismatch between the SiC matrix and the C\({}_{f}\), these materials may be more prone to cracking compared to SiC\({}_{f}\)/SiC CMCs. In recent years, melt-infiltrated silicon carbide fiber-reinforced silicon carbide composites (SiC/SiC) have reached use temperatures up to 1600\({}^{\circ}\)C. Yet, high temperature "sweating" of unreacted silicon leaves room for processing improvements.
Practically, manufacturing techniques similar to those used for C/C fabrication can be used to infiltrate a carbon preform with SiC or other refractory ceramic compositions (including UHTCMCs). CMC fabrication techniques include chemical vapor infiltration/deposition, PIP, reactive melt infiltration, slurry infiltration, in-situ reaction, hot pressing, and powder pre-infiltration. Several techniques can be combined to achieve multi-component compositions and gradient/sandwich structures. Still, the fabrication of complex ultra-high-temperature ceramic matrix composite (UHTCMC) compositions remains of significant interest for replacing C/C or C/SiC CMC materials with improved thermomechanical capabilities. The advancement of UHTCMCs that incorporate HfC, ZrC, TaC, HfB\({}_{2}\), and ZrB\({}_{2}\) matrix compositions will greatly benefit the development of propulsion platforms for hypersonic flight [98].
Carbon fiber-reinforced UHTC-matrix composites, as shown in Figure 3g, especially those containing HfC and ZrC, can resist oxidation at temperatures above 2000\({}^{\circ}\)C under hypersonic flight conditions. However, CTE mismatches between UHTC matrix phases make these composites more prone to microcracking during processing and aerothermal heating, which reduces strength [101]. The use of SiC fibers improves the structural properties and oxidation resistance (compared to C\({}_{f}\)) and decreases the density of UHTCMCs. HfC composites reinforced with \(\sim\)15% short linear chopped SiC fibers showed strengths up to 370 MPa at room temperature, decreasing to 290 MPa by 2200\({}^{\circ}\)C in Ar [102]. These properties far exceed the strengths reported for C/C, which has demonstrated strengths of \(\sim\)200 MPa from room temperature to 2200\({}^{\circ}\)C. The addition of up to 40-60 vol% SiC\({}_{f}\) is suggested to further improve mechanical properties [103].
## 4 Advances in Computational Tools for Materials Development
### Advances in Theory and Computational Tools
Many experimental characterizations remain costly and/or inaccessible at the extreme conditions experienced by hypersonic vehicles [104; 105; 106], making modeling/simulation efforts difficult to validate. Nonetheless, the lack of experimental data can be an opportunity for theory and computation to provide some insight. A history of simulation codes modeling thermal protection system materials is presented in [106] and dates to the 1960s and includes software like the CMA code by the Aerotherm Corporation [107; 108] and the FIAT code by the NASA Ames Research Center [108], that incorporates internal energy balance, decomposition equations, general surface energy balance boundary conditions, and a thermochemical ablation model [106; 109]. Other simulations have been performed modeling ablation in carbon-phenolic (charring) composites [110; 111; 112], electromagnetic shielding at microwave frequencies [113], water mass flow rate in transpiration (active) cooling systems [114], flow fields and thermal behavior of solid samples, temperature-dependent fracture toughness of particulate-reinforced UHTCs [115], and mechanical-thermoelectric performance for multi-functional thermal protection systems [10]. High-temperature thermal and elastic properties of high entropy borides have also been modeled using molecular dynamics [116]. All of these methods contribute to framing an understanding of the relevant vehicle systems level and materials design criteria.
The challenge in performing experimental characterizations at conditions relevant for hypersonic flight translates to a scarcity of empirical data. Computational data is similarly limited: the complex mass and heat transfer behavior at play spans several scales, and first-principles modeling is largely disjointed across these scales and far too expensive for high-throughput workflows. Any relevant data available exists in unstructured and programmatically inaccessible formats. This makes it difficult to leverage emerging artificial intelligence methods for accelerated screening. Solutions are on the horizon: synchrotron x-ray computed microtomography was able to fully resolve microcrack damage as cracks grew under load at temperatures up to 1750\({}^{\circ}\)C [105] and
high-throughput first-principles frameworks are becoming capable of accurately modeling finite-temperature properties [95; 96; 117]. Disorder plays an ever-increasing role as the environment becomes more extreme [91; 118], further complicating characterization and modeling efforts. The prediction and optimization of materials with useful properties will require an understanding of the interplay between ultra-high-temperature phenomena, which will be assisted by high-fidelity structured data, artificial intelligence, and solid thermodynamic-kinetic analysis.
To further the development of new and existing hypersonic materials, a computational approach alone is insufficient. Real-world data is needed to validate truly predictive material modeling and design. Furthermore, material phenomenology and behavior at extreme temperatures are difficult to predict by first principles alone, especially as material systems become more complex.
## 5 Role of integrated experiments and flight readiness pathways
Within the material design framework, computational models and experiments serve vital and complementary roles. The intended application of the material being designed largely drives the environmental loads. For example, a TPS material designed for the exterior of a hypersonic vehicle will experience a vastly different heating and chemical environment than the inside of a scramjet combustor. These testing and modeling methodologies must take into account the coupled thermal, structural, and chemical nature of the material-environment interactions [6]. In addition, the intended reusability of the material dictates the relevant timescales that must be examined. For typical ballistic or boost-glide trajectories, flight times span tens of seconds to tens of minutes. A single-use application may be able to ignore slower phenomena that occur (e.g. creep), while a material designed for a reusable application must account for the overall life of the vehicle.
Computational models for materials are traditionally broken down by domain (material, fluid) and scale. At the larger scales, continuum codes rely on finite-element analysis (FEA) or similar numerical methods to solve the macroscopic governing equations over the physical domain. In the fluid domain, the focus is usually to predict the environment that the material experiences, such as the heat flux, pressure, and shear force. The field of computational fluid dynamics (CFD) has expanded to include thermal and chemical nonequilibrium, turbulence effects, surface reactions, and multiphase flow [124; 125]. On the material side, thermal and structural analyses have traditionally been considered separately. Material response tools evaluate the thermal and chemical response, including any ablation and pyrolysis within the material [126]. Thermo-structural tools model the combined aero-mechanical and thermo-structural loads arising from thermal expansion and gradients within the materials.
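As a concrete, deliberately simplified illustration of what a material thermal-response calculation does, the sketch below solves 1-D transient heat conduction in a slab heated by a constant surface heat flux using explicit finite differences. All property values and boundary conditions are assumptions made for illustration; real TPS response codes additionally model ablation, pyrolysis, in-depth decomposition, and surface energy balance, as noted above.

```python
import numpy as np

# Minimal 1-D transient thermal-response sketch: a slab heated by a constant
# surface heat flux with an adiabatic back face, solved by explicit finite
# differences. Properties are illustrative placeholders, not a validated model.
k, rho, cp = 10.0, 2000.0, 1000.0   # conductivity [W/m/K], density [kg/m^3], specific heat [J/kg/K]
alpha = k / (rho * cp)              # thermal diffusivity [m^2/s]
q_wall = 1.0e6                      # applied surface heat flux [W/m^2] (100 W/cm^2)
L, nx = 0.02, 101                   # slab thickness [m], number of nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha            # below the explicit stability limit dx^2 / (2*alpha)
t_end = 30.0                        # exposure time [s]

T = np.full(nx, 300.0)              # initial temperature [K]
t = 0.0
while t < t_end:
    Tn = T.copy()
    # interior nodes: dT/dt = alpha * d2T/dx2
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # heated front face: ghost-node treatment of the imposed flux
    T[0] = Tn[0] + alpha * dt / dx**2 * (2 * Tn[1] - 2 * Tn[0] + 2 * dx * q_wall / k)
    # adiabatic back face
    T[-1] = Tn[-1] + alpha * dt / dx**2 * (2 * Tn[-2] - 2 * Tn[-1])
    t += dt

print(f"front-face temperature after {t_end:.0f} s: {T[0]:.0f} K")
print(f"back-face temperature after {t_end:.0f} s: {T[-1]:.0f} K")
```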
In recent years, there has been a push towards higher-fidelity tools and more physics-based modeling, necessitating multi-scale modeling approaches to describe the material at smaller and smaller scales. An example of this is the focus on micro-scale modeling for porous TPS materials by NASA [127]. These span meso-scale models describing the distinct phases that are present in a material, down to atomistic models describing the fundamental material interactions. A challenge is to bridge the gap between the various scales using a truly physics-based approach, describing the relationship between materials processing, microstructure, physical properties, and thermal, structural, and chemical performance. These approaches can be generalized in an Integrated Computational Materials Engineering (ICME) framework.
Flight tests are prohibitively expensive, and this has historically been a major barrier in the development of hypersonic vehicles. Dedicated ground tests provide an alternative way to emulate flight conditions in a controlled environment. These include both aerothermal and structural tests. Although aerothermal ground tests seek to recreate flight conditions as accurately as possible, no facility is able to reproduce the exact flight conditions, and they instead seek to match two or more parameters [128]. Flight parameters of interest include (but are not limited to) Reynolds number, Mach number, heat flux, pressure, shear, temperature, chemical environment (e.g. dissociated air), thermal shock, and exposure time. The freestream enthalpies experienced during hypersonic flight are huge (in the MJ/kg range), which presents another challenge for ground test facilities. Arc jets, which produce realistic flight enthalpies, shear, and pressure for up to several minutes, have been considered the gold standard of aerothermal testing for decades. However, these tests are costly and time-consuming to prepare and perform. Other facilities such as shock tubes/tunnels can match other flight parameters more accurately, but only for sub-second exposure times, which limits the utility of these facilities for thermal testing [129].
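The MJ/kg figure can be checked with simple kinematics: at hypersonic speeds the total (stagnation) enthalpy is dominated by the kinetic term \(V^{2}/2\). The numbers below assume a nominal speed of sound of roughly 300 m/s at altitude, purely for illustration.

```python
# Order-of-magnitude check on freestream total enthalpy at hypersonic speeds.
A_SOUND = 300.0   # assumed speed of sound at altitude [m/s], illustrative value
for mach in (5, 8, 12):
    v = mach * A_SOUND             # flight speed [m/s]
    h0_kinetic = 0.5 * v**2 / 1e6  # kinetic contribution to total enthalpy [MJ/kg]
    print(f"Mach {mach:2d}: ~{h0_kinetic:4.1f} MJ/kg")
# Mach 5 ~ 1.1 MJ/kg, Mach 8 ~ 2.9 MJ/kg, Mach 12 ~ 6.5 MJ/kg
```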
Similarly, the aim of conventional thermo-mechanical ground testing is to validate some structural property, feature, or behavior given flight-realistic mechanical and thermo-structural loads. For external TPS materials, particularly composites, properties such as the interlaminar and shear stress strengths are critical to the material performance and can vary greatly with respect to material processing. Thermo-mechanical testing can span simple property characterization, coupon-level, sub-scale, up to full vehicle-level tests [130]. For materials designed for re-usable applications, the lifecycle of the material under
the thermo-structural loads is also critical to evaluate, including any structural creep mechanisms.
For any flight vehicle, there is a large reliance on heritage materials, i.e. materials that have flown previously. Thus, there has historically been a large barrier to flight-test any new material system. In general, the technology readiness level (TRL) and the manufacturing readiness level (MRL) must be sufficiently high in order for a material to be considered flight-ready, depending on the risk tolerance of the flight program. Increasing both the TRL and MRL is a graduated process that requires extensive testing at multiple scales, ranging from coupon-level and sub-scale to full-vehicle tests.
Since the opportunities for flight tests are so limited, the conventional approach to developing a material system is to conduct as much learning beforehand via computational models and ground testing, and to treat the flight test as a final verification of the material/vehicle system. In this paradigm, the goal of integrated testing is to provide data to inform and validate models, as well as characterize phenomena that are difficult or impossible to model. These phenomena can be broadly categorized by the physical length scales, illustrated in Figure 4. At the macroscopic level, the fundamental chemical, thermal, and structural material properties need to be characterized for any computational model. Some statistical basis and understanding of the variance is generally required as well. The microstructure of a material is inherently linked to these thermo-physical properties. In turn, the microstructure and material architecture are strongly tied to the manufacturing process, especially for composite materials such as carbon-carbon [131]. At the largest scales, integrated testing is also critical to understand how the material behaves under flight conditions combining oxidation/ablation, shear, and thermo-structural loads.
Within a multi-scale modeling framework, there exists a symbiotic relationship between modeling and experiments. As models become more and more detailed and span many different length scales, experiments, and diagnostic tools are needed to provide data for the validation of these models. Concurrently, models are used to bridge the gap between ground test and flight environments. In fact, validation of models that occur at smaller length scales is often achieved by testing at successively larger scales, e.g. a meso-scale material model may be validated by coupon-level testing in Figure 4. The development of improved modeling methodologies must be accompanied by test data to validate the methods and models.
## 6 Conclusion
This perspective offers valuable insights into the critical challenges that researchers must tackle in order to advance materials technologies for hypersonic environments. By delving into key sub-systems of hypersonic vehicles -- primary structures, thermal protection, and propulsion systems -- we elucidate the essential material properties required to withstand the severe thermal and oxidative conditions inherent in such settings. Emerging materials design strategies are actively being formulated to enhance material properties and address pivotal challenges within each principal materials group.
Figure 4: **Multi-scale modeling and testing framework for materials design and flight testing.** Length scales for both modeling and testing approaches span many orders of magnitude. Smaller scale models can inform and be validated by successively larger scale tests. (Images adapted from [6; 110; 119; 120; 121; 122].)
1. Refractory metals, while possessing noteworthy attributes, exhibit limitations in oxidation resistance and strength at elevated temperatures, impacting their resilience in high heat flux environments. Multi-principal element "high entropy" alloys could provide significant benefits over legacy refractory alloys by decreasing density and improving oxidation kinetics. However, there remains a gap in our understanding of thermophysical properties in operation-relevant conditions. Leveraging rapidly evolving manufacturing methods alongside high-throughput characterization and machine learning algorithms holds promise for expeditiously uncovering novel compositions. Integrating these approaches could pave the way for novel advanced cooling technologies and resilient metallic structural components.
2. Composites display promising strength-to-weight ratios and elevated temperature tolerance in inert atmospheres. Nevertheless, uncoated materials are susceptible to significant oxidation and erosion when subjected to extreme temperatures. Enhancing performance involves the application of high-temperature emissive protective coatings and modifying carbonaceous matrices. Challenges arise from the disparity in material properties between coatings and matrices, as well as matrices and fibers, often leading to premature failure. The inherent anisotropic properties of existing fiber-reinforced materials underscore the importance of advancements in multi-scale reinforcement strategies and higher-dimensional materials, which represent compelling avenues for future development.
3. Refractory ceramics and UHTCs are characterized by exceedingly high melting points and thermal conductivities, but present challenges due to their low thermal shock resistance and high density when used as monolithic components. Their optimal application lies in thermal barrier coatings and ceramic matrix composites. Notably, transition metal carbides incorporating SiC have gained attention for components exposed to elevated thermal and mechanical loads. However, the oxidation of refractory compositions containing SiC remains a formidable hurdle for extended applications beyond approximately 1600\({}^{\circ}\)C. Further research is imperative to explore ceramics' structure-processing-property relationships, transformation toughening, oxidation enhancement through high entropy compositions, and the intricacies of fiber/matrix bonding for ultrahigh-temperature ceramics.
The adoption of expanded materials design frameworks - from the atomistic to the macroscopic - is needed to create the next generation of resilient materials systems that can survive in complex hypersonic environments.
## Acknowledgements
S.C. acknowledges support by DoD (N00014-21-1-2515) and NSF (NRT-HDR DGE-2022040). This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-22-1-0221. I.M. and C.O. acknowledge support from the Air Force Office of Scientific Research, and discussions with Michael Brupbacher and Tom Magee. T.M.P. acknowledges the support of a Department of Defense Vannevar Bush Fellowship Grant ONR N00014-18-1-3031.
## Competing Interests
The authors declare no competing interests.
|
2309.08669 | Measuring the physical imprints of gas flows in galaxies I: Accretion
rate histories | Galaxies are expected to accrete pristine gas from their surroundings to
sustain their star formation over cosmic timescales. Its lower abundance
affects the metallicity of the ISM in which stars are born, leaving chemical
imprints in the stellar populations. We measure the amount of pristine gas that
galaxies accrete during their lifetime, using information on the ages and
abundances of their stellar populations and a chemical evolution model. We also
aim to determine the efficiency of star formation over time. We derived star
formation histories and metallicity histories for a sample of 8523 galaxies
from the MaNGA survey. We use the former to predict the evolution of the
metallicity in a closed-box scenario, and estimate for each epoch the gas
accretion rate required to match these predictions with the measured stellar
metallicity. Using only chemical parameters, we find that the history of gas
accretion depends on the mass of galaxies. More massive galaxies accrete more
gas and at higher redshifts than less massive galaxies, which accrete their gas
over longer periods. We also find that galaxies with a higher star formation
rate at z = 0 have a more persistent accretion history for a given mass. The
star formation efficiency shows similar correlations: early-type galaxies and
higher-mass galaxies had a higher efficiency in the past, and it declined such
that they are less efficient in the present. Our analysis of individual
galaxies shows that compactness affects the peak star formation efficiency that
galaxies reach, and that the slope of the efficiency history of galaxies with
current star formation is flat. Our results support the hypothesis that a
steady and substantial supply of pristine gas is required for persistent star
formation in galaxies. Once they lose access to this gas supply, star formation
comes to a halt. | A. Camps-Fariña, P. Sánchez-Blázquez, S. Roca-Fàbrega, S. F. Sánchez | 2023-09-15T18:00:07Z | http://arxiv.org/abs/2309.08669v1 | # Measuring the physical imprints of gas flows in galaxies I: Accretion rate histories
###### Abstract
Context:Galaxies are expected to accrete pristine gas from their surroundings to sustain their star formation over cosmic timescales. This mechanism is well established in models and simulations, but evidence from observations is mostly indirect. These gas inflows leave distinct traces in the chemical composition of newborn stars and alter the distribution of stellar abundances compared to what would be expected from a closed-box model of chemical evolution.
Aims:The goal of this work is to measure the amount of pristine gas that galaxies accrete during their lifetime, using information on the ages and abundances of their stellar populations and a chemical evolution model. We also aim to determine the efficiency of star formation over time.
Methods:We derived star formation histories and metallicity histories for a sample of 8523 galaxies from the MaNGA survey. We use the former to predict the evolution of the metallicity in a closed-box scenario, and estimate for each epoch the gas accretion rate required to match these predictions with the measured stellar metallicity.
Results:Using only chemical parameters, we find that the history of gas accretion depends on the mass of galaxies. More massive galaxies accrete more gas and at higher redshifts than less massive galaxies, which accrete their gas over longer periods. We also find that galaxies with a higher star formation rate at \(z=0\) have a more persistent accretion history for a given mass. We characterize the individual accretion histories in terms of two parameters: the total accreted gas mass and the \(\tau\)80 of the accretion history, a measure of when most of the accretion occurred. As expected, there is a strong correlation between the integrated star formation history and the total accreted gas mass, such that more massive galaxies accreted more gas during their lifetime. Currently star-forming galaxies lie above this correlation, so they tend to accrete more gas than average. The relationship between \(\tau\)80, the current stellar mass, and the current specific star formation rate is split such that star-forming galaxies (as now observed) may be found in a population with persistent gas accretion regardless of their stellar mass. The star formation efficiency shows similar correlations: early-type galaxies and higher-mass galaxies had a higher efficiency in the past, and it declined such that they are less efficient in the present. Our analysis of individual galaxies shows that compactness affects the peak star formation efficiency that galaxies reach, and that the slope of the efficiency history of galaxies with current star formation is flat.
Conclusions:We show throughout the article that we can obtain information about the processes that regulate the chemical composition of the interstellar medium during the lifetime of a galaxy from the properties of stellar populations. Our results support the hypothesis that a steady and substantial supply of pristine gas is required for persistent star formation in galaxies. Once they lose access to this gas supply, star formation comes to a halt.
## 1 Introduction
In the standard Lambda cold dark matter (LCDM) paradigm, pristine gas inflows are a natural expectation and serve as the foundation for galaxy growth (e.g., Finlator & Dave 2008; Schaye et al. 2010; Fraternali & Tomassetti 2012; Lilly et al. 2013; Ceverino et al. 2016; Molla et al. 2016; Rodriguez-Puebla et al. 2016). However, direct observations of these inflows have remained elusive due to the difficulty in detecting accreted gas (e.g., Sancisi et al. 2008; Fraternali & Binney 2008; Sanchez Almeida et al. 2014; Sanchez Almeida 2017; Cimatti et al. 2019). Furthermore, the accretion of gas and its observable consequences, such as higher star formation rates (SFRs) and lower metallicity, occur at different times, making concurrent observations challenging. Giavalisco et al. (2011) detected large amounts of pristine gas associated with a cluster of galaxies undergoing an ongoing infall process. Martin et al. (2012) and Rubin et al. (2012) also found evidence of cold gas around star-forming galaxies using Fe and Mg absorption lines, although the low covering fractions of the gas suggest that there is not enough to fuel their star formation. In the Milky Way, the estimated rate of HI gas accretion derived from detected HI clouds is around 0.1-0.4 M\({}_{\odot}\) yr\({}^{-1}\), which is insufficient to account for the current SFR (Putman et al. 2012).
Observational evidence of gas accretion in galaxies is largely indirect. For example, the Schmidt-Kennicutt relation has shown that galaxies at all redshifts cannot maintain their current SFR
for more than 0.5-2 Gyr without replenishment by gas (e.g., Schmidt 1959; Kennicutt 1983, 1998; Colombo et al. 2018, 2020; Sanchez et al. 2021b; Genzel et al. 2010; Daddi et al. 2010; Tacconi et al. 2013; Dekel & Birnboim 2006; Dekel et al. 2009; Fraternali & Tomassetti 2012; Lilly et al. 2013; Davis & Bureau 2016). Additionally, the gas fraction decreases with redshift at a much lower rate than the stellar density (Prochaska et al. 2005; Rao et al. 2006; Lah et al. 2007).
The study of chemical abundances also provides indirect evidence of gas accretion, as gas accreted from the cosmic web is believed to be mostly unenriched by metals (van de Voort & Schaye 2012). This could explain the low gas metallicities in some star-forming regions in local and high-redshift galaxies (e.g., Bresolin et al. 2012; Ceverino et al. 2016) or the strong gas-phase metallicity gradients observed in some high-redshift galaxies (e.g., Cresci et al. 2010).
Models of chemical evolution in the Milky Way have shown that the narrow metallicity distribution of long-lived stars in the solar neighborhood can only be reproduced with a continuous inflow of relatively low-metallicity gas (Larson 1972; Fenner & Gibson 2003; Chiappini 2009), known as the G-dwarf problem (Searle & Sargent 1972; Tinsley 1980; Nordstrom et al. 2004; Caimmi 2008). These studies predict an exponential decrease in the infall rate with time, with a current value of 0.4 M\({}_{\odot}\) yr\({}^{-1}\). Although such predictions are useful for constraining both cosmological models and subgrid recipes used in numerical simulations, it is important to note that the Milky Way is just one galaxy, and differences in gas accretion rates as a function of mass, morphology, and the environment may exist.
For this study, we adopted a similar approach to estimate the gas accretion rates over time for a sample of 8523 galaxies from the MaNGA survey. Specifically, we used the star formation histories and stellar age-metallicity relation to calculate the mass of metal-poor gas required over time to dilute the amount of metals predicted by a chemical evolution model (e.g., Roca-Fabrega et al. 2021; Lilly et al. 2013). We explored the differences between galaxies as a function of various parameters.
The paper is structured into the following sections: Sec. 2 describes the sample and the observational data employed for this study, while Sec. 3 explains the methodology used to derive the star formation and chemical evolution histories (SFHs and ChEHs hereafter), as well as measuring gas accretion. Sec. 4 presents the results on the gas accretion and star formation efficiency (SFE) histories, as well as the trends with stellar mass, morphology, and the current star formation. Finally, Sec. 5 discusses the results and outlines possible improvements, while Sec. 6 provides a summary of the findings.
## 2 Data
The MaNGA survey (Bundy et al. 2015) consists of integral field unit (IFU) spectroscopic observations of a luminosity-selected sample of 10\({}^{4}\) local galaxies (\(\langle{\rm z}\rangle\sim 0.03\)). The spectra were taken at the 2.5 m Sloan Telescope at Apache Point Observatory (Gunn et al. 2006) with the BOSS spectrographs (Smee et al. 2013) with fiber bundles of different sizes depending on the galaxy (Drory et al. 2015). Additional fibers and fiber bundles are used for flux calibration and sky subtraction (Yan et al. 2016).
The observations were reduced and calibrated using the Data Reduction Pipeline (DRP, Law et al. 2016) after which the data cubes are produced, which are the central distributed data product of the project. These have a spatial resolution of about 2.5\({}^{\prime\prime}\) FWHM and a typical spectral resolution of R \(\sim\) 2000 over a wavelength range between 3600 Å and 10300 Å.
The full sample in the final data release DR17 (Abdurro'uf et al. 2022) consists of over 10,000 galaxies, which we refined to the working sample by removing those that showed a poor spectrophotometric fit in the quality control of the reduction and analysis. The procedure to identify galaxies with poor fitting is described in detail in Sec. 4.5 of Sanchez et al. (2022): the first step is an automatic procedure to flag galaxies with anomalous determinations of parameters such as redshift or mass compared to the NASA-Sloan Atlas (NSA)1 catalog. This is followed by human inspection of the central spectrum of each galaxy and its fitted model as well as maps of mock photometry, line emission, ages and metallicities, diagnostic diagrams such as BPT (Baldwin et al. 1981) and WHAN (Cid Fernandes et al. 2011), and the kinematic properties. The ChEH and mass assembly history (MAH) measured at different galactocentric radii are also checked. Galaxies that fail this inspection are flagged and do not pass the quality check.
Footnote 1: [http://nsatlas.org/](http://nsatlas.org/)
Following Camps-Farina et al. (2022), we also removed galaxies with an inclination greater than 70\({}^{\rm o}\) or whose line emission is characteristic of the presence of an AGN, resulting in a working sample of 9087 galaxies. These cuts are made because we measure the metallicity at the effective radius, which is difficult to determine in highly inclined galaxies, and because AGN can have broad emission lines that interfere with fitting the stellar component of the spectra. The sample covers a stellar mass range between log M \(\sim 8.5-12\) M\({}_{\odot}\) and includes all morphological types. In Fig. 1 we show how the galaxies in the sample are distributed according to their stellar mass.
## 3 Analysis
### pyPipe3D
We used the SFHs and ChEHs derived in Camps-Farina et al. (2022) using the methodology of Camps-Farina et al. (2021b). The metallicity value for each age is measured at the effective radius after averaging the values azimuthally in the projected plane of the galaxy. This value has been shown to serve as a robust proxy for the global properties of a galaxy (Gonzalez Delgado et al. 2014; Sanchez 2020). Below we give a brief description of the method, but refer to Camps-Farina et al. (2022) for more details.
SFHs and ChEHs were derived with pyPipe3D (Sanchez 2006; Sanchez et al. 2016a,b; Lacerda et al. 2022) using the MaStar-sLOG stellar population template library (Sanchez et al. 2022). PyPipe3D is a full spectrum fitting code that can handle both the emission and absorption components of the galaxy spectra and analyze them separately. The stellar component is analyzed with spectral fitting techniques that obtain the linear
Figure 1: Distribution of the sample in stellar mass.
combination of simple stellar population (SSP) templates that best reproduce the observed spectrum. The coefficients represent the light fraction contribution of each SSP and can be used to obtain the SFH and ChEH. The fitting procedure is nonparametric, that is, we do not impose priors such as a functional shape of the SFH, which is measured based only on the amount of stellar mass that is assigned to each age. The advantage of this methodology is that we are not limiting the shape of the measured histories but on the other hand we have more free parameters than with a parametric approach.
The emission lines of the spectra are fitted by Gaussian functions after the stellar continuum and absorption spectra are subtracted, which allows us to correct their emission for the absorption of the stellar component. The redshift, velocity dispersion and dust attenuation of the spectra are determined first by performing a fit with a limited set of SSP templates and are then used as inputs when performing the second fitting with the full set of templates. The redshift and velocity dispersion are used to shift and broaden the templates before using them to fit the spectra. This two-step procedure has been shown to improve the quality of the spectral fitting (e.g., Sanchez-Blazquez et al. 2011; Sanchez et al. 2016a,b; Lacerda et al. 2022), mitigating existing degeneracies between line-broadening and metallicity.
Light fractions are converted into mass fractions using predicted mass to light ratio (M/L) values. The mass fractions can be used to derive the SFH using the fraction of stellar mass that is lost during stellar evolution after each burst of star formation. The ChEH is measured by averaging the metallicity of the populations measured at each age weighed by the corresponding fractions of light. The metallicity in stellar atmospheres reflects the metallicity of the gas clouds in which they originated. By quantifying the metallicity of stellar populations of different ages, we can obtain a measure of the metallicity evolution within the ISM.
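To make this bookkeeping concrete, the sketch below shows how fitted light fractions could be turned into an SFH and a light-weighted ChEH. This is a toy illustration, not the pyPipe3D implementation: the template grid, M/L values, total luminosity, and returned-mass fraction are all placeholder assumptions.

```python
import numpy as np

# Toy illustration (not the pyPipe3D implementation): a grid of SSP templates
# in age and metallicity with fitted light-fraction coefficients.
ages_gyr = np.array([0.1, 0.5, 1.0, 3.0, 8.0, 12.7])     # assumed template ages
z_grid   = np.array([0.004, 0.008, 0.02, 0.04])          # assumed template metallicities
rng = np.random.default_rng(0)
light_frac = rng.dirichlet(np.ones(ages_gyr.size * z_grid.size)).reshape(
    ages_gyr.size, z_grid.size)                          # sums to 1 by construction
ml_ratio = np.linspace(0.3, 4.5, ages_gyr.size)[:, None] * np.ones_like(light_frac)
L_total = 1e10                                            # Lsun, placeholder value
return_frac = 0.3                                         # assumed stellar mass-loss fraction

# Light fractions -> mass formed per (age, Z) bin, correcting for returned mass.
mass_formed = light_frac * L_total * ml_ratio / (1.0 - return_frac)

# SFH: total mass formed per age bin divided by the bin width.
dt_yr = np.gradient(ages_gyr) * 1e9
sfh = mass_formed.sum(axis=1) / dt_yr                     # Msun / yr

# ChEH: light-weighted mean metallicity of the populations at each age.
cheh = (light_frac * z_grid[None, :]).sum(axis=1) / light_frac.sum(axis=1)

for a, s, z in zip(ages_gyr, sfh, cheh):
    print(f"age={a:5.1f} Gyr  SFR={s:9.2e} Msun/yr  <Z>={z:.4f}")
```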
The MaNGA Pipe3D VAC catalog2 contains a large number of parameters of the MaNGA sample derived using PyPipe3D analysis. In this article, we used the following parameters from this catalog: stellar mass, current SFR from H\(\alpha\) emission, EW\({}_{\rm H\alpha}\), and the effective radius (R\({}_{\rm e}\)). The first two parameters are used to calculate the specific star formation rate (sSFR), and the EW\({}_{\rm H\alpha}\) is used to determine the star formation status (SFS) of the galaxies according to the prescription of Lacerda et al. (2020). Star-forming galaxies (SFG) are defined as those with EW\({}_{\rm H\alpha}\) (R\({}_{\rm e}\)) \(>\) 10 Å and retired galaxies (RG) are defined as those with EW\({}_{\rm H\alpha}\) (R\({}_{\rm e}\)) \(<\) 3 Å, whereas Green Valley galaxies (GVG) are defined as those with EW\({}_{\rm H\alpha}\) (R\({}_{\rm e}\)) between the two aforementioned values.
Footnote 2: [https://www.sdss4.org/dr17/manga/manga-data/manga-pipe3d-value-added-catalog/catalog/](https://www.sdss4.org/dr17/manga/manga-data/manga-pipe3d-value-added-catalog/catalog/)
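The EW-based classification described above reduces to a simple threshold rule. A minimal sketch follows; the function and variable names are ours and not part of the VAC.

```python
def star_formation_status(ew_halpha_re):
    """Classify a galaxy from EW_Halpha measured at Re (in Angstrom),
    following the thresholds of Lacerda et al. (2020) quoted in the text."""
    if ew_halpha_re > 10.0:
        return "SFG"   # star-forming galaxy
    elif ew_halpha_re < 3.0:
        return "RG"    # retired galaxy
    else:
        return "GVG"   # green-valley galaxy

def specific_sfr(sfr_halpha, stellar_mass):
    """sSFR from the catalog quantities: current SFR (Halpha) over stellar mass."""
    return sfr_halpha / stellar_mass    # yr^-1 if SFR in Msun/yr and mass in Msun

print(star_formation_status(12.0), star_formation_status(5.0), star_formation_status(1.0))
```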
Morphology was determined using a machine learning algorithm (see Sanchez et al. 2022) trained with 6000 galaxies with visual classification by Vazquez-Mata et al. (2022). We classified the galaxies into E, S0, Sa, Sb, Sc and Sd-Irr morphological bins following the same definition as in Camps-Farina et al. (2022).
### Galaxy chemical evolution model
We obtained the evolution of the metal content in a galaxy for a given SFH using the chemical evolution code presented in Roca-Fabrega et al. (2021). The code's inputs, aside from the SFH, are the gas accretion history, initial gas mass, and an initial mass function (IMF). The code follows the evolution of the gas-phase chemical abundances of different elements, including Fe-peak and the alpha elements, using the yields by (Kobayashi et al. 2011; Stockinger et al. 2020) for SNII, (Chiappini et al. 1997; Greggio 2010; Hillman et al. 2015) for SNIa, (Hernanz 2005; Izzo et al. 2015) for novae and (Ventura et al. 2013, 2020) for AGB stars.
We have modified the code to input the SFHs and stellar metallicities to derive the amount of low metallicity gas accretion needed to reproduce the latter. We assume that the accreted gas has an iron abundance of [Fe/H] = \(-\)1.5, a value based on measurements of damped Lyman-\(\alpha\) absorbers at z\(\sim\)5 (Poudel et al. 2020). We chose a Salpeter IMF (Salpeter 1955) for consistency with the SSP templates. The impact of the IMF on the results is mentioned in Sec. 5.3. The metallicity value of the ISM is initialized to the stellar metallicity derived for the oldest age bin we considered (\(\sim\) 12.7 Gyr), which tends to be substantially higher than the primordial values as stellar population analysis cannot resolve the SFH at its initial burst due to a loss of age resolution among other reasons (see Sec. 5.2).
To assess the robustness of using the oldest abundance determination as the initial value, we can compare our results to the abundance of the oldest populations detected in the Milky Way. In figure 8 of Minchev et al. (2018) the average [Fe/H] at \(\sim\)12.5 Gyr is -0.45, while averaging the abundance at the oldest age for galaxies with log M\({}_{\star}\) = 10-11 M\({}_{\odot}\) gives a value of -0.32. This is not too large a discrepancy, even before taking into account that Minchev et al. (2018) only measure stars at galactocentric radii of 3 kpc or more, which is larger than R\({}_{\rm e}\) for the Milky Way (\(\sim\)2.5 kpc, van den Bergh 1999). Measuring only stars beyond R\({}_{\rm e}\) would lower the abundance compared to our measurements at R\({}_{\rm e}\), and even so there is no guarantee that the Milky Way should precisely match the average value of galaxies in this stellar mass range. As such, we consider that initializing the ISM at the oldest value is valid and produces reasonable accretion histories.
The initial gas mass is assumed to match the stellar mass at the initial time, an approximation based on a study of the HI
Figure 2: Diagram of the model considered for the relation between the ChEH, the SFH and the gas accretion history. The change in the metallicity of the gas over a time interval (t1 to t2) is the result of the metal input from the stellar populations created by the SFH up to that point and the dilution due to the accreted pristine gas. The model does not consider other mechanisms such as outflows or mergers whose effects of dilution or over-enrichment will therefore be conflated with the balance of gas accretion.
content at high redshift by which \(\rm M_{HI}\sim M_{*}\) at the redshift values we can reasonably resolve (z\(\sim\)2-3) (Heintz et al. 2022). In Fig. 2 we show a schematic representation of the considered model.
We match the iron abundance to the chemical enrichment history obtained from spectral population fitting because the metallicity values of the templates used in stellar population synthesis primarily trace the iron abundance, which has the greatest influence on stellar spectra and is therefore how the templates are labeled (e.g., Sanchez et al. 2021).
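The sketch below illustrates the dilution logic of the model for a single time step under instantaneous recycling: given the gas already present, the stars formed, and a target metallicity, it solves for the mass of low-metallicity gas that must be accreted. It is a deliberately simplified stand-in, not the Roca-Fabrega et al. (2021) code, which tracks individual elements with full yield tables; the effective yield, the returned-mass fraction, and the conversion of [Fe/H] = \(-\)1.5 into a total metal fraction are placeholder assumptions.

```python
def accretion_needed(m_gas, z_gas, sfr, dt_yr, z_target,
                     y_eff=0.03, r_return=0.3, z_acc=10**(-1.5) * 0.02):
    """One time step of a toy one-zone dilution model (instantaneous recycling).
    Returns the pristine gas mass that must be accreted so that the gas
    metallicity at the end of the step equals z_target, plus the new gas mass.
    y_eff, r_return and z_acc are illustrative values, not fitted quantities."""
    dm_form = sfr * dt_yr                  # stellar mass formed during the step
    dm_lock = (1.0 - r_return) * dm_form   # gas permanently locked in stars
    m_metals = z_gas * m_gas - z_gas * dm_lock + y_eff * dm_form
    m_gas_left = m_gas - dm_lock
    # Solve z_target = (m_metals + z_acc * m_acc) / (m_gas_left + m_acc) for m_acc.
    m_acc = (m_metals - z_target * m_gas_left) / (z_target - z_acc)
    m_acc = max(m_acc, 0.0)                # negative values are clipped, as in the text
    return m_acc, m_gas_left + m_acc

m_acc, m_gas_new = accretion_needed(m_gas=1e10, z_gas=0.010, sfr=2.0,
                                    dt_yr=1e9, z_target=0.011)
print(f"accreted gas over the step: {m_acc:.2e} Msun")
```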
We have a set of input parameters for each galaxy in our sample, as well as those resulting from averaging the SFH and ChEH of each galaxy in bins of stellar mass, morphology, and current SFR. The averaging procedure was developed to prevent the nonuniform redshift coverage in the MaNGA sample from affecting the averaged histories. The full procedure can be found in Camps-Farina et al. (2022). The averaged SFHs and ChEHs are more reliable due to their higher statistical significance and allow for a more accurate determination of accretion rates representative of galaxies within a given group. The individual histories, on the other hand, allow us to identify more detailed trends. In Sec. 4 we show the gas accretion histories resulting from both types of input. What we show as "averaged gas accretion histories" corresponds to the gas accretion that results from using averaged ChEHs and SFHs, rather than averaging the individual gas accretion histories.
We use \(\tau 80\), defined as the time (in Gyr) required for a galaxy to accrete 80% of its gas, to parameterize the shape of the gas accretion history. Low values indicate that galaxies accreted most of their gas at an early time, while high values represent persistent gas accretion over cosmic times.
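As a concrete illustration, \(\tau\)80 can be obtained from the cumulative integral of a (non-negative) accretion-rate history; the helper below is our own sketch, not part of the analysis pipeline.

```python
import numpy as np

def tau80(time_gyr, acc_rate):
    """Time (Gyr, measured from the start of the history) at which 80% of the
    total accreted gas mass has been reached. Negative rates are clipped to zero,
    as done in the text before computing the parameters."""
    rate = np.clip(acc_rate, 0.0, None)
    # Cumulative accreted mass via trapezoidal integration.
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) *
                                           np.diff(time_gyr))])
    return np.interp(0.8 * cum[-1], cum, time_gyr)

t = np.linspace(0.0, 13.0, 200)        # toy time axis since the first resolved bin
early = np.exp(-t / 2.0)               # early, declining accretion history
steady = np.ones_like(t)               # persistent accretion history
print(f"tau80, early accretion:     {tau80(t, early):.1f} Gyr")
print(f"tau80, persistent accretion:{tau80(t, steady):.1f} Gyr")
```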
As an example of the histories we used as input, and to show the importance of having a mechanism for metal dilution, we show in Fig. 3 the ChEH and SFH resulting from averaging all the individual histories of the galaxies in our sample, as well as the predicted ChEH without dilution. The amount of metals in the gas increases steadily with time until it reaches a plateau, while the SFR decreases in general.
Some galaxies clearly show unphysical gas accretion rate histories and were therefore removed from the sample. The affected galaxies are generally those with low stellar mass and bright emission lines. Lower mass galaxies tend to have poorer signal-to-noise ratios, and bright emission lines (especially if they are broad) can interfere with the spectral fitting procedure that produces our ChEHs and SFHs. This cut removed \(\sim\)5% of the objects, leaving 8523 galaxies in the sample.
## 4 Results
### Averaged gas accretion histories
In Fig. 4 we show the average accretion histories in different stellar mass ranges. For each mass bin, we estimated an uncertainty in the mean accretion history by propagating the errors in the mean SFH and ChEH, and we estimated the scatter by propagating the standard deviation of each ChEH and SFH.
It can be seen that the total gas accreted by galaxies throughout cosmic time increases with stellar mass but is very similar when this value is normalized to the stellar mass at \(z=0\). The first statement is obviously expected, but the novelty is that we have only used the difference between stellar metallicity measured from the spectra and that expected in a closed-box chemical evolution model to derive this result, without adding any mass-related constraint. Furthermore, while more massive galaxies accrete more gas relative to their mass at early times, this trend reverses at \(z=0\).
In the lowest mass bins, we also find epochs where the measured stellar metallicity is higher than predicted by the chemical evolution code, which results in a negative value for the accretion rate. These can be simply due to imprecise measurements, especially as they appear to occur for the mass bins with lower accretion rate, but they can also indicate a larger importance of outflows in low mass galaxies, where more metal poor gas is selectively lost. A loss of metal-poor gas results in the average metallicity of the galaxy rising compared to a closed box evolution as the higher metallicity gas remains instead.
In Fig. 5 we show the average mass accretion rates of galaxies in the intermediate stellar mass range 10\({}^{10.5-11}\) M\({}_{\odot}\), separated into bins of morphological type and SFS. It can be seen that the accretion rate for all morphological types increases at early times, reaches a maximum, and then decreases almost continuously, reaching its minimum value at \(z=0\). The decrease starts 7, 4 and 2 Gyr ago for E-S0, Sa and Sb types, respectively, which is reflected in the different \(\tau 80\) values. This progression should be at least partially related to the fact that later morphological types are more likely to be star-forming and, indeed, it can be seen in the figure that SFG and GVG have an accretion rate history very similar to Sb galaxies, while the evolution of the mass accretion rate in RG follows that of E and S0s.
The similarity between the accretion histories of SFG and GVG is particularly interesting. The most recent peak of accretion at \(\sim 10^{9.3}\) yr (\(\sim\)2 Gyr) is lower for GVG but otherwise both bins have had accretion episodes throughout cosmic time. About 1 Gyr ago, the accretion rate for GVG fell to the values of RG, separating from SFG. This similarity is in very good agreement with what we would expect, as GVG are galaxies that are just now becoming retired, meaning that their accretion history up until this point should be similar to that of SFG, as is observed.
Keres et al. (2005) and van de Voort & Schaye (2012) use hydrodynamic cosmological simulations to estimate the gas accretion rates of galaxies by tracking the kinematics of the gas around them from the cosmic web. For Milky Way-like halos they predict accretion rates which peak at around 10-30 M\({}_{\odot}\) yr\({}^{-1}\) at z\(\sim\) 2-3 and drop to about 1-3 M\({}_{\odot}\) yr\({}^{-1}\) in the present. For similar stellar masses we obtained an average accretion rate a fac
Figure 3: Set of ChEH (top) and SFH (bottom) corresponding to the averaged histories of all galaxies in our sample. In the top panel, the solid line corresponds to our measured ChEH and the dashed line to the predicted enrichment history in the absence of dilution from pristine gas accretion.
tor of 2-6 lower at the peak (\(\sim\) 4.8 M\({}_{\odot}\) yr\({}^{-1}\)), but very good agreement in the local universe (1.2 M\({}_{\odot}\) yr\({}^{-1}\)). Given the agreement on the recent value, it is possible that the discrepancy in the peak value is due to us not being able to observe the peak of the accretion within the LBT range that we can reliably resolve. Alternatively, the simulations might overestimate the early accretion rate as a result of the subgrid physics recipes used or due to resolution issues. The choice of 1 Gyr ago for the recent value, instead of the most recent measurement, is due to the fact that the accretion rates measured for the most recent times (relative to when the light was emitted) are likely to be underestimated, simply because of the delay between the infall of the gas, its becoming well mixed into the ISM, and finally the birth of stars whose metallicity has been affected by the accretion.
Stellar population analyses are especially suited for drawing conclusions by measuring the quantities in relative terms rather than for absolute values, such as findings on which objects or areas of a galaxy are younger or have higher abundances than others. The reason for this is that the values of the properties can change depending on the templates used for the fitting (e.g., see Cid Fernandes et al. 2014). For example, Muzzin et al. (2009) use three stellar population models to perform SED fitting, finding a 25-50% variation for the stellar mass and SFR. Combined with the substantial simplifications in the model (whose impact is discussed in Sec. 5.3) we consider the discrepancy with the aforementioned simulations to be fairly reasonable, especially as the difference between the two hydrodynamical estimates of the accretion rate themselves is a factor of 3, which is higher than the factor between our results and the closest of the estimates. Additionally, we do not have independent measurements of the abundance of the primordial gas that is accreted, and we assume it does not change its abundance over time. If the ac
Figure 4: Absolute (\(\dot{\rm M}_{\rm acc}\)) and relative (\(\dot{\rm M}_{\rm acc}\)/M\({}_{\star}\)) accretion rate histories (left and right panel respectively) as a function of the look-back time for the galaxies in our sample segregated by their current stellar mass (colors). In both panels the shaded areas correspond to the uncertainties derived from the error of the average value in the ChEHs and SFHs. This is not representative of the scatter of the distribution within each bin, which is indicated in the left side of the left panel.
Figure 5: Accretion rate histories for different morphology (left panel) and SFS (right panel) bins at a fixed mass bin of \(10^{10.5-11}\) M\({}_{\odot}\). In the bottom right corner of each panel, we show the \(\tau\)80 of the accretion histories, defined in Sec. 3.2. In both panels the shaded areas correspond to the uncertainties derived from the error of the average value in the ChEHs and SFHs.
creted gas has a different abundance than the one we assume, this would conversely apply a scaling factor to the accretion rates we measured. Future work on the chemical evolution model should improve the accuracy of the estimates.
Another important point to consider is the fact that our measurements are akin to an "effective" gas accretion rate. As stated above, in order for the accreted gas to be measured using our method it not only has to be accreted into the galaxy, but it must also reach the loci of future star formation before a star formation burst is triggered. The aforementioned studies measure the accretion rate of the gas from its kinematic infall, not considering how much of this gas will eventually condense into H\({}_{2}\) and form stars. Galactic winds with high mass-loading factors in particular can also remove a substantial portion of the accreted gas before it has a chance to form stars (e.g., Veilleux et al. 2005; Shen et al. 2012; Genzel et al. 2014; Lopez-Coba et al. 2019, 2020; Concas et al. 2022). This removal is dependent on the mass of the galaxy and is thus not simply a general scaling factor in our results.
### Star formation efficiency
Given that we now have the accretion histories as well as the SFHs of the galaxies in each bin, it is trivial to compute the SFE histories (SFEHs) as well. Unlike the accretion rate, the SFE is inversely proportional to the total gas mass present in the galaxy at each time. As a result, it is more affected by the choice of the initial gas mass. In Appendix A we show how changing the value of the initial gas mass has a much higher effect on the SFEH than on the accretion rate history.
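Schematically, the SFE history follows from the running gas budget: the gas mass at each time is the initial gas mass plus the accreted gas minus the gas locked into stars, and the SFE is the SFR divided by that reservoir. The sketch below illustrates this under instantaneous recycling and without outflows; the variable names and toy histories are ours.

```python
import numpy as np

def sfe_history(time_gyr, sfr, acc_rate, m_gas_init, r_return=0.3):
    """Toy SFE history: SFE(t) = SFR(t) / M_gas(t), where the gas reservoir is the
    initial gas mass plus the accreted gas minus the gas locked into stars.
    Outflows are not included, mirroring the simplification discussed in the text."""
    dt_yr = np.gradient(time_gyr) * 1e9
    gas_locked = np.cumsum((1.0 - r_return) * sfr * dt_yr)
    gas_accreted = np.cumsum(np.clip(acc_rate, 0.0, None) * dt_yr)
    m_gas = m_gas_init + gas_accreted - gas_locked
    return sfr / np.clip(m_gas, 1.0, None) * 1e9     # SFE in Gyr^-1

t = np.linspace(0.5, 13.0, 100)                      # toy cosmic-time grid (Gyr)
sfr = 5.0 * np.exp(-t / 5.0)                         # toy declining SFH (Msun/yr)
acc = 4.0 * np.exp(-t / 4.0)                         # toy declining accretion (Msun/yr)
print(sfe_history(t, sfr, acc, m_gas_init=1e10)[[0, 50, -1]])
```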
Another important point to consider is which phase of the ISM our gas mass parameter actually traces. The gas accretion we measured is derived from matching the abundance of the ISM to that of the stellar populations, so whether it refers to HI, HII or H\({}_{2}\) depends on the phase in which the mixing occurs. Given that the evidence for gas accretion is observed in HI and that the dense clouds that form stars are made up of H\({}_{2}\), we generally expect our measurement of the gas mass to refer primarily to HI or to the total sum of gas. It is also important to keep in mind that even if the gas is accreted and mixed in the HI phase, this does not guarantee that several Gyr after accretion it will still be in HI form; the fact that stars are formed out of it (a prerequisite to be measured in our methodology) means that at least a portion of it was converted into H\({}_{2}\). We removed the amount of stellar mass formed from the amount of gas, but this is only a lower bound on the amount of gas converted into H\({}_{2}\). Some of the gas will also be ionized by the stellar populations into HII, so in general our gas mass estimates correspond to somewhere between the total mass of HI and the total gas mass in a galaxy.
In Fig. 6 we show the averaged SFE of galaxies in bins of stellar mass, SFR and morphology. The two morphology panels (bottom ones) use two different mass bins so that the late-type galaxies can also be shown while maintaining good statistical representation in each bin, with the Sb bin serving as a bridge between the two panels.
It can be seen that the peak of the SFE occurs at higher redshift for more massive galaxies. It is also clear that the decline of the efficiency after the peak is steeper for massive galaxies. This leads to the current segregation such that less massive galaxies are more efficient at forming stars in the present day. At LBT\(\sim 10^{9.4}\) yr the trend with stellar mass has inverted and less massive galaxies have higher SFE. The two less massive bins have similar SFE histories though they still show the same trend of more massive galaxies being more efficient in the past and less so in the present.
At a given stellar mass, the SFE decline after the peak depends on the current star formation status. RG dropped earlier in SFE compared to GVG and SFG. After LBT\(\sim 10^{9.4}\) yr the three bins show fairly flat histories, but SFG consistently maintain a higher SFE than GVG and RG.
Regarding morphology, similar results can be seen, likely reflecting the correlation between morphology and SFS: the earlier-type galaxies in the E-Sb range show an earlier peak of SFE that declines more than the rest, which produces an inversion in the M\({}_{\star}\)-SFE correlation. The lower-mass, late-type galaxies in the bottom right panel appear to follow the trend of the less massive galaxies in the top left panel, with similar SFEHs, but in this case the earlier-type galaxies are more efficient at all times. These galaxies, as well as the low-mass ones in the top left panel, show a divergence at the earliest times toward high SFE values. This likely indicates that our prescription for the initial gas mass underestimates the value for these galaxies, and that even at the earliest ages we can resolve, the gas fraction of late-type and lower-mass galaxies is higher than that of more massive, earlier-type galaxies.
One could interpret the fact that the inversion in SFE is barely detected for Sb and Sc and not at all for Sd in the bottom right panel as a difference in timing rather than in the physics involved. For lower mass Sb and Sc galaxies it could be that the drop in their average SFE, observed around LBT\(\sim 10^{9.3}\) yr for higher mass galaxies, has not yet occurred. Indeed, for Sb galaxies in this panel the most recent values show their SFE dropping slightly below Sc.
In order to compare our values for the SFE with observations we need to consider the gas phases. We proceed under the assumption that the gas is accreted as HI but part of it is converted into H\({}_{2}\) and HII according to the ratios which are currently observed. Calette et al. (2018) put the H\({}_{2}\) to HI ratio for galaxies with log M\({}_{\star}\sim 10.5\) M\({}_{\odot}\) (the bin we use for the comparison) at about 0.4 dex for late type galaxies, while the ratio of HII to HI is estimated as 0.1 in Kado-Fong et al. (2020). As such, we can use these ratios to obtain estimations for each gas phase in terms of our measured gas mass as:
* M\({}_{\rm HI}\sim 0.67\) M\({}_{\rm gas}\)
* M\({}_{\rm H_{2}}\sim 0.26\) M\({}_{\rm gas}\)
Our measured SFE for SFG galaxies of \(10^{10-10.5}\) and \(10^{10.5-11}\) M\({}_{\odot}\) is \(\sim\)0.05 Gyr\({}^{-1}\) 1 Gyr prior to observation, which can be converted under our assumptions to (see the numerical sketch after this list):
* SFE\({}_{\rm HI}\sim 0.08\) Gyr\({}^{-1}\) ; \(\tau_{\rm dep}\sim 12.5\) Gyr
* SFE\({}_{\rm H_{2}}\sim 0.19\) Gyr\({}^{-1}\) ; \(\tau_{\rm dep}\sim 5.2\) Gyr
Leroy et al. (2008) find a value of 0.5 Gyr\({}^{-1}\) averaging measurements of the SFE in local galaxies of log M\({}_{\star}\sim 10.1-10.9\) M\({}_{\odot}\) in the SINGS survey using molecular gas, and Colombo et al. (2018) similarly compute the molecular gas depletion time for EDGE-CALIFA galaxies, finding a typical value of \(\sim 10^{9.5}\) yr (except for E galaxies), which corresponds to \(\sim 0.3\) Gyr\({}^{-1}\). Regarding studies on SFE\({}_{\rm HI}\), Parkash et al. (2018) and Chowdhury et al. (2022) find depletion times of 3-6 Gyr, which correspond to 0.15-0.33 Gyr\({}^{-1}\). For both phases our estimates of the SFE are a factor of 2-6 lower than the observational values, which is expected given that we do not consider the effect of outflows on the amount of gas within the galaxy. Outflows could be responsible for expelling substantial amounts of gas, thus lowering the remaining mass and causing us to underestimate the SFE, especially at later times. A removal of around 20-80% of the total amount of accreted gas via outflows over the lifetime of these galaxies
would give a reasonable match with the results and this is actually a relatively low value compared to the expected mass loading factors in galaxies from simulations (Mitchell et al. 2020) and observations (Chisholm et al. 2017) with M\({}_{\rm out}\)/M \(\geq\) 1.
### Individual gas accretion histories
From this point on we used the individual accretion histories of the 8523 galaxies in the cleaned sample instead of the averages used in previous sections. As we are dealing with many objects, instead of the accretion histories, we focus on two quantities that parameterize them: (i) the total accreted gas mass and (ii) the \(\tau\)80 of the accretion history. Negative values of the accretion rate were set to zero prior to calculating these two parameters. These negative values appear sometimes in the accretion histories and can arise due to method inaccuracies, or they could be due to effects that produce an over-enrichment of the ISM such as outflows of low-metallicity gas (thus raising the average metallicity).
The code tries to match the abundance of the gas by adding material of lower abundance, so the main assumption expected to hold is that the measured abundances are equal to or lower than they would be if no gas were accreted. A higher abundance than expected therefore yields a negative numerical value to match the observed one. As with any measurement, there are errors associated with the determination of the abundances, and it is therefore expected that in some cases this assumption fails purely due to uncertainty fluctuations. This is more likely for lower-mass (and consequently fainter) galaxies, which also tend to have lower abundances in general, thus increasing the errors and lowering the values at the same time. Stellar population synthesis is also affected by an intrinsic degeneracy between age and metallicity, such that an increase in age and an increase in metallicity produce similar changes in the template spectra. Its effects on the accretion rate histories are discussed in Sec. 5.2. While there is significant value in checking for the existence of galaxies whose over-enrichment is physical in nature, for the purposes of this article we focus on dilution effects, and negative values make the measurement of the two aforementioned parameters ambiguous.
In the left panel of Fig. 7, we show the relation between the total amount of stellar mass formed in a galaxy (\(\int\) SFH), the total mass of the gas it accreted, and its current sSFR. It is important to note that, because we do not resolve the earliest times in the SFH and because aging stellar populations lose mass, the integrated SFH is not the same as the current stellar mass. The former is the result of integrating the SFH derived from the stellar populations, while the latter is measured from the current luminosity by applying the appropriate M/L in the V band (see Sanchez et al. 2022).
Figure 6: SFE histories for different bins of galaxies. In each panel the X-axis is the look-back time in log and the Y-axis the SFR divided by the total amount of gas present at each time as predicted by the model. In the top left panel, we show the galaxies divided into stellar mass bins, in the top right panel we show one mass bin (\(10^{10.5-11}\) M\({}_{\odot}\)) divided into current SFS bins, in the bottom left panel we show the same mass bin divided into four morphology bins (E-Sb) and in the bottom right panel we show a different mass bin (\(10^{10.5-11}\) M\({}_{\odot}\)) divided into the Sb, Sc and Sd morphology bins.
There is a clear, very tight, correlation between the amount of stellar mass a galaxy produces and the amount of gas it needs to accrete in order to match the observed chemical content. There is also a secondary correlation with the sSFR such that galaxies which are currently still forming stars accrete more gas compared to others of similar stellar mass. This result is especially remarkable as the sSFR is determined from emission lines and is therefore only a measurement of the current star formation of galaxies. The correlation provides a clear link between a galaxy's past history of accretion and its current star formation properties.
The gray dashed line shows the 1:1 relation between the two quantities, with practically all galaxies located above the line and the bulk of the distribution lying just above and parallel to it. The absence of galaxies which have accreted less gas than they have consumed to form stars is in very good agreement with the hypothesis that star formation is fueled by gas accretion. The reality of this process, however, is much more complex once we account for the different phases of matter (HI, H\({}_{2}\), HII), the infall, mixing and star formation timescales and the effects of outflows on the balance between accreted gas and stellar mass formed. The fact that galaxies lie above rather than centered on the 1:1 line is probably the result of a combination of these factors. For example, the presence of metal-rich outflows would simultaneously dilute the gas in the absence of accretion, making us overestimate the amount of gas in the galaxy, and also remove material available to form stars, moving a galaxy up and left in the relation making it seem like an outlier with very low SFE.
The fact that star-forming galaxies can lie at an order of magnitude or more above the relation is likely partly explained by feedback removing or heating gas so it is not available for star formation. Another hint of the effect of outflows could be that for galaxies with lower \(\int\)SFH (\(\sim 10^{9}\) M\({}_{\odot}\)) in the left panel of Fig. 7 the relation curves slightly upward. In these galaxies outflows are expected to be more efficient due to lower escape velocities, so there should be, on average, a larger difference between the amount of gas accreted and the amount that is left for star formation after outflows remove a portion.
The efficiency of the conversion of accreted gas, which is mostly atomic, into molecular gas also plays a role in this, though typical recipes for the conversion cannot be applied directly. Since we measured the accreted gas using the information on the chemistry of the stars, the gas we are measuring has at the very least mixed with the gas that ends up forming the stars, the phase of matter (HI, HII, H\({}_{2}\)) in which the mixing occurs affects how much of the measured accreted gas mass contributes to fueling star formation in the galaxy and therefore the actual SFE.
In the left panel, the binned data points for the two ranges of sSFR (red and blue dots) show a convergence such that they become more similar for higher stellar masses, which could also be the result of stellar feedback outflows becoming more efficient at lower stellar masses. They also show that the bulk of the galaxies lies close to the 1:1 line and that the galaxies that accrete very high amounts of gas relative to their stellar mass are a minority.
Besides how much gas is accreted over a galaxy's lifetime, it is also important to understand how this accretion is distributed in time. In the right panel of Fig. 7, we show the correlation between the \(\tau 80\) parameter of the accretion history and the current stellar mass and sSFR. The larger cloud of points in the figure shows a correlation between the three parameters such that currently star-forming galaxies lie in a fairly compact sequence of high \(\tau 80\) for all stellar mass values, whereas retired galaxies appear to be centered at higher masses but also at significantly lower \(\tau 80\) values. This distribution is reminiscent of the star formation main sequence, and it shows that galaxies that are currently star-forming have overall persistent gas accretion or have had a significant recent accretion event, while galaxies whose accretion has declined tend to be retired nowadays.
There is another cloud of points at low mass and low \(\tau 80\) which also has high value of the sSFR, suggesting the existence of galaxies which are still star-forming but have not required recent accretion to dilute their gas compared to earlier epochs. Since in the left panel star-forming galaxies consistently accrete more gas without showing a separate cloud of points it is possible that these galaxies are still accreting enough gas to sustain
Figure 7: Results for the gas accretion histories obtained using the chemical and star formation history of each galaxy. In the left panel, we show the relation between the integrated SFHs and the accreted gas masses and in the right panel we show the relation between the current stellar masses and the \(\tau 80\)s of the accretion histories. In both panels the color corresponds to the current sSFR values and the contours enclose 35%, 65% and 95% of the distribution. The thick points correspond to the median values within 0.5 dex wide mass bins separated into two sSFR ranges with the error bars marking the 25th and 75th percentiles. In the right panel, the log sSFR \(>\) -2 Gyr\({}^{-1}\) averaged values do not include the cloud of points located below 8 Gyr in \(\tau 80\).
star formation, but they had an unusually high accretion rate in the past, shifting the \(\tau\)80 to lower values.
The binned data points show the offset between galaxies as a result of sSFR, as well as a lack of correlation of \(\tau\)80 with stellar mass beyond a slight decrease at higher masses for low-sSFR galaxies. Note that the high-sSFR binned points do not include the secondary cloud of low \(\tau\)80 points, which was masked prior to the calculation so that the points trace what we consider to be the main distribution.
As a whole, Fig. 7 shows that for a galaxy to still be forming stars today it needs: (i) higher amounts of accreted gas and (ii) for the accretion to not have stopped or heavily declined in the recent past. This is compelling evidence for the scenario in which persistent star formation in galaxies occurs primarily as a result of ongoing accretion of pristine gas.
#### 4.3.1 Star formation efficiency
Much like for the accretion histories, we need to parameterize the individual SFE histories in order to be able to study them. The SFE is an intensive quantity, meaning that it does not scale with volume or mass like the SFR. If we "double" a galaxy by making it have twice as much material of each type, the resulting galaxy will have double the mass and SFR, but the SFE will stay the same because both SFR and gas mass double, canceling each other's increase. Because of this, the parameters used in the previous section to characterize the accretion rate are no longer relevant for the SFE because the cumulative sum of the SFEH does not have a physical meaning. As a result we have characterized the SFE histories using the median SFE value over time, \(\langle\)SFE\(\rangle_{\rm med}\), and the slope of a linear fit of the history, \(\beta_{\rm SFEH}\). In order to avoid the effect of early LBT divergences due to the underestimation of the initial gas mass seen in Fig. 6 we removed the earliest 2 Gyr of each SFE history prior to calculating the parameters.
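A minimal sketch of how these two summary parameters could be computed is shown below; the exact fitting choices (time variable, weighting) are ours, beyond what is stated in the text.

```python
import numpy as np

def sfe_parameters(lbt_gyr, sfe, t_skip_gyr=2.0):
    """Median SFE and linear slope of the SFE history, excluding the earliest
    2 Gyr of the history as described in the text.
    lbt_gyr: look-back time (larger = earlier); sfe: SFE in Gyr^-1."""
    keep = lbt_gyr < (lbt_gyr.max() - t_skip_gyr)     # drop the earliest 2 Gyr
    sfe_med = np.median(sfe[keep])
    # Slope of SFE versus cosmic time, so negative means declining efficiency.
    beta = np.polyfit(lbt_gyr.max() - lbt_gyr[keep], sfe[keep], deg=1)[0]
    return sfe_med, beta

lbt = np.linspace(12.7, 0.5, 60)                      # toy look-back time grid (Gyr)
declining = 0.3 * np.exp(-(12.7 - lbt) / 4.0)         # retired-like history
flat = np.full_like(lbt, 0.05)                        # star-forming-like history
print(sfe_parameters(lbt, declining))
print(sfe_parameters(lbt, flat))
```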
In the top panels of Fig. 8, we show the relation between \(\langle\)SFE\(\rangle_{\rm med}\), \(\beta_{\rm SFEH}\) and the current stellar mass and sSFR of the galaxies in our sample. Similarly to the results observed in Fig. 6 we find that higher mass galaxies have, in general, a higher \(\langle\)SFE\(\rangle_{\rm med}\) but a steeper decline over cosmic time. The \(\langle\)SFE\(\rangle_{\rm med}\) vs M\({}_{\star}\) relation has a very large dispersion which increases for higher masses, but average trends can still be discerned, which generally coincide with those found for the averaged histories.
Currently star-forming and retired galaxies appear to have different distributions within the full sample: The most star-forming galaxies appear to cluster at low \(\langle\)SFE\(\rangle_{\rm med}\) with no correlation with M\({}_{\star}\), but calculating the median \(\langle\)SFE\(\rangle_{\rm med}\) within mass bins and the two sSFR groups shows the underlying correlation. While below \(10^{10.5}\) M\({}_{\odot}\) the trend for star-forming galaxies is fairly flat, similar to what can be seen in the top left panel of Fig. 6, it steepens for higher masses. For galaxies with lower sSFR the relation is practically flat regardless of stellar mass, producing a convergence between the two sSFR groups at high stellar masses.
The top right panel shows how star-forming galaxies are tightly clustered around a value of 0 for \(\beta_{\rm SFEH}\) which means that, on average over their lifetimes, they maintain a similar SFE value. Lower sSFR galaxies, on the other hand, have negative slopes on average, indicating a decline in SFE over time. The median slope decreases for higher stellar masses indicating that for these galaxies the decline in SFE is steeper. The apparent conclusion is that galaxies which used to form stars at a very high efficiency in the past end up becoming retired in current times, which could be evidence for stellar feedback-based quenching mechanisms but also for other mechanisms that are indirectly related to high SFE. Merger-triggered star-bursts, especially followed by an AGN, would also fit the picture as would morphological quenching of early disk galaxies.
The physical size of galaxies is a parameter that is expected to correlate with their SFE such that smaller galaxies are more efficient at forming stars than larger ones (e.g., Young 1999). In the bottom panels of Fig. 8, we check whether this applies to our SFE histories. In the left panel, we show the M\({}_{\star}\)-Re relation colored by the median SFE of the galaxies with a cubic fit between M\({}_{\star}\) and Re. Below \(\sim 10^{11.5}\) M\({}_{\odot}\) the distribution appears to be skewed with \(\langle\)SFE\(\rangle_{\rm med}\) such that more efficient galaxies lie below the average Re and conversely less efficient ones lie above. This trend is confirmed by dividing the sample into high and low-efficiency groups and calculating the median within stellar mass bins. Galaxies with very high (low) \(\langle\)SFE\(\rangle_{\rm med}\) follow a trend that is parallel to the full sample cubic fit but lies below (above) the fit such that they are smaller (larger) than galaxies of the same stellar mass.
In the bottom right panel, we try to find a more robust determination of this correlation and also to tie it to \(\beta_{\rm SFEH}\). We calculated the ratio between the values of Re we measured and those predicted by the fit for their stellar mass, thus removing the dependence on M\({}_{\star}\), and plotted this ratio versus the \(\langle\)SFE\(\rangle_{\rm med}\) with the \(\beta_{\rm SFEH}\) as the color.
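For illustration, the ratio construction could look like the sketch below, where we assume the cubic fit is performed in log-log space; the toy data simply stand in for the catalog values.

```python
import numpy as np

# Toy data standing in for the catalog values (log stellar mass, log Re in kpc).
rng = np.random.default_rng(1)
log_mstar = rng.uniform(9.0, 11.5, 2000)
log_re = 0.3 * (log_mstar - 10.0) + 0.5 + rng.normal(0.0, 0.15, log_mstar.size)

# Cubic fit of the mass-size relation (here in log-log space, an assumption).
coeffs = np.polyfit(log_mstar, log_re, deg=3)
log_re_fit = np.polyval(coeffs, log_mstar)

# Ratio of the measured Re to the value expected for the galaxy's stellar mass;
# this removes the M*-Re dependence before correlating with <SFE>_med.
re_ratio = 10**(log_re - log_re_fit)
print(re_ratio[:5])
```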
Despite the width of the distribution, the binned data points show a weak trend such that galaxies become less efficient the larger they are relative to the average size for their M\({}_{\star}\). It is particularly interesting that the relation plateaus below a ratio of 1 but declines quickly above this value. The leftmost bin does drop again, but this may be affected by that bin containing fewer galaxies.
Compared to those shown in Young (1999), our correlations are much weaker. Beyond the lower precision of our measurements, this is likely because we account for the correlation between the effective radius of the galaxies and their stellar mass, whereas Young (1999) shows the correlation between radius and SFE directly; the correlation found there might therefore be affected by both parameters correlating separately with stellar mass. We also consider the median SFE over the galaxies' lifetimes rather than the current measurements.
#### 4.3.2 Matching the remaining gas to observations
The validity of our measurements of the SFE depends on how accurate our estimate of the remaining gas in the galaxies is; the main caveat here is that we do not consider outflows. As such, we expect our gas mass values to be overestimated by a certain factor, which in turn means that our SFE values are underestimated, as shown in Sec. 4.2 for the comparisons to literature values.
The HI-MaNGA (Masters et al. 2019) survey is a follow-up program for MaNGA which aims to provide information on the neutral gas for galaxies in the MaNGA sample. In the second data release (Stark et al. 2021), they provide HI data for 3818 galaxies of which 1809 have a HI gas mass determination. The intersection with our refined sample is 1371 objects. In Fig. 9 we compare the predicted current amount of gas, obtained by subtracting the total stellar mass formed from the total accreted gas mass and the initial gas mass, to the measurements of HI gas mass in the galaxies. The 1:1 relation is located as an upper envelope of the distribution, mostly parallel to the centermost contour where more objects are present. The galaxies with higher predicted gas mass, which are generally also the most massive
in stellar mass, show a significantly lower ratio of measured-to-predicted gas mass, consistent with predictions of the outflow rate due to AGN (e.g., Mitchell et al., 2020).
We show two fits to the data, one linear and one allowing only an offset to the 1:1 relation. The linear fit was performed using orthogonal distance regression (ODR) to ensure that the slope of the relation follows the overall direction of the distribution, as it minimizes perpendicular distance to the regression rather than distance in the Y-axis. The fitted offset shows on average a 66% loss of the accreted gas, either by being expelled in outflows or converted into other phases of matter such as molecular and ionized gas. The linear fit has a shallower slope as a result of the under-prediction of gas mass at low masses and over-prediction at high masses. While the shape of the distribution at the outermost contours favors the linear fit, the central parts appear to follow the offset distribution better, seen in the center-most contour. Overall, the linear fit is better as seen in the reduced chi-square values but the offset one is not much worse.
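For reference, the two fits can be reproduced with standard tools; the sketch below uses scipy.odr for the orthogonal-distance linear fit and a simple mean offset for the fixed-slope alternative (the latter is our own choice of estimator, as the exact offset-fitting procedure is not specified), on toy data standing in for the catalog values.

```python
import numpy as np
from scipy import odr

# Toy stand-ins for the predicted remaining gas mass and the HI-MaNGA
# measurements, both in log(Msun).
rng = np.random.default_rng(2)
log_gas_pred = rng.uniform(9.0, 11.0, 500)
log_gas_obs = 0.8 * log_gas_pred + 1.5 + rng.normal(0.0, 0.2, 500)

# Linear fit with orthogonal distance regression, which minimizes the
# perpendicular distance to the line rather than the vertical residuals.
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
fit = odr.ODR(odr.RealData(log_gas_pred, log_gas_obs), linear, beta0=[1.0, 0.0]).run()
slope, intercept = fit.beta

# Alternative: keep the slope fixed to 1 and fit only an offset from the 1:1 line.
offset = np.mean(log_gas_obs - log_gas_pred)

print(f"ODR fit:    slope={slope:.2f}, intercept={intercept:.2f}")
print(f"Offset fit: log(M_obs/M_pred) = {offset:.2f}")
```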
The over-prediction of gas mass for high stellar mass galaxies fits well with the expectations from simulations. Simulations predict that above log M\({}_{\star}\sim 10.5\) M\({}_{\odot}\) AGN feedback becomes efficient (e.g., Dekel & Birnboim, 2006; Mitchell et al., 2020),
Figure 8: Results for the SFE histories obtained using the chemical and SFH of each galaxy. In the upper left panel, we show the relation between stellar masses and median SFEs and in the upper right panel the relation between stellar masses and the slope of the SFE histories. In both of these panels the color corresponds to the sSFRs. In the bottom left panel, we show the relation between the stellar masses and the effective radii with the median SFEs shown in color. The blue line corresponds to a cubic fit to the M\({}_{\star}\)-Re relation. In the bottom right panel, we show the relation between the ratio of Re to the value of the cubic fit shown in the left panel and the median SFE, with the slope of the SFE histories shown in color. The contours in each panel enclose 35%, 65% and 95% of the distribution respectively. The thicker points shown in each panel correspond to the median values of the Y-axis quantity within 0.5 dex wide bins in the physical quantity of the X-axis selected as indicated in the legend for each color, with the error bars marking the 25th and 75th percentiles.
Figure 9: Comparison between our predicted remaining amount of gas and measurements of the current HI gas mass from the HI-MaNGA survey (Stark et al., 2021). The dashed line shows the 1:1 relation and the solid lines two different fits to the data, either a linear one or one offset from the 1:1 ratio. The color corresponds to the current sSFR of the galaxies.
which would increase the effect of outflows removing gas. A relative increase in outflow efficiency above these masses would produce the observed widening of the gap between the measured gas mass and the one predicted in the absence of outflows. While the X-axis in Fig. 9 does not represent stellar mass directly, more massive galaxies will, on average, have more total gas mass so it serves as a proxy. The other end of the relation, which shows that low-mass galaxies can have higher measured gas masses compared to our predictions in the absence of outflows, could be the result of underestimating the initial gas mass for these objects. In Sec. 4.2 we proposed this underestimation as a way to explain the sharp upturn of the SFEH at the earliest ages and it would also explain why we measured lower amounts of gas compared to observations.
Knowing the ratio between the gas masses we predict and those that are measured, we can use the fitted functions to correct the SFE values in our galaxies. In Sec. 4.2 we compared our results for the log M = 10.5-11 M\({}_{\odot}\) bin with literature values, finding that our SFE values are too low in general. Fig. 9 shows that we are, in general, overestimating the amount of gas in the galaxies, so after correcting our measured gas masses using the fitted functions we find HI depletion times of 2.9 Gyr (linear) and 4.1 Gyr (offset) while for H\({}_{2}\) the depletion times become 1.2 Gyr (linear) and 1.7 Gyr (offset). The literature values for HI are 3-6 Gyr and for H\({}_{2}\) it is \(\sim\)2 Gyr.
Both versions of the SFE now fit very well with the results from the literature but this is to be expected as our "correction" effectively transforms our gas masses into the average values of the HI-MaNGA survey whose gas masses have been validated against other works in the literature. The main result of this section is the fact that our gas mass estimations are reasonable given the approximations in the model and the fact that the trends, such as more massive galaxies having lost more gas to outflows, make sense given the physics we have implemented in the model.
## 5 Discussion
The results shown here provide a consistent argument for the role that gas accretion of pristine gas has on regulating star formation in galaxies and the effect that it has in their evolution. The most important aspect of this study is perhaps the evidence that gas flows in galaxies leave imprints in the stellar populations that can be recovered using spectral population fitting techniques and relatively simple models. The potential for this type of study is evidenced by the fact that we are able to predict the current star formation state of a galaxy given only its past chemical and star formation histories derived by stellar population synthesis.
### Clues to the origin of the mass-metallicity relation
The mass-metallicity relation (MZR) is one of the key sources of information regarding how the different evolutionary mechanisms in galaxies interact with one another. Reproducing the shape and values of the MZR has been one of the main ways to test and constrain theoretical models in galaxy evolution (e.g., Tremonti et al. 2004; Kewley & Ellison 2008; Erb et al. 2006; Torrey et al. 2019; Camps-Farina et al. 2021b, 2022).
In a closed box model (Tinsley & Cameron 1974) there is no exchange of material out of the system and the metallicity and stellar mass advance in lockstep, as the metal injection is proportional to the stellar mass formed. The predicted mass-metallicity relation is too shallow compared to the measurements, which are consistent with lower-mass galaxies having lower effective yields (Tremonti et al. 2004). The most invoked mechanisms to increase the slope of the MZR are metal-rich outflows and gas accretion as a means to directly reduce the average metallicity of the ISM (e.g., Larson 1974; Tremonti et al. 2004; Dalcanton et al. 2004; Finlator & Dave 2008; Zhu et al. 2017; Barrera-Ballesteros et al. 2018). There are a number of works, however, which argue that the observed MZR cannot be reproduced solely with accretion (Dalcanton 2007) and outflows (Brooks et al. 2007; Calura et al. 2009), requiring an additional correlation between the stellar mass and the SFE such that less massive galaxies are less efficient in forming stars. As a result, the amount of metals injected per unit mass of gas would be lower, producing the slope of the MZR.
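For reference, under the standard instantaneous-recycling closed-box solution (a textbook result, e.g., Tinsley 1980, rather than something derived in this work), the gas metallicity depends only on the gas fraction \(\mu\) and the yield \(y\):

\[ Z(t) \;=\; y\,\ln\!\left[\frac{1}{\mu(t)}\right], \qquad \mu(t)=\frac{M_{\rm gas}(t)}{M_{\rm gas}(t)+M_{\star}(t)}, \]

so, at fixed yield, galaxies that have consumed similar fractions of their gas reach similar metallicities regardless of mass; the comparatively shallow MZR this implies is what motivates invoking inflows, outflows, or a mass-dependent SFE.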
This hypothesis, however, clashes with observations that low-mass galaxies in the Local Universe tend to have higher SFE than massive ones. On the other hand, our results show that in the past massive galaxies were more efficient at forming stars than low-mass ones, and it is only in recent times that low-mass galaxies became more efficient, largely because they are more likely to still be forming stars. This is seen in the values of the median SFE and the slope of the SFEH. Since we also show the effects that gas accretion (and possibly outflows) has in diluting the ISM over the lifetime of the galaxies, our results support a hybrid scenario in which the MZR slope arises from the combination of accretion, outflows, and a higher SFE for massive galaxies in the past, when the bulk of the star formation and therefore of the metal yields occurred. Other mechanisms such as IMF variation with redshift or metallicity (Koppen et al. 2007) as well as mergers (Yates et al. 2012) are not ruled out by our results, as they are not part of the model.
### Caveats and precision of stellar population synthesis
Due to the nature of stellar evolution, the spectrum of a single stellar population changes much more shortly after it is formed than at advanced ages, because the most massive stars die first. Because of this, the resolving power for the age of the stellar populations becomes lower as the age increases, which corresponds to lower reliability and resolution of the derived histories at high LBT.
Additionally, stellar population synthesis techniques are well-known to have degeneracies in determining the properties of the populations (e.g., Walcher et al. 2011), the most important of which is the age-metallicity degeneracy by which an increase in age and an increase in metal abundance have similar effects on the spectra of the templates. As a result, when fitting the populations, the code can obtain a good fit with either (i) the templates of the correct age and metallicity, (ii) templates that are younger but more metallic than the correct ones or (iii) templates that are older but less metallic than the correct ones.
In our testing we have determined that this degeneracy typically produces a positive secondary correlation between SFR and [Z/H] above \(\sim\)3-5 Gyr in age (see Camps-Farina et al. in prep.). There are two ways to explain this result: (i) in the LBT range we can resolve, SFHs tend to be decreasing in time and, as such, if the age is underestimated the SFR values are higher than those of the appropriate age. A bias to younger ages is compensated in the degeneracy by using higher-abundance templates, thus producing a positive SFR-[Z/H] correlation. Alternatively, we can consider (ii) that the errors in assigning fractions of light to the ages associated with the templates produce an uncertainty in the stellar mass at different LBT. Because both the MZR and the SFMS have positive correlations with M\({}_{\star}\), an overestimation of M\({}_{\star}\) means that the galaxy is located below both the MZR and the SFMS and, conversely, an underestimation of M\({}_{\star}\) means the galaxy has higher SFR and [Z/H] than expected for its mass. In either case the result is that the degeneracy produces a positive SFR-[Z/H] correlation.
This is relatively fortunate for the purposes of this work, as the effects of a positive correlation between these parameters tend to cancel each other to a certain extent. The accretion rate we measure depends mostly on the difference between the expected abundance produced by the yields from the SFR and the measured [Z/H]. If the SFR is increased due to the degeneracy, the expected abundance will also increase proportionally. As the [Z/H] will also increase due to the degeneracy, in this case the gap between expected and measured [Z/H] is affected much less than the values of the SFR and [Z/H] themselves, mitigating the effect that the age-metallicity degeneracy has on the gas accretion rates.
Another way in which the histories are model dependent is the choice of templates used to fit the spectra. The range of metallicities that we can measure is directly related to the values that are present in the template grid. The choice of templates is a key aspect of stellar population synthesis, as populating the parameter space too densely can give rise to significant artifacts at high LBTs if the spectra are too similar to one another (see the Discussion section and Appendix B of Camps-Farina et al. 2022). In the case of templates composed from observed stars, which are more reliable as they do not depend on stellar atmosphere models, finding stars that cover the desired parameter space can also be difficult.
The dependence of the metallicity values on the templates is one of the main reasons that great care should be taken when interpreting our accretion histories as quantitative measurements. The method is capable of finding differences between galaxies even with a relatively narrow range of metallicity values in the templates (see appendices B and C in Camps-Farina et al. 2022), so the trends are reliable, but a global change to the scaling of the accretion rates can arise if the range of metallicity in the templates is too narrow. The library employed for this study spans a reasonably wide range of metallicities, Z = 0.0001-0.04, with good coverage of low values.
Overall, we expect a loss of detail in the accretion rates relative to the true ones (i.e., short accretion episodes) due to these effects and the intrinsic capabilities of the method as to how accurately the populations can be recovered. Ibarra-Medel et al. (2019) assessed these effects by taking galaxies from the EAGLE (Schaye et al., 2015) simulations and producing mock observations which were then analyzed with Pipe3D, the predecessor to the code we employed. As the "true" SFHs are available from the simulations they can be compared to the ones recovered by the fitting, showing how the histories are similar but the recovered ones lose detail on the individual bursts of star formation (see also Sarmiento et al., 2023; Corcho-Caballero et al., 2023).
### Validity of the physics in the model
Other than the imprecision intrinsic to the stellar population synthesis method, we are also affected by the significant simplifications of the physical characteristics of the systems we modelled and the processes involved. One of the roughest simplifications is the fact that we are effectively considering each galaxy to be a single homogeneous gas cloud into which pristine gas is instantly deposited, on demand. In the case of the averaged histories, we are assuming that this single cloud acts as the average of the galaxies in each bin considered.
In order for this approximation to be valid we expect (i) the averaged histories to be a good proxy for those of an "average galaxy" representative of the bin, and (ii) the individual histories to be representative of the variety of environments within each galaxy. We further expect that the processes that dilute the ISM scale roughly linearly with other properties such as the stellar mass, SFR and abundance. As an example, consider a galaxy where the chemical enrichment in the central parts is determined solely by the input from stars (closed box) while in the outer parts there is instead significant accretion diluting the content. Our model is valid if the average ChEH combined with the integrated SFH is capable of recovering the entirety of the accretion, which only occurs in the outskirts. We expect this to hold in practice, provided that we can observe the histories with reasonable precision at each time and cover most of the area of the galaxies. We intend to explore how well this scenario represents the data in future work.
Simulations predict that the bulk of the accretion occurs at the outskirts of the galaxies, so we expect the presence of internal flows to distribute the accreted gas throughout the galaxy. Genzel et al. (2023) report observations of the noncircular motions of CO in galaxies at z\(\sim\)2, which they use to detect, from its kinematics, gas that moves inward in the galaxies. Averaging the properties of the galaxies in their table 2 we obtain the following values: log(M\({}_{\star}\)/M\({}_{\odot}\)) = 10.95, R\({}_{\rm e}\) = 5.8 kpc, f\({}_{\rm gas}\) = 0.52, v\({}_{\rm r}\) = 74 km s\({}^{-1}\).
If we assume that half the mass of molecular gas is contained within 1 R\({}_{\rm e}\) and apply it to the expression they give for the total rate of inflow, \(\dot{M}=\beta\cdot M_{\rm gas}(R_{\rm e})\cdot v_{r}/R_{\rm e}\), where \(\beta\) is estimated as 0.2 in Genzel et al. (2023), we obtain an inflow rate of 52 M\({}_{\odot}\) yr\({}^{-1}\) at R\({}_{\rm e}\). This is substantially higher than our corresponding peak accretion rate at this mass range, 5-12 M\({}_{\odot}\) yr\({}^{-1}\), but we have no information about the abundance of the gas as it arrives at 1 R\({}_{\rm e}\) from the outskirts. If the difference in abundance between the gas at the outskirts and at 1 R\({}_{\rm e}\) is much lower than the difference between the latter and the value we use for the accreted gas, then it is possible for the dilution effects to be equivalent. Instead of low amounts of very metal-poor gas, we can match the measured abundance with large amounts of gas with only slightly lower metallicity than the one at 1 R\({}_{\rm e}\). The value of the abundance gradient required for the effect of the inflows from Genzel et al. (2023) to match the dilution caused by the gas accretion we measured is -0.02 dex/kpc for radii larger than 1 R\({}_{\rm e}\).
As such, if the abundance gradient is similar to the inferred value of -0.02 dex kpc\({}^{-1}\), we can consider the massive inflows measured in Genzel et al. (2023) to represent the inward transport of gas accreted at the outskirts which afterward dilutes the ISM in the entire galaxy. In this scenario either (i) the gas is quickly mixed once it reaches the galaxy and begins to move inward, (ii) the gas "drags" local clouds inward without mixing with them via destabilizing their orbits or (iii) we are underestimating the abundance of the accreted gas in the first place, lowering our estimations for the accretion rate. The abundance gradient of galaxies at high redshift is poorly constrained, but it is generally expected to be rather flat and our inferred value is very similar to the reported values (e.g., Swinbank et al., 2012; Wuyts et al., 2016; Jafariyazani et al., 2020).
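The order of magnitude of the inflow-rate comparison above can be reproduced with a few lines. The sketch below evaluates \(\dot{M}=\beta\,M_{\rm gas}(R_{\rm e})\,v_{r}/R_{\rm e}\) with the averaged values quoted in the text; the convention \(M_{\rm gas}=f_{\rm gas}M_{\star}\) and the assumption that half of the molecular gas lies within 1 R\({}_{\rm e}\) are our own illustrative choices, so the result should be read only as an order-of-magnitude check against the \(\sim\)52 M\({}_{\odot}\) yr\({}^{-1}\) quoted above.

```python
# Order-of-magnitude check of the inward-flow rate from the Genzel et al. (2023)
# expression, using the averaged quantities quoted in the text. The gas-fraction
# convention and the "half of the gas within 1 Re" assumption are illustrative.
KM_S_TO_KPC_YR = 1.0227e-9    # 1 km/s expressed in kpc/yr

log_mstar = 10.95             # log10(M*/Msun), averaged value quoted above
r_e_kpc   = 5.8               # effective radius [kpc]
f_gas     = 0.52              # molecular gas fraction
v_r_kms   = 74.0              # inward radial velocity [km/s]
beta      = 0.2               # geometric factor estimated in Genzel et al. (2023)

m_star   = 10.0**log_mstar            # [Msun]
m_gas    = f_gas * m_star             # assumed convention: M_gas = f_gas * M*
m_gas_re = 0.5 * m_gas                # assume half of the molecular gas inside 1 Re

mdot = beta * m_gas_re * v_r_kms * KM_S_TO_KPC_YR / r_e_kpc   # [Msun/yr]
print(f"inward flow rate at Re ~ {mdot:.0f} Msun/yr")         # ~60 Msun/yr
```

Under the alternative convention \(f_{\rm gas}=M_{\rm gas}/(M_{\rm gas}+M_{\star})\) the estimate roughly doubles, so the conclusion that these inflows exceed our 5-12 M\({}_{\odot}\) yr\({}^{-1}\) peak accretion rates is insensitive to this choice.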
Another caveat is that the single cloud which represents the ISM is not divided into the different phases of matter HI, HII and H\({}_{2}\), despite their importance in determining the SFH. Regarding the measurement of the accretion histories themselves, this is not critical, and the only question is the efficiency with which the accreted gas becomes available for star formation. As we are measuring the accreted gas from its resulting stellar populations, we are intrinsically assuming that 100% of the accreted gas at least mixes with the gas that will be used to form stars. Any gas which does not reach locations where stars will eventually be formed over the galaxy's lifetime is not observed, and therefore our measured accretion histories are lower bounds on the total amount of gas that enters the galaxy. The SFE histories, on the other hand, are strongly affected by this, as they are typically measured using the amount (or density) of H\({}_{2}\) and are therefore very sensitive to the ratios between phases. In our results this can be seen as we need a conversion factor of 10, consistent with observed HI to H\({}_{2}\) ratios, to obtain reasonable values.
The effect of not considering the different phases of matter on the accretion rates therefore depends on which phase the gas mixes in. Consider two edge cases: (i) if the gas efficiently mixes in the HI phase followed by condensation into H\({}_{2}\), then our estimates for the gas accretion rate are unaffected by not considering different phases, but (ii) if the gas quickly condenses into H\({}_{2}\) and it is only then that the mixing occurs, we need to correct by the ratio between M\({}_{\rm HI}\) and M\({}_{\rm H2}\). Following Calette et al. (2018) this ratio should be between 0.1 and 1, so at most it could increase the accretion rate by one order of magnitude in the most extreme case of a galaxy in which the accreted gas immediately condenses into H\({}_{2}\) and simultaneously has a very small fraction of M\({}_{\rm H2}\) compared to M\({}_{\rm HI}\). This combination is highly unlikely, as it requires vastly different condensation timescales for the accreted gas and that which is already present in the galaxy.
The internal timescales that apply to the journey that the accreted gas travels from the halo until it becomes a star are not considered in the model. Therefore, the accretion histories are expected to be shifted in time by a certain factor. Unless there are significant changes to the physics of the ISM over cosmic time, we can expect this factor to be fairly constant over time, therefore only changing the specific values in the X-axis in the figures, which are already strongly dependent on the choice of isochrones for the stellar population templates. At a minimum, we expect that the delay includes the free-fall time of the cloud from the halo to the galaxy interior, and the timescales for mixing, condensation to H\({}_{2}\) and SFR burst.
Another very important simplification is the fact that we do not discriminate between the different effects that can produce the observed "missing metals" in the newer stellar populations. The effect of outflows has been mentioned in the results section to account for some of the observed properties, as their contribution to the gas balance and exchange can significantly alter the results. Depending on the mass-loading factor and the abundance of the ISM at the location where the outflows occur, these can either contribute to the dilution, have no effect on the abundance, or even produce an over-enrichment compared to the entire galaxy. For example, the centers of galaxies tend to be more metal-rich than other areas, so an AGN outflow with a high mass-loading factor will remove gas that is more metal-rich than average, thus lowering the average abundance in the galaxy (e.g., Camps-Farina et al. 2021a). With our method we would attribute the observed drop in expected metallicity to an episode of gas accretion.
Should the metallicity of the gas in an outflow be the same as the average in the galaxy, we would not detect anything, and if the outflow occurs at the outskirts the gas could be significantly lower in abundance compared to the average in the galaxy, thus raising its average metallicity. In any of these three cases, we would be overestimating the amount of gas present in the galaxy, leading to an underestimation of the SFE. Most of our results could be recalculated in terms of outflows rather than gas accretion; the assumption that the gas exchange in galaxies is dominated by outflows was explored in Zhu et al. (2017), which presents a leaky-box model where outflows are the primary dilution mechanism and manages to reproduce observed resolved properties of MaNGA galaxies.
In Fig. 9 we showed the difference between predicted and measured HI gas mass, and we can use this information to find the mass-loading factor by dividing the difference in gas mass by the integrated SFH. The median mass-loading factor we obtained is 0.96, which is significantly smaller than typical inferred values of 2-3 (e.g., Bouche et al. 2012; Zaragoza-Cardiel et al. 2019). One possible explanation is that the measurements of these studies are biased toward concentrated or more violent episodes of star formation, while our measurements are averaged over long times and include slower star formation, bringing the average mass-loading factor down. Alternatively, we are underestimating the amount of gas that is accreted onto the galaxies, which would increase the mass-loading factor we measured. The most interesting option, however, would be that our measurements are roughly correct and the difference between the values corresponds to the effect of gas that reenters the galaxy after being expelled. Given that we measured the mass-loading factor from the cumulative effects of gas accretion and outflows, the time delay to re-accrete the expelled gas is intrinsically included in our calculations. Assuming our measurements are accurate, it means that, on average and for the subsample of HI-MaNGA, only about 30-50% of the gas initially expelled does not fall back into the galaxy after an outflow.
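The arithmetic behind the last statement can be made explicit with a short sketch (ours): the fraction of expelled gas that never returns is simply the ratio between the effective, cumulative mass-loading factor measured here and an assumed instantaneous value of 2-3.

```python
# Fraction of outflowing gas that never returns, inferred by comparing our
# cumulative (effective) mass-loading factor with typical instantaneous values.
eta_effective = 0.96          # measured here: (predicted - observed gas mass) / integrated SFH
eta_instant   = (2.0, 3.0)    # typical literature values (e.g., Bouche et al. 2012)

for eta in eta_instant:
    lost_fraction = eta_effective / eta   # gas that leaves and does not come back
    print(f"eta_true = {eta:.0f}: {lost_fraction:.0%} never falls back, "
          f"{1 - lost_fraction:.0%} is re-accreted")
# -> roughly 30-50% of the expelled gas does not return in this picture
```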
Mergers and interactions between galaxies are also a mechanism that we are not considering, and which can significantly affect both the gas flows within a galaxy and strip it of part of its gas content. Instabilities due to interactions can funnel gas from the outskirts to the center of galaxies and, since the gas at the outskirts tends to have lower abundances, this can mimic an accretion of pristine gas from the halo in our results and probably account for part of the accretion we measured.
The choice of the IMF is a complex topic for the results. We use a Salpeter IMF for consistency with the stellar population synthesis performed on the IFU data. However, our implementation of the SNIa injection (which dominates the Fe yields) is scaled to currently measured rates in the Milky Way and therefore does not explicitly depend on the IMF. Compared to Salpeter (Salpeter 1955), bottom-light IMFs such as Chabrier (Chabrier 2003) or Kroupa (Kroupa et al. 1993; Kroupa 2001) are typically considered to be a better representation of the true distribution of stars. The difference between these and Salpeter is typically a 60% factor in stellar mass or SFR owing to the difference in mass to light (M/L) ratios corresponding to each age (Kennicutt & Evans 2012). Changing the IMF in our results would reduce the gas accretion rate by a factor between 0 and 60% as the higher SFR of a Salpeter IMF is balanced by its lower ratio of stars above 1 M\({}_{\odot}\). Given main sequence lifetimes, at the present time no stars below 0.9 M\({}_{\odot}\) should have left the main sequence and as such the difference in Fe injection between IMFs lies primarily in the amount of stars they predict between 1-8 M\({}_{\odot}\) (these are the ones which can produce SNIa). Salpeter IMFs predict higher stellar masses as a result of their higher fraction of stars below 1 M\({}_{\odot}\), and as such their fraction of stars between 1-8 M\({}_{\odot}\) is lower than for Chabrier or Kroupa IMFs.
Additionally, the validity of a universal IMF shape across cosmic time is highly debated with dissenting results on whether the IMF is top-heavy at high redshift (e.g., Hayward et al. 2013; Eales et al. 2023). In conclusion, the IMF remains a source of uncertainty in the results though its impact in our case is limited due to the fact that our methodology only considers stars with masses above the value at which commonly used IMFs differ.
## 6 Conclusions
We have applied a model for the enrichment of the ISM to measurements of SFHs and ChEHs from the MaNGA sample of galaxies. We find that we are able to extract information about the dilution processes that take place in galaxies. We find several trends in gas accretion and evolution of SFE that are in fairly good agreement with expectations from theoretical models as well as good agreement with recent measurements. Some of the most important results are:
1. We find a good agreement with expected trends for gas accretion in galaxies as a function of their mass, with more massive galaxies having higher peak accretion and a steeper decrease over time.
2. Controlling for the current stellar mass, galaxies show different accretion histories depending on whether they are forming stars or not. The average accretion history of GVG only separates in shape from that of SFG in the last Gyr, showing that, in general, galaxies stop forming stars once they lose access to pristine gas.
3. The individual accretion histories confirm many of these trends and show a tight correlation between the integrated SFH and the total accreted gas mass. Currently star-forming galaxies lie above or in the upper envelope of this relation.
4. The relation between the shape of the accretion history and the current stellar masses and sSFR values indicates that galaxies currently undergoing star formation have persistent gas accretion over cosmic time.
5. We used the amount of gas predicted to be in the galaxies at each time to calculate their SFE histories. We find that more massive galaxies are currently less efficient, but were more efficient in the past. There is a trend that galaxies with a high median SFE have a steeper decline in SFE over their lifetime.
6. Galaxies that are currently star-forming tend to have lower SFE at earlier times but persistent SFE over cosmic time.
7. More compact galaxies are more efficient at forming stars over their lifetimes.
###### Acknowledgements.
ACF thanks J. Sanchez Almeida and C. Dalla Vecchia for useful discussion on the results. ACF and PSB acknowledge financial support by the Spanish Ministry of Science and Innovation through the research grant PID2019-107427-GB-31, SRF acknowledges financial support from MINECO under grant number AYA2017-90589-REDT, RTI2018-096188-B-100, and S2018/NN47-429. SFS acknowledges funding from PRIP-TIG-AAP-AGAG100602 (UNAMP) project. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Camegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian (CIA), the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatorio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
|
2309.10026 | Origin of magic angles in twisted bilayer graphene: The magic ring | The unexpected discovery of superconductivity and strong electron correlation
in twisted bilayer graphene (TBG), a system containing only sp electrons, is
considered as one of the most intriguing developments in two-dimensional
materials in recent years. The key feature is the emergent flat energy bands
near the Fermi level, a favorable condition for novel many-body phases, at the
so-called "magic angles". The physical origin of these interesting flat bands
has been elusive to date, hindering the construction of an effective theory for
the unconventional electron correlation. In this work, we have identified the
importance of charge accumulation in the AA region of the moire supercell and
the most critical role of the Fermi ring in AA-stacked bilayer graphene. We
show that the magic angles can be predicted by the moire periodicity determined
by the size of this Fermi ring. The resonant criterion in momentum space makes
it possible to coherently combine states on the Fermi ring through scattering
by the moire potential, leading to flat bands near the Fermi level. We thus
establish the physical origin of the magic angles in TBG and identify the
characteristics of one-particle states associated with the flat bands for
further many-body investigations. | Wei-Chen Wang, Feng-Wu Chen, Kuan-Sen Lin, Justin T. Hou, Ho-Chun Lin, Mei-Yin Chou | 2023-09-18T18:00:01Z | http://arxiv.org/abs/2309.10026v1 | # Origin of magic angles in twisted bilayer graphene: The magic ring
###### Abstract
The unexpected discovery of superconductivity and strong electron correlation in twisted bilayer graphene (TBG), a system containing only _sp_ electrons, is considered as one of the most intriguing developments in two-dimensional materials in recent years. The key feature is the emergent flat energy bands near the Fermi level, a favorable condition for novel many-body phases, at the so-called "magic angles". The physical origin of these interesting flat bands has been elusive to date, hindering the construction of an effective theory for the unconventional electron correlation. In this work, we have identified the importance of charge accumulation in the AA region of the moire supercell and the most critical role of the Fermi ring in AA-stacked bilayer graphene. We show that the magic angles can be predicted by the moire periodicity determined by the size of this Fermi ring. The resonant criterion in momentum space makes it possible to coherently combine states on the Fermi ring through scattering by the moire potential, leading to flat bands near the Fermi level. We thus establish the physical origin of the magic angles in TBG and identify the characteristics of one-particle states associated with the flat bands for further many-body investigations.
Footnote †: Corresponding author. Email: [email protected]
## I Introduction
Two graphene sheets with a _small_ rotation angle \(\theta\) between them create a special system of twisted bilayer graphene (TBG), in which a moire pattern emerges with a spatial periodicity inversely proportional to the twist angle [1; 2]. It has been shown that the variation of the twist angle in moire two-dimensional (2D) materials could modify the electronic properties, giving rise to an interesting research area of twistronics [3]. Previous calculations on TBG using different theoretical methods including the low-energy continuum model [4; 5], the tight-binding method [1; 6; 7; 8], and density functional theory (DFT) [9] all had similar findings: as \(\theta\) decreases, the Fermi velocity of the linear bands also becomes progressively smaller. This results from an enhanced interlayer hybridization between the two rotated Dirac cones of different layers as they get closer in momentum space when \(\theta\) gets smaller. These calculations also revealed that as the bandwidth of the low-lying bands is reduced at small \(\theta\) angles, the corresponding electron charge becomes localized to the AA-stacked region in the moire supercell [2; 9; 7]. One would anticipate that the Fermi velocity should decrease smoothly as \(\theta\) approaches zero; however, it was found in these calculations that at certain so-called "magic angles" the velocity deviates away from the expected monotonic curve and drops to almost zero [2; 5; 8], with the first of these magic angles being around \(1.1^{\circ}\). The resulting unusually flat bands thus have a large ratio between the Coulomb and kinetic energies and favor the formation of possible many-body phases.
Experimentally, with the "tear-and-stack" technique [10; 11; 12; 13; 14], researchers have been able to control the rotation angle of TBG precisely. It was first discovered by Y. Cao et al. [15] that around the first magic angle there exist insulating gaps when the doping level is tuned to integer fillings. Moreover, superconducting states appear [16] in the intermediate filling regions. These surprising reports stimulated further transport measurements for detailed studies on correlated insulating states at several integer fillings [17], superconductivity domes in a wide, continuous range of doping [17; 18; 19; 20; 21], ferromagnetism [17; 22], and the integer quantum anomalous Hall effect [23] at certain odd fillings. The existence of flat bands in TBG was confirmed by scanning tunneling spectroscopy (STS) studies [24; 25; 26] that revealed two pronounced van Hove singularities very close to each other in the density of states (DOS) near the first magic angle. Using angle-resolved photoemission spectroscopy with nanoscale resolution (nano-ARPES), energy bands with little dispersion were directly observed in momentum space at \(\theta\approx 0.96^{\circ}\)[27] and \(1.34^{\circ}\)[28]. The charge localization in the AA region at small \(\theta\) was also confirmed using scanning tunneling microscope (STM) [24; 25; 26; 12]. Furthermore, both transport and STS measurements in the presence of magnetic field have recently identified correlated gaps that account for both integer [29; 30; 31; 32; 33; 34; 35] and fractional [31; 36] Chern insulating states, as a result of strong correlation in combination with the flat band topology.
It is unprecedented that a carbon material with only _sp_ electrons can exhibit such rich phenomena of electron correlation driven by interaction. Thus, magic-angle twisted bilayer graphene (MATBG) and related systems have presented special challenges in our understanding of two-dimensional physics. In order to explain these observed exotic properties, various theoretical investigations have been reported. Most of these studies [37; 38; 39; 40] relied on building a minimal model/basis set consisting of, for example, Wannier orbitals, which reproduces the single-particle flat band dispersion while satisfying the original symmetry and topology properties as much as possible. Many-body interactions were then included in the framework of a (generalized) Hubbard model to shed light on the nature of the many-body states observed experimentally. Some other studies [41; 42; 43] pursued alternative approaches to avoid possible bias toward the basis set. Nevertheless, no convincing theory proposed to date can fully explain the intriguing experimental findings in MATBG.
The existence of multiple magic angles with extremely flat bands was found in many one-particle calculations. Yet their origin and why a series of flat bands occur at these specific magic angles are still not understood until now. Building an effective many-body theory for electron correlation and superconductivity requires, as the starting point, a good knowledge of the characteristics of one-particle electronic states at the magic angles. Therefore, in this theoretical work we aim to uncover the unique features in the electronic properties of TBG. Since the system is too large for full-scale DFT calculations, we mostly used the tight-binding method with newly developed parameters for accurate angle-dependent interlayer interactions [44; 45]. First, we provide an explanation using DFT calculations for the reason why electrons near the Fermi level become localized in the AA region of TBG when the twist angle \(\theta\) gets small. As a result, the local electronic structure of AA-stacked bilayer graphene (AABLG) with a characteristic Fermi ring becomes the key feature to be considered for generating a special effect at certain twist angles. Second, we demonstrate that when the size of this Fermi ring matches the reciprocal lattice vectors of the moire superlattice at a set of angles, the Dirac points of TBG fall on the Fermi ring. Multiple states on the ring can then be coupled coherently by the moire potential, leading to energy bands with extremely small dispersions. This matching condition in momentum space generates a series of discrete magic angles in TBG. Computational results on TBG with different interlayer coupling strengths when it is under uniaxial pressure are presented to illustrate our theory. This work provides the physical explanation on the origin of magic angles. The result also indicates that magic angles are special features in twisted graphene systems arising from their particular electronic structure.
## II Results
### Electron accumulation in the AA region
In our tight-binding calculation, we have considered commensurate TBG configurations with moire lattice vectors of \(\mathbf{L_{1}}=m\,\mathbf{a_{1}}+(m+1)\,\mathbf{a_{2}}\) and \(\mathbf{L_{2}}=-(m+1)\,\mathbf{a_{1}}+(2m+1)\,\mathbf{a_{2}}\), where \(\mathbf{a_{1}}\) and \(\mathbf{a_{2}}\) are lattice vectors for monolayer graphene, and \(m\) is an integer [2]. The rotation starts from AABLG with the axis passing through a vertical pair of carbon atoms. We neglect the possible interlayer corrugation and intralayer relaxation in this work. Figure 1a shows the atomic arrangement of a typical moire pattern in TBG with a rotation angle of \(\theta=3.15^{\circ}\) (\(m=10\)). Because of the relative rotation between two graphene layers, the local atomic arrangements exhibit different stacking patterns at different locations. Some noticeable ones, such as the AA-stacked and Bernal-stacked (AB or BA) regions, are labeled in Fig. 1a, as well as six other representative stacking patterns (T1-T6) in between. Thus, the Bloch electrons will experience different lattice potentials in different stacking regions within a single moire supercell, which is an important feature for this system. To understand the stacking effect, we have calculated the electronic DOS for these different lattice potentials with DFT, and the results are shown in Fig. 1b. It turns out that the AA stacking gives significantly more low-energy states near the Fermi level, while the AB(BA) stacking has the fewest states in the same energy range. This can be understood by examining the band dispersion. Compared with monolayer graphene with one Dirac cone (Fig. 1c), AABLG has two cones shifted vertically in energy (Fig. 1d), giving rise to a sizable constant DOS between the two original band crossings. In contrast, the Bernal stacking gives roughly two low-energy parabolic bands joined at the Dirac point with a smaller DOS. A tight-binding model with only the nearest-neighbor interlayer interaction will give a ratio of 4:1 for the low-energy DOS in these two systems. As the size of the moire superlattice increases with decreasing \(\theta\), so is the size of a region with a particular stacking pattern. Therefore, this explains the observation that, at small \(\theta\) values, the electrons near the Fermi level tend to accumulate around the AA region in the moire cell, because more states are available to them there.
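The 4:1 estimate quoted above can be illustrated with a minimal numerical sketch (ours, not the DFT calculation behind Fig. 1b): diagonalize low-energy 4\(\times\)4 Hamiltonians for AA- and AB-stacked bilayers, with a linearized intralayer term and a single nearest-neighbor interlayer hopping \(t_{\perp}\), on a \(k\)-grid around the Dirac point and count states in a narrow window around \(E=0\).

```python
import numpy as np

# Low-energy DOS of AA- vs AB-stacked bilayer graphene near E = 0.
# Units: hbar*v_F = 1, so momenta are measured in units of t_perp.
t_perp = 1.0
window = 0.05 * t_perp                     # energy window around E = 0
ks = np.linspace(-2.0, 2.0, 301) * t_perp  # k-grid around the Dirac point

def count_states(stacking):
    n = 0
    for kx in ks:                          # brute-force loop; takes a few seconds
        for ky in ks:
            f = kx + 1j * ky               # linearized intralayer term hbar*v_F*(kx + i ky)
            h = np.zeros((4, 4), complex)  # basis: A1, B1, A2, B2
            h[0, 1] = f; h[2, 3] = f       # intralayer A-B coupling in each layer
            if stacking == "AA":
                h[0, 2] = t_perp           # A1-A2 vertical pair
                h[1, 3] = t_perp           # B1-B2 vertical pair
            else:                          # Bernal (AB): only B1-A2 are vertically aligned
                h[1, 2] = t_perp
            h = h + h.conj().T
            e = np.linalg.eigvalsh(h)
            n += np.count_nonzero(np.abs(e) < window)
    return n

n_aa, n_ab = count_states("AA"), count_states("AB")
print(f"DOS(AA)/DOS(AB) near E=0: {n_aa / n_ab:.2f}")
# prints roughly 3.8; the nearest-neighbor estimate is exactly 4:1 in the
# limit of a vanishing energy window (the AB bands are not purely parabolic)
```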
To illustrate the major change in the electronic structure as \(\theta\) varies, we present in the upper panels of Fig. 2a-e the band structure of TBG by tight-binding calculations for five different twist angles (\(9.43^{\circ}\), \(3.89^{\circ}\), \(2.13^{\circ}\), \(1.12^{\circ}\) and \(0.86^{\circ}\)). Note the change of the energy scale in these plots. The size of the moire supercell is 1.5 nm, 3.6 nm, 6.6 nm, 13 nm, and 16 nm, respectively. The four bands in red near the Fermi energy gradually flatten as the twist angle decreases in Fig. 2a-c, which is expected for an increased coupling between the Dirac cones of two layers. Then there is a sudden change near the first magic angle of \(1.1^{\circ}\) where the bands become extremely flat (Fig. 2d). After passing through the first magic angle, the bands regain their slopes (Fig. 2e). This trend can be seen clearly from Fig. 2f in which the red curve shows the normalized Fermi velocity of TBG (with respect to the Fermi velocity \(v_{F}^{0}\) of monolayer graphene) as a function of angle \(\theta\) and exhibits a clear sharp dip at the first magic angle. The fact that the extremely flat bands only occur at certain discrete magic angles indicates that there exists a characteristic condition that breaks the expected monotonic behavior.
It is interesting to examine the developed electron accumulation in the AA region. We present the probability of finding electrons at each atomic site in the lower panels of Fig. 2a-e. The dark and light dots mark the atoms in the first and second layer, respectively, and the dashed dark hexagon marks the Wigner-Seitz cell of the moire
superlattice. The electron distribution of the zero-energy states at \(K\) is found to be similar to that integrated over the four red bands in the upper panels. Therefore, we show only the distribution of the four degenerate states at \(K\), represented by the size of red dots in the lower panels of Fig. 2a-e. As the twist angle is reduced and the AA region clearly develops, the electrons gradually become accumulated in the AA region at the center of the Wigner-Seitz cell. We measure the degree of localization by the integrated fraction of electrons within the dashed blue circle of a radius \(0.2L\), where \(L\) is the lattice constant of the moire supercell at each angle. This localization measure is found to increase smoothly as the twist angle decreases, as shown by the blue curve in Fig. 2f. As discussed above, this behavior can be explained by the fact that the AA region provides significantly more low-energy states than other stacking patterns. Surprisingly, this smooth behavior is not affected by the presence of the magic angle at all, in contrast to the variation of the Fermi velocity in Fig. 2f. It was proposed previously [2; 8] that the series of magic angles are associated with quantization conditions of the confined states in the AA region. However, results in Fig. 2f indicate that the flat band dispersion at the magic angle may not reflect the most localized electronic distribution and that the emergence of the magic angle requires a mechanism other than electron confinement alone.
### Connecting AABLG Fermi ring and magic angles in TBG
As the low-energy electrons are accumulated in the AA region at small angles, apart from the moire potential they mostly experience the local lattice potential of AABLG, which has a ring-shaped Fermi surface. This Fermi ring (Fig. 1d) has a particular radius (\(Q_{F}\)) in momentum space that is determined by the interlayer interaction. Next we propose a physical explanation for the development of flat bands at the magic angles that
Figure 1: \(|\) **Moiré effect on electronic density of states.****a**, Moiré pattern of TBG at \(\theta=3.15^{\circ}\) with six representative local atomic configurations T1 - T6 in addition to AA and AB(BA). The black (gray) atoms are in the first (second) layer. The arrow in the insets shows how the second layer of AA-stacked graphene is shifted to generate a particular local stacking configuration. The thick hexagon marks the Wigner-Seitz cell of the moiré supercell. **b**, Density of states (DOS) of representative bilayer stacking configurations in **a**, obtained from density-functional-theory (DFT) calculations. The uppermost panel covers a larger energy range, while the lower two panels compare the DOS variations near the Fermi level along two paths (AA \(\rightarrow\) T1 \(\rightarrow\) T2 \(\rightarrow\) T3 \(\rightarrow\) AB and AA \(\rightarrow\) T4 \(\rightarrow\) T5 \(\rightarrow\) T6). It is noted that the AA region has significantly more low-energy states around the Fermi level. **c**, Dirac cone in momentum space for monolayer graphene. **d**, Two Dirac cones shifted in energy for AA-stacked bilayer graphene with the Fermi ring marked by a dashed circle.
is connected to the Fermi ring of AABLG. In particular, we will show that a special situation exists if the TBG Dirac points (in the extended Brillouin zones) fall on the Fermi ring of AABLG. We will then demonstrate that the magic angles occur under these matching conditions and discuss the characteristics of the states associated with the resulting flat bands.
The interaction between the layers can be changed by applying uniaxial pressure, namely, by imposing a compressive vertical stress. With different reduced interlayer distances, both the magic angle and the size of the Fermi ring are separately modified. This provides a collection of sample systems that allow us to examine our proposed mechanism systematically by tight-binding calculations. The band dispersion of TBG with different compressed vertical strain values (0%, 3%, 5%, 10%, 15%, and 20%) was evaluated as a function of the twist angle in order to determine their corresponding magic angles.
| compression | 0% | 3% | 5% | 10% | 15% | 20% |
| --- | --- | --- | --- | --- | --- | --- |
| \(m\) | 29 | 26 | 23 | 17 | 13 | 10 |
| \(\theta\) (\({}^{\circ}\)) | 1.12 | 1.25 | 1.41 | 1.89 | 2.45 | 3.15 |
| \(L\) (nm) | 12.6 | 11.3 | 10.0 | 7.46 | 5.75 | 4.45 |

Table 1: First magic angle \(\theta\) and corresponding moiré lattice constant \(L\) of TBG obtained from tight-binding calculations for different vertical compression levels. These are determined by the commensurate supercell of \(m\) that has the smallest Fermi velocity.

Figure 2: **Fermi velocity and localized electrons.****a-e**, Band structure (upper panel) of twisted bilayer graphene with angles of \(9.43^{\circ}\), \(3.89^{\circ}\), \(2.13^{\circ}\), \(1.12^{\circ}\) and \(0.86^{\circ}\), respectively, calculated by the tight-binding method. The four bands near the Fermi level (energy zero) are marked in red. Note that the bands become flatter when the angle decreases, with a minimal Fermi velocity near \(1.12^{\circ}\), but the slope at \(0.86^{\circ}\) bounces back after passing through this magic angle. The lower panel shows the electronic probability distribution of the zero-energy states at \(K\). The dark and light dots mark the atoms in layer 1 and 2, respectively, and the dashed dark hexagon represents the Wigner-Seitz cell with the real-space dimension labels in units of Å. The size of the red dots represents the probability of finding the electrons at each atom. **f**, Calculated Fermi velocity (red curve) and electron localization (blue curve) as a function of the twist angle \(\theta\). The Fermi velocity is normalized with respect to that in monolayer graphene \(v_{F}^{0}\). The electron localization is measured by the total fraction of \(K\) electrons within the blue dashed circle centered at the AA region that has a radius of 20% of the moiré lattice constant. The Fermi velocity has a sharp dip at \(1.12^{\circ}\) near the magic angle. In contrast, the electron localization measure increases smoothly and monotonically as the twist angle decreases.
The complete results are listed in Table 1. When the interlayer separation is compressed, the magic angle occurs at a larger angle [45]. For example, the first magic angle shifts from \(\theta=1.12^{\circ}\) at 0% compression to \(3.15^{\circ}\) at 20% compression. At the same time, the radius of the Fermi ring in AABLG also enlarges as the bilayer is compressed. For large compression values, a trigonal warping may be found, and an approximate Fermi ring is determined by a circle with the same area. This deviation will be considered in later discussions.
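The \(m\), \(\theta\), and \(L\) values in Table 1 are tied together by the standard geometric relations for this commensurate family (they are not written out explicitly in the text): \(\cos\theta_{m}=(3m^{2}+3m+1/2)/(3m^{2}+3m+1)\) and \(L=a\sqrt{3m^{2}+3m+1}\). The short sketch below, which assumes a graphene lattice constant \(a\approx 2.46\) Å, reproduces the tabulated values to within rounding.

```python
import numpy as np

# Twist angle and moire lattice constant for the commensurate supercells
# L1 = m*a1 + (m+1)*a2 used in the text (standard relations for this family).
a = 2.46  # graphene lattice constant [angstrom] (assumed value)

for m in (29, 26, 23, 17, 13, 10):        # the supercells listed in Table 1
    n = 3 * m * m + 3 * m + 1
    theta = np.degrees(np.arccos((n - 0.5) / n))
    L_nm = a * np.sqrt(n) / 10.0          # moire lattice constant in nm
    print(f"m = {m:2d}:  theta = {theta:4.2f} deg,  L = {L_nm:5.2f} nm")
# m = 29 -> 1.12 deg, 12.6 nm ... m = 10 -> 3.15 deg, ~4.5 nm (cf. Table 1)
```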
Since the compressed bilayer graphene has a smaller moire supercell at the first magic angle, it is easier to graphically visualize this effect in a compressed bilayer system. We plot in Fig. 3a-d the 2D Brillouin zone (dashed green hexagon) and Fermi ring (red circle) of AABLG with 20% vertical compression. Super-imposed in the figures are the Brillouin zones (small black hexagons) of TBG at different angles, where the \(K\) and \(K^{\prime}\) Dirac points are marked by solid and open circles, respectively. Figure 3a shows the configuration for \(\theta=7.34^{\circ}\), with an enlarged view in Fig. 3b. At this angle, the Dirac points of TBG do not seem to be influenced by the AABLG Fermi ring. Note that the Brillouin zone of TBG shrinks as the moire size increases with a decreasing twist angle. A special matching condition is found in Fig. 3d, where the Fermi ring of AABLG centered at \(K\) reaches the first star of Dirac \(K\) points in the extended Brillouin zones of TBG. We will show below that this particular matching condition yields the first magic angle in systems we have investigated.
We plot in Fig. 3e the moire supercell size \(L\) at the first magic angle found in the calculation (determined by the dip in the calculated Fermi velocity) versus the inverse of \(Q_{F}\) (the radius of the AABLG Fermi ring) for bilayer systems with various compression levels. These two quantities were independently obtained for two different systems by our tight-binding calculations. The horizontal error bars on the data points mark the variation in the size of the Fermi ring due to the trigonal warping: the smallest (largest) radius in the \(K\Gamma\) (\(KM\)) direction gives the upper (lower) bound of \(\pi/Q_{F}\). The vertical error bars represent the uncertainty in determining the exact magic angle.
Figure 3: \(|\)**Connection between magic-angle moiré periodicity and AABLG Fermi ring.****a-d**, Brillouin zone (dashed green hexagon) and Fermi ring (red circle) of AABLG superimposed on the Brillouin zones (small black hexagons) of TBG at different angles. For a clear illustration, the Fermi ring for AABLG at 20% compression of the interlayer separation is shown. Both the dashed green hexagon and the red ring are of the same size in **a-d**, but the Brillouin zones of TBG shrink as the moiré size increases with a decreasing twist angle. The matching condition (see text) is reached at \(\theta=3.15^{\circ}\), which is the magic angle for TBG at 20% compression. **e**, Correlation between the moiré size \(L\) at the first magic angle (\(S=1\)) and the inverse of the radius \(Q_{F}\) of the Fermi ring in AABLG with different interlayer compression. The linear behavior follows the relationship \(L=4/\sqrt{3}\times(\pi/Q_{F})\) (dashed line). The horizontal error bars on the data points mark the variation in the size of the Fermi ring due to trigonal warping, while the vertical error bars represent the uncertainty in determining the magic angle of TBG since our calculations used a particular collection of commensurate moiré supercells. **f**, Similar linear connection is found for the second (\(S=\sqrt{3}\)), third (\(S=2\)), and the fourth (\(S=\sqrt{7}\)) magic angles for the TBG system with 10% compression.
This uncertainty arises because our calculations used a particular collection of commensurate moire supercells: if the largest Fermi velocity dip was found in the calculation for a moire supercell with a particular \(m\) value, then the moire size for the exact magic angle is bounded by those corresponding to the \(m-1\) and \(m+1\) moire supercells. As one can see from Fig. 3e, an almost perfect linear behavior is found with a slope of \(4/\sqrt{3}\). In other words, _the size of the AABLG Fermi ring determines the moire periodicity where the magic angle occurs_. And the first magic angle appears when the radius of the Fermi ring matches \(G\), the smallest magnitude of nonzero reciprocal lattice vectors of the moire supercell, as shown in Fig. 3d and Fig. 4a. As will be shown below, the critical matching condition is \(Q_{F}=S\,G\), where \(S=1\) corresponds to the first magic angle.
When the radius of the AABLG Fermi ring matches the second and third stars of the Dirac \(K\) points, as shown in Fig. 4b and c, we expect to see the second (\(S=\sqrt{3}\)) and third (\(S=2\)) magic angles, respectively. We show in Fig. 4d a more complete plot of the calculated Fermi velocity versus \(\theta\) for TBG with a 10% compression, and data up to the fourth magic angle (\(0.71^{\circ}\), \(S=\sqrt{7}\)) was obtained. The corresponding moire size at the first four magic angles versus \(\pi S/Q_{F}\) is plotted in Fig. 3f. Again, a consistent linear behavior is found that matches the relationship
\[L=\frac{4}{\sqrt{3}}\frac{\pi}{Q_{F}}S, \tag{1}\]
where \(S=1,\sqrt{3},2,\sqrt{7}\). Given that the minimal length of the reciprocal lattice vector is \(G=4\pi/(\sqrt{3}L)\), \(Q_{F}=S\,G\) is then the criterion in momentum space that determines the occurrence of the series of magic angles. Possible \(S\) values are expected to reflect the radius of the stars of \(K\) points: \(S=1,\sqrt{3},2,\sqrt{7},3,2\sqrt{3},...\). Under these matching conditions, the zero-energy states within AABLG (Fermi ring) and the zero-energy states within TBG (Dirac points) are aligned.
Figure 4: **Higher-order magic angles.****a-c**, Matching conditions between the AABLG Fermi ring (red circle) and the first, second, and third stars of the \(K\) points (black filled circles) of TBG in the extended zones (black hexagons). These give rise to the first, second, and third magic angles, respectively. The magnitude of \(\mathbf{G}\), the smallest reciprocal lattice vector of the moiré supercell, decreases in **a-c**. **d**, Calculated (normalized) Fermi velocity as a function of the twist angle for TBG with 10% vertical compression, showing four dips corresponding to the first four magic angles. **e**, Enlarged matching condition in **a** showing the six \(K\) points on the Fermi ring and possible scattering between them via reciprocal lattice vectors of the moiré supercell.

At small angles, we have \(L\approx a/\theta\), where \(a\) is the lattice constant of monolayer graphene. Therefore, the magic angles in TBG can be expressed as
\[\theta_{magic}\approx\frac{Q_{F}}{\sqrt{3}\ l_{K}}\frac{1}{S}\,, \tag{2}\]
where \(l_{K}=4\pi/(3a)\) is the distance from \(\varGamma\) to \(K\) for monolayer graphene, and \(S\) is not necessarily an integer. Our DFT value for \(Q_{F}\) is about 0.055 A\({}^{-1}\). Therefore, the predicted magic angles are 1.07\({}^{\circ}\), 0.62\({}^{\circ}\), 0.53\({}^{\circ}\), 0.40\({}^{\circ}\), \(\ldots\) for TBG. If a tight-binding model considers only the nearest neighbor coupling \(t_{\perp}\) for the interlayer interaction, the AABLG Fermi ring will have a radius of \(Q_{F}=t_{\perp}/(\hbar v_{F}^{0})\).
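As a quick consistency check, Eq. (2) can be evaluated directly. The sketch below plugs in the DFT value \(Q_{F}\approx 0.055\) Å\({}^{-1}\) together with an assumed lattice constant \(a\approx 2.46\) Å, and also inverts Eq. (1) for the corresponding moiré periodicity.

```python
import numpy as np

# Magic angles of TBG from the Fermi-ring matching condition Q_F = S * G,
# i.e. Eq. (2): theta_magic ~ Q_F / (sqrt(3) * l_K * S).
a   = 2.46                  # graphene lattice constant [angstrom] (assumed)
Q_F = 0.055                 # AABLG Fermi-ring radius from DFT [1/angstrom]
l_K = 4 * np.pi / (3 * a)   # Gamma-K distance of monolayer graphene [1/angstrom]

for S in (1.0, np.sqrt(3), 2.0, np.sqrt(7)):
    theta = np.degrees(Q_F / (np.sqrt(3) * l_K * S))       # Eq. (2)
    L_nm  = (4 / np.sqrt(3)) * (np.pi / Q_F) * S / 10.0    # Eq. (1), in nm
    print(f"S = {S:.3f}:  theta_magic = {theta:.2f} deg,  L = {L_nm:.1f} nm")
# -> 1.07, 0.62, 0.53, 0.40 degrees, matching the series quoted in the text;
#    for S = 1 the moire period is ~13 nm, close to Table 1 at 0% compression.
```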
### Wave functions of the Dirac point at magic angles
Away from the magic angle, the energy bands in Fig. 2 can be explained by the following picture: As the two layers are rotated by an angle in real space, the Dirac cones of the two layers are also rotated with respect to each other in momentum space; the interlayer interaction induces coupling between the two sets of linear bands, giving rise to a reduced slope (Fermi velocity) that is supposed to vary monotonically with the angle. This simple picture is understandable, since each moire supercell has regions of different stacking patterns (see Fig. 1a) that cannot influence the energy bands in an abrupt way. On the other hand, when the angle gets small, the relevant low-energy electrons are accumulated in the AA region, so they should be highly influenced by the lattice potential of AABLG. When the matching condition \(Q_{F}=S\,G\) is satisfied at the magic angles, it would be reasonable to construct the wave functions for the Dirac point of TBG from the Bloch states on the AABLG Fermi ring.
We use the first magic angle (\(Q_{F}=G\)) as an example. Figure 4e shows the detailed geometry with six Dirac points on the AABLG Fermi ring; they could be folded in TBG to the \(K\) point at the ring center (marked by a red point). The Bloch states on the AABLG Fermi ring are doubly degenerate and half occupied in a neutral system. Using the nearest-neighbor tight-binding model with an orthogonal basis of \(p_{z}\) orbitals on the \(A\) and \(B\) sublattices in layer 1 and 2: {\(A_{1}\), \(A_{2}\), \(B_{1}\), \(B_{2}\)}, these two degenerate states can be chosen as \(\left|k\right\rangle\propto(1\ 0\ 0\ -e^{i\phi})^{T}\) and \((0\ 1\ -e^{i\phi}\ 0)^{T}\) with \(\phi=\tan^{-1}\left(k_{y}/k_{x}\right)\). These two degenerate states will not be coupled with each other by a local moire potential, so in the following analysis we make this assumption and deal with one set of them at a time. Each pair of the six \(k\) points in Fig. 4e is connected by a reciprocal lattice vector of the moire supercell, so they will be coupled with each other via the scattering by a 2D moire potential \(U(\mathbf{r})=\Sigma_{\mathbf{G^{\prime}}}U_{\mathbf{G^{\prime}}}e^{i\mathbf{G^{\prime}}\cdot \mathbf{r}}\), where \(\mathbf{G^{\prime}}\) is a reciprocal lattice vector of the moire superlattice. With the combination of \(C_{3}\) (vertical axis) and \(C_{2}\) (horizontal axes) rotation symmetries and the fact that the moire potential is real (\(U_{\mathbf{G^{\prime}}}=U_{-\mathbf{G^{\prime}}}^{*}\)), the number of independent Fourier coefficients is greatly reduced. We denote \(U_{\mathbf{G^{\prime}}=0}=U_{0}\), \(U_{\mathbf{k_{1}-k_{2}}}=U_{\mathbf{k_{2}-k_{3}}}=\ldots=U_{1}\), and \(U_{\mathbf{k_{1}-k_{3}}}=U_{\mathbf{k_{2}-k_{4}}}=U_{\mathbf{k_{3}-k_{5}}}=\ldots=U_{2}\). \(U_{1}\) is real, and \(U_{2}\) can be made real by choosing the 2D origin at an inversion center.
After including the symmetry of the system and assuming that the long-scale moire potential varies little over a single graphene unit cell (see Methods), the perturbation Hamiltonian of the moire potential for one set of the wave functions at the six \(k\) points (\(\mathbf{k_{1}}\), \(\mathbf{k_{2}}\), \(\ldots\), \(\mathbf{k_{6}}\) in Fig. 4e) has this form:
\[H=\left(\begin{array}{cccccc}u_{11}&u_{12}&u_{13}&0&u_{13}^{*}&u_{12}^{*}\\ &u_{11}&u_{12}&e^{i2\pi/3}u_{13}^{*}&0&e^{-i2\pi/3}u_{13}\\ &&u_{11}&u_{12}&u_{13}&0\\ &&&u_{11}&u_{12}&e^{i2\pi/3}u_{13}^{*}\\ &\mathrm{c.\,c.}&&&u_{11}&u_{12}\\ &&&&&u_{11}\end{array}\right), \tag{3}\]
where \(u_{ij}=\left\langle k_{i}\right|U\left|k_{j}\right\rangle\) with the following values: \(u_{11}\) = \(U_{0}\), \(u_{12}=(\sqrt{3}/2)\,U_{1}\,e^{i\pi/6}\), \(u_{13}=(1/2)\,U_{2}\,e^{i\pi/3}\), and \(u_{14}=0\). This perturbation Hamiltonian turns out to have three doubly degenerate eigenstates. After a constant shift of \(U_{0}-\mathrm{Re}[U_{2}]\) to align the energy of the middle state with zero, we have the three eigenvalues:
\[E_{0}=0\,;\ \ E_{\pm}=\frac{3}{2}\left[\mathrm{Re}[U_{2}]\pm\sqrt{U_{1}^{2}+ \left(\mathrm{Im}[U_{2}]/\sqrt{3}\right)^{2}}\,\right]\,. \tag{4}\]
In the space of the six \(k\) states on the Fermi ring of AABLG, the two degenerate eigenvectors \(v_{1}\) and \(v_{2}\) for \(E_{0}\) are:
\[v_{1}=\left(\begin{array}{c}0\\ 1\\ 0\\ e^{i2\pi/3}\\ 0\\ e^{i4\pi/3}\end{array}\right)\ \mathrm{and}\ \ v_{2}=\left(\begin{array}{c}1\\ 0\\ e^{i2\pi/3}\\ 0\\ e^{i4\pi/3}\\ 0\end{array}\right)\,, \tag{5}\]
where \(v_{1}\) is a linear combination of \(k_{1}\), \(k_{3}\), and \(k_{5}\) states, while \(v_{2}\) is made of \(k_{2}\), \(k_{4}\), and \(k_{6}\) states. Both \(v_{1}\) and \(v_{2}\) are eigenvectors of the \(C_{3}\) rotation with the same eigenvalue \(\omega\). (The value of \(\omega\) could be \(e^{-i2\pi/3}\) or \(1\) with the rotation axis chosen at a C atom or the center of a six-atom ring.) The results for the other set of degenerate Bloch states at the six \(k\) points are the same. Therefore, we obtain four zero-energy degenerate states at the \(K\) point of TBG that all transform in the same way under the \(C_{3}\) rotation. For higher magic angles, the matrix elements in Eq. (3) will involve higher Fourier coefficients of the moire potential. Nevertheless, the features of the final eigenvectors will be similar to those for \(S\) = 1.
Next we show, using \(k\cdot p\) perturbation theory, that the bands turn out to be flat going away from the \(K\) point. The main \(O(k)\) correction in energy away from the \(K\) point comes from the \(k\cdot p\) intraband coupling among the fourfold degenerate states at \(E\)=0. These intraband couplings, however, turn out to vanish because all four eigenstates (\(v_{l}\)) at \(E\)=0 have the same \(C_{3}\) rotation symmetry with the same eigenvalue \(\omega\). This can be seen as follows. First, we rewrite \(k\cdot\hat{p}=k_{+}\hat{p}_{-}+k_{-}\hat{p}_{+}\), where \(k_{\pm}=(k_{x}\pm ik_{y})/\sqrt{2}\) and \(\hat{p}_{\pm}=(\hat{p}_{x}\pm i\hat{p}_{y})/\sqrt{2}\). The operator \(\hat{p}_{\pm}\) transforms as \(C_{n}\hat{p}_{\pm}C_{n}^{\dagger}=e^{\mp i2\pi/n}\hat{p}_{\pm}\) under the \(C_{n}\) rotation. Then \(\bra{v_{l}}\hat{p}_{+}\ket{v_{m}}=\bra{v_{l}}C_{3}^{\dagger}C_{3}\hat{p}_{+}C_{3}^{\dagger}C_{3}\ket{v_{m}}=\bra{v_{l}}\omega^{*}(e^{-i2\pi/3}\hat{p}_{+})\omega\ket{v_{m}}=e^{-i2\pi/3}\bra{v_{l}}\hat{p}_{+}\ket{v_{m}}\). Therefore, \(\bra{v_{l}}\hat{p}_{+}\ket{v_{m}}=0\), and similarly \(\bra{v_{l}}\hat{p}_{-}\ket{v_{m}}=0\). As such, the resonant scattering by the moire potential regulates the \(C_{3}\) rotation symmetry of the degenerate states at \(E\)=0, kills the intraband \(k\cdot p\) coupling, and thus significantly flattens the bands away from \(K\).
In addition, the net \(O(k^{2})\) correction from the interband coupling also tends to be small. If we neglect the long-range interaction and set \(U_{2}\)=0, we have \(E_{+}=-E_{-}\) and \(\sigma H\sigma=-H\) (for the shifted Hamiltonian with the constant \(U_{0}\) removed) with \(\sigma=\) Diag[1,-1,1,-1,1,-1]. Consequently, contributions from the interband coupling cancel each other: the \(E_{+}\) bands tend to bend the \(E_{0}\) band downward, while the \(E_{-}\) bands tend to bend it upward, rendering a net zero correction in energy to the order of \(O(k^{2})\).
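These spectral statements can be checked numerically. The following sketch (ours; the values of \(U_{0}\), \(U_{1}\), and \(U_{2}\) are arbitrary sample numbers) builds the 6\(\times\)6 matrix of Eq. (3), verifies the doubly degenerate eigenvalues of Eq. (4) after the constant shift, confirms that the vectors of Eq. (5) are zero modes, and checks the \(\sigma H\sigma=-H\) property in the \(U_{2}=0\) limit.

```python
import numpy as np

def moire_h(U0, U1, U2):
    """Six-state perturbation Hamiltonian of Eq. (3) for the first magic angle."""
    u12 = (np.sqrt(3) / 2) * U1 * np.exp(1j * np.pi / 6)
    u13 = 0.5 * U2 * np.exp(1j * np.pi / 3)
    w = np.exp(2j * np.pi / 3)
    h = np.diag([U0 + 0j] * 6)
    upper = {(0, 1): u12, (1, 2): u12, (2, 3): u12, (3, 4): u12, (4, 5): u12,
             (0, 5): np.conj(u12),
             (0, 2): u13, (2, 4): u13, (0, 4): np.conj(u13),
             (1, 3): w * np.conj(u13), (3, 5): w * np.conj(u13),
             (1, 5): np.conj(w) * u13}
    for (i, j), v in upper.items():
        h[i, j] = v
        h[j, i] = np.conj(v)
    return h

U0, U1, U2 = 0.3, 1.0, 0.4 + 0.2j           # arbitrary sample values
H = moire_h(U0, U1, U2)
shift = U0 - U2.real                        # constant shift used in the text
ev = np.sort(np.linalg.eigvalsh(H)) - shift

Ep = 1.5 * (U2.real + np.sqrt(U1**2 + (U2.imag / np.sqrt(3))**2))   # Eq. (4)
Em = 1.5 * (U2.real - np.sqrt(U1**2 + (U2.imag / np.sqrt(3))**2))
print(np.allclose(ev, sorted([Em, Em, 0, 0, Ep, Ep])))              # True

v1 = np.array([0, 1, 0, np.exp(2j*np.pi/3), 0, np.exp(4j*np.pi/3)]) # Eq. (5)
print(np.allclose((H - shift * np.eye(6)) @ v1, 0))                 # True: zero mode

H0 = moire_h(0.0, U1, 0.0)                                          # U2 = 0 case
sigma = np.diag([1, -1, 1, -1, 1, -1])
print(np.allclose(sigma @ H0 @ sigma, -H0))                         # True: E_+ = -E_-
```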
## III Discussion
In this work we have identified the following key factors for the occurrence of magic angles in TBG, which may suggest the directions for searching for this interesting phenomenon in other 2D systems.
First, there exists a particular stacking pattern within the moire supercell where low-energy electrons tend to accumulate. This charge concentration becomes significant at small twist angles, because the region associated with the particular stacking pattern becomes substantial over many bond lengths. This happens in the AA region of TBG, which has more low-energy states available compared with regions with other stacking patterns. If this condition does not occur and the electrons spread out throughout the moire supercell, the averaging effect will only decrease the bandwidth monotonically as the supercell size increases.
Second, this particular region has its characteristic band structure near the Fermi level that will respond to the perturbing moire potential in a particular way only at certain twist angles, where the resulting electronic states can have special features. In the case of TBG, the AA region has a characteristic Fermi ring, and the states on the ring can be coherently coupled when the size of the ring matches the reciprocal lattice vectors of the moire supercell (\(Q_{F}=SG\)). Therefore, the size of this Fermi ring determines the moire periodicity where the series of magic angles occur, and the corresponding TBG wave functions at the Dirac \(K\) point are special linear combinations of wave functions on the AABLG Fermi ring.
Third, the characteristic wave functions at the magic angles have certain symmetry properties that are able to induce a flat band dispersion away from the reference \(k\) point. This is possible because with a proper symmetry transformation property for \(\hat{p}_{\pm}\), the first-order correction within the \(k\cdot p\) perturbation theory can vanish. Possible symmetries include inversion, \(C_{n}\) rotations, and reflections. At the magic angles in TBG, the fourfold degenerate \(E\)=0 states at \(K\) have the same eigenvalue under the \(C_{3}\) rotation. The transformation property \(C_{3}\hat{p}_{\pm}C_{3}^{\dagger}=e^{\mp i2\pi/3}\hat{p}_{\pm}\) sets the intraband \(k\cdot p\) coupling to zero, giving rise to flat bands out of \(K\) to the linear order.
The calculations presented in this work did not include the lattice relaxation effect in TBG. Previous theoretical studies have shown that the relaxation pattern in the AA region can simply be described as adding an extra small local rotation around the center of the AA region [46; 47]. For example, for the case of \(\theta=1.05^{\circ}\), the local rotation angle between the two layers is increased from \(1.05^{\circ}\) to \(1.63^{\circ}\) upon relaxation [46], and for very small \(\theta\) values, the relaxed local twist angle at the AA stacking converges to \(1.7^{\circ}\)[47]. As a consequence, the AA stacking remains intact after relaxation, albeit with a reduced extent. Nevertheless, the remaining size is still of the order of a few nanometers for small twist angles (see Fig. 5 in [9] for \(\theta=0.99^{\circ}\) and Fig. 7 in [46] for \(\theta=1.05^{\circ}\)). This is consistent with the STM topographic images at around \(\theta=1.1^{\circ}\) that also showed bright spots of AA stacking regions with a size of a few nanometers [48; 26]. Since there still exist sizable AA regions after one takes into account the lattice relaxation, our theory based on the Fermi ring of the AA stacking should remain valid.
A previous study focusing on an idealized continuum model by switching off the interlayer coupling parameter \(w_{AA}\) for the AA stacking yielded perfectly flat bands at multiple magic angles [49]. Intriguingly, the wave functions of these flat bands at the magic angles in this particular model were found to be reminiscent of quantum Hall wave functions on the torus. Because the condition \(w_{AA}=0\) gives rise to perfectly flat bands, it was further speculated that the fundamental features of TBG are mainly connected with the interlayer coupling \(w_{AB}\) in the AB stacking. Since this idealized model does not correspond to any realistic graphene systems, it was hoped that one can extend the results to cases with nonzero \(w_{AA}\) by treating \(w_{AA}/w_{AB}\) as a perturbation. However, this may be difficult because \(w_{AA}\) and \(w_{AB}\) should be of the same order of magnitude in realistic TBG systems. In contrast, the current work focuses on the important effect of the \(w_{AA}\) interaction based on the fact that low-energy electrons in TBG are concentrated in the AA region. Our proposed theory is therefore able to fully explain the occurrence and positions of magic angles in terms of the local electronic properties of the AA region.
In conclusion, this work resolves the long-standing question why magic angles happen in twisted bilayer graphene. The origin of the magic angles turns out to be the "magic" Fermi ring in AA-stacked bilayer graphene. We have theoretically derived the series of magic angle
values. In addition, we have identified the composition of the one-particle wave functions associated with the flat bands for further many-body investigations.
## Methods
### Tight-binding calculations
To calculate the electronic properties of TBG, we used an extended tight-binding model proposed by S. Fang and E. Kaxiras [44] with parameters determined from calculations with density functional theory (DFT). This model includes intralayer couplings up to eighth nearest neighbors, and a strong angular dependence of interlayer couplings between atoms in two different layers. The spin-orbit interaction is neglected. The interlayer couplings under uniaxial pressure were also obtained using similar procedures [45]. The Hamiltonian was then diagonalized by the FEAST eigenvalue solver [50], and the eigenvalues and eigenvectors were verified by Mathematica in some cases. The density of states was calculated by the tetrahedral method [51], and the slope at the \(K\) point (Fermi velocity) was evaluated based on the Hellmann-Feynman theorem [52] as follows. The velocity operator is given by \(\vec{v}(\vec{k})=\partial_{\vec{k}}H(\vec{k})\), where \(\vec{k}\) is the crystal momentum, and \(H(\vec{k})\) is the tight-binding Hamiltonian of the moire structure. Due to the fourfold degeneracy at the Dirac point, we formed a velocity matrix \(v_{ij}=\left\langle i\mid v(K)\mid j\right\rangle\), where \(i\) and \(j\) are labels for the degenerate states at the Dirac point. The eigenvalues of the velocity matrix give the velocities; the one with the smallest absolute value was taken as the velocity result. Since the velocity is almost isotropic, we show the result for the \(K\to M\) direction in most cases. When there was uncertainty, the average between \(K\to M\) and \(K\to\Gamma\) was taken.
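As an illustration of the velocity-matrix step described above, the following sketch shows how the Hellmann-Feynman evaluation could be organized numerically. It is not the code used in this work: `build_hamiltonian(k)` is a hypothetical helper returning the moire-supercell tight-binding Hamiltonian at crystal momentum `k`, and the derivative \(\partial_{\vec{k}}H\) is approximated here by a central finite difference rather than an analytic derivative.

```python
import numpy as np

def dirac_point_velocities(build_hamiltonian, K, direction, n_deg=4, dk=1e-6):
    """Eigenvalues of the velocity matrix v_ij = <i| dH/dk |j>, restricted to the
    n_deg (nearly) degenerate states at the Dirac point K (Hellmann-Feynman)."""
    # Degenerate subspace at K: take the n_deg eigenstates closest to E = 0.
    evals, evecs = np.linalg.eigh(build_hamiltonian(K))
    closest = np.argsort(np.abs(evals))[:n_deg]
    subspace = evecs[:, closest]

    # dH/dk along the chosen direction (e.g. K -> M), by central differences.
    e = np.asarray(direction, dtype=float)
    e /= np.linalg.norm(e)
    dH = (build_hamiltonian(K + dk * e) - build_hamiltonian(K - dk * e)) / (2 * dk)

    # Velocity matrix in the degenerate subspace; symmetrize for numerical safety.
    v = subspace.conj().T @ dH @ subspace
    return np.linalg.eigvalsh(0.5 * (v + v.conj().T))
```

The velocity quoted in the text would then correspond to the returned eigenvalue of smallest absolute value, evaluated along \(K\to M\) (and averaged with \(K\to\Gamma\) when the two differ appreciably).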
### Density functional calculations
We also performed DFT calculations for bilayer graphene with different stacking patterns using the Vienna _Ab initio_ Simulation Package (VASP) [53; 54]. We used the projector augmented wave (PAW) method [55; 56] to treat core electrons and the optB86b functional to include the van der Waals correction [57; 58], with a plane-wave cutoff energy of 400 eV. The Monkhorst-Pack k-point mesh of 21\(\times\)21\(\times\)1 was used in the self-consistent calculations for the 1\(\times\)1 unit cell. The calculation of the density of states used a denser 91\(\times\)91\(\times\)1 grid.
### Matrix elements of the moire potential
The Bloch-state components constructed from the local orbitals at any of the four sites \(\{A_{1},A_{2},B_{1},B_{2}\}\) in the unit cell of AABLG have the form:
\[\left\langle\mathbf{r}\right|\alpha_{i}\rangle=\frac{1}{\sqrt{N_{\alpha,i}}}\sum_{ \mathbf{R_{\alpha,j}}}e^{i\mathbf{k}\cdot\mathbf{R_{\alpha,j}}}\zeta_{p_{z}}(\mathbf{r}-\mathbf{R_{ \alpha,j}}-\delta_{i,2}c\hat{z})\;,\]
where \(\mathbf{R_{\alpha,j}}\) is the in-plane position vector for sublattice \(\alpha\) in the \(j\)th unit cell; \(c\) is the interlayer separation, \(i\) refers to the layer index {1,2}, and \(\delta_{i,2}=1\) if \(i\)=2. \(\zeta_{p_{z}}(\mathbf{r})\) describes the orthonormal \(p_{z}\) atomic orbital, and \(N_{\alpha,i}=N\) is the total number of atoms pertaining to sublattice \(\alpha\) and layer \(i\) in AABLG. For the six \(\mathbf{k}\) vectors on the Fermi ring in Fig. 4e, there are two degenerate states, and the wave functions can be chosen as \(\left|k\right\rangle\propto(1\;0\;0-e^{i\phi})^{T}\) and \((0\;1-e^{i\phi}\;0)^{T}\) with \(\phi=\tan^{-1}\left(k_{y}/k_{x}\right)\). These two states will not be coupled with each other by the local moire potential, so we only have to deal with one of them at a time. Both of them are eigenstates of the \(C_{3}\) rotation.
The \(C_{3}\) rotation symmetry exists for both TBG [and thus the moire potential \(U(\mathbf{r})\)] and AABLG. It connects the matrix elements \(u_{ij}=\left\langle k_{i}\right|U\left|k_{j}\right\rangle\), for example, \(\left\langle k_{1}\right|U\left|k_{2}\right\rangle=\left\langle k_{1}\right|C_{3}^{\dagger}(C_{3}UC_{3}^{\dagger})C_{3}\left|k_{2}\right\rangle=\left\langle k_{3}\right|U\left|k_{4}\right\rangle\). Therefore, we have (as shown in Fig. 4e) \(u_{12}=u_{34}=u_{56}\) (navy dashed lines); \(u_{23}=u_{45}=u_{61}\) (blue lines); \(u_{13}=u_{35}=u_{51}\) (orange dashed line); \(u_{24}=u_{46}=u_{62}\); and \(u_{14}=u_{36}=u_{52}\) (brown dashed line). As a consequence, the \(C_{3}\) symmetry reduces the number of variables from \(\left(6^{2}-6\right)/2=15\) to \(5\), which are \(u_{12}\), \(u_{23}\), \(u_{13}\), \(u_{24}\), and \(u_{14}\).
Since the moire supercell is much larger than the unit cell of graphene, we can assume that the effective moire potential varies little within one unit cell of AABLG. Therefore, we have the following approximation:
\[\int\mathrm{d}^{3}r\zeta_{p_{z}}^{*}(\mathbf{r}-\mathbf{R_{\alpha,j}}- \delta_{i,2}c\hat{z})\,U(\mathbf{r}) \zeta_{p_{z}}(\mathbf{r}-\mathbf{R_{\alpha^{\prime},j^{\prime}}}-\delta_{i^{ \prime},2}c\hat{z})\] \[\approx U(\mathbf{R_{\alpha,j}})\delta_{i,i^{\prime}}\delta_{j,j^{\prime}} \delta_{\alpha,\alpha^{\prime}}\,.\]
Therefore, the matrix elements of the 2D moire potential for six \(\left|k\right\rangle\propto(1\;0\;0-e^{i\phi})^{T}\) are:
\[u_{lm} =\left\langle k_{l}\right|U\left|k_{m}\right\rangle\] \[\approx\frac{1}{2N}\left[\sum_{R_{A,j}}e^{i(\mathbf{k_{m}}-\mathbf{k_{l}}) \cdot\mathbf{R_{A,j}}}\;U(\mathbf{R_{A,j}})\right.\] \[\left.\hskip 28.452756pt+e^{i(\phi_{m}-\phi_{l})}\sum_{R_{B,j}}e^{i (\mathbf{k_{m}}-\mathbf{k_{l}})\cdot\mathbf{R_{B,j}}}\;U(\mathbf{R_{B,j}})\right]\] \[\approx\frac{1}{2A}\left(1+e^{i(\phi_{m}-\phi_{l})}\right)\int e^{i (\mathbf{k_{m}}-\mathbf{k_{l}})\cdot\mathbf{r}}U(\mathbf{r})\mathrm{d}^{2}r\] \[=\frac{1}{2}\left(1+e^{i(\phi_{m}-\phi_{l})}\right)U_{\mathbf{k_{l}}- \mathbf{k_{m}}}\,,\]
where \(A\) denotes the whole area of the TBG. With the combination of \(C_{3}\) and \(C_{2}\) rotation symmetries and the fact that the moire potential is real (\(U_{\mathbf{G^{\prime}}}=U_{-\mathbf{G^{\prime}}}^{*}\)), we have \(U_{\mathbf{k_{1}}-\mathbf{k_{2}}}=U_{\mathbf{k_{2}}-\mathbf{k_{3}}}=\ldots=U_{1}\), and \(U_{\mathbf{k_{1}}-\mathbf{k_{3}}}=U_{\mathbf{k_{2}}-\mathbf{k_{4}}}^{*}\)
\(=U_{\mathbf{k_{3}}-\mathbf{k_{5}}}=\ldots=U_{2}\). \(U_{1}\) is real, and \(U_{2}\) is generally complex but can be made real by choosing the 2D origin at an inversion center. Given that \(\phi_{2}\) - \(\phi_{1}\) = \(\pi/3\), \(\phi_{3}\) - \(\phi_{1}\) = \(2\pi/3\), \(\phi_{4}\) - \(\phi_{1}\) = \(\pi\), \((1+e^{i\theta})/2=\cos(\theta/2)e^{i\theta/2}\), and \(u_{lm}\) = \(u_{ml}^{\ast}\), we then have \(u_{11}=U_{G=0}\), \(u_{12}=(\sqrt{3}/2)\,U_{1}\,e^{i\pi/6}\) = \(u_{23}\), \(u_{13}=(1/2)\,U_{2}\,e^{i\pi/3}\), \(u_{24}=e^{i2\pi/3}u_{13}^{\ast}\) and \(u_{14}=0\).
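For concreteness, the relations above can be assembled numerically into the full \(6\times 6\) matrix of couplings \(u_{lm}\) among the Fermi-ring states \(k_{1},\ldots,k_{6}\). The sketch below is purely illustrative and is not the code used in this work; it encodes only the moire-potential couplings derived here (not the full Hamiltonian of Eq. (3)), with \(U_{1}\) and \(U_{2}\) taken real as discussed above.

```python
import numpy as np

def moire_coupling_matrix(U0, U1, U2):
    """6x6 Hermitian matrix of the moire-potential couplings u_lm between the six
    Fermi-ring states k_1..k_6 (0-indexed below), assembled from the relations
    derived above; U0 = U_{G=0}, and U1, U2 are taken real."""
    u_nn = (np.sqrt(3) / 2) * U1 * np.exp(1j * np.pi / 6)   # u_12 = u_23 = ... = u_61
    u_13 = 0.5 * U2 * np.exp(1j * np.pi / 3)                # u_13 = u_35 = u_51
    u_24 = np.exp(2j * np.pi / 3) * np.conj(u_13)           # u_24 = u_46 = u_62

    u = np.diag(np.full(6, U0, dtype=complex))              # diagonal: U_{G=0}
    couplings = (
        [((l, (l + 1) % 6), u_nn) for l in range(6)]        # ring nearest neighbours
        + [((l, (l + 2) % 6), u_13) for l in (0, 2, 4)]     # k_1-k_3 type
        + [((l, (l + 2) % 6), u_24) for l in (1, 3, 5)]     # k_2-k_4 type
    )                                                        # u_14 = u_36 = u_52 = 0
    for (l, m), val in couplings:
        u[l, m] = val
        u[m, l] = np.conj(val)                               # hermiticity: u_ml = u_lm^*
    return u
```

Diagonalizing the resulting Hermitian matrix (e.g. with `np.linalg.eigvalsh`) makes it easy to inspect how the eigenvalue pattern changes as \(U_{2}\) is switched on or off.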
###### Acknowledgements.
We gratefully acknowledge Li-Feng Yen and Po-Tung Fang for sharing the tight-binding codes and for helping with technical matters. Helpful discussions with Wen-Ying Ruan, Chi-Ruei Pan, Jie-Cheng Chen, Wei-En Tseng, and Martin Callsen are acknowledged. This work is supported by Academia Sinica, Taiwan.
|
2309.06644 | Weakened vortex stretching effect in three scale hierarchy for the 3D
Euler equations | We consider the 3D incompressible Euler equations under the following three
scale hierarchical situation: large-scale vortex stretching the middle-scale,
and at the same time, the middle-scale stretching the small-scale. In this
situation, we show that, the stretching effect of this middle-scale flow is
weakened by the large-scale. In other words, the vortices being stretched could
have the corresponding stress tensor being weakened. | In-Jee Jeong, Jungkyoung Na, Tsuyoshi Yoneda | 2023-09-12T23:52:34Z | http://arxiv.org/abs/2309.06644v1 | # Weakened vortex stretching effect in three scale hierarchy for the 3D Euler equations
###### Abstract.
We consider the 3D incompressible Euler equations under the following three scale hierarchical situation: large-scale vortex stretching the middle-scale, and at the same time, the middle-scale stretching the small-scale. In this situation, we show that, the stretching effect of this middle-scale flow is weakened by the large-scale. In other words, the vortices being stretched could have the corresponding stress tensor being weakened.
0
Footnote 0: _2020 AMS Mathematics Subject Classification:_ 76B47, 35Q35
## 1. Introduction
Recent direct numerical simulations [2, 3, 12, 13] of 3D Navier-Stokes turbulence at high Reynolds numbers have shown that there exists a _hierarchy_ of scale-local vortex stretching dynamics. In particular, Goto-Saito-Kawahara [3] discovered that turbulence at sufficiently high Reynolds numbers in a periodic cube is composed of a self-similar hierarchy of anti-parallel pairs of vortex tubes, which is sustained by the creation of smaller-scale vortices due to stretching in larger-scale strain fields. This discovery has been further investigated by Y.-Goto-Tsuruhashi [16] (see also [15]). From these previous results, we could conclude physically that the most important feature of Navier-Stokes turbulence could be scale-local vortex stretching, which does not seem to be random (see also [5, 6, 7] for related mathematical results). As a continuation of these studies, our next goal is to clarify the locality of the vortex stretching dynamics precisely, and in this paper we consider it in a three-scale hierarchical structure (cf. Shimizu-Y. [14] for a geometric approach to this locality).
In this paper, we shall consider solutions to the 3D incompressible Euler equations in \(\mathbb{T}^{3}\)
\[\left\{\begin{aligned} \partial_{t}u+u\cdot\nabla u+\nabla p& =0,\\ \nabla\cdot u&=0,\end{aligned}\right. \tag{1.1}\]
in which the velocity has the following hierarchical structure: \(u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}\), where the letters \(\mathcal{L},\mathcal{I},\mathcal{S}\) stand for large, intermediate, and small-scale, respectively. They will be arranged in a way that the corresponding vorticities, denoted by \(\omega^{\mathcal{L}},\omega^{\mathcal{I}},\omega^{\mathcal{S}}\), are mutually almost orthogonal. This is natural since the rate of vortex stretching interaction is maximized when the vortex lines are orthogonal to each other. Indeed, this orthogonality was confirmed in direct numerical simulations [2, 3, 12, 13] by statistical means. The goal of this paper is to understand the dynamics of vortex stretching under this hierarchical structure.
Before we describe our results, let us give details of our flow configuration. We take the length scale \(L\) of the torus \(\mathbb{T}^{3}:=[-L,L)^{3}\) to be large (\(L=100\) suffices) and fix the large-scale velocity to be the linear strain
\[u^{\mathcal{L}}(x):=(Mx_{1},-Mx_{2},0), \tag{1.2}\]
in the region \(|x|\leq 10\), for some large \(M\gg 1\). This velocity field corresponds to a large-scale antiparallel columnar vortex parallel to the \(x_{3}\)-axis supported away from the origin. In the following, we shall implicitly assume that \(u^{\mathcal{L}}\) is a steady solution to 3D Euler with some smooth forcing \(f^{\mathcal{L}}\) which is supported in the region \(|x|>10\).
To motivate our choice of smaller scale vorticities, consider the _linearized_ Euler dynamics around \(u^{\mathcal{L}}\) in vorticity form:
\[\partial_{t}\omega+u^{\mathcal{L}}\cdot\nabla\omega=\nabla u^{\mathcal{L}}\omega \tag{1.3}\]
and since
\[\nabla u^{\mathcal{L}}=\begin{pmatrix}M&0&0\\ 0&-M&0\\ 0&0&0\end{pmatrix},\]
we see that, with respect to the \(L^{\infty}\)-norm, the solution to (1.3) expands and contracts exponentially in time with rate \(Mt\) along the \(x_{1}\) and \(x_{2}\)-axis, respectively.
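As a minimal illustration (not part of the argument): dropping the transport term in (1.3) and taking \(\omega\) constant in space near the origin, the components decouple and
\[\omega_{1}(t)=e^{Mt}\omega_{1}(0),\qquad\omega_{2}(t)=e^{-Mt}\omega_{2}(0),\qquad\omega_{3}(t)=\omega_{3}(0),\]
so a vortex aligned with the \(x_{1}\)-axis is amplified at rate \(Mt\), while one aligned with the \(x_{2}\)-axis is damped at the same rate.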
This raises the following question: if we arrange intermediate and small-scale vorticities to be respectively parallel to \(x_{1}\) and \(x_{2}\) axis, is it possible that the intermediate vortex (being exponentially stretched by the large-scale) significantly stretches the small-scale, by dominating the decay effect of the large-scale?
It turns out that, interestingly, the answer is no: the small-scale vortex, even in the presence of exponentially stretched intermediate-scale, still decays exponentially in time with the same rate \(-Mt\). While this is not too hard to see for the linearized Euler equations, we establish this in the full nonlinear Euler equations: consider the system
\[\begin{cases}\partial_{t}\omega+(u+u^{\mathcal{L}})\cdot\nabla\omega=\nabla(u +u^{\mathcal{L}})\omega,\\ \qquad\qquad\qquad\qquad u=\nabla\times(-\Delta)^{-1}\omega,\end{cases} \tag{1.4}\]
where we shall assume that \(\omega=\omega^{\mathcal{I}}+\omega^{\mathcal{S}}\) is supported in the region \(|x|\leq 10\), so that (1.2) applies. We write \(u^{\mathcal{I}}=\nabla\times(-\Delta)^{-1}\omega^{\mathcal{I}}\) and \(u^{\mathcal{S}}=\nabla\times(-\Delta)^{-1}\omega^{\mathcal{S}}\). More precisely, we take
\[\omega_{0}^{\mathcal{I}}(x):=(\omega_{1,0}^{\mathcal{I}}(x_{2},x_{3}),0,0), \qquad\omega_{1,0}^{\mathcal{I}}(x_{2},x_{3}):=\sum_{i,j\in\{0,1\}}(-1)^{i+j} \varphi((-1)^{i}x_{2},(-1)^{j}x_{3}), \tag{1.5}\]
where \(\varphi\in C_{c}^{\infty}\) is a non-negative function supported in \(\big\{(x_{2},x_{3})\in\mathbb{T}^{2}:1\leq x_{2},x_{3}\leq 2\big\}\). Note that \(\omega_{0}^{\mathcal{I}}\) is parallel to the \(x_{1}\)-axis and has odd symmetry in both \(x_{2}\) and \(x_{3}\). These odd symmetries are imposed to maximize the stretching effects of \(\omega^{\mathcal{I}}\) near the origin and have been inspired by several works [8, 17, 1]. Finally, with some sufficiently small \(0<\varepsilon,\ell\ll 1\), we fix some divergence-free vector field \(\tilde{\psi}\in C_{c}^{\infty}(B_{0}(2\ell))\) and require
\[\omega_{0}^{\mathcal{S}}(x)=\varepsilon\tilde{\psi}(x),\qquad\tilde{\psi}(x) =\left(0,\psi\left(\ell^{-1}x\right),0\right),\qquad|x|\leq\ell \tag{1.6}\]
for some smooth bump function \(\psi\geq 0\).
We consider the flow map \(\Phi\) generated by \(u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}\): namely, \(\Phi(0,x)=x\) and
\[\frac{d\Phi}{dt}(t,x)=(u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}})(t,\Phi (t,x)). \tag{1.7}\]
The solution to (1.4) satisfies the Cauchy formula \(\omega(t,\Phi)=\nabla\Phi\omega_{0}\), and naturally, we define the evolution of the intermediate and small-scale vortex by \(\omega^{\mathcal{I}}(t,\Phi(t,x))=\nabla\Phi(t,x)\omega_{0}^{\mathcal{I}}(x)\) and \(\omega^{\mathcal{S}}(t,\Phi(t,x))=\nabla\Phi(t,x)\omega_{0}^{\mathcal{S}}(x)\). As long as the solution remains smooth, the supports of \(\omega^{\mathcal{I}}\) and \(\omega^{\mathcal{S}}\) remain disjoint.
We are now ready to state our main theorem. For \(0<r<L\), \(B_{0}(r)\) denotes the ball \(\{|x|<r\}\).
**Theorem A**.: _Consider the solution to (1.4) with initial data \(\omega_{0}=\omega_{0}^{\mathcal{I}}+\omega_{0}^{\mathcal{S}}\in C^{\infty}( \mathbb{T}^{3})\) with \(\mathbb{T}^{3}:=[-L,L)^{3}\). The solution remains smooth in the time interval \([0,T_{M}]\), where \(T_{M}=M^{-1}\log(1+M)\). On this time interval, we have_
\[\big{\|}\nabla u^{\mathcal{I}}(t,\cdot)\big{\|}_{L^{\infty}(B_{0}(\ell))}\leq C \exp(-C^{-1}Mt)\|\nabla u_{0}^{\mathcal{I}}\|_{L^{\infty}(\mathbb{T}^{3})} \tag{1.8}\]
_and_
\[\big{\|}\omega^{\mathcal{S}}(t,\cdot)\big{\|}_{L^{\infty}(B_{0}(\tilde{\ell}) )}\leq C\exp(-Mt)\|\omega_{0}^{\mathcal{S}}\|_{L^{\infty}(\mathbb{T}^{3})}, \qquad\tilde{\ell}=(M^{-1}\ell)^{C} \tag{1.9}\]
_uniformly for all \(M\gg 1\), \(0<\ell\leq M^{-C}\), and \(0<\varepsilon\leq\ell^{2s-3}\exp{(-M^{C})}\) with \(s>5/2\), where \(C>1\) is a constant depending only on \(L\), \(\psi\) and \(\varphi\)._
_Remark 1.1_.: While \(T_{M}\to 0\) as \(M\to\infty\), we have that \(MT_{M}\to\infty\), so that within the timescale of \(T_{M}\), the exponential terms on the right hand sides of (1.8) and (1.9) decay to zero. Furthermore, (1.8) should be contrasted with the exponential growth of \(\omega^{\mathcal{I}}\): \(\|\omega^{\mathcal{I}}(t,\cdot)\|_{L^{\infty}(\mathbb{T}^{3})}\gtrsim\exp(Mt)\|\omega_{0}^{\mathcal{I}}\|_{L^{\infty}(\mathbb{T}^{3})}\) in the same time interval. In other words, the vortices being stretched could have the corresponding stress tensor being weakened.
**Interpretation of the result**. It is important to compare the above with the case when the large-scale strain field is absent: in this case, the small-scale vortex gets stretched _at least exponentially in time_ by the intermediate-scale (cf. [1]). Therefore, we see that the vortex stretching effect of the intermediate scale is weakened by the large-scale, and the resulting small-scale vortex dynamics is not really different from the case when the intermediate vortex is absent. Our result suggests that the vortex stretching of "adjacent" scales is the one most likely to occur, and thus possible blow-up solutions to the three-dimensional Navier-Stokes and/or Euler equations may not possess multi-scale vortex stretching motion. This insight is consistent with the numerical result by Kang-Yun-Protas [9]. Based on solving a suitable optimization problem numerically, they investigated the largest possible growth of the vorticity in finite time in three-dimensional Navier-Stokes flows. Their findings revealed that the flows maximizing the vortex growth take the form of three perpendicular pairs of anti-parallel vortex tubes of the same size (see Figure 11 in [9]). Furthermore, the flow evolution resulting from such an initial vorticity is accompanied by reconnection events. We can at least see that this flow does not possess multi-scale vortex stretching motion.
_Remark 1.2_.: One could similarly investigate the situation in which the directions of the intermediate and small-scale vorticities are switched. In this case, both large and intermediate-scale vortex stretch the small-scale vortex. However, in this case, the length scale of the intermediate vortex grows at the same time, and escapes the \(O(1)\)-region around the origin.
**Ideas of the proof**. Let us briefly explain the main steps of the proof. To begin with, when \(\omega^{\mathcal{S}}\) is completely absent, one can simply study the nonlinear evolution of \(\omega^{\mathcal{I}}\) and obtain the bound (1.8). This is already non-trivial since one needs to understand the nonlinear self-interaction of \(\omega^{\mathcal{I}}\).
Then, one can introduce \(\omega^{\mathcal{S}}\) and formally analyze the vortex stretching equation of \(\omega^{\mathcal{S}}\), which is \(D_{t}(\omega^{\mathcal{S}})\simeq(\nabla u^{\mathcal{L}}+\nabla u^{\mathcal{I}})\omega^{\mathcal{S}}\) if one neglects the nonlinear self-interaction. Applying the bound (1.8) and integrating, one obtains the desired estimate (1.9). However, in this case, one not only needs to handle the self-interaction term, but also needs to deal with the linear feedback of \(\omega^{\mathcal{S}}\) onto the intermediate scale \(\omega^{\mathcal{I}}\). Indeed, as soon as \(\omega^{\mathcal{S}}\) is introduced, \(\omega^{\mathcal{S}}\) and \(\omega^{\mathcal{I}}\) satisfy a coupled system of PDEs, even at the linearized level. In particular, it becomes tricky to obtain the bound (1.8) in the first place.
Therefore, our proof consists of a two-step comparison procedure: we introduce the "pseudo-solution" pair \((\omega^{\mathcal{I},P},\omega^{\mathcal{S},P})\), where \(\omega^{\mathcal{I},P}\) corresponds to the intermediate scale in the absence of the small scale. Then, we compare \(\omega^{\mathcal{S}}\) with \(\omega^{\mathcal{S},P}\), which is then compared in turn with the linearized dynamics around \(\omega^{\mathcal{I},P}\).
**Notation**.: We employ the letters \(C\), \(C_{1}\), \(C_{2},\cdots\) to denote any constants which may change from line to line in a given computation. In particular, the constants depend only on \(L\), \(\psi\) and \(\varphi\). We sometimes use \(A\approx B\) and \(A\lesssim B\), which mean \(A=CB\) and \(A\leq CB\), respectively, for some constant \(C\).
## 2. Preliminaries
The aim of this section is to establish two principles for comparing perturbed solutions of 3D Euler equations in \(\mathbb{T}^{3}\).
Let \(\bar{u}\in L^{\infty}([0,\bar{T}];H^{s+2}(\mathbb{T}^{3}))\) with \(s>\frac{5}{2}\) be a solution to the following Cauchy problem for the 3D incompressible Euler equations:
\[\left\{\begin{aligned} \partial_{t}\bar{u}+\bar{u}\cdot\nabla\bar{u}+ \nabla\bar{p}&=0,\\ \nabla\cdot\bar{u}&=0,\\ \bar{u}(t&=0)&=\bar{u}_{0},\end{aligned}\right. \tag{2.1}\]
where \(\bar{u}_{0}\) is a function in \(H^{s+2}(\mathbb{T}^{3})\). Then we consider a perturbation problem of (2.1). To be precise, let \(u\) be the solution of the following problem:
\[\left\{\begin{aligned} \partial_{t}u+u\cdot\nabla u+\nabla p& =0,\\ \nabla\cdot u&=0,\\ u(t&=0)&=\bar{u}_{0}+\varepsilon\tilde{u}_{0},\end{aligned}\right. \tag{2.2}\]
where \(\varepsilon>0\) and \(\tilde{u}_{0}\) is a function belonging to \(H^{s+1}(\mathbb{T}^{3})\).
Now we introduce and prove our two principles.
### Principle 1
Defining \(\tilde{u}=u-\bar{u}\) and \(\tilde{p}=p-\bar{p}\), we have
\[\left\{\begin{aligned} \partial_{t}\tilde{u}+\bar{u}\cdot\nabla\tilde{u}+\tilde{u}\cdot\nabla\bar{u}+\tilde{u}\cdot\nabla\tilde{u}+\nabla\tilde{p}&=0,\\ \nabla\cdot\tilde{u}&=0,\\ \tilde{u}(t=0)&=\varepsilon\tilde{u}_{0}.\end{aligned}\right. \tag{2.3}\]
Next, we consider linearization around \(\bar{u}\): writing \(u^{Lin}=\bar{u}+\tilde{u}^{Lin}\), and dropping quadratic terms in the perturbation, we arrive at
\[\left\{\begin{aligned} \partial_{t}\tilde{u}^{Lin}+\bar{u}\cdot\nabla\tilde{u}^{Lin}+\tilde{u}^{Lin}\cdot\nabla\bar{u}+\nabla\tilde{p}^{Lin}&=0,\\ \nabla\cdot\tilde{u}^{Lin}&=0,\\ \tilde{u}^{Lin}(t=0)&=\varepsilon\tilde{u}_{0}.\end{aligned}\right. \tag{2.4}\]
Then we estimate \(\left\|\omega(t,\cdot)-\omega^{Lin}(t,\cdot)\right\|_{L^{\infty}(\mathbb{T}^{3})}\) on \([0,\bar{T}]\) for sufficiently small \(\varepsilon>0\), where \(\omega=\nabla\times u\) and \(\omega^{Lin}=\nabla\times u^{Lin}\) are the corresponding vorticities:
**Proposition 2.1**.: _Under the above setting, there exists a constant \(C>0\) such that if \(\varepsilon>0\) satisfies_
\[\varepsilon\leq\frac{1}{C\|\tilde{u}_{0}\|_{H^{s+1}(\mathbb{T}^{3})}\bar{T}} \exp\left(-C\int_{0}^{\bar{T}}\left\|\bar{u}(\tau,\cdot)\right\|_{H^{s+2}( \mathbb{T}^{3})}d\tau\right), \tag{2.5}\]
_then on \([0,\bar{T}]\), we have_
\[\left\|\omega(t,\cdot)-\omega^{Lin}(t,\cdot)\right\|_{L^{\infty}(\mathbb{T}^{ 3})}\leq\varepsilon^{2}\|\tilde{u}_{0}\|_{H^{s+1}(\mathbb{T}^{3})}^{2}C\bar{T} \exp\left(C\int_{0}^{\bar{T}}\left\|\bar{u}(\tau,\cdot)\right\|_{H^{s+2}( \mathbb{T}^{3})}d\tau\right). \tag{2.6}\]
_Remark 2.2_.: The point is that the right hand side of (2.6) is \(O(\varepsilon^{2})\), while a priori the left hand side is only \(O(\varepsilon)\). Furthermore, the right hand side depends only on \(\tilde{u}_{0}\) and \(\bar{u}\).
Proof.: From an elementary estimate
\[\left\|\omega(t,\cdot)-\omega^{Lin}(t,\cdot)\right\|_{L^{\infty}(\mathbb{T}^{ 3})}\lesssim\left\|\omega(t,\cdot)-\omega^{Lin}(t,\cdot)\right\|_{H^{s-1}( \mathbb{T}^{3})}\lesssim\left\|u(t,\cdot)-u^{Lin}(t,\cdot)\right\|_{H^{s}( \mathbb{T}^{3})},\]
it suffices to prove
\[\left\|u(t,\cdot)-u^{Lin}(t,\cdot)\right\|_{H^{s}(\mathbb{T}^{3})}\leq \varepsilon^{2}\|\tilde{u}_{0}\|_{H^{s+1}(\mathbb{T}^{3})}^{2}C\bar{T}\exp \left(C\int_{0}^{\bar{T}}\left\|\bar{u}(\tau,\cdot)\right\|_{H^{s+2}(\mathbb{ T}^{3})}d\tau\right)\]
for \(\varepsilon>0\) satisfying (2.5). Our first step is to estimate \(\left\|\tilde{u}(t,\cdot)\right\|_{H^{s}}\) on \([0,\bar{T}]\). Denoting \(J=(I-\Delta)^{\frac{1}{2}}\), (2.3) gives
\[\frac{1}{2}\frac{d}{dt}\|\tilde{u}\|_{H^{s+1}}^{2} =-\int J^{s+1}(\bar{u}\cdot\nabla\tilde{u})\cdot J^{s+1}\tilde{u} -\int J^{s+1}(\tilde{u}\cdot\nabla\bar{u})\cdot J^{s+1}\tilde{u}\] \[\quad-\int J^{s+1}(\tilde{u}\cdot\nabla\tilde{u})\cdot J^{s+1} \tilde{u}-\int J^{s+1}\nabla\tilde{p}\cdot J^{s+1}\tilde{u}\] \[=\mathrm{I}+\mathrm{II}+\mathrm{III}+\mathrm{IV}.\]
Since \(\tilde{u}\) is divergence-free, \(\mathrm{IV}=0\). Using the fact that \(H^{s+1}(\mathbb{T}^{3})\) is a Banach algebra, we have
\[\mathrm{II}\lesssim\left\|\tilde{u}\cdot\nabla\bar{u}\right\|_{H^{s+1}}\|\tilde {u}\|_{H^{s+1}}\lesssim\left\|\bar{u}\right\|_{H^{s+2}}\|\tilde{u}\|_{H^{s+1}}^{2}.\]
For \(\mathrm{I}\), we note that \(\nabla\cdot\tilde{u}=0\) yields \(\int\left(\bar{u}\cdot\nabla J^{s+1}\tilde{u}\right)\cdot J^{s+1}\tilde{u}=0\), so that
\[\mathrm{I}=-\int\left(\left[J^{s+1},\bar{u}\cdot\right]\nabla\tilde{u}\right) \cdot J^{s+1}\tilde{u}.\]
Recalling the Kato-Ponce commutator estimate ([10]):
\[\left\|\left[J^{s^{\prime}},f\right]g\right\|_{L^{2}(\mathbb{T}^{3})}\lesssim\left(\left\|f\right\|_{H^{\frac{5}{2}+\varepsilon^{\prime}}(\mathbb{T}^{3})}\left\|J^{s^{\prime}-1}g\right\|_{L^{2}(\mathbb{T}^{3})}+\left\|J^{s^{\prime}}f\right\|_{L^{2}(\mathbb{T}^{3})}\|g\|_{H^{\frac{3}{2}+\varepsilon^{\prime}}(\mathbb{T}^{3})}\right)\]
for any \(\varepsilon^{\prime}>0\) and \(s^{\prime}>0\), we obtain
\[\mathrm{I}\lesssim\left\|\left[J^{s+1},\bar{u}\cdot\right]\nabla\tilde{u} \right\|_{L^{2}}\left\|J^{s+1}\tilde{u}\right\|_{L^{2}}\lesssim\left\|\bar{u} \right\|_{H^{s+1}}\|\tilde{u}\|_{H^{s+1}}^{2}.\]
Replacing \(\bar{u}\) with \(\tilde{u}\) and proceeding in the same way, we can also estimate \(\text{III}\lesssim\|\tilde{u}\|_{H^{s+1}}^{3}.\) Combining all, we arrive at
\[\frac{d}{dt}\|\tilde{u}\|_{H^{s+1}}\leq C\left(\|\bar{u}\|_{H^{s+2}}\|\tilde{u} \|_{H^{s+1}}+\|\tilde{u}\|_{H^{s+1}}^{2}\right).\]
Introducing the quantity \(y(t)=\|\tilde{u}(t)\|_{H^{s+1}}\exp(-C\int_{0}^{t}\|\bar{u}(\tau)\|_{H^{s+2}}d\tau)\), we have
\[\begin{cases}\frac{d}{dt}y(t)\leq C\exp\left(C\int_{0}^{\bar{T}}\|\bar{u}(\tau)\|_{H^{s+2}}d\tau\right)y^{2}(t),\\ y(0)=\varepsilon\|\tilde{u}_{0}\|_{H^{s+1}}\end{cases}\]
on \([0,\bar{T}]\). Then for \(\varepsilon>0\) satisfying (2.5) (by adjusting \(C\) if necessary), we have
\[y(t)\leq\frac{\varepsilon\|\tilde{u}_{0}\|_{H^{s+1}}}{1-\varepsilon\|\tilde{u }_{0}\|_{H^{s+1}}Ct\exp\left(C\int_{0}^{\bar{T}}\|\bar{u}(\tau)\|_{H^{s+2}}d \tau\right)}\leq 2\varepsilon\|\tilde{u}_{0}\|_{H^{s+1}},\]
and consequently on \([0,\bar{T}]\), we have
\[\|\tilde{u}(t)\|_{H^{s+1}}\leq 2\varepsilon\|\tilde{u}_{0}\|_{H^{s+1}}\exp \left(C\int_{0}^{\bar{T}}\|\bar{u}(\tau)\|_{H^{s+2}}d\tau\right). \tag{2.7}\]
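For completeness, the bound on \(y(t)\) used above follows from the standard integration of the Riccati-type inequality: writing \(K:=C\exp\big(C\int_{0}^{\bar{T}}\|\bar{u}(\tau)\|_{H^{s+2}}d\tau\big)\), the inequality \(\frac{d}{dt}y\leq Ky^{2}\) gives
\[-\frac{d}{dt}\frac{1}{y(t)}\leq K,\qquad\text{hence}\qquad y(t)\leq\frac{y(0)}{1-y(0)Kt},\]
valid as long as \(y(0)Kt<1\), which is exactly what (2.5) guarantees on \([0,\bar{T}]\) (after adjusting \(C\)).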
Next, we consider the equation of \(\tilde{u}^{D}=\tilde{u}-\tilde{u}^{Lin}\):
\[\begin{cases}\partial_{t}\tilde{u}^{D}+\bar{u}\cdot\nabla\tilde{u}^{D}+\tilde{u}^{D}\cdot\nabla\bar{u}+\tilde{u}\cdot\nabla\tilde{u}+\nabla(\tilde{p}-\tilde{p}^{Lin})=0,\\ \nabla\cdot\tilde{u}^{D}=0,\\ \tilde{u}^{D}(t=0)=0,\end{cases}\]
which gives
\[\frac{1}{2}\frac{d}{dt}\big{\|}\tilde{u}^{D}\big{\|}_{H^{s}}^{2} =-\int J^{s}(\bar{u}\cdot\nabla\tilde{u}^{D})\cdot J^{s}\tilde{u} ^{D}-\int J^{s}(\tilde{u}^{D}\cdot\nabla\bar{u})\cdot J^{s}\tilde{u}^{D}\] \[\quad-\int J^{s}(\tilde{u}\cdot\nabla\tilde{u})\cdot J^{s}\tilde{ u}^{D}-\int J^{s}\nabla(\tilde{p}-\tilde{p}^{Lin})\cdot J^{s}\tilde{u}^{D}\] \[=\text{V}+\text{VI}+\text{VII}+\text{VIII}.\]
To estimate V, VI, and VIII, proceeding in the same way as I, II, and IV, respectively, we have
\[\text{V}\lesssim\|\tilde{u}\|_{H^{s}}\big{\|}\tilde{u}^{D}\big{\|}_{H^{s}}^{2},\qquad\text{VI}\lesssim\|\tilde{u}\|_{H^{s+1}}\big{\|}\tilde{u}^{D}\big{\|}_ {H^{s}}^{2},\qquad\text{VIII}=0.\]
For VII, the fact that \(H^{s}(\mathbb{T}^{3})\) is a Banach algebra implies
\[\text{VII}\lesssim\|\tilde{u}\cdot\nabla\tilde{u}\|_{H^{s}}\big{\|}\tilde{u}^ {D}\big{\|}_{H^{s}}\lesssim\|\tilde{u}\|_{H^{s+1}}^{2}\big{\|}\tilde{u}^{D} \big{\|}_{H^{s}}.\]
Combining all, we obtain
\[\frac{d}{dt}\big{\|}\tilde{u}^{D}\big{\|}_{H^{s}}\leq C\left(\|\bar{u}\|_{H^{ s+1}}\big{\|}\tilde{u}^{D}\big{\|}_{H^{s}}+\|\tilde{u}\|_{H^{s+1}}^{2}\right).\]
Thus, for \(\varepsilon>0\) satisfying (2.5), Gronwall's inequality and (2.7) give us
\[\big{\|}\tilde{u}^{D}(t)\big{\|}_{H^{s}}\leq\exp\left(C\int_{0}^{t}\|\bar{u}( \tau)\|_{H^{s+1}}d\tau\right)C\int_{0}^{t}\|\tilde{u}(\tau)\|_{H^{s+1}}^{2}d \tau\leq 4\varepsilon^{2}\|\tilde{u}_{0}\|_{H^{s+1}}^{2}C\bar{T}\exp\left(3C \int_{0}^{\bar{T}}\|\bar{u}(\tau)\|_{H^{s+2}}d\tau\right)\]
on \([0,\bar{T}]\). Since \(\tilde{u}^{D}=\tilde{u}-\tilde{u}^{Lin}=u-u^{Lin}\), we are done.
### Principle 2
Let \(u\) be the solution of (2.2). We consider the following PDE:
\[\left\{\begin{aligned} \partial_{t}\bar{u}^{*}+u\cdot\nabla\bar{u}^{* }+\nabla\bar{p}^{*}-\nabla\bar{u}^{*}\cdot(u-\bar{u}^{*})&=0,\\ \nabla\cdot\bar{u}^{*}&=0,\\ \bar{u}^{*}(t=0)&=\bar{u}_{0},\end{aligned}\right. \tag{2.8}\]
where \((\nabla\bar{u}^{*}\cdot(u-\bar{u}^{*}))_{i}:=\sum_{j=1}^{3}\partial_{i}\bar{u }_{j}^{*}(u-\bar{u}^{*})_{j}\) with \(i=1,2,3\). (This is different from \((u-\bar{u}^{*})\cdot\nabla\bar{u}^{*}\).) This time, we compare \(\bar{u}^{*}\) with \(\bar{u}\) of (2.1). Note that (2.1) and (2.8) share the same initial data. We have the following:
**Proposition 2.3**.: _Under the above setting, for \(t\in[0,\bar{T}]\) and \(\varepsilon>0\) satisfying (2.5), there exists a constant \(C>0\) such that_
\[\|\bar{u}^{*}(t,\cdot)-\bar{u}(t,\cdot)\|_{H^{s}(\mathbb{T}^{3})}\leq\varepsilon\|\tilde{u}_{0}\|_{H^{s+1}}C\bar{T}\left(\sup_{t\in[0,\bar{T}]}\|\bar{u}(t)\|_{H^{s+1}(\mathbb{T}^{3})}\right)\exp\left(C\int_{0}^{\bar{T}}\left(1+\|\bar{u}(\tau)\|_{H^{s+2}(\mathbb{T}^{3})}\right)d\tau\right).\]
Proof.: Denoting \(\tilde{u}=u-\bar{u}\) and \(\bar{u}^{D*}=\bar{u}^{*}-\bar{u}\), we obtain the equation of \(\bar{u}^{D*}\):
\[\begin{cases}\partial_{t}\bar{u}^{D*}+\tilde{u}\cdot\nabla\bar{u}^{D*}+\bar{u }\cdot\nabla\bar{u}^{D*}+\tilde{u}\cdot\nabla\bar{u}-\nabla\bar{u}^{D*}\cdot \tilde{u}+\nabla\bar{u}^{D*}\cdot\bar{u}^{D*}-\nabla\bar{u}\cdot\tilde{u}+ \nabla\bar{u}\cdot\bar{u}^{D*}+\nabla(\bar{p}^{*}-\bar{p})=0,\\ \nabla\cdot\bar{u}^{D*}=0,\\ \bar{u}^{D*}(t=0)=0.\end{cases}\]
This gives
\[\frac{1}{2}\frac{d}{dt}\|\bar{u}^{D*}\|_{H^{s}}^{2} =-\int J^{s}(\tilde{u}\cdot\nabla\bar{u}^{D*})\cdot J^{s}\bar{u}^{D*}-\int J^{s}(\bar{u}\cdot\nabla\bar{u}^{D*})\cdot J^{s}\bar{u}^{D*}-\int J^{s}(\tilde{u}\cdot\nabla\bar{u})\cdot J^{s}\bar{u}^{D*}\] \[\quad+\int J^{s}(\nabla\bar{u}^{D*}\cdot\tilde{u})\cdot J^{s}\bar{u}^{D*}-\int J^{s}(\nabla\bar{u}^{D*}\cdot\bar{u}^{D*})\cdot J^{s}\bar{u}^{D*}+\int J^{s}(\nabla\bar{u}\cdot\tilde{u})\cdot J^{s}\bar{u}^{D*}\] \[\quad-\int J^{s}(\nabla\bar{u}\cdot\bar{u}^{D*})\cdot J^{s}\bar{u}^{D*}-\int J^{s}\nabla(\bar{p}^{*}-\bar{p})\cdot J^{s}\bar{u}^{D*}\] \[=\mathrm{I}+\mathrm{II}+\mathrm{III}+\mathrm{IV}+\mathrm{V}+\mathrm{VI}+\mathrm{VII}+\mathrm{VIII}.\]
For \(\mathrm{I}\), \(\mathrm{II}\), and \(\mathrm{VIII}\), we proceed in the same way as the proof of Proposition 2.1 to obtain
\[\mathrm{I}\lesssim\|\tilde{u}\|_{H^{s}}\big{\|}\bar{u}^{D*}\big{\|}_{H^{s}}^{2},\qquad\mathrm{II}\lesssim\|\bar{u}\|_{H^{s}}\big{\|}\bar{u}^{D*}\big{\|}_{H^{s} }^{2},\qquad\mathrm{VIII}=0.\]
For \(\mathrm{III}+\mathrm{VI}+\mathrm{VII}\), we use the fact that \(H^{s+1}(\mathbb{T}^{3})\) is a Banach algebra to have
\[\mathrm{III}+\mathrm{VI}+\mathrm{VII}\lesssim\left(\|\tilde{u}\|_{H^{s}}+\big\|\bar{u}^{D*}\big\|_{H^{s}}\right)\|\bar{u}\|_{H^{s+1}}\big\|\bar{u}^{D*}\big\|_{H^{s}}.\]
For \(\mathrm{IV}\) and \(\mathrm{V}\), \(\nabla\cdot\bar{u}^{D*}=0\) and the integration by parts give us
\[\mathrm{IV}=-\int J^{s}(\bar{u}_{j}^{D*}\partial_{i}\tilde{u}_{j})J^{s}\bar{u} _{i}^{D*}\lesssim\|\tilde{u}\|_{H^{s+1}}\big{\|}\bar{u}^{D*}\big{\|}_{H^{s}}^{2 },\qquad\mathrm{V}=-\frac{1}{2}\int J^{s}\partial_{i}|\bar{u}^{D*}|^{2}J^{s} \bar{u}_{i}^{D*}=0.\]
Combining all, we arrive at
\[\frac{d}{dt}\big{\|}\bar{u}^{D*}\big{\|}_{H^{s}}\leq C\left(\|\tilde{u}\|_{H^{s+ 1}}+\|\bar{u}\|_{H^{s+1}}\right)\big{\|}\bar{u}^{D*}\big{\|}_{H^{s}}+\|\tilde{u} \|_{H^{s+1}}\|\bar{u}\|_{H^{s+1}}.\]
Thus, for \(\varepsilon>0\) satisfying (2.5), Gronwall's inequality and (2.7) yield
\[\big{\|}\bar{u}^{D*}(t)\big{\|}_{H^{s}} \leq\exp\left(C\int_{0}^{t}\left(\|\tilde{u}(\tau)\|_{H^{s+1}}+\| \bar{u}(\tau)\|_{H^{s+1}}\right)d\tau\right)C\int_{0}^{t}\|\tilde{u}(\tau)\|_{H^{ s+1}}\|\bar{u}(\tau)\|_{H^{s+1}}d\tau\] \[\leq 2\varepsilon\|\tilde{u}_{0}\|_{H^{s+1}}C\bar{T}\left(\sup_{t \in[0,\bar{T}]}\|\bar{u}(t)\|_{H^{s+1}}\right)\exp\left(2C\int_{0}^{\bar{T}} \left(1+\|\bar{u}(\tau)\|_{H^{s+2}}\right)d\tau\right)\]
on \([0,\bar{T}]\).
## 3. Proof of the main result
The aim of this section is to show Theorem A. In the proof, we shall always assume that initial data \(\omega_{0}^{\mathcal{I}}\) and \(\omega_{0}^{\mathcal{S}}\) satisfy (1.5) and (1.6), respectively. We note that by our definitions of \(\omega^{\mathcal{I}}\) and \(\omega^{\mathcal{S}}\), they solve the following coupled system: \((\omega^{\mathcal{I}},\omega^{\mathcal{S}})(t=0)=(\omega_{0}^{\mathcal{I}}, \omega_{0}^{\mathcal{S}})\) and
\[\begin{cases}\partial_{t}\omega^{\mathcal{I}}+(u^{\mathcal{L}}+u^{\mathcal{I}} +u^{\mathcal{S}})\cdot\nabla\omega^{\mathcal{I}}=\nabla(u^{\mathcal{L}}+u^{ \mathcal{I}}+u^{\mathcal{S}})\omega^{\mathcal{I}},\\ \partial_{t}\omega^{\mathcal{S}}+(u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{ S}})\cdot\nabla\omega^{\mathcal{S}}=\nabla(u^{\mathcal{L}}+u^{\mathcal{I}}+u^{ \mathcal{S}})\omega^{\mathcal{S}},\\ (u^{\mathcal{I}},u^{\mathcal{S}})=\nabla\times(-\Delta)^{-1}(\omega^{\mathcal{ I}},\omega^{\mathcal{S}}).\end{cases} \tag{3.1}\]
Recalling the definition of \(u^{\mathcal{L}}\) in (1.2), we have \(\omega^{\mathcal{L}}:=\nabla\times u^{\mathcal{L}}=0\) in \(|x|\leq 10\). Thus, setting \(\bar{u}^{*}:=u^{\mathcal{L}}+u^{\mathcal{I}}\) and \(u:=u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}\), and noticing
\[\nabla\times(u\cdot\nabla\bar{u}^{*}-\nabla\bar{u}^{*}\cdot(u- \bar{u}^{*})) =\nabla\times(\bar{u}^{*}\cdot\nabla\bar{u}^{*}+(u-\bar{u}^{*}) \cdot\nabla\bar{u}^{*}-\nabla\bar{u}^{*}\cdot(u-\bar{u}^{*}))\] \[=\nabla\times\left((\bar{\omega}^{*}\times\bar{u}^{*})+\frac{1}{ 2}\nabla|\bar{u}^{*}|^{2}+\bar{\omega}^{*}\times(u-\bar{u}^{*})\right)\qquad( \bar{\omega}^{*}:=\nabla\times\bar{u}^{*})\] \[=\bar{u}^{*}\cdot\nabla\bar{\omega}^{*}-\nabla\bar{u}^{*}\bar{ \omega}^{*}+(u-\bar{u}^{*})\cdot\nabla\bar{\omega}^{*}-\nabla(u-\bar{u}^{*}) \bar{\omega}^{*}=u\cdot\nabla\bar{\omega}^{*}-\nabla u\,\bar{\omega}^{*},\]
we can check that \(\bar{u}^{*}\) and \(u\) solve (2.8) and (2.2), respectively. On the other hand, we shall introduce pseudo-solutions \((\omega^{\mathcal{I},P},\omega^{\mathcal{S},P})\) as the solutions of
\[\begin{cases}\partial_{t}\omega^{\mathcal{I},P}+(u^{\mathcal{L}}+u^{\mathcal{ I},P})\cdot\nabla\omega^{\mathcal{I},P}=\nabla(u^{\mathcal{L}}+u^{\mathcal{ I},P})\omega^{\mathcal{I},P},\\ u^{\mathcal{I},P}=\nabla\times(-\Delta)^{-1}\omega^{\mathcal{I},P},\end{cases} \tag{3.2}\]
and
\[\begin{cases}\partial_{t}\omega^{\mathcal{S},P}+(u^{\mathcal{L}}+u^{\mathcal{ I},P}+u^{\mathcal{S},P})\cdot\nabla\omega^{\mathcal{S},P}=\nabla(u^{ \mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P})\omega^{\mathcal{S},P}+\nabla u ^{\mathcal{S},P}\omega^{\mathcal{I},P}-u^{\mathcal{S},P}\cdot\nabla\omega^{ \mathcal{I},P},\\ u^{\mathcal{S},P}=\nabla\times(-\Delta)^{-1}\omega^{\mathcal{S},P},\end{cases} \tag{3.3}\]
with initial data \(\omega^{\mathcal{I},P}(t=0)=\omega_{0}^{\mathcal{I}}\) and \(\omega^{\mathcal{S},P}(t=0)=\omega_{0}^{\mathcal{S}}\). Then we can check that \(\bar{u}:=u^{\mathcal{L}}+u^{\mathcal{I},P}\) and \(u:=u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}\) are solutions of (2.1) and (2.2), respectively. Note that \(u=u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}=u^{\mathcal{L}}+u^{ \mathcal{I}}+u^{\mathcal{S}}\), which implies \(u^{\mathcal{I},P}-u^{\mathcal{I}}=u^{\mathcal{S}}-u^{\mathcal{S},P}\). Our strategy is first to analyze pseudo-solutions \((\omega^{\mathcal{I},P},\omega^{\mathcal{S},P})\), and then to compare them with real solutions \((\omega^{\mathcal{I}},\omega^{\mathcal{S}})\) using the principles introduced in the previous section.
### Behavior of \(\omega^{\mathcal{I},P}\) and \(\nabla u^{\mathcal{I},P}\)
Abusing the notation for simplicity, we denote pseudo-solutions \(\omega^{\mathcal{I},P}\) and \(u^{\mathcal{I},P}\) by \(\omega^{\mathcal{I}}\) and \(u^{\mathcal{I}}\), respectively. We note that the symmetry \(\omega^{\mathcal{I}}(t,x_{1},x_{2},x_{3})=(\omega_{1}^{\mathcal{I}}(t,x_{2},x_{3}),0,0)\) propagates in time for the solutions to (3.2). Namely, the assumptions
\[\partial_{1}\omega_{1}^{\mathcal{I}}\equiv 0,\quad\omega_{2}^{\mathcal{I}} \equiv 0,\quad\omega_{3}^{\mathcal{I}}\equiv 0 \tag{3.4}\]
hold for all times if they are valid at \(t=0\). This is not trivial since \(u^{\mathcal{L}}\) depends on \(x_{1}\). To see this, first note that (3.4) implies
\[u_{1}^{\mathcal{I}}\equiv 0,\quad\partial_{1}u_{2}^{\mathcal{I}}\equiv 0,\quad \partial_{1}u_{3}^{\mathcal{I}}\equiv 0 \tag{3.5}\]
by the Biot-Savart law. Together this implies \(\nabla u^{\mathcal{I}}\omega^{\mathcal{I}}\equiv 0\). Denoting \(D_{t}=\partial_{t}+(u^{\mathcal{L}}+u^{\mathcal{I}})\cdot\nabla\), from
\[D_{t}(\omega_{2}^{\mathcal{I}})=(\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})_{2}- M\omega_{2}^{\mathcal{I}},\quad D_{t}(\omega_{3}^{\mathcal{I}})=(\nabla u^{ \mathcal{I}}\omega^{\mathcal{I}})_{3},\]
we see that \((\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})\equiv 0\) is consistent with \(\omega_{2}^{\mathcal{I}},\omega_{3}^{\mathcal{I}}\) being zero for all times. Lastly,
\[D_{t}(\omega_{1}^{\mathcal{I}})=(\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})_{1}+M \omega_{1}^{\mathcal{I}},\qquad D_{t}(\partial_{1}\omega_{1}^{\mathcal{I}})= \partial_{1}((\nabla u^{\mathcal{I}}\omega^{\mathcal{I}})_{1}),\]
which shows that \(\partial_{1}\omega_{1}^{\mathcal{I}}\equiv 0\) propagates in time. This shows that the equation for \(\omega_{1}^{\mathcal{I}}\) is given by
\[\partial_{t}\omega_{1}^{\mathcal{I}}+(u_{2}^{\mathcal{I}}-Mx_{2})\partial_{2} \omega_{1}^{\mathcal{I}}+u_{3}^{\mathcal{I}}\partial_{3}\omega_{1}^{\mathcal{I}}= M\omega_{1}^{\mathcal{I}}. \tag{3.6}\]
Comparing (3.6) with the 2D Euler equation, which has global well-posedness of smooth solutions, we can check that there exists a unique global smooth solution \(\omega_{1}^{\mathcal{I}}\) of (3.6) with initial data given in (1.5). Furthermore, we can also observe that \(\omega_{1}^{\mathcal{I}}\) keeps the odd symmetry in both \(x_{2}\) and \(x_{3}\).
To begin with, we observe temporal behaviors of \(\left\|\omega^{\mathcal{I}}(t,\cdot)\right\|_{L^{p}}\) for \(p\in[1,\infty]\).
**Lemma 3.1**.: _For \(t\in[0,\infty)\), \(\omega^{\mathcal{I}}(t,\cdot)=(\omega_{1}^{\mathcal{I}}(t,\cdot),0,0)\) satisfies_
\[\big{\|}\omega_{1}^{\mathcal{I}}(t,\cdot)\big{\|}_{L^{p}}=\big{\|}\omega_{0}^{ \mathcal{I}}\big{\|}_{L^{p}}e^{\frac{M(p-1)}{p}t}\left(1\leq p<\infty\right) \quad\text{and}\quad\big{\|}\omega_{1}^{\mathcal{I}}(t,\cdot)\big{\|}_{L^{ \infty}}=\big{\|}\omega_{0}^{\mathcal{I}}\big{\|}_{L^{\infty}}e^{Mt}. \tag{3.7}\]
Proof.: Taking the \(L^{2}\) inner product of (3.6) with \(\omega_{1}^{\mathcal{I}}|\omega_{1}^{\mathcal{I}}|^{p-2}\), we have
\[\frac{1}{p}\frac{d}{dt}\big{\|}\omega_{1}^{\mathcal{I}}\big{\|}_{L^{p}}^{p}=- \int(u_{2}^{\mathcal{I}}\partial_{2}\omega_{1}^{\mathcal{I}}+u_{3}^{\mathcal{ I}}\partial_{3}\omega_{1}^{\mathcal{I}})\omega_{1}^{\mathcal{I}}|\omega_{1}^{ \mathcal{I}}|^{p-2}+\int Mx_{2}\partial_{2}\omega_{1}^{\mathcal{I}}\omega_{1}^ {\mathcal{I}}|\omega_{1}^{\mathcal{I}}|^{p-2}+M\big{\|}\omega_{1}^{\mathcal{I} }\big{\|}_{L^{p}}^{p}.\]
After the integration by parts, the first integral vanishes since \(u_{1}^{\mathcal{I}}=0\) and \(\nabla\cdot u^{\mathcal{I}}=0\), and the second integral is equal to \(-\frac{M}{p}\big\|\omega_{1}^{\mathcal{I}}\big\|_{L^{p}}^{p}\). Thus we have
\[\frac{d}{dt}\big\|\omega_{1}^{\mathcal{I}}\big\|_{L^{p}}=\frac{M(p-1)}{p}\big\|\omega_{1}^{\mathcal{I}}\big\|_{L^{p}},\]
which gives the first identity in (3.7). Passing \(p\to\infty\), we can also obtain the \(L^{\infty}\) estimate.
Next, using (3.7), we estimate \(\big{\|}u^{\mathcal{I}}(t,\cdot)\big{\|}_{H^{s}}\) with \(s>\frac{5}{2}\) for \(t\in[0,T_{M}]\) with \(T_{M}=\frac{\log(M+1)}{M}\). This estimate will be used in Section 3.3.
**Lemma 3.2**.: _For \(s>\frac{5}{2}\) and \(t\in[0,T_{M}]\), \(\big{\|}u^{\mathcal{I}}(t,\cdot)\big{\|}_{H^{s}(\mathbb{T}^{3})}\lesssim e^{CMt}\)._
Proof.: Fix \(s>\frac{5}{2}\). Noticing that \(u^{\mathcal{I}}\) solves
\[\begin{cases}\partial_{t}u^{\mathcal{I}}+(u^{\mathcal{L}}+u^{\mathcal{I}}) \cdot\nabla u^{\mathcal{I}}+\nabla p^{\mathcal{I}}=0,\\ \nabla\cdot u^{\mathcal{I}}=0.\end{cases}\]
and denoting \(J=(I-\Delta)^{\frac{1}{2}}\), we have
\[\frac{1}{2}\frac{d}{dt}\big{\|}u^{\mathcal{I}}\big{\|}_{H^{s}}^{2}=-\int J^{s }(u^{\mathcal{L}}\cdot\nabla u^{\mathcal{I}})\cdot J^{s}u^{\mathcal{I}}-\int J ^{s}(u^{\mathcal{I}}\cdot\nabla u^{\mathcal{I}})\cdot J^{s}u^{\mathcal{I}}- \int J^{s}\nabla p^{\mathcal{I}}\cdot J^{s}u^{\mathcal{I}}=\mathrm{I}+ \mathrm{II}+\mathrm{III}.\]
\(\mathrm{III}=0\) follows from \(\nabla\cdot u^{\mathcal{I}}=0\). Noticing \(\int\big{(}u^{\mathcal{L}}\cdot\nabla J^{s}u^{\mathcal{I}}\big{)}\cdot J^{s}u ^{\mathcal{I}}=\int\big{(}u^{\mathcal{I}}\cdot\nabla J^{s}u^{\mathcal{I}} \big{)}\cdot J^{s}u^{\mathcal{I}}=0\), we obtain
\[\mathrm{I}=-\int\big(\big[J^{s},u^{\mathcal{L}}\cdot\big]\nabla u^{\mathcal{I}}\big)\cdot J^{s}u^{\mathcal{I}}\quad\text{and}\quad\mathrm{II}=-\int\big(\big[J^{s},u^{\mathcal{I}}\cdot\big]\,\nabla u^{\mathcal{I}}\big)\cdot J^{s}u^{\mathcal{I}},\]
where we recall that \([\cdot,\cdot]\) denotes the commutator. Using \(\big{\|}u^{\mathcal{L}}\big{\|}_{H^{s}(\mathbb{T}^{3})}\lesssim M\) and the Sobolev embedding \(H^{s}(\mathbb{T}^{3})\hookrightarrow W^{1,\infty}(\mathbb{T}^{3})\), we obtain
\[\mathrm{I}\lesssim\big{\|}\big{[}J^{s},u^{\mathcal{L}}\cdot\big{]}\,\nabla u ^{\mathcal{I}}\big{\|}_{L^{2}}\big{\|}J^{s}u^{\mathcal{I}}\big{\|}_{L^{2}} \lesssim M\big{\|}u^{\mathcal{I}}\big{\|}_{H^{s}}^{2},\quad\mathrm{II} \lesssim\big{\|}\big{[}J^{s},u^{\mathcal{I}}\cdot\big{]}\,\nabla u^{\mathcal{ I}}\big{\|}_{L^{2}}\big{\|}J^{s}u^{\mathcal{I}}\big{\|}_{L^{2}}\lesssim \big{\|}\nabla u^{\mathcal{I}}\big{\|}_{L^{\infty}}\big{\|}u^{\mathcal{I}} \big{\|}_{H^{s}}^{2},\]
which lead to
\[\frac{d}{dt}\big{\|}u^{\mathcal{I}}\big{\|}_{H^{s}}\lesssim\big{(}M+\big{\|} \nabla u^{\mathcal{I}}\big{\|}_{L^{\infty}}\big{)}\,\big{\|}u^{\mathcal{I}} \big{\|}_{H^{s}}.\]
According to the Calderon-Zygmund theory, we have
\[\big{\|}\nabla u^{\mathcal{I}}\big{\|}_{L^{\infty}(\mathbb{T}^{3})}\lesssim \big{\|}\omega^{\mathcal{I}}\big{\|}_{L^{\infty}(\mathbb{T}^{3})}\log\left(10+ \frac{\big{\|}u^{\mathcal{I}}\big{\|}_{H^{s}(\mathbb{T}^{3})}}{\|\omega^{ \mathcal{I}}\|_{L^{\infty}(\mathbb{T}^{3})}}\right)\lesssim e^{Mt}\log\left(10+ \big{\|}u^{\mathcal{I}}\big{\|}_{H^{s}}\right),\]
where we used (3.7) in the last inequality. This gives
\[\frac{d}{dt}\log\left(10+\big{\|}u^{\mathcal{I}}\big{\|}_{H^{s}}\right)\lesssim M +e^{Mt}\log\left(10+\big{\|}u^{\mathcal{I}}\big{\|}_{H^{s}}\right).\]
Using Gronwall's inequality, we obtain the desired estimate on \([0,T_{M}]\).
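For completeness (a routine step): since \(\int_{0}^{t}e^{Ms}\,ds\leq\frac{e^{MT_{M}}-1}{M}=1\) for \(t\in[0,T_{M}]\), Gronwall's inequality applied to \(z(t):=\log\big(10+\big\|u^{\mathcal{I}}(t)\big\|_{H^{s}}\big)\) yields
\[z(t)\leq e^{C}\big(z(0)+CMt\big),\]
and hence \(\big\|u^{\mathcal{I}}(t,\cdot)\big\|_{H^{s}}\lesssim e^{CMt}\) after enlarging \(C\).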
Our next aim is to estimate the maximum of \(\nabla u^{\mathcal{I}}(t,\cdot)\) in a small region near the origin up to time \(T_{M}=\frac{\log(M+1)}{M}\). The corresponding velocity \(u^{\mathcal{I}}=(0,u_{2}^{\mathcal{I}},u_{3}^{\mathcal{I}})\) to \(\omega^{\mathcal{I}}=(\omega_{1}^{\mathcal{I}},0,0)\) has explicit formula:
\[\begin{split} u_{2}^{\mathcal{I}}(t,x_{2},x_{3})=\frac{1}{2\pi} \sum_{n=(n_{2},n_{3})\in\mathbb{Z}^{2}}\iint_{[-L,L)^{2}}\frac{-x_{3}+y_{3}+2 Ln_{3}}{|x-y-2Ln|^{2}}\omega_{1}^{\mathcal{I}}(t,y_{2},y_{3})dy_{2}dy_{3},\\ u_{3}^{\mathcal{I}}(t,x_{2},x_{3})=\frac{1}{2\pi}\sum_{n=(n_{2},n _{3})\in\mathbb{Z}^{2}}\iint_{[-L,L)^{2}}\frac{x_{2}-y_{2}-2Ln_{2}}{|x-y-2Ln|^{2}} \omega_{1}^{\mathcal{I}}(t,y_{2},y_{3})dy_{2}dy_{3}.\end{split} \tag{3.8}\]
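As a purely illustrative aside (not used anywhere in the proofs), the periodized formula (3.8) can be evaluated numerically by truncating the image sum to \(|n_{2}|,|n_{3}|\leq N\); for the odd-symmetric \(\omega_{1}^{\mathcal{I}}\) considered here the far images largely cancel, so a small \(N\) already gives a reasonable approximation. The helper below and its grid convention are our own assumptions, not part of the paper.

```python
import numpy as np

def velocity_from_vorticity(omega, L, x, N=3):
    """Approximate (u_2, u_3) at x = (x_2, x_3) from samples of omega_1 on a
    uniform M x M grid of [-L, L)^2, using (3.8) with the image sum truncated
    to |n_2|, |n_3| <= N (midpoint quadrature)."""
    M = omega.shape[0]
    h = 2 * L / M
    grid = -L + h * (np.arange(M) + 0.5)                 # cell midpoints
    y2, y3 = np.meshgrid(grid, grid, indexing="ij")      # omega indexed as [i2, i3]
    u2 = 0.0
    u3 = 0.0
    for n2 in range(-N, N + 1):
        for n3 in range(-N, N + 1):
            d2 = x[0] - y2 - 2 * L * n2
            d3 = x[1] - y3 - 2 * L * n3
            r2 = d2 ** 2 + d3 ** 2
            r2 = np.where(r2 == 0.0, np.inf, r2)         # skip a coincident grid point
            u2 += np.sum(-d3 / r2 * omega) * h * h
            u3 += np.sum(d2 / r2 * omega) * h * h
    return np.array([u2, u3]) / (2 * np.pi)
```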
Using (3.8), we can prove a log-Lipschitz estimate of \(u^{\mathcal{I}}\).
**Lemma 3.3**.: _Let \(x,x^{\prime}\in\mathbb{T}^{2}:=[-L,L)^{2}\). Then we have_
\[\left|u^{\mathcal{I}}(t,x)-u^{\mathcal{I}}(t,x^{\prime})\right|\lesssim e^{tM} |x-x^{\prime}|\left(1+\log\frac{3L}{|x-x^{\prime}|}\right). \tag{3.9}\]
Note that the argument of the logarithm in (3.9) is always greater than \(1\) because \(|x-x^{\prime}|\leq 2\sqrt{2}L\). We omit the proof since it follows directly from the standard log-Lipschitz estimate for 2d Euler (see for instance [11]).
Now we consider a characteristic curve \(A^{\mathcal{I}}(t,a_{2},a_{3})=\left(A^{\mathcal{I}}_{2}(t,a_{2},a_{3}),A^{ \mathcal{I}}_{3}(t,a_{2},a_{3})\right):[0,\infty)\times\mathbb{T}^{2}\to \mathbb{T}^{2}\) of (3.6) defined by \(A^{\mathcal{I}}(0,a_{2},a_{3})=(a_{2},a_{3})\) and
\[\left\{\begin{aligned} &\frac{d}{dt}A^{\mathcal{I}}_{2}(t,a_{2},a_{3})=u^{ \mathcal{I}}_{2}(t,A^{\mathcal{I}}(t,a_{2},a_{3}))-MA^{\mathcal{I}}_{2}(t,a_{ 2},a_{3}),\\ &\frac{d}{dt}A^{\mathcal{I}}_{3}(t,a_{2},a_{3})=u^{\mathcal{I}}_{ 3}(t,A^{\mathcal{I}}(t,a_{2},a_{3})).\end{aligned}\right. \tag{3.10}\]
Evaluating along this curve, we have from (3.6)
\[\omega^{\mathcal{I}}_{1}(t,A^{\mathcal{I}}_{2}(t,a),A^{\mathcal{I}}_{3}(t,a) )=e^{Mt}\omega^{\mathcal{I}}_{1,0}(a_{2},a_{3}). \tag{3.11}\]
We need the following two lemmas for \(A^{\mathcal{I}}\).
**Lemma 3.4**.: _The determinant of \(\nabla_{a}A^{\mathcal{I}}\) with \(a=(a_{2},a_{3})\) satisfies_
\[\det(\nabla_{a}A^{\mathcal{I}})=e^{-Mt}. \tag{3.12}\]
Proof.: Using \(u^{\mathcal{I}}_{1}=0\) and \(\nabla\cdot u^{\mathcal{I}}=0\), we compute
\[\frac{d}{dt}\det(\nabla_{a}A^{\mathcal{I}})(t,a)=(\partial_{2}u^{\mathcal{I}} _{2}(t,A^{\mathcal{I}}(t,a))-M+\partial_{3}u^{\mathcal{I}}_{3}(t,A^{\mathcal{ I}}(t,a)))\det(\nabla_{a}A^{\mathcal{I}})(t,a)=-M\det(\nabla_{a}A^{\mathcal{I}})(t,a). \tag{3.13}\]
Since \(\det(\nabla_{a}A^{\mathcal{I}})(0,a)=1\), we obtain (3.12).
**Lemma 3.5**.: _Let \(1\leq a_{2},a_{3}\leq 2\). Then for \(0\leq t\leq T_{M}\), there exist constants \(C_{1},\,C_{2}>0\), \(C_{3}\geq 1\) independent of \(M\) such that_
\[C_{1}e^{-C_{3}Mt}\leq A^{\mathcal{I}}_{2}(t,a_{2},a_{3})\leq C_{2}e^{-C_{3}^{ -1}Mt}, \tag{3.14}\]
_and_
\[C_{1}\leq A^{\mathcal{I}}_{3}(t,a_{2},a_{3})\leq C_{2}. \tag{3.15}\]
Proof.: Note that \(u^{\mathcal{I}}_{2}(t,0,A^{\mathcal{I}}_{3}(t,a_{2},a_{3}))=0\) for all \(t\) by odd symmetry of \(\omega^{\mathcal{I}}_{1}\), which gives
\[\frac{d}{dt}A^{\mathcal{I}}_{2}(t,a_{2},a_{3})=u^{\mathcal{I}}_{2}(t,A^{ \mathcal{I}}(t,a_{2},a_{3}))-u^{\mathcal{I}}_{2}(t,0,A^{\mathcal{I}}_{3}(t,a_ {2},a_{3}))-MA^{\mathcal{I}}_{2}(t,a_{2},a_{3}). \tag{3.16}\]
To begin with, we show the upper bound of \(A^{\mathcal{I}}_{2}(t,a_{2},a_{3})\) in (3.14). Applying (3.9) and noticing \(A^{\mathcal{I}}_{2}(t,a_{2},a_{3})>0\), we have
\[\frac{d}{dt}A^{\mathcal{I}}_{2}(t,a_{2},a_{3})\leq Ce^{tM}A^{\mathcal{I}}_{2}( t,a_{2},a_{3})\left(1+\log\frac{3L}{A^{\mathcal{I}}_{2}(t,a_{2},a_{3})} \right)-MA^{\mathcal{I}}_{2}(t,a_{2},a_{3}),\]
which leads to
\[\frac{d}{dt}\left(\log A^{\mathcal{I}}_{2}(t,a_{2},a_{3})-(1+\log 3L)\right) \leq-Ce^{tM}\left(\log A^{\mathcal{I}}_{2}(t,a_{2},a_{3})-(1+\log 3L) \right)-M.\]
From
\[\frac{d}{dt}\left(\left(\log A^{\mathcal{I}}_{2}(t,a_{2},a_{3})-(1+\log 3L) \right)\exp\left(\int_{0}^{t}Ce^{sM}ds\right)\right)\leq-M\exp\left(\int_{0}^{ t}Ce^{sM}ds\right)\leq-M,\]
we see that for \(0\leq t\leq T_{M}\),
\[\log A^{\mathcal{I}}_{2}(t,a_{2},a_{3}) \leq 1+\log 3L+\exp\left(-\int_{0}^{t}Ce^{sM}ds\right)\left(\log a _{2}-(1+\log 3L)-Mt\right)\] \[\leq 1+\log 3L+\exp(-C)\left(\log a_{2}-(1+\log 3L)-Mt\right), \tag{3.17}\]
where we used \(\log a_{2}-(1+\log 3L)-Mt<0\) and \(t\leq T_{M}=\frac{\log(M+1)}{M}\). Since \(a_{2}\leq 2\), we arrive at
\[A_{2}^{\mathcal{I}}(t,a_{2},a_{3})\lesssim\exp\left(\exp(-C)(\log 2-(1+\log 3L)-Mt)\right)\]
for \(t\in[0,T_{M}]\). We take \(C_{3}:=\exp(C)\), which satisfies \(C_{3}\geq 1\).
Now we prove the lower bound of \(A_{2}^{\mathcal{I}}(t,a_{2},a_{3})\) in (3.14). Recalling (3.16) and (3.9), we have
\[-\frac{d}{dt}A_{2}^{\mathcal{I}}(t,a_{2},a_{3}) =-u_{2}^{\mathcal{I}}(t,A^{\mathcal{I}}(t,a_{2},a_{3}))+u_{2}^{ \mathcal{I}}(t,0,A_{3}^{\mathcal{I}}(t,a_{2},a_{3}))+MA_{2}^{\mathcal{I}}(t,a_ {2},a_{3})\] \[\leq Ce^{tM}A_{2}^{\mathcal{I}}(t,a_{2},a_{3})\left(1+\log\frac{ 3L}{A_{2}^{\mathcal{I}}(t,a_{2},a_{3})}\right)+MA_{2}^{\mathcal{I}}(t,a_{2},a _{3}),\]
which yields
\[\frac{d}{dt}\left(1+\log\frac{3L}{A_{2}^{\mathcal{I}}(t,a_{2},a_{3})}\right) \leq Ce^{tM}\left(1+\log\frac{3L}{A_{2}^{\mathcal{I}}(t,a_{2},a_{3})}\right)+M.\]
Noticing
\[\frac{d}{dt}\left(\left(1+\log\frac{3L}{A_{2}^{\mathcal{I}}(t,a_{2},a_{3})} \right)\exp\left(-\int_{0}^{t}Ce^{sM}ds\right)\right)\leq M\exp\left(-\int_{0} ^{t}Ce^{sM}ds\right)\leq M,\]
we see that for \(0\leq t\leq T_{M}\),
\[\log\frac{3L}{A_{2}^{\mathcal{I}}(t,a_{2},a_{3})}\leq-1+\exp\left(\int_{0}^{t }Ce^{sM}ds\right)\left(1+\log\frac{3L}{a_{2}}+Mt\right)\leq-1+C_{3}\left(1+ \log\frac{3L}{a_{2}}+Mt\right) \tag{3.17}\]
where \(C_{3}:=\exp(C)\). Since \(a_{2}\geq 1\), we have \(A_{2}^{\mathcal{I}}(t,a_{2},a_{3})\geq 3L\exp\left(1-C_{3}(1+\log 3L+Mt)\right).\) To obtain the bounds of \(A_{3}^{\mathcal{I}}(t,a_{2},a_{3})\) in (3.15), we note that \(u_{3}^{\mathcal{I}}(t,A_{2}^{\mathcal{I}}(t,a_{2},a_{3}),0)=0\) (by the odd symmetry of \(\omega_{1}^{\mathcal{I}}\)) and \(A_{3}^{\mathcal{I}}(t,a_{2},a_{3})>0\) for all \(t>0\), and proceed as in the case of \(A_{2}^{\mathcal{I}}(t,a_{2},a_{3})\).
Henceforth, let \(C_{i}\) (\(i=1,2,3\)) denote the constants in Lemma 3.5. Now we are ready to estimate \(\left\|\nabla u^{\mathcal{I}}(t,\cdot)\right\|_{L^{\infty}}\) near the origin.
**Lemma 3.6**.: _Let \(\delta:=\frac{C_{1}}{10(M+1)^{C_{3}}}\). Then for \(t\in[0,T_{M}]\), we have_
\[\left\|\nabla u^{\mathcal{I}}(t,\cdot)\right\|_{L^{\infty}(B_{0}(\delta))} \lesssim e^{-C_{3}^{-1}Mt}. \tag{3.18}\]
_Remark 3.7_.: We note that \(\delta\leq\frac{C_{1}e^{-C_{3}Mt}}{10}\) for \(0\leq t\leq T_{M}\) by the definitions of \(\delta\) and \(T_{M}\).
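Indeed, for \(0\leq t\leq T_{M}\) one has \(e^{-C_{3}Mt}\geq e^{-C_{3}MT_{M}}=(M+1)^{-C_{3}}\), so that
\[\delta=\frac{C_{1}}{10(M+1)^{C_{3}}}\leq\frac{C_{1}e^{-C_{3}Mt}}{10}.\]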
Proof.: Let \(x=(x_{2},x_{3})\in B_{0}(\delta)\). We recall explicit formulas
\[\begin{split}\partial_{2}u_{2}^{\mathcal{I}}(t,x_{2},x_{3})=-\partial_{3}u_{3}^{\mathcal{I}}(t,x_{2},x_{3})\\ =\frac{1}{\pi}\sum_{n=(n_{2},n_{3})\in\mathbb{Z}^{2}}p.v.\iint_{[-L,L)^{2}}\frac{(x_{2}-y_{2}-2Ln_{2})(x_{3}-y_{3}-2Ln_{3})}{|x-y-2Ln|^{4}}\omega_{1}^{\mathcal{I}}(t,y_{2},y_{3})dy_{2}dy_{3}\end{split} \tag{3.19}\]
Moreover, since \(\omega_{1}^{\mathcal{I}}=0\) in \([0,T_{M}]\times B_{0}(\delta)\) by Lemma 3.5,
\[\begin{split}\partial_{2}u_{3}^{\mathcal{I}}(t,x_{2},x_{3})=- \partial_{3}u_{2}^{\mathcal{I}}(t,x_{2},x_{3})\\ =\frac{1}{2\pi}\sum_{n=(n_{2},n_{3})\in\mathbb{Z}^{2}}p.v.\iint_{ [-L,L)^{2}}\frac{(x_{2}-y_{2}-2Ln_{2})^{2}-(x_{3}-y_{3}-2Ln_{3})^{2}}{|x-y-2 Ln|^{4}}\omega_{1}^{\mathcal{I}}(t,y_{2},y_{3})dy_{2}dy_{3}\end{split} \tag{3.20}\]
in \([0,T_{M}]\times B_{0}(\delta)\). (See e.g. [11] for derivations of (3.19) and (3.20).) To begin with, we estimate \(\left\|\partial_{2}u_{2}^{\mathcal{I}}\right\|_{L^{\infty}(B_{0}(\delta))}= \left\|\partial_{3}u_{3}^{\mathcal{I}}\right\|_{L^{\infty}(B_{0}(\delta))}\).
By the odd symmetry of \(\omega_{1}^{\mathcal{I}}\) in both \(x_{2}\) and \(x_{3}\), (3.19) yields
\[\partial_{2}u_{2}^{\mathcal{I}}(t,x_{2},x_{3})\] \[\approx\sum_{n\in\mathbb{Z}^{2}}\iint_{\{0\leq y_{2},y_{3}\leq L \}}\left[\frac{(x_{2}-y_{2}-2Ln_{2})(x_{3}-y_{3}-2Ln_{3})}{|x-y-2Ln|^{4}}- \frac{(x_{2}+y_{2}-2Ln_{2})(x_{3}-y_{3}-2Ln_{3})}{\left((x_{2}+y_{2}-2Ln_{2})^ {2}+(x_{3}-y_{3}-2Ln_{3})^{2}\right)^{2}}\right.\] \[\left.-\frac{(x_{2}-y_{2}-2Ln_{2})(x_{3}+y_{3}-2Ln_{3})}{\left((x _{2}-y_{2}-2Ln_{2})^{2}+(x_{3}+y_{3}-2Ln_{3})^{2}\right)^{2}}+\frac{(x_{2}+y_ {2}-2Ln_{2})(x_{3}+y_{3}-2Ln_{3})}{\left((x_{2}+y_{2}-2Ln_{2})^{2}+(x_{3}+y_ {3}-2Ln_{3})^{2}\right)^{2}}\right]\omega_{1}^{\mathcal{I}}(t,y_{2},y_{3})dy_{ 2}y_{3}\] \[=\mathrm{I}+\mathrm{II}+\mathrm{III}+\mathrm{IV}.\]
To estimate \(\mathrm{I}\), we divide it into \(\mathrm{I}_{0}\) and \(\mathrm{I}_{\neq 0}\) which correspond to the term with \(n=(0,0)\) and the sum of all other terms with \(n\neq(0,0)\), respectively. For \(\mathrm{I}_{0}\), we make a change of variables \((y_{2},y_{3})\mapsto(a_{2},a_{3})\) by \(y=A^{\mathcal{I}}(t,a)\) and use (3.12) to have
\[\mathrm{I}_{0}\approx\iint_{\{0\leq a_{2},a_{3}\leq L\}}\frac{(x_{2}-A_{2}^{ \mathcal{I}}(t,a))(x_{3}-A_{3}^{\mathcal{I}}(t,a))}{|x-A^{\mathcal{I}}(t,a)|^{ 4}}\omega_{1}^{\mathcal{I}}(t,A_{2}^{\mathcal{I}}(t,a),A_{3}^{\mathcal{I}}(t, a))e^{-Mt}da_{2}da_{3}.\]
By (3.11) and the assumption that \(\omega_{1,0}^{\mathcal{I}}\) is supported on \(\{1\leq a_{2},a_{3}\leq 2\}\) (see (1.5)), we obtain
\[\mathrm{I}_{0}\leq\left\|\omega_{1,0}^{\mathcal{I}}\right\|_{L^{\infty}}\iint _{\{1\leq a_{2},a_{3}\leq 2\}}\frac{\left|(x_{2}-A_{2}^{\mathcal{I}}(t,a))(x_{3}-A_{3}^{ \mathcal{I}}(t,a))\right|}{|x-A^{\mathcal{I}}(t,a)|^{4}}da_{2}da_{3}.\]
Since \(x\in B_{0}(\delta)\), Lemma 3.5 and Remark 3.7 imply
\[\left\{\begin{aligned} \frac{9C_{1}e^{-C_{3} Mt}}{10}&\leq A_{2}^{\mathcal{I}}(t,a)-|x_{2}|\leq A_{2}^{\mathcal{I}}(t,a)-x_{2}\leq A_{2}^{ \mathcal{I}}(t,a)+|x_{2}|\leq\frac{11C_{2}e^{-C_{3}^{-1}Mt}}{10},\\ \frac{9C_{1}}{10}&\leq A_{3}^{\mathcal{I}}(t,a)-|x_{3 }|\leq A_{3}^{\mathcal{I}}(t,a)-x_{3}\leq A_{3}^{\mathcal{I}}(t,a)+|x_{3}|\leq \frac{11C_{2}}{10}.\end{aligned}\right. \tag{3.21}\]
Consequently,
\[\frac{\left|(x_{2}-A_{2}^{\mathcal{I}}(t,a))(x_{3}-A_{3}^{\mathcal{I}}(t,a)) \right|}{|x-A^{\mathcal{I}}(t,a)|^{4}}\leq\frac{\left|(x_{2}-A_{2}^{\mathcal{ I}}(t,a))(x_{3}-A_{3}^{\mathcal{I}}(t,a))\right|}{(x_{3}-A_{3}^{\mathcal{I}}(t,a))^{4}} \lesssim e^{-C_{3}^{-1}Mt},\]
which gives \(\mathrm{I}_{0}\lesssim e^{-C_{3}^{-1}Mt}\). To estimate \(\mathrm{I}_{\neq 0}\), denoting \(\tilde{n}:=(-n_{2},n_{3})\), we compute
\[\frac{(x_{2}-y_{2}-2Ln_{2})(x_{3}-y_{3}-2Ln_{3})}{|x-y-2Ln|^{4}}+\frac{(x_{2}-y_{2}+2Ln_{2})(x_{3}-y_{3}-2Ln_{3})}{|x-y-2L\tilde{n}|^{4}}\] \[=\frac{(x_{3}-y_{3}-2Ln_{3})\left((x_{2}-y_{2})(|x-y-2L\tilde{n}|^{4}+|x-y-2Ln|^{4})-2Ln_{2}(|x-y-2L\tilde{n}|^{4}-|x-y-2Ln|^{4})\right)}{|x-y-2Ln|^{4}|x-y-2L\tilde{n}|^{4}}.\]
Since \(|x-y-2L\tilde{n}|^{4}-|x-y-2Ln|^{4}=16Ln_{2}(x_{2}-y_{2})\left((x_{2}-y_{2})^{2}+4L^{2}n_{2}^{2}+(x_{3}-y_{3}-2Ln_{3})^{2}\right)\) and \(|x-y|\leq|x|+|y|\leq\delta+L\) implies
\[|x-y-2Ln|,\,|x-y-2L\tilde{n}|\gtrsim|n|, \tag{3.22}\]
we have
\[\left|\frac{(x_{2}-y_{2}-2Ln_{2})(x_{3}-y_{3}-2Ln_{3})}{|x-y-2Ln|^{4}}+\frac{( x_{2}-y_{2}+2Ln_{2})(x_{3}-y_{3}-2Ln_{3})}{|x-y-2L\tilde{n}|^{4}}\right|\lesssim\frac{|x_{2}-y_ {2}|}{|n|^{3}}.\]
Hence, making again the change of variables \((y_{2},y_{3})\mapsto(a_{2},a_{3})\) by \(y=A^{\mathcal{I}}(t,a)\) and using (3.21), we proceed in the same argument to obtain \(\mathrm{I}_{\neq 0}\lesssim e^{-C_{3}^{-1}Mt}.\) In a similar manner, we can also show that all of \(\mathrm{II}\), \(\mathrm{III}\), and \(\mathrm{IV}\) have the same upper bound, which implies \(\left\|\partial_{2}u_{2}^{\mathcal{I}}\right\|_{L^{\infty}(B_{0}(\delta))} \lesssim e^{-C_{3}^{-1}Mt}.\)
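For the reader's convenience, the algebraic identity used in the estimate of \(\mathrm{I}_{\neq 0}\) above is a difference of squares: writing \(A^{2}:=|x-y-2L\tilde{n}|^{2}\) and \(B^{2}:=|x-y-2Ln|^{2}\), one has

\[A^{4}-B^{4}=(A^{2}-B^{2})(A^{2}+B^{2})=8Ln_{2}(x_{2}-y_{2})\cdot 2\left((x_{2}-y_{2})^{2}+4L^{2}n_{2}^{2}+(x_{3}-y_{3}-2Ln_{3})^{2}\right),\]

which is exactly the expression \(16Ln_{2}(x_{2}-y_{2})\left((x_{2}-y_{2})^{2}+4L^{2}n_{2}^{2}+(x_{3}-y_{3}-2Ln_{3})^{2}\right)\) quoted before (3.22).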
Next, we estimate \(\left\|\partial_{3}u_{2}^{\mathcal{I}}\right\|_{L^{\infty}(B_{0}(\delta))}\left(= \left\|\partial_{2}u_{3}^{\mathcal{I}}\right\|_{L^{\infty}(B_{0}(\delta))}\right)\). By the odd symmetry of \(\omega_{1}^{\mathcal{I}}\) in both \(x_{2}\) and \(x_{3}\), (3.20) yields
\[\partial_{3}u_{2}^{\mathcal{I}}(t,x_{2},x_{3})\] \[\approx\sum_{n\in\mathbb{Z}^{2}}\iint_{\{0\leq y_{2},y_{3}\leq L \}}\left[\frac{(x_{2}-y_{2}-2Ln_{2})^{2}-(x_{3}-y_{3}-2Ln_{3})^{2}}{|x-y-2Ln |^{4}}-\frac{(x_{2}+y_{2}-2Ln_{2})^{2}-(x_{3}-y_{3}-2Ln_{3})^{2}}{((x_{2}+y_{2 }-2Ln_{2})^{2}+(x_{3}-y_{3}-2Ln_{3})^{2})^{2}}\right.\] \[\left.-\frac{(x_{2}-y_{2}-2Ln_{2})^{2}-(x_{3}+y_{3}-2Ln_{3})^{2} }{\left((x_{2}-y_{2}-2Ln_{2})^{2}+(x_{3}+y_{3}-2Ln_{3})^{2}\right)^{2}}+ \frac{(x_{2}+y_{2}-2Ln_{2})^{2}-(x_{3}+y_{3}-2Ln_{3})^{2}}{\left((x_{2}+y_{2} -2Ln_{2})^{2}+(x_{3}+y_{3}-2Ln_{3})^{2}\right)^{2}}\right]\omega_{1}^{ \mathcal{I}}(t,y_{2},y_{3})dy_{2}y_{3}\] \[=\mathrm{I}+\Pi+\mathrm{III}+\mathrm{IV}.\]
for \((t,x)\in[0,T_{M}]\times B_{0}(\delta)\). To bound \(\mathrm{I}+\mathrm{II}\), we again divide it into \((\mathrm{I}+\mathrm{II})_{0}\) and \((\mathrm{I}+\mathrm{II})_{\neq 0}\) which correspond to the term with \(n=(0,0)\) and the sum of all other terms with \(n\neq(0,0)\), respectively. For \((\mathrm{I}+\mathrm{II})_{0}\), we again make a change of variables \((y_{2},y_{3})\mapsto(a_{2},a_{3})\) by \(y=A^{\mathcal{I}}(t,a)\) and use (3.12) to have
\[(\mathrm{I}+\mathrm{II})_{0}=\frac{1}{2\pi}\iint_{\{0\leq a_{2},a_{3}\leq L\}} K_{x_{2},x_{3}}(A_{2}^{\mathcal{I}}(t,a),A_{3}^{\mathcal{I}}(t,a))\,\omega_{1}^{ \mathcal{I}}(t,A_{2}^{\mathcal{I}}(t,a),A_{3}^{\mathcal{I}}(t,a))e^{-Mt}da_{2}da _{3},\]
where
\[K_{x_{2},x_{3}}(z_{2},z_{3})=\frac{\left((x_{2}-z_{2})^{2}-(x_{3}-z_{3})^{2} \right)\left((x_{2}+z_{2})^{2}+(x_{3}-z_{3})^{2}\right)^{2}-\left((x_{2}+z_{2} )^{2}-(x_{3}-z_{3})^{2}\right)|x-z|^{4}}{|x-z|^{4}\left((x_{2}+z_{2})^{2}+(x_{ 3}-z_{3})^{2}\right)^{2}}\]
for \(z=(z_{2},z_{3})\). From (3.11) and the assumption that \(\omega_{1,0}^{\mathcal{I}}\) is supported on \(\{1\leq a_{2},a_{3}\leq 2\}\), we have
\[(\mathrm{I}+\mathrm{II})_{0}\leq\left\|\omega_{1,0}^{\mathcal{I}}\right\|_{L^{ \infty}}\iint_{\{1\leq a_{2},a_{3}\leq 2\}}\left|K_{x_{2},x_{3}}(A_{2}^{\mathcal{I}}(t,a),A_{3}^{ \mathcal{I}}(t,a))\right|da_{2}da_{3}.\]
Note that the numerator of \(K_{x_{2},x_{3}}(z_{2},z_{3})\) can be written as
\[\sum_{\begin{subarray}{c}\alpha_{1},\alpha_{2},\alpha_{4}\geq 0,\alpha_{3} \geq 1\\ \alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}=6,\alpha_{3}\geq 1\end{subarray}}C_{( \alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})}x_{2}^{\alpha_{1}}x_{3}^{\alpha_ {2}}z_{2}^{\alpha_{3}}z_{3}^{\alpha_{4}} \tag{3.23}\]
for some constants \(C_{(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})}\)'s. Moreover, the denominator of \(K_{x_{2},x_{3}}(z_{2},z_{3})\) is bounded below by \((x_{3}-z_{3})^{8}\). Thus, using Lemma 3.5 and Remark 3.7, we have
\[\left|K_{x_{2},x_{3}}(A_{2}^{\mathcal{I}}(t,a),A_{3}^{\mathcal{I} }(t,a))\right| \lesssim\sum_{\begin{subarray}{c}\alpha_{1},\alpha_{2},\alpha_{4} \geq 0,\alpha_{3}\geq 1\\ \alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}=6,\alpha_{3}\geq 1\end{subarray}}\frac{ \left|x_{2}^{\alpha_{1}}x_{3}^{\alpha_{2}}(A_{2}^{\mathcal{I}})^{\alpha_{3}}(A_ {3}^{\mathcal{I}})^{\alpha_{4}}\right|}{(x_{3}-A_{3}^{\mathcal{I}}(t,a))^{8}}\] \[\lesssim\sum_{\begin{subarray}{c}\alpha_{1},\alpha_{2},\alpha_{4} \geq 0,\alpha_{3}\geq 1\\ \alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}=6,\alpha_{3}\geq 1\end{subarray}}\left(e^{-C_{3}Mt} \right)^{\alpha_{1}}\left(e^{-C_{3}Mt}\right)^{\alpha_{2}}\left(e^{-C_{3}^{-1}Mt }\right)^{\alpha_{3}}\lesssim e^{-C_{3}^{-1}Mt}\]
for \(t\in[0,T_{M}]\). In the last line, we used \(\alpha_{3}\geq 1\). This gives \((\mathrm{I}+\mathrm{II})_{0}\lesssim e^{-C_{3}^{-1}Mt}\). For \((\mathrm{I}+\mathrm{II})_{\neq 0}\), we recall (3.23) and (3.22), which give
\[\left|K_{x_{2}-2Ln_{2},x_{3}-2Ln_{3}}(y_{2},y_{3})\right|\lesssim\sum_{ \begin{subarray}{c}\alpha_{1},\alpha_{2},\alpha_{4}\geq 0,\alpha_{3}\geq 1\\ \alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}=6,\alpha_{3}\geq 1\end{subarray}} \frac{|(x_{2}-2Ln_{2})^{\alpha_{1}}(x_{3}-2Ln_{3})^{\alpha_{2}}y_{2}^{\alpha_ {3}}y_{3}^{\alpha_{4}}|}{|n|^{8}}\lesssim\frac{y_{2}}{|n|^{3}},\]
where in the last inequality, we used \(\alpha_{1}+\alpha_{2}\leq 5\) and \(\alpha_{3}\geq 1\). Hence proceeding as before, we obtain \((\mathrm{I}+\mathrm{II})_{\neq 0}\lesssim e^{-C_{3}^{-1}Mt}\). In the same way, we can also show that \(\mathrm{III}+\mathrm{IV}\lesssim e^{-C_{3}^{-1}Mt}\), and consequently, we obtain \(\left\|\partial_{3}u_{2}^{\mathcal{I}}\right\|_{L^{\infty}(B_{0}(\delta))} \lesssim e^{-C_{3}^{-1}Mt}\)
### Linearization of the equation for \(\omega^{\mathcal{S},P}\)
Abusing the notation as in the last section, we denote pseudo-solutions \(\omega^{\mathcal{S},P}\) and \(u^{\mathcal{S},P}\) by \(\omega^{\mathcal{S}}\) and \(u^{\mathcal{S}}\), respectively. Dropping nonlinear terms in (3.3), we obtain the following linearized equation of \(\omega^{\mathcal{S}}\):
\[\partial_{t}\omega^{\mathcal{S}}+(u^{\mathcal{L}}+u^{\mathcal{I}})\cdot\nabla \omega^{\mathcal{S}}=\nabla(u^{\mathcal{L}}+u^{\mathcal{I}})\omega^{\mathcal{S }}+\nabla u^{\mathcal{S}}\omega^{\mathcal{I}}-u^{\mathcal{S}}\cdot\nabla \omega^{\mathcal{I}}. \tag{3.24}\]
Now recalling (3.10) and abusing the notation, we denote a characteristic curve
\[A^{\mathcal{I}}(t,a_{1},a_{2},a_{3})=\left(A^{\mathcal{I}}_{1}(t,a_{1},a_{2}, a_{3}),A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3}),A^{\mathcal{I}}_{3}(t,a_{1},a_{2},a_ {3})\right):[0,\infty)\times\mathbb{T}^{3}\to\mathbb{T}^{3}\]
defined by \(A^{\mathcal{I}}(0,a_{1},a_{2},a_{3})=(a_{1},a_{2},a_{3})\) and
\[\begin{cases}\dfrac{d}{dt}A^{\mathcal{I}}_{1}(t,a_{1},a_{2},a_{3})=MA^{ \mathcal{I}}_{1}(t,a_{1},a_{2},a_{3}),\\ \dfrac{d}{dt}A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3})=u^{\mathcal{I}}_{2}(t,A^{ \mathcal{I}}_{2}(t,a_{1},a_{2},a_{3}),A^{\mathcal{I}}_{3}(t,a_{1},a_{2},a_{3}) )-MA^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3}),\\ \dfrac{d}{dt}A^{\mathcal{I}}_{3}(t,a_{1},a_{2},a_{3})=u^{\mathcal{I}}_{3}(t,A^{ \mathcal{I}}_{2}(t,a_{1},a_{2},a_{3}),A^{\mathcal{I}}_{3}(t,a_{1},a_{2},a_{3}) ).\end{cases} \tag{3.25}\]
Then we can show the following.
**Lemma 3.8**.: _Let \(\delta=\frac{C_{1}}{10(M+1)^{C_{3}}}\) as in Lemma 3.6. Then for \(a=(a_{1},a_{2},a_{3})\in\mathbb{T}^{3}\) and \(\ell\leq\min\left\{\frac{\delta}{M+1},(3Le)^{1-C_{3}}\delta^{C_{3}}\right\}\), the following statements hold:_
* _if_ \(a\in B_{0}(\ell)\)_, then_ \(A^{\mathcal{I}}(t,a)\in B_{0}(\delta)\) _for_ \(t\in[0,T_{M}]\)_,_
* _if_ \(a\in\mathbb{T}^{3}\backslash B_{0}(\ell)\)_, then_ \(A^{\mathcal{I}}(t,a)\in\mathbb{T}^{3}\backslash B_{0}\left(C_{4}\left(\frac{ \ell}{M+1}\right)^{C_{3}}\right)\) _for_ \(t\in[0,T_{M}]\)_, where_ \(C_{4}>0\) _is a constant._
Proof.: Let us prove the first statement. Suppose that \(a\in B_{0}(\ell)\). Then \(|a_{1}|\leq\ell\) implies that \(\left|A^{\mathcal{I}}_{1}(t,a_{1},a_{2},a_{3})\right|=|a_{1}|e^{Mt}\leq\ell(M+ 1)\leq\delta\) for \(t\in[0,T_{M}]\). For \(A^{\mathcal{I}}_{2}(t,a)\) and \(A^{\mathcal{I}}_{3}(t,a)\), we only need to consider the case when \(a_{2},a_{3}\geq 0\) by the odd symmetry of \(\omega^{\mathcal{I}}\) in both \(x_{2}\) and \(x_{3}\). We claim that if \(a_{2},a_{3}\in B_{0}(\ell)\) with \(a_{2},a_{3}\geq 0\), then
\[0\leq A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3})\leq\delta e^{-C_{3}^{-1}Mt}, \tag{3.26}\]
and
\[0\leq A^{\mathcal{I}}_{3}(t,a_{1},a_{2},a_{3})\leq\delta. \tag{3.27}\]
Recalling (3.15), (3.9), and \(A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3})\geq 0\), we have
\[\dfrac{d}{dt}A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3})\leq Ce^{tM}A^{\mathcal{ I}}_{2}(t,a_{1},a_{2},a_{3})\left(1+\log\frac{3L}{A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_ {3})}\right)-MA^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3}),\]
Then proceeding as we did to derive (3.16), we see that for \(0\leq t\leq T_{M}\),
\[\log A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3})\leq 1+\log 3L+C_{3}^{-1}\left(\log a _{2}-(1+\log 3L)-Mt\right),\]
which is equivalent to
\[A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3})\leq 3Le\left(\frac{a_{2}}{3Le}\right)^{C_{3} ^{-1}}e^{-C_{3}^{-1}Mt}.\]
But since we assumed \(a_{2}\leq\ell\leq(3Le)^{1-C_{3}}\delta^{C_{3}}\), (3.26) holds. Then, (3.27) can be handled by a parallel argument.
Next, we prove our second statement. Suppose that \(a\in\mathbb{T}^{3}\backslash B_{0}(\ell)\). Then \(|a_{1}|\geq\ell\) implies that \(\left|A^{\mathcal{I}}_{1}(t,a_{1},a_{2},a_{3})\right|=|a_{1}|e^{Mt}\geq\ell\) for all \(t\). For \(A^{\mathcal{I}}_{2}(t,a)\) and \(A^{\mathcal{I}}_{3}(t,a)\), we only need to consider the case when \(a_{2},a_{3}\geq 0\) by the odd symmetry of \(\omega^{\mathcal{I}}\) in both \(x_{2}\) and \(x_{3}\). We claim that if \(a_{2},a_{3}\in\mathbb{T}^{3}\backslash B_{0}(\ell)\) with \(a_{2},a_{3}\geq 0\), then
\[A^{\mathcal{I}}_{2}(t,a_{1},a_{2},a_{3})\geq(3Le)^{1-C_{3}}\ell^{C_{3}}e^{-C_{3} Mt}, \tag{3.28}\]
and
\[A^{\mathcal{I}}_{3}(t,a_{1},a_{2},a_{3})\geq(3Le)^{1-C_{3}}\ell^{C_{3}}. \tag{3.29}\]
From (3.15), (3.9), and \(A_{2}^{\mathcal{I}}(t,a_{1},a_{2},a_{3})\geq 0\), we have
\[-\frac{d}{dt}A_{2}^{\mathcal{I}}(t,a_{1},a_{2},a_{3})\leq Ce^{tM}A_{2}^{ \mathcal{I}}(t,a_{2},a_{3})\left(1+\log\frac{3L}{A_{2}^{\mathcal{I}}(t,a_{1},a _{2},a_{3})}\right)+MA_{2}^{\mathcal{I}}(t,a_{1},a_{2},a_{3}).\]
With the same argument as the derivation of (3.17), we obtain for \(0\leq t\leq T_{M}\),
\[\log\frac{3L}{A_{2}^{\mathcal{I}}(t,a_{1},a_{2},a_{3})}\leq-1+C_{3}\left(1+ \log\frac{3L}{a_{2}}+Mt\right),\]
so that
\[A_{2}^{\mathcal{I}}(t,a_{1},a_{2},a_{3})\geq 3Le\left(\frac{a_{2}}{3Le}\right)^ {C_{3}}e^{-C_{3}Mt}\geq(3Le)^{1-C_{3}}\ell^{C_{3}}e^{-C_{3}Mt},\]
where we used the assumption \(a_{2}\geq\ell\) in the last inequality. Similarly, we can show (3.29). Noticing \(T_{M}=\frac{\log(M+1)}{M}\), our second statement follows.
Now we are ready to estimate \(\left\|\omega^{\mathcal{S}}(t,\cdot)\right\|_{L^{\infty}}\) near the origin.
**Lemma 3.9**.: _Let \(\ell\) in (1.6) satisfy \(\ell\leq\min\left\{\frac{\delta}{M+1},(3Le)^{1-C_{3}}\delta^{C_{3}}\right\}\). Then for \(0\leq t\leq T_{M}\), we have_
\[\left\|\omega^{\mathcal{S}}(t,\cdot)\right\|_{L^{\infty}\left(B_{0}\left(C_{4}\left(\frac{\ell}{M+1}\right)^{C_{3}}\right)\right)}\lesssim\varepsilon e^{-Mt}.\]
Proof.: Recalling \(\omega^{\mathcal{I}}\equiv 0\) in \([0,T_{M}]\times B_{\delta}(0)\), the previous lemma reduces (3.24) to
\[\partial_{t}\omega^{\mathcal{S}}+(u^{\mathcal{L}}+u^{\mathcal{I}})\cdot \nabla\omega^{\mathcal{S}}=\nabla(u^{\mathcal{L}}+u^{\mathcal{I}})\omega^{ \mathcal{S}} \tag{3.30}\]
in \([0,T_{M}]\times B_{\delta}(0)\). First of all, we claim that \(\omega_{1}^{\mathcal{S}}(t,\cdot)=0\) in \([0,T_{M}]\times B_{0}\left(C_{4}\left(\frac{\ell}{M+1}\right)^{C_{3}}\right)\). Indeed, noticing \(u_{1}^{\mathcal{I}}=0\) (see (3.5)), (3.30) gives
\[\partial_{t}\omega_{1}^{\mathcal{S}}+Mx_{1}\partial_{1}\omega_{1}^{\mathcal{S }}+(-Mx_{2}+u_{2}^{\mathcal{I}})\partial_{2}\omega_{1}^{\mathcal{S}}+u_{3}^{ \mathcal{I}}\partial_{3}\omega_{1}^{\mathcal{S}}=M\omega_{1}^{\mathcal{S}}.\]
Evaluating along the characteristic \(A^{\mathcal{I}}\), Lemma 3.8 implies \(\omega_{1}^{\mathcal{S}}=0\) in \([0,T_{M}]\times B_{0}\left(C_{4}\left(\frac{\ell}{M+1}\right)^{C_{3}}\right)\) because \(\omega_{1,0}^{\mathcal{S}}=0\) in \(B_{0}(\ell)\). Next, (3.5) reduces the equations of \(\omega_{2}^{\mathcal{S}}\) and \(\omega_{3}^{\mathcal{S}}\) as follows:
\[\begin{cases}\partial_{t}\omega_{2}^{\mathcal{S}}+Mx_{1}\partial_{1}\omega_{ 2}^{\mathcal{S}}+(-Mx_{2}+u_{2}^{\mathcal{I}})\partial_{2}\omega_{2}^{ \mathcal{S}}+u_{3}^{\mathcal{I}}\partial_{3}\omega_{2}^{\mathcal{S}}=(-M+ \partial_{2}u_{2}^{\mathcal{I}})\omega_{2}^{\mathcal{S}}+\partial_{3}u_{2}^{ \mathcal{I}}\omega_{3}^{\mathcal{S}},\\ \partial_{t}\omega_{3}^{\mathcal{S}}+Mx_{1}\partial_{1}\omega_{3}^{\mathcal{S }}+(-Mx_{2}+u_{2}^{\mathcal{I}})\partial_{2}\omega_{3}^{\mathcal{S}}+u_{3}^{ \mathcal{I}}\partial_{3}\omega_{3}^{\mathcal{S}}=\partial_{2}u_{3}^{\mathcal{I }}\omega_{2}^{\mathcal{S}}+\partial_{3}u_{3}^{\mathcal{I}}\omega_{3}^{ \mathcal{S}},\end{cases} \tag{3.31}\]
which hold in \([0,T_{M}]\times B_{0}\left(C_{4}\left(\frac{\ell}{M+1}\right)^{C_{3}}\right)\). Hence using (3.18) and Lemma 3.8, we derive
\[\begin{split}\partial_{t}|\omega_{2}^{\mathcal{S}}(t,A^{ \mathcal{I}}(t,a))|&=\frac{\omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))\partial_{t}\left(\omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))\right)} {|\omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|}\\ &\leq\left(-M+Ce^{-C_{3}^{-1}Mt}\right)|\omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|+Ce^{-C_{3}^{-1}Mt}|\omega_{3}^{\mathcal{S}}(t,A^{ \mathcal{I}}(t,a))|\end{split} \tag{3.32}\]
and similarly
\[\partial_{t}|\omega_{3}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|\leq Ce^{-C_{3}^ {-1}Mt}\left(|\omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|+|\omega_{3}^{ \mathcal{S}}(t,A^{\mathcal{I}}(t,a))|\right). \tag{3.33}\]
Thus, from
\[\partial_{t}\left(|\omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|+|\omega_{3}^ {\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|\right)\lesssim e^{-C_{3}^{-1}Mt}\left(| \omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|+|\omega_{3}^{\mathcal{S}}(t,A^ {\mathcal{I}}(t,a))|\right),\]
we have
\[|\omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|+|\omega_{3}^{\mathcal{S}}(t,A^ {\mathcal{I}}(t,a))|\lesssim\left|\omega_{2,0}^{\mathcal{S}}(a)+\omega_{3,0}^{ \mathcal{S}}(a)\right|\lesssim\varepsilon,\]
where we used (1.6) in the last inequality. Inserting \(|\omega_{2}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|\lesssim\varepsilon\) into (3.33), we obtain
\[\partial_{t}|\omega_{3}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|\leq Ce^{-C_{3}^ {-1}Mt}|\omega_{3}^{\mathcal{S}}(t,A^{\mathcal{I}}(t,a))|+C\varepsilon e^{-C_{3}^ {-1}Mt}.\]
Noticing \(\omega^{\mathcal{S}}_{3,0}(a)=0\) in \(B_{0}(\ell)\), Gronwall's inequality gives
\[|\omega^{\mathcal{S}}_{3}(t,A^{\mathcal{I}}(t,a))| \leq\exp\left(\int_{0}^{t}Ce^{-C_{3}^{-1}Ms}ds\right)\int_{0}^{t} C\varepsilon e^{-C_{3}^{-1}Ms}ds\] \[=\exp\left(\frac{C\left(1-e^{-C_{3}^{-1}Mt}\right)}{C_{3}^{-1}M} \right)\frac{C\varepsilon\left(1-e^{-C_{3}^{-1}Mt}\right)}{C_{3}^{-1}M}\lesssim \frac{\varepsilon}{M}\lesssim\varepsilon e^{-Mt}.\]
for \(t\in[0,T_{M}]\). Inserting this into (3.32), we obtain
\[\partial_{t}|\omega^{\mathcal{S}}_{2}(t,A^{\mathcal{I}}(t,a))|\leq\left(-M+ Ce^{-C_{3}^{-1}Mt}\right)|\omega^{\mathcal{S}}_{2}(t,A^{\mathcal{I}}(t,a))|+C \varepsilon e^{-(C_{3}^{-1}+1)Mt},\]
which yields
\[\partial_{t}\left(|\omega^{\mathcal{S}}_{2}(t,A^{\mathcal{I}}(t,a))|\exp \left(\int_{0}^{t}M-Ce^{-C_{3}^{-1}Ms}ds\right)\right)\leq C\varepsilon e^{-( C_{3}^{-1}+1)Mt}\exp\left(\int_{0}^{t}M-Ce^{-C_{3}^{-1}Ms}ds\right) \lesssim\varepsilon e^{-C_{3}^{-1}Mt}.\]
Integrating from \(0\) to \(t\), we obtain
\[|\omega^{\mathcal{S}}_{2}(t,A^{\mathcal{I}}(t,a))|\leq\exp\left(\int_{0}^{t}- M+Ce^{-C_{3}^{-1}Ms}ds\right)\left(\omega^{\mathcal{S}}_{2,0}(a)+\int_{0}^{t}C \varepsilon e^{-C_{3}^{-1}Ms}ds\right)\lesssim\varepsilon e^{-Mt}.\qed\]
### Comparison between solutions
In this section, we compare our pseudo-solutions with real solutions as we mentioned in the beginning of this section. In order to distinguish solutions of (3.3) and (3.24), we denote the solution of the linearized equation (3.24) by \((\omega^{\mathcal{S},P,Lin},u^{\mathcal{S},P,Lin})\). We fix \(s>\frac{5}{2}\) and set \(\bar{u}=u^{\mathcal{L}}+u^{\mathcal{I},P}\) in (2.1), \(u=u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}\) in (2.2), \(\tilde{u}=u-\bar{u}\) in (2.3), and \(\tilde{u}^{Lin}=u^{\mathcal{S},P,Lin}\) in (2.4) to employ Proposition 2.1. (1.6) implies that \(\tilde{u}_{0}\) in (2.3) satisfies
\[\left\|\tilde{u}_{0}\right\|_{H^{s+1}(\mathbb{T}^{3})}\lesssim\left\|\tilde{ \psi}\right\|_{H^{s}(\mathbb{T}^{3})}=\ell^{-s+\frac{3}{2}}\|\psi\|_{H^{s}( \mathbb{T}^{3})}, \tag{3.34}\]
and Lemma 3.2 gives
\[\int_{0}^{T_{M}}\left\|(u^{\mathcal{L}}+u^{\mathcal{I},P})(\tau,\cdot)\right\|_{H^{s+2}}d\tau\lesssim MT_{M}+\frac{e^{CMT_{M}}-1}{CM}\lesssim M^{C}, \tag{3.35}\]
by adjusting the value of absolute constant \(C>1\) from an inequality to another. Therefore, Proposition 2.1 implies
\[\left\|\omega^{\mathcal{S},P}(t,\cdot)-\omega^{\mathcal{S},P,Lin}(t,\cdot) \right\|_{L^{\infty}}\lesssim\varepsilon^{2}\ell^{-2s+3}e^{CM^{C}}\]
on \([0,T_{M}]\), whenever \(\varepsilon>0\) satisfies
\[\varepsilon\leq\ell^{s-\frac{3}{2}}e^{-CM^{C}}. \tag{3.36}\]
Hence, it follows from Lemma 3.9 that
\[\left\|\omega^{\mathcal{S},P}(t,\cdot)\right\|_{L^{\infty}\left(B_{0}\left(C _{4}\left(\frac{\ell}{M+1}\right)^{C_{3}}\right)\right)}\lesssim\varepsilon e ^{-Mt} \tag{3.37}\]
on \([0,T_{M}]\) if we pick \(\ell\) and \(\varepsilon\) satisfying
\[\ell\leq\min\left\{\frac{\delta}{M+1},(3Le)^{1-C_{3}}\delta^{C_{3}}\right\}, \qquad\varepsilon\leq\ell^{2s-3}e^{-M^{C}} \tag{3.38}\]
with \(C\) adjusted. (Here, we have used that \(M\gg 1\).)
Next, we set \(\bar{u}^{*}:=u^{\mathcal{L}}+u^{\mathcal{I}}\), so that \(\bar{u}^{*}\) solves (2.8). Thus, using (3.18), (3.34), and (3.35), Proposition 2.3 gives us (1.8) for \(\varepsilon,\,\ell\) satisfying (3.38) of which \(C\) is adjusted if necessary. To derive (1.9), noticing \(u=u^{\mathcal{L}}+u^{\mathcal{I},P}+u^{\mathcal{S},P}=u^{\mathcal{L}}+u^{\mathcal{I}}+u^{\mathcal{S}}\) and recalling (2.7), (3.34), and (3.35), there exists a constant \(C>0\) such that
\[\frac{d}{dt}\left|\Phi(t,a)-A^{\mathcal{I}}(t,a)\right|\leq\left\|u^{\mathcal{ S},P}\right\|_{L^{\infty}}\lesssim\left\|u^{\mathcal{S},P}\right\|_{H^{s+1}} \lesssim\varepsilon\ell^{-s+\frac{3}{2}}e^{CM^{C}}\]
for \(\varepsilon>0\) satisfying (3.36), where \(\Phi,A^{\mathcal{I}}\) are from (1.7), (3.25), respectively. Thus if \(\varepsilon,\,\ell\) satisfy (3.38) of which \(C\) is adjusted if necessary, then the estimate
\[\left|\Phi(t,a)-A^{\mathcal{I}}(t,a)\right|\lesssim\varepsilon\ell^{-s+\frac{3 }{2}}e^{CM^{C}}t\lesssim\sqrt{\varepsilon}\]
on \([0,T_{M}]\) and Lemma 3.8 imply
\[\left|\Phi(t,a)\right|\leq\left|A^{\mathcal{I}}(t,a)\right|+\left|\Phi(t,a)-A^{ \mathcal{I}}(t,a)\right|\leq\delta+\sqrt{\varepsilon}\leq 2\delta\]
for \((t,a)\in[0,T_{M}]\times B_{0}(\ell)\) while
\[\left|\Phi(t,a)\right|\geq\left|A^{\mathcal{I}}(t,a)\right|-\left|\Phi(t,a)-A^ {\mathcal{I}}(t,a)\right|\geq C_{4}\left(\frac{\ell}{M+1}\right)^{C_{3}}- \sqrt{\varepsilon}\geq C_{5}\left(\frac{\ell}{M+1}\right)^{C_{3}}\]
for \((t,a)\in[0,T_{M}]\times\mathbb{T}^{3}\backslash B_{0}(\ell)\) and some \(C_{5}>0\). Recall that \(\omega^{\mathcal{I},P}=0\) in \(B_{0}(2\delta)\times[0,T_{M}]\) (see Lemma 3.5 and Remark 3.7). Hence by (3.3), \(\omega^{\mathcal{S},P}\) solves
\[\partial_{t}\left(\omega^{\mathcal{S},P}(t,\Phi(t,a))\right)=\nabla u(t,\Phi (t,a))\omega^{\mathcal{S},P}(t,\Phi(t,a))\]
for \((t,a)\in[0,T_{M}]\times B_{0}(\ell)\). Since \(\omega^{\mathcal{S}}\) also solves
\[\partial_{t}\left(\omega^{\mathcal{S}}(t,\Phi(t,a))\right)=\nabla u(t,\Phi(t,a ))\omega^{\mathcal{S}}(t,\Phi(t,a)),\]
we have
\[\partial_{t}\left|\omega^{\mathcal{S},P}(t,\Phi(t,a))-\omega^{\mathcal{S}}(t, \Phi(t,a))\right|\leq\left\|\nabla u\right\|_{L^{\infty}}\left|\omega^{ \mathcal{S},P}(t,\Phi(t,a))-\omega^{\mathcal{S}}(t,\Phi(t,a))\right|\]
for \((t,a)\in[0,T_{M}]\times B_{0}(\ell)\). But \(\left\|\nabla u\right\|_{L^{\infty}}<\infty\) up to time \(T_{M}\), and \(\omega^{\mathcal{S},P}\) and \(\omega^{\mathcal{S}}\) have the same initial data \(\omega^{\mathcal{S}}_{0}\), so that \(\omega^{\mathcal{S},P}(t,\Phi(t,a))=\omega^{\mathcal{S}}(t,\Phi(t,a))\) for \((t,a)\in[0,T_{M}]\times B_{0}(\ell)\). This implies \(\omega^{\mathcal{S},P}(t,x)=\omega^{\mathcal{S}}(t,x)\) for \((t,x)\in[0,T_{M}]\times B_{0}\left(C_{5}\left(\frac{\ell}{M+1}\right)^{C_{3}}\right)\), and therefore (1.9) follows from (3.37). This completes our proof of Theorem A. \(\square\)
**Acknowledgments**. Research of TY was partially supported by Grant-in-Aid for Scientific Research B (20H01819), Japan Society for the Promotion of Science (JSPS). IJ has been supported by the National Research Foundation of Korea(NRF) grant No. 2022R1C1C1011051.
|
2309.11338 | TRAVID: An End-to-End Video Translation Framework | In today's globalized world, effective communication with people from diverse
linguistic backgrounds has become increasingly crucial. While traditional
methods of language translation, such as written text or voice-only
translations, can accomplish the task, they often fail to capture the complete
context and nuanced information conveyed through nonverbal cues like facial
expressions and lip movements. In this paper, we present an end-to-end video
translation system that not only translates spoken language but also
synchronizes the translated speech with the lip movements of the speaker. Our
system focuses on translating educational lectures in various Indian languages,
and it is designed to be effective even in low-resource system settings. By
incorporating lip movements that align with the target language and matching
them with the speaker's voice using voice cloning techniques, our application
offers an enhanced experience for students and users. This additional feature
creates a more immersive and realistic learning environment, ultimately making
the learning process more effective and engaging. | Prottay Kumar Adhikary, Bandaru Sugandhi, Subhojit Ghimire, Santanu Pal, Partha Pakray | 2023-09-20T14:13:05Z | http://arxiv.org/abs/2309.11338v1 | # TRAVID: An End-to-End Video Translation Framework
###### Abstract
In today's globalized world, effective communication with people from diverse linguistic backgrounds has become increasingly crucial. While traditional methods of language translation, such as written text or voice-only translations, can accomplish the task, they often fail to capture the complete context and nuanced information conveyed through nonverbal cues like facial expressions and lip movements. In this paper, we present an end-to-end video translation system that not only translates spoken language but also synchronizes the translated speech with the lip movements of the speaker. Our system focuses on translating educational lectures in various Indian languages, and it is designed to be effective even in low-resource system settings. By incorporating lip movements that align with the target language and matching them with the speaker's voice using voice cloning techniques, our application offers an enhanced experience for students and users. This additional feature creates a more immersive and realistic learning environment, ultimately making the learning process more effective and engaging.
## 1 Introduction
Face-to-Face (F2F) translation is a sub-field within the research domain of Machine Translation (MT). MT refers to the process of utilizing machines to translate text or speech from one language to another [23]. F2F translation specifically focuses on translating spoken language in real-time during face-to-face conversations or interactions. The objective is to bridge language barriers and facilitate seamless communication between individuals who speak different languages.
F2F translation is also a part of the broader field of multi-modal machine translation, which integrates videos or visual information along with translation. This approach aims to enhance engagement among native language speakers during sessions. Visual cues, such as lip synchronization according to the native languages, contribute to a more realistic and immersive translated lecture session. These visual elements provide valuable context information that aids in the translation process. Compared to image-guided multi-modal machine translation, videos provide visual and acoustic modalities with rich embedded information, such as actions, objects, and temporal transitions. From the past few years, image-based multi-modal models [22] only had marginal performance gains compared to their text-only counterparts, although very few of them are F2F translation [21].
F2F translation goes beyond traditional text-to-text or speech-to-speech translation methods. In a simple cascade-based F2F translation approach, several steps are involved: (i) **Capturing original speech:** the source video of a person delivering a speech is recorded or obtained, (ii) **Translating the captured speech:** the captured spoken language in the source video is translated to the desired language using machine translation techniques, (iii) **Generating an output video:** Based on the translated text, an output video is generated where the same person appears to be speaking in the translated language, and (iv) **Maintaining lip synchronization:** during the generation of the output video, efforts are made to ensure that the lip movements of the person in the video match the target language, providing lip synchronization as per the translated language. By following these steps, cascade-based F2F translation aims to deliver translated videos with synchronized lip movements, enhancing the authenticity and naturalness of the translated speech [14]. The intermediate steps i.e., **Translating the captured speech** can be modelled either direct [1] or cascade-based approach [1]. The cascade-based approach first performs a speech-to-text through an automatic speech
recognizer (ASR), then translates the transcribed source text to the desired target text using a text-to-text machine translation system, and finally a text-to-speech system transforms the translated text to speech in the desired language.
In addition to managing the individual components of our cascade-based system, we face significant challenges with F2F translation, particularly in the areas of lip synchronization and voice or tone alignment. The process involves recording a speech, converting it to text using speech-to-text technologies, translating this text from the original to the target language, and then converting the translated text back to speech via text-to-speech systems. This process can be achieved using either a cascade or direct approach. A major challenge in this end-to-end F2F translation framework is ensuring that the lip movements sync with the translated speech track. This can be complex, as the duration of the translated speech may be longer or shorter than the original, depending on the distinct grammatical structures of the two languages. Additionally, the lips must move in a manner consistent with the frequency of the generated sound and must maintain the speaker's original voice or tones. Failing to do so can result in dubbing that appears off and unrealistic Prajwal et al. (2020).
F2F translation can have a huge impact on bridging the language gap in the educational sector. Numerous educational organisations create content to reach a global audience. Unfortunately, the lack of language intelligibility often prevents content consumers from fully utilizing the material at hand. While some videos provide manually executed dubs, these have their own set of challenges. It's true that manual translation tends to be more accurate than machine translation, but it also faces unavoidable limitations. These include cost, availability, efficiency, and most importantly, the quality of lip synchronization, which often falls short of the mark Chung and Zisserman (2017). Additionally, manual dubbing may be available in many but not all languages. The goal of the F2F translation system is to automate this dubbing process effectively and efficiently and make the online content available in whichever preferred language, thus overcoming the linguistic barrier between audio-visual content and the corresponding non-native consumer. This technology could also be used to assist language learning by giving students realistic and immersive opportunities to practise speaking and listening in a foreign language Jha et al. (2019). Through this paper, we contribute to creating a more equitable and accessible education landscape that enables native individuals to learn and grow without any language barrier. Our main objective is to motivate every individual by providing a platform, through which one can grasp knowledge from videos in an unfamiliar language. To the best of our knowledge, our F2F translation framework is the first online end-to-end video translation system we bring up to the community.
## 2 Related Work
In this section, we present part of previous studies conducted in this field and summarise our learning and inspiration to better complement our research. Prajwal et al. (2020) in their study explores the use of machine learning algorithms for lip-to-speech synthesis. The authors propose a new approach that takes into account individual speaking styles, resulting in increased accuracy. They use audio-visual data to train deep neural networks to capture unique lip movements and speaking styles, resulting in speech synthesis that is close to the original. The results show that their method outperforms current methods and produces speech that is similar to natural speech. K R et al. (2019) outlines a system for automatically translating speech between two people speaking different languages in real-time. The authors propose a multi-modal approach to translation that makes use of both audio and visual cues. This is accomplished by incorporating a novel visual module, LipGAN, for generating realistic talking faces in real-time from the translated audio. Their approach outperforms existing methods, demonstrating the potential for real-time F2F translation in practical applications. Ritter et al. (1999) in their research examines the development of a translation agent capable of performing real-time F2F speech translation. The authors present a multi-modal approach to translation that combines audio and visual information. They use machine learning algorithms to analyse each speaker's lip movements, speech, and facial expressions to produce a real-time audio-visual output with the speaker's face and synchronised lip movement. The results show that their method produces accurate translations and has the potential for practical applications in real-world scenarios. For translation, Chiralekha1 is a valuable tool because it efficiently
creates multi-lingual subtitles and voice-overs for informative videos. However, it may not be as efficient for longer-length videos. Lastly, Huang et al. (2017) presented a novel problem of unpaired face translation between static photos and dynamic videos, which could be used to predict and improve video faces. To accomplish this task, the authors propose using a CycleGAN model with an identity-aware constraint. The model is trained on a large face dataset and tested on a variety of face images and videos. The results show that the proposed method can effectively translate faces between images and videos while preserving the individual's identity, outperforming existing methods.
## 3 The TRAVID Framework
Our framework 'TRAVID' is capable of generating translated videos from English to four Indian languages: Bengali, Hindi, Nepali, and Telugu. Flask2 has been used as the foundation of our application, providing various built-in functionalities for building a Python-based web application. For the server-side and database, we utilize Python 3.9. In terms of audio and video processing, we primarily rely on the libraries Librosa3 and ffmpeg4. These libraries provide extensive capabilities for audio and video processing, manipulation, and rendering. The primary objective of this work is to effectively and efficiently translate spoken language from an input video. Additionally, we aim to generate audio that resembles the speaker's voice and synchronize the translated speech with the speaker's lip movements. The entire process begins by obtaining the source video, target language, and speaker's gender (for voice model selection) as input from the user through our web interface. Behind the scenes, the task is divided into three sub-tasks: (1) Audio-to-Text Processing, (2) Text-to-Audio Processing, and (3) Video Processing. The steps involved in this process are depicted in Figure 1.
Footnote 2: [https://flask.palletsprojects.com/](https://flask.palletsprojects.com/)
Footnote 3: [https://librosa.org/doc/latest/index.html](https://librosa.org/doc/latest/index.html)
Footnote 4: [https://pypi.org/project/ffmpeg-python/](https://pypi.org/project/ffmpeg-python/)
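As a rough, hypothetical sketch of how such a Flask application can be wired together, the snippet below shows an upload endpoint dispatching to the three sub-tasks described above; the route name, form fields, and helper functions are illustrative placeholders, not TRAVID's actual API.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Placeholder stages corresponding to Sections 3.1-3.3; the real implementations
# perform speech recognition + translation, speech synthesis + pitch matching,
# and Wav2Lip-based lip synchronization, respectively.
def audio_to_text_stage(video_path, target_lang):
    raise NotImplementedError

def text_to_audio_stage(text_chunks, gender):
    raise NotImplementedError

def lip_sync_stage(video_path, audio_path):
    raise NotImplementedError

@app.route("/translate", methods=["POST"])
def translate_video():
    video = request.files["video"]
    target_lang = request.form["language"]   # e.g. "hi", "bn", "ne", "te"
    gender = request.form["gender"]          # selects the output voice model
    video_path = "uploads/input.mp4"
    video.save(video_path)

    text_chunks = audio_to_text_stage(video_path, target_lang)
    dubbed_audio = text_to_audio_stage(text_chunks, gender)
    output_video = lip_sync_stage(video_path, dubbed_audio)
    return jsonify({"output_video": output_video})
```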
### Audio to Text Processing
The input video, in our case an MPEG-4 (.mp4) file, is initially converted to a Waveform Audio (.wav) file using FFmpeg. This conversion enables us to perform text detection from the audio rather than the video file. Subsequently, we employ Librosa to identify non-mute sections within the 'start' and 'end' frame indexes, which are stored in a silence array. Each element of the silence array represents a small audio chunk, aiding in reducing system load and enhancing the overall efficiency of the framework during audio processing. Next, we convert each audio chunk from the silence array into an individual text chunk using Speech Recognition5. This library utilizes Google's Cloud speech API6 to convert speech to text. Finally, Deep-translator7 is employed to translate the generated text into the target language. Deep Translator utilizes the state-of-the-art Google Translate Ajax API8 to generate the desired target language translation. The translated texts are stored and subsequently passed to the audio speech engine for further processing.
Footnote 5: [https://pypi.org/project/SpeechRecognition/](https://pypi.org/project/SpeechRecognition/)
Footnote 6: www.pypi.org/project/google-cloud-speech
Footnote 7: [https://pypi.org/project/deep-translator/](https://pypi.org/project/deep-translator/)
Footnote 8: [https://pypi.org/project/googletrans/](https://pypi.org/project/googletrans/)
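A minimal sketch of this stage is given below, assuming the libraries named above (librosa, SpeechRecognition, deep-translator) plus soundfile for writing the chunks; the silence threshold and sampling rate are illustrative choices, not TRAVID's exact settings.

```python
import librosa
import soundfile as sf
import speech_recognition as sr
from deep_translator import GoogleTranslator

def audio_to_text(wav_path, target_lang="hi"):
    y, rate = librosa.load(wav_path, sr=16000)
    # (start, end) sample indexes of the non-mute sections (the "silence array")
    intervals = librosa.effects.split(y, top_db=30)

    recognizer = sr.Recognizer()
    translator = GoogleTranslator(source="en", target=target_lang)
    translated = []
    for start, end in intervals:
        sf.write("chunk.wav", y[start:end], rate)
        with sr.AudioFile("chunk.wav") as source:
            audio = recognizer.record(source)
        try:
            text = recognizer.recognize_google(audio)  # Google Web Speech API
        except sr.UnknownValueError:
            continue                                   # skip unintelligible chunks
        translated.append(translator.translate(text))
    return translated
```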
### Text to Audio Processing
Figure 1: Steps involved in Video Translation System

The translated text in the target language is passed to the gTTS9 library, which converts the text into speech and saves it as an audio file. This marks the completion of the speech generation process and initiates the speech refinement process. In order to match the target audio length with the source audio length, adjustments are made: the length of the translated speech may differ from that of the original speech, so the speech speed is modified to align with the original audio file. The "Fixed Pitch-Shifting" technique is employed to ensure that the generated speech closely resembles the voice of the original speaker. Librosa provides the capability to detect the frequency of the audio and shift the pitch of the audio time series from one musical note to another (Rosenzweig et al., 2021). In the context of voice cloning, the mean frequency of the audio is determined, with the lower note considered as F2 (87.31 Hz) and the higher note as G6 (98.00 Hz). This frequency range represents the average range of human speech. The calculation of the steps required for shifting (\(n\_steps\)) is performed using Equation 1.
\[n\_steps=\log_{2}(\frac{f_{src}}{f_{tgt}})^{2} \tag{1}\]
The variable \(f_{src}\) refers to the frequency of the source audio and \(f_{tgt}\) refers to the corresponding target audio. With this, the Text to Audio Processing engine gives the desired audio to the video-processing engine.
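A compact sketch of this stage, assuming gTTS and librosa as above, is shown below. The pitch-estimation band and the reading of Equation 1 (here taken as the squared frequency ratio inside the base-2 logarithm, with \(f_{src}\) the original speaker and \(f_{tgt}\) the synthetic voice) are our illustrative assumptions, and the speed adjustment for length matching is omitted.

```python
import numpy as np
import librosa
import soundfile as sf
from gtts import gTTS

def mean_frequency(y, rate):
    # Mean fundamental frequency; 65-600 Hz is an illustrative speech band.
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=600.0, sr=rate)
    return float(np.nanmean(f0))

def synthesize_matched_speech(text, lang, speaker_wav, out_wav="dubbed.wav"):
    gTTS(text=text, lang=lang).save("tts.mp3")      # text-to-speech
    tts, rate = librosa.load("tts.mp3", sr=None)
    speaker, _ = librosa.load(speaker_wav, sr=rate)

    f_src = mean_frequency(speaker, rate)           # original speaker's voice
    f_tgt = mean_frequency(tts, rate)               # synthetic voice
    n_steps = np.log2((f_src / f_tgt) ** 2)         # Equation 1 (one possible reading)

    shifted = librosa.effects.pitch_shift(tts, sr=rate, n_steps=n_steps)
    sf.write(out_wav, shifted, rate)
    return out_wav
```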
### Video Processing for Lip Synchronization
We have utilized a lip-synchronization network called Wav2Lip [11] for the purpose of lip-syncing and generating talking face videos. This model has been trained on the LRS2 training set and demonstrates an approximate accuracy of 91% on the LRS2 test set. The video sub-network of the model examines each frame of the source video and identifies faces, with a particular focus on the lip region. The relevant audio segment is then fed into the speech sub-network component of Wav2Lip, which modifies the input face crop to emphasize the lips area and produces the final video output. Throughout this process, the lip portion of the source video is replaced by concatenating the current face crop with the lower half of the detected face. By leveraging the translated speech and the source video, Wav2Lip generates lip-synced translated videos. The resulting translated video is subsequently presented on our front-end for display.
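For reference, the publicly released Wav2Lip code is typically driven from the command line roughly as follows; the checkpoint and file paths below are placeholders, and TRAVID's internal integration may differ.

```python
import subprocess

# Run Wav2Lip inference on the source video with the translated, pitch-matched audio.
subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained Wav2Lip weights
        "--face", "uploads/input.mp4",                       # source video with the speaker's face
        "--audio", "dubbed.wav",                             # translated speech track
        "--outfile", "results/translated.mp4",
    ],
    check=True,
)
```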
## 4 Demo Scenarios
Our framework TRAVID has a visually appealing landing page10, which gives an overview of the framework (cf. Figure 2). A demonstration video of our system is available on YouTube11. The demo User Interface (UI), designed as a landing page, has been carefully crafted to provide a seamless and intuitive navigation experience. The landing page effectively communicates its purpose and functionality without the need for extensive instructions or guidance: users can easily understand what the page offers and how to navigate it intuitively. The page is organized into distinct sections that make it easy for users to locate and access the information they are looking for. This organization is achieved through the use of clear headings, visually distinct sections, and a logical flow of content. The top menu bar is a key element of the landing page, providing menu options that direct users to different feature pages. This menu bar remains accessible and visible across different sections of the landing page, allowing users to navigate to specific areas of interest easily. Upon signing in or signing up as a new user, the user is directed to the core section of the demo. This core section is the central part of the landing page, where users can access the main functionality and key features of TRAVID.
Footnote 10: [https://github.com/human71/TRAVID](https://github.com/human71/TRAVID)
Footnote 11: [https://youtu.be/XNNp1xf5HW](https://youtu.be/XNNp1xf5HW)
The upload page shown in Figure 3 includes two drop-down menus: one for selecting the desired language for translation and another for choosing the output voice model (speaker). There are two options available for video input on the upload page: live recording, which allows users to capture real-time audio-visual input using their device's camera and microphone, or accepting pre-saved audio-visual content from the system. After receiving the input, the back-end framework, discussed in Section 3, initiates the translation process. Once the text, audio, and video processing are complete, the output page displays the translated video alongside the source video. Users have the option to download the source video and translated text. Additionally, they can provide reviews based on the output they received, which can help us improve and enhance the user-friendliness of our system.
Figure 2: Homepage of TRAVID

The output page, depicted in Figure 4, provides a clear presentation of the original input video and the generated output video side-by-side. It offers convenient options to play and review both videos simultaneously. Additionally, users can save and download both the translated video and a translated text document. Furthermore, users can explore the demo section, which displays test case videos, to gain an understanding of the translation quality that TRAVID's translation model can produce. In addition to the demo section, TRAVID includes a feedback page where users can rate videos alongside their translated output according to specific criteria and provide feedback to enhance our framework (refer to Figure 5).
## 5 Evaluation
To gauge the effectiveness of our method, We conducted a user study to assess the quality of our lip-synced translations, with participants asked to rate the translation quality, lip synchronization, and audio clarity. Evaluators compared the target video with the source language video clip and provided rankings for the quality of the output video on a scale of 1 to 5. The collected ratings were used to calculate inter-annotator agreement using Cohen's \(\kappa\)[14], Fleiss' \(\kappa\)[13], and Pearson's \(r\)[12] scores. Inter-annotator agreements were computed for all four languages: Bengali, Hindi, Nepali, and Telugu. Table 2 displays the agreement scores for each language based on Lip Synchronization (Lip Sync), Translation Quality (TQ), and Audio Quality (AQ). The ratings were collected by comparing the translated videos to source videos from 5 indigenous users for each of the selected languages. Moreover, a manual examination was conducted by professional evaluators, and the results are presented in Table 1. Further details regarding inter-rater agreements can be found in Appendix A, B, C.
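The reported statistics can be reproduced from the collected rating matrices with standard Python implementations; the sketch below uses dummy ratings, and averaging Cohen's kappa and Pearson's r over evaluator pairs is our assumption about how the per-language averages were formed.

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(10, 5))   # dummy 1-5 scores: 10 videos x 5 evaluators

pairs = list(combinations(range(ratings.shape[1]), 2))
mean_cohen = np.mean([cohen_kappa_score(ratings[:, i], ratings[:, j]) for i, j in pairs])
mean_pearson = np.mean([pearsonr(ratings[:, i], ratings[:, j])[0] for i, j in pairs])

table, _ = aggregate_raters(ratings)         # subjects-by-categories count table
print(mean_cohen, fleiss_kappa(table), mean_pearson)
```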
The core component of TRAVID is based on the CNLP_NITS system, which emerged as a top performer in the Lip-Sync 2021 Challenge shared task12. The objective of this challenge was to convert English input videos into Hindi or Tamil output videos while ensuring lip synchronization. The quality of the Hindi Task-1 was assessed using various evaluation metrics such as Lip-Sync Quality (LSQ), Fluency Consistency (FC), Semantic Consistency (SC), and Overall User Experience (UX). Evaluators rated the quality of the output videos on a scale of 1 to 5, with higher scores indicating better quality when compared to the source language video clip. Our system, CNLP-NITS (NIT Silchar), achieved the top position with a final score of 3.84, surpassing the Baseline system (IIT Madras) with a score of 3.68 and TeamCSRL (CS RESEARCH LABS) with a score of 3.46. The comparison of the three evaluation matrices revealed a high degree of similarity. The results indicate that the translations were perceived as reasonable and easy to understand by the majority of participants, leading to fair to moderate agreement and a positive correlation among their assessments. The overall scores of the Lip-Sync Challenge 2021 are presented in Table 1.
Footnote 12: Leaderboard of Lip-Sync Challenge 2021
## 6 Limitation
There is a constraint when uploading huge videos; the system may require a lot of computational resources and data to process and render the translated video. Also, so far we have trained our models only on a single speaker, so videos with multiple speakers may yield poor results. The quality of speech recognition and translation may vary depending on factors such as noise, accent, dialect, etc. The generated faces may not look natural or convincing enough for some applications or scenarios such as low lighting, moving background, etc. Considering the state-of-the-art ASR system in use, the ASR results were already deemed satisfactory, thus not necessitating the utilization of lip sync from the video as an additional multimodal input for accuracy enhancement. Still, the system may be unable to handle linguistic challenges such as idioms, metaphors, slang, etc. The method may not be able to capture cultural nuances and context that affect the meaning and tone of speech, as the synthesis is machine generated. The biggest bottleneck in our current system, which uses a cascade approach, is time complexity, due to the need for extensive computation and audiovisual processing.

\begin{table}
\begin{tabular}{c c c|c} \hline \hline & Hindi Task 1 & & Hindi Task 2 \\ \hline CNLP\_NITS & Baseline & TeamCSRL & CNLP-NITS \\ \hline
3.86 & 3.49 & 3.08 & 3.37 \\
3.63 & 3.52 & 3.32 & 3.29 \\
3.94 & 3.87 & 3.92 & 3.40 \\
3.94 & 3.83 & 3.51 & 3.38 \\
3.84 & 3.68 & 3.46 & 3.36 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Leadership Positions Based on NLP Challenges
Figure 4: Output page of TRAVID
Figure 5: Survey page of TRAVID
Figure 3: Upload page of TRAVID
## 7 Conclusion
In this paper, we presented an end-to-end video translation system that effectively translates the speaker's native language into the local language of the audience while synchronizing the translated speech with the speaker's lip movements. Our proposed system demonstrates the potential of lip-synced Face-to-Face video translation in enhancing communication between individuals from diverse linguistic backgrounds.
Moreover, our video translation system represents a significant advancement in overcoming the limitations of traditional language translation methods. By incorporating lip synchronization and matching translated speech with the speaker's lip movements, we created an immersive and realistic experience for users. This additional feature, along with the ability to capture nonverbal cues, adds depth and context to the translated content, making it more effective and engaging, especially in educational settings.
Through our system's participation and success in the Lip-Sync 2021 Challenge, we have demonstrated its capability and superiority in achieving accurate lip synchronization and high-quality translations. The evaluations and ratings obtained from both users and professional evaluators validate the effectiveness of our approach, further emphasizing its potential for real-world applications. The positive feedback received through human assessments, as discussed in the evaluation section above, validates the effectiveness of our system. However, further research is necessary to enhance the quality of lip-syncing and explore the system's applicability in different languages and more naturalistic settings. With the ongoing advancements in technology and the increasing demand for multilingual communication, our system has the potential to revolutionize the way language translation is approached. Its adaptability to low-resource system settings makes it accessible and valuable in diverse environments.
Moving forward, we envision further enhancements and refinements to our video translation system, leveraging the advancements in natural language processing, computer vision, and machine learning. To boost video translation efficiency, the videos can be broken into smaller segments, leverage GPUs for parallel processing, batch translate frames, subsample for reduced load, and implement caching for reused translations. By continuously improving the accuracy, fluency, and naturalness of translated content, we aim to provide an unparalleled experience for users, fostering effective cross-cultural communication and knowledge sharing.
In summary, our video translation system stands as a promising solution to the challenges of multilingual communication, offering a comprehensive and immersive experience that unlocks new possibilities for global connectivity and understanding.
## Ethics Statement
We honour the Code of Ethics set by IJCNLP-AACL in our paper and abide by them. We have
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \multirow{2}{*}{**Language**} & \multicolumn{3}{c|}{**Cohen’s \(\kappa\)**} & \multicolumn{3}{c|}{**Feliss’ \(\kappa\)**} & \multicolumn{3}{c}{**Pearson’s \(r\)**} \\ \cline{2-10} & **Lip Sync** & **TO** & **AQ** & **Lip Sync** & **TO** & **AQ** & **Lip Sync** & **TO** & **AQ** \\ \hline
**Bengali** & 0.600 & 0.276 & 0.296 & 0.698 & 0.379 & 0.215 & 0.586 & 0.295 & 0.258 \\
**Hindi** & 0.510 & 0.390 & 0.391 & 0.595 & 0.291 & 0.345 & 0.400 & 0.292 & 0.318 \\
**Nepali** & 0.170 & 0.576 & 0.328 & 0.112 & 0.541 & 0.218 & 0.171 & 0.501 & 0.256 \\
**Telugu** & 0.287 & 0.330 & 0.388 & 0.214 & 0.291 & 0.376 & 0.212 & 0.271 & 0.331 \\ \hline \end{tabular}
\end{table}
Table 2: Average agreement Scores for evaluation of TRAVID generated Videos
used open-source materials in our development to produce new, better and useful resources, which will be made open-source for the keen mind to feed upon and make more improvements in the future. We have not written or in any way propagated false knowledge, hateful speech and anything controversial that may give rise to conflict. We intend good for the brighter future of mankind. We have not stolen anybody's work, and have properly cited and credited where credit is due. Our website does not feature any harmful content or advertisement. Our website is solely educational and is to be used for educational purposes only. Even though we reserve the right to use our paper and production in any way we see fit, we promise to extend them ethically and in an innovative manner.
|
2309.16331 | The Structure of the warped Io Plasma Torus constrained by the Io
Footprint | Standard models of force balance along Jovian field lines predict the
location of the Io plasma torus to be the centrifugal equator of Jupiter's
magnetosphere, i.e. the position along the magnetic field lines farthest away
from Jupiter's rotational axis. In many models, the centrifugal equator is
assumed to lie on a plane, calculated from a (shifted) dipole magnetic field,
rather than on a warped surface which incorporates Jupiter's higher magnetic
field moments. In this work, we use Hubble Space Telescope observations of the
Io Main Footprint to constrain density, scale height and lateral position of
the Io Plasma Torus. Therefore, we employ the leading angle of the footprints
to calculate expected travel times of Alfven waves and carry out an inversion
of the observations. For the magnetic field we use the JRM33 magnetic field
model. The inversion results show peak densities between 1830 and 2032
particles per cubic centimeter and scale heights between 0.92 and 0.97 Jupiter
radii consistent with current literature values. Using a warped multipole
centrifugal equator instead of a planar dipole increases the quality of the fit
by about twenty-five percent. We additionally develop two tests to confirm that
the multipole centrifugal equator from the JRM33 model explains the
applied data set better than the dipole centrifugal equator. The quadrupole
moments alter Io's relative position to the torus, which changes the plasma
density around Io by up to twenty percent. | Stephan Schlegel, Joachim Saur | 2023-09-28T10:44:10Z | http://arxiv.org/abs/2309.16331v1 | # The Structure of the warped Io Plasma Torus constrained by the Io Footprint
###### Abstract
Standard models of force balance along Jovian field lines predict the location of the Io plasma torus to be the centrifugal equator of Jupiter's magnetosphere, i.e. the position along the magnetic field lines farthest away from Jupiter's rotational axis. In many models, the centrifugal equator is assumed to lay on a plane, calculated from a (shifted) dipole magnetic field, rather than on a warped surface which incorporates Jupiter's higher magnetic field moments. In this work, we use Hubble Space Telescope observations of the Io Main Footprint to constrain density, scale height and lateral position of the Io Plasma Torus. Therefore, we employ the leading angle of the footprints to calculate expected travel times of Alfven waves and carry out an inversion of the observations. For the magnetic field we use the JRM33 magnetic field model. The inversion results show peak densities between \(\rho_{0}\,=\,1830\) cm\({}^{-3}\) and \(\rho_{0}\,=\,2032\) cm\({}^{-3}\) and scale heights between \(H\,=\,0.92R_{J}\) and \(H=0.97R_{J}\) consistent with current literature values. Using a warped multipole centrifugal equator instead of a planar dipole increases the quality of the fit by about 25%. We additionally develop two tests to confirm that the multipole centrifugal equator from the JRM33 model fits explains the applied data set better than the dipole centrifugal equator. The quadropole moments alter Io's relative position to the torus, which changes the plasma density around Io by up to \(\Delta\rho/\rho=20\%\).
## 1 Introduction
Io's interaction with the surrounding plasma is an important feature of Jupiter's inner magnetosphere. On the one hand it feeds the Io Plasma Torus by atmospheric sputtering (e.g. Haff et al. (1981); McGrath and Johnson (1987); Saur et al. (2004); Bagenal and Dols (2020) and references therein), where ion-neutral collisions eject particles from Io's atmosphere that generate a neutral torus in Io's orbit. This neutral torus gets successively ionized, forming the Io Plasma Torus. Furthermore, the plasma locally around Io is perturbed by the collision with Io and its atmosphere. These perturbations travel as Alfven waves along the magnetic field lines and accelerate particles close to Jupiter's ionosphere (Crary, 1997; Damiano et al., 2019; Szalay et al., 2018, 2020; Janser et al., 2022). The accelerated particles travel along the magnetic field lines, generating aurora at both hemispheres (Hess et al., 2010; Bonfond et al., 2015; Saur et al., 2013; Schlegel and Saur, 2022), called the Io Footprint. The location of these footprints depends on the magnetic field model and density model along the magnetic field line and have been used to constrain the VIP4 magnetic field model (J. E. Connerney et al., 1998). With the in-situ magnetic field measurements from the Juno spacecraft, a precise magnetic field model for the inner Jovian magnetosphere up to 30\({}^{th}\) degree is available now in the form of the JRM33 (J. Connerney et al., 2022). Therefore, the position of the Io Footprint can now be used to constrain the density profile along the magnetic field lines and give insight about the density structure and location of the Io Plasma Torus.
The torus is often considered to lie at the centrifugal equator, the position along the magnetic field line farthest away from the rotational axis (Khurana et al., 2004; Thomas et al., 2004). In the case of a dipolar magnetic field, the centrifugal equator is planar, roughly 2/3 on the way from the rotational equator to the magnetic equator. However, higher order moments warp the centrifugal equator "like a potato chip" (P. H. Phipps et al., 2020; Herbert et al., 2008). Other previous observation also show a more complex structure of the torus, not consistent with a dipole centrifugal equator (Bagenal, 1994; Schneider and Trauger, 1995; P. H. Phipps et al., 2020). However, the previous work did not demonstrate with quantitative measures that the torus is located at the multipole centrifugal equator.
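The 2/3 rule quoted above can be verified directly for a tilted dipole: along a dipole field line, the point farthest from the rotation axis sits at roughly two thirds of the dipole tilt away from the rotational equator. A small numerical check (pure dipole only, not the warped multipole case discussed below; the tilt value is the dipole tilt quoted later in the text) might look like this:

```python
import numpy as np
from scipy.optimize import minimize_scalar

theta_d = np.deg2rad(10.25)              # Jupiter's dipole tilt

def neg_dist_from_spin_axis(lam, L=5.9):
    """lam: latitude w.r.t. the rotational equator, in the meridian of the tilt."""
    lam_m = lam - theta_d                # magnetic latitude of the same point
    r = L * np.cos(lam_m) ** 2           # dipole field line through Io's L-shell
    return -r * np.cos(lam)              # minus the distance from the rotation axis

res = minimize_scalar(neg_dist_from_spin_axis, bounds=(-0.5, 0.5), method="bounded")
print(res.x / theta_d)                   # ~0.66: the centrifugal equator lies ~2/3 of
                                         # the way toward the magnetic equator
```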
The aim of this work is to quantitatively demonstrate that the plasma torus is centered around the multipole centrifugal equator. Therefore, we use the positions of the Io Footprint to constrain a density model of the Io Plasma Torus and its location depending on System III longitude. For that, we map Alfven waves along the magnetic field lines and compare the resulting expected location of the footprint to Hubble Space Telescope observations and infer Alfven wave travel times. We use these travel times as an input for an inversion and analyze the output regarding the hypothesis of a dipole or multipole centrifugal equator.
## 2 Model and Methodology of the Inversion
### Location of the Io footprints
When Jupiter's co-rotating plasma collides with Io and its tenuous atmosphere it gets perturbed. These perturbations propagate as Alfven waves along the magnetic field lines that are frozen into the plasma. Close to Jupiter in the acceleration region, these waves cause wave-particle interaction and accelerate particles towards and away from Jupiter. The accelerated particles collide with molecules in Jupiter's upper atmosphere and create auroral emissions. Since the accelerated particles travel along the magnetic field lines and the Alfven velocity close to Jupiter approaches the speed of light, the exact height of the acceleration region or the emissions does not affect the travel time significantly and we can assume that the emissions are created at the location where the Alfven waves connect to Jupiter's atmosphere. Therefore, we assume that Io's main footprint is located at the position of Io's main Alfven wing (MAW) on Jupiter's 1 bar level. Since the Alfven waves get reflected at phase velocity gradients, which are most prominent at Jupiter's ionosphere and the Io torus boundary, there is a multitude of secondary footprints. Furthermore, the particles in the acceleration region are also accelerated away from Jupiter, creating footprints on the opposing hemisphere, which can results in leading spots that are upstream from the MAW-footprint. This work only focuses on the location of the MAW-footprints, since the secondary spots are dependent on the reflection pattern and the leading spot is affected by broadening due to electron drifting of about \(\Delta\varphi\approx 0.7^{\circ}\) corresponding to \(\Delta l\approx 200\) km broadening of the leading spot on Jupiter's surface for high energy electrons with energies of \(E_{e}=1\) MeV (Mauk et al., 1997). This results in a difficult determination of the exact position of the leading spot and its corresponding magnetic field line.
The location of the MAW-footprint can be calculated with the Alfven characteristic
\[z^{\pm}=\mathbf{v}\pm\mathbf{v}_{A}, \tag{1}\]
with the plasma velocity \(\mathbf{v}\) and the Alfven phase velocity
\[\mathbf{v}_{A}=\frac{\mathbf{B}}{\sqrt{\rho\mu_{0}}}, \tag{2}\]
depending on the magnetic field strength \(\mathbf{B}\) and the plasma mass density \(\rho\). Since the plasma is very dilute at high latitudes, a relativistic correction for the Alfven velocity has to be applied:
\[v_{A}^{*}=\frac{v_{A}}{\sqrt{1+v_{A}^{2}/c^{2}}} \tag{3}\]
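The following minimal Python sketch (not part of the paper; the constants and the example plasma parameters are illustrative assumptions) evaluates Equations (2) and (3) for a single point along a field line:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]
C = 2.998e8          # speed of light [m/s]

def alfven_speed(B, rho):
    """Non-relativistic Alfven speed v_A = B / sqrt(mu0 * rho), Eq. (2)."""
    return B / np.sqrt(MU0 * rho)

def alfven_speed_rel(B, rho):
    """Relativistically corrected speed v_A* = v_A / sqrt(1 + v_A^2 / c^2), Eq. (3)."""
    v_a = alfven_speed(B, rho)
    return v_a / np.sqrt(1.0 + (v_a / C) ** 2)

# Illustrative torus-like values: B ~ 1900 nT, n ~ 2000 cm^-3 of ~22 amu ions.
rho = 2000e6 * 22 * 1.66054e-27          # number density -> mass density [kg/m^3]
print(alfven_speed_rel(1900e-9, rho))    # ~2e5 m/s inside the torus
```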
#### 2.1.1 HST observations
The position of the footprints relative to Io can be described as the leading angle \(\varphi=\varphi_{Io}-\varphi_{F}\), which is the longitudinal difference between Io's orbital position \(\varphi_{Io}\) and the Io footprint \(\varphi_{F}\) in System III coordinates. The positions here are projected to a height of 900 km above the 1 bar level of Jupiter. The data used here has been published as supplementary material by Bonfond et al. (2017) and is shown in Figure 1. The observations have been mostly conducted between February and June 2007. For the northern footprint, additional data from 2005 and 2006 has been used. The errors \(\varepsilon_{\varphi}\) are mostly
due to inaccuracies in the determination of Jupiter's position using the limb fitting method as described in Bonfond et al. (2009). This likely leads to systematic errors in the longitudinal position of the footprints. Furthermore, the observations of the same visit can not necessarily be regarded as independent from each other. This would mean that the errors of clustered data might be correlated. Close to the limb of Jupiter, the error bars grow larger on account of projection effects.
#### 2.1.2 The magnetic field model
The Alfven waves travel along the magnetic field lines, which in this model are assumed to be fixed in Jupiter's rotating frame. Therefore, the location of the footprints only depends on the magnetic field lines connecting Jupiter's ionosphere to Io's orbit. This leads to all Io footprints being confined to one line on the surface of each Jovian hemisphere. Though the magnetic field in Io's vicinity can often be regarded as a dipole of strength \(M=4.177\) G and a latitudinal tilt of \(\vartheta_{D}=10.25^{\circ}\) towards \(\varphi_{D}=196.38^{\circ}\) western longitude, the magnetic field closer to Jupiter is more complex. We calculated the footprint trajectories, shown in black in Figure 2, using the JRM33 magnetic field model by J. Connerney et al. (2022). This model has been created using the magnetic field data of the first 33 Juno flybys. We used all available Gauss coefficients \(g_{l}^{m}\) and \(h_{l}^{m}\) up to degree \(l=30\) to map Io's orbit along the magnetic field lines to the dynamically flattened (1/15.4) 1 bar surface of Jupiter (J. Connerney et al., 2022).
As can be seen, the footprints are generally drawn towards higher magnetic field strength. Since the magnetic field in the northern hemisphere is more complex than in the southern hemisphere, the trajectory there spans a broader range of latitude (\(45^{\circ}<\vartheta_{F}<83^{\circ}\)). Furthermore, the separation between the footprint mappings is smaller where the magnetic field is stronger, which implies a slower movement of the Io footprint over Jupiter's surface as shown in Figure 3. There, the travel time has a lower influence on the leading angle \(\varphi\) than at locations where the spacing is larger. The leading angles \(\varphi_{B}\) that only result from the magnetic field model are shown in Figure 4, where no travel time of the Alfven waves is assumed. Here, the change of the leading angles \(\dot{\varphi}_{B}=\dot{\varphi}_{Io}-\dot{\varphi}_{F}\) only depends on the difference of the angular velocities of the footprints \(\dot{\varphi}_{F}\) (solid
Figure 1: Leading angles of the northern (blue) and southern (red) footprint, calculated from the observations published in Bonfond et al. (2017). Many of the data points are clustered, especially visible for the southern footprint. The lack of observations between \(0^{\circ}\) and \(70^{\circ}\) for the northern footprint is because of the high angular velocity of the footprint in this area. Therefore, the footprint remains at this range only for a short time (\(\approx 50\) min).
Figure 3: The longitudinal angular velocity of the northern (blue) and southern (red) footprint. The synodic angular velocity of Io is shown as a reference (yellow dashed line) at about \(0.0077^{\circ}\)/s. The northern magnetic field is more complex, leading to a more variable angular velocity of the northern footprint.
Figure 2: The magnetic field strength on the flattened surface of Jupiter, calculated with the JRM33 model (J. Connerney et al., 2022). The black dots indicate the trajectory of the Io footprint in the northern and southern hemisphere in \(1^{\circ}\) longitudinal separations along Io’s orbit. The grey squares are the observational positions of the Io main footprints.
lines in Figure 3) and of Io \(\dot{\varphi}_{Io}\) (yellow dashed line in Figure 3). Qualitatively, the observations (black with error bars) match the behaviour of the calculations (solid lines). Since no travel time is included here, the calculations generally underestimate the leading angles. Where the travel time has low influence, e.g. between \(150^{\circ}\) and \(200^{\circ}\) for the northern footprint (blue), the observations are fairly well matched already. On the other hand, where travel time has a strong influence, e.g. close to \(0^{\circ}\) for the northern footprint, the mapping strongly underestimates the observations.
#### Influence of the Io Plasma Torus mass density
The Io plasma torus is generally assumed to be centered around the centrifugal equator of Jupiter's magnetosphere, i.e., the position along the magnetic field lines mapping to Io's orbit that is farthest from Jupiter's rotation axis. A tilted or an offset tilted dipole results in the torus being confined to a plane tilted by \(\theta_{C}=6.83^{\circ}\) towards \(\varphi_{D}=196.38^{\circ}\) western longitude. However, moments of higher degree, especially the quadrupole, still have an influence on the magnetic field at Io's orbit (P. H. Phipps et al., 2020). The discrepancy in the latitudinal position of the centrifugal equator between the full JRM33 magnetic field model and its dipole components alone can be up to \(1.5^{\circ}\), which translates to about \(0.15R_{J}\) or \(6R_{Io}\). The centrifugal equator is a good estimate for the position of the plasma torus as it is derived from the force balance between pressure force and centrifugal force along the magnetic field lines.
Figure 4: The Leading Angle without travel time assumed for the northern (red) and southern (blue) footprint. The leading angle mostly underestimates the data (black with error bars), since the travel time increases the leading angle. This is especially apparent between 270 and \(90^{\circ}\) for the northern and 150 and \(270^{\circ}\) for the southern footprint, where Io should be closer to the southern and northern torus boundary, respectively.
The torus itself is often regarded as being split into three parts (e.g. Bagenal and Dols (2020); P. H. Phipps et al. (2018) and references therein): the cold torus inside the orbit of Io; the ribbon region, where the plasma density is highest; and the warm torus, which starts roughly at the orbit of Io and decreases in density outwards. Io itself is mostly located inside the warm torus, but due to a dawn-dusk asymmetry, Io's orbit can cross into the ribbon region (Barbosa and Kivelson, 1983).
The most widely used model for the density distribution \(\rho\) is in the form of
\[\rho(s)=\rho_{0}\exp[-s^{2}/H^{2}], \tag{4}\]
with a peak density \(\rho_{0}\) at the centrifugal equator and a Gaussian decrease with distance \(s\) to the torus center along the magnetic field line (Gledhill, 1967; P. H. Phipps et al., 2018, 2021; Bagenal, 1994). This coincides with a force balance between centrifugal force and pressure gradient for an isothermal plasma. The scale height \(H\) and plasma temperature \(T\) are related (Thomas et al., 2004) and can be approximated by
\[H=\sqrt{\frac{2k_{B}T}{3\Omega_{J}^{2}\langle m\rangle}}, \tag{5}\]
with Jupiter's rotational frequency \(\Omega_{J}\) and the mean ion mass \(\langle m\rangle\). Dougherty et al. (2017) also use pressure anisotropy, ambipolar electric fields and multiple species to derive a density distribution along the magnetic field line. However, in this work we will use a simplified density model of the form of Equation (4) in order to reduce the number of fitting parameters for the inversion.
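As a concrete illustration of Equations (4) and (5), the following Python sketch (illustrative only; the temperature and ion mass are assumed example values, not fitted quantities from this work) evaluates the Gaussian density profile and the associated scale height:

```python
import numpy as np

K_B = 1.380649e-23                       # Boltzmann constant [J/K]
AMU = 1.66054e-27                        # atomic mass unit [kg]
OMEGA_J = 2 * np.pi / (9.925 * 3600.0)   # Jupiter's rotation rate [rad/s]
R_J = 7.1492e7                           # Jupiter's equatorial radius [m]

def scale_height_rj(T_ev, mean_amu=22.0):
    """H = sqrt(2 k_B T / (3 Omega_J^2 <m>)), Eq. (5), returned in R_J."""
    T_k = T_ev * 1.602176634e-19 / K_B   # eV -> K
    h = np.sqrt(2.0 * K_B * T_k / (3.0 * OMEGA_J**2 * mean_amu * AMU))
    return h / R_J

def number_density(s_rj, n0=2000.0, h_rj=1.0):
    """n(s) = n0 exp(-s^2 / H^2) along the field line, Eq. (4), in cm^-3."""
    return n0 * np.exp(-(s_rj / h_rj) ** 2)

print(scale_height_rj(70.0))    # roughly 1 R_J for an assumed ~70 eV, 22 amu plasma
print(number_density(0.5))      # density half a scale height away from the torus center
```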
Hinton et al. (2019) used the JRM09 magnetic field model (J. Connerney et al., 2018) and the CAN model (J. Connerney et al., 1981) together with the density model by Dougherty et al. (2017) to calculate travel times from Io's orbit towards Jupiter. The authors fitted the travel times with a third degree Fourier series corresponding to
\[t_{Fit}(\lambda_{III})=\underbrace{A_{0}+A_{1}\cos(\lambda_{III}+a_{1})}_{1^{st}}+\underbrace{A_{2}\cos(2\lambda_{III}+a_{2})}_{2^{nd}}+\underbrace{A_{3}\cos(3\lambda_{III}+a_{3})}_{3^{rd}} \tag{6}\]
and found average travel times of 433 s and 401 s for the northern and southern hemisphere, respectively. The difference is due to the asymmetry of the magnetic field. In this work, we use the travel times to constrain a density model corresponding to Equation (4) with a peak density at the centrifugal equator. To visualize the data shown in Figure 1 for that purpose more clearly, the leading angles have been converted to travel times \(t_{0}\) using the synodic angular velocity \(\Omega_{syn}\) of Io around Jupiter with
\[t_{0}=\frac{\varphi_{Io}-\varphi_{F}}{\Omega_{syn}}. \tag{7}\]
Furthermore, the errors are due to inaccuracies in the determination of the footprint positions, but not the position of Io. Therefore, the error in travel time \(\varepsilon_{t}=\varepsilon_{\varphi}/\dot{\varphi}_{F}\) has to be weighted corresponding to the current longitudinal velocity of the footprint according to Figure 3. The calculated travel time data are depicted in Figure 5. The data has been fitted using a Fourier fit up to degree three corresponding to Equation (6). The misfits \(\chi=\sqrt{1/N\sum(t_{0}-t_{Fit})^{2}/\varepsilon_{t}^{2}}\) are 0.76, 0.68 and 0.65 for the northern and 0.63, 0.38 and 0.30 for the southern footprint for the fits of degree one, two and three, respectively. The fitting values are shown in Table 1 together with the values calculated from the model of Dougherty et al. (2017) by Hinton et al. (2019).
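The conversion of Equation (7) and the fit of Equation (6) can be sketched as follows (a synthetic interface, not the actual analysis pipeline; the fit is written in an equivalent cosine/sine expansion from which the amplitudes \(A_k\) and phases \(a_k\) can be recovered):

```python
import numpy as np

OMEGA_SYN = 0.0077   # Io's synodic angular velocity [deg/s], cf. Figure 3

def travel_time(phi_io_deg, phi_f_deg):
    """t_0 = (phi_Io - phi_F) / Omega_syn, Eq. (7), in seconds."""
    return (np.asarray(phi_io_deg) - np.asarray(phi_f_deg)) / OMEGA_SYN

def fourier_design(lam_deg, degree=3):
    """Design matrix for t = c_0 + sum_k [c_k cos(k lam) + s_k sin(k lam)],
    equivalent to the A_k cos(k lam + a_k) terms of Eq. (6)."""
    lam = np.deg2rad(np.asarray(lam_deg))
    cols = [np.ones_like(lam)]
    for k in range(1, degree + 1):
        cols += [np.cos(k * lam), np.sin(k * lam)]
    return np.stack(cols, axis=1)

def fit_travel_times(lam_deg, t0, eps_t, degree=3):
    """Error-weighted least-squares fit; returns coefficients and reduced misfit chi."""
    G = fourier_design(lam_deg, degree)
    coeff, *_ = np.linalg.lstsq(G / eps_t[:, None], t0 / eps_t, rcond=None)
    chi = np.sqrt(np.mean(((t0 - G @ coeff) / eps_t) ** 2))
    return coeff, chi
```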
Overall the average travel time calculated from the footprint positions is slightly higher and the travel times are more variable compared to the values calculated from the
| Fit / Model | \(A_{0}\) [s] | \(A_{1}\) [s] | \(a_{1}\) [\({}^{\circ}\)] | \(A_{2}\) [s] | \(a_{2}\) [\({}^{\circ}\)] | \(A_{3}\) [s] | \(a_{3}\) [\({}^{\circ}\)] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| First Degree North | 579.4 | -579.3 | -170.44 | 0 | 0 | 0 | 0 |
| Second Degree North | 603.6 | -728.2 | -178.3 | 175.0 | -12.3 | 0 | 0 |
| Third Degree North | 534.5 | -526.6 | 178.9 | 246.0 | 2.8 | 23.0 | -100.5 |
| Hinton et al. North | 432.9 | 289.3 | -104.3 | 21.2 | 77.0 | 8.4 | 46.7 |
| First Degree South | 507.1 | 360.7 | 142.5 | 0 | 0 | 0 | 0 |
| Second Degree South | 478.1 | 456.4 | 161.8 | 236.0 | -51.0 | 0 | 0 |
| Third Degree South | 479.2 | 440.1 | 146.8 | 266.5 | -45.4 | 11.3 | -162.4 |
| Hinton et al. South | 400.7 | 260.7 | 65.2 | 19.4 | -87.9 | 10.5 | -155.9 |

Table 1: Fits of the travel time up to third degree according to Equation (6), corresponding to the curves shown in Figure 5. As a reference, the third degree fit of the travel times calculated by Hinton et al. (2019), based on the model by Dougherty et al. (2017), is given.
Figure 5: The calculated travel times for the northern (blue) and southern (red) footprints with their corresponding error bar \(\varepsilon_{t}\). The solid line is a first degree fit and the dashed lines is a second degree Fourier fit using Equation (6). The values are computed from the JRM33 (J. Connerney et al., 2022) mapping and the footprint data published by (Bonfond et al., 2017).
model by Dougherty et al. (2017). The higher travel times indicate slower Alfven velocities and therefore an overall higher plasma content of the torus. The higher variability of the travel times implies a larger influence of Io's relative distance to the torus center, which could either be explained by a more variable torus position or a smaller scale height. Another interesting fact is that the southern travel times are generally shorter and overall less variable due to the more homogeneous magnetic field in the South. Therefore, the southern travel times reflect the plasma density along the field line better than the northern travel times. The variation of travel times can mostly be explained by a relative shift of Io's position with respect to the torus center. Therefore, the strong decrease in misfit from first to second degree Fourier series already shows that a warped centrifugal equator due to quadrupole moments fits the data much better than an offset dipole centrifugal equator. The fit of the northern footprints is mostly constrained by the observational data between \(130^{\circ}\) and \(200^{\circ}\), which has fairly small error bars. However, all fits show an \(a_{1}\) value of around \(180^{\circ}\), which indicates that the torus is tilted in line with the dipole tilt of the JRM33 model of \(\varphi_{D}=196.38^{\circ}\). The fairly small decrease in misfit from second to third degree fits (0.03 for the northern and 0.08 for the southern footprints) hints that the position of the torus is mostly constrained by dipole and quadrupole moments.
### Cost function and inversion method
The travel time data, converted according to Equation (7), is now fitted using a density model corresponding to Equation (4). The cost function \(\Phi\) of this inversion scheme can be written as
\[\Phi=\sum_{i}\left(\frac{t_{0,i}-t_{\rho,i}}{\varepsilon_{t,i}}\right)^{2}, \tag{8}\]
with the calculated travel times
\[t_{\rho}(\rho_{0},H)=\int\limits_{Io}^{J}\frac{1}{v_{A}^{*}}\,ds \tag{9}\]
mapped along the magnetic field line. It is important to note that the field line connected to the footprint is used, since this is the field line that the Alfven waves propagate on starting from Io's position. To minimize the cost function, a Monte-Carlo inversion method has been used to sweep the parameter space. For the scale height \(H\), values between \(H_{min}=0.4R_{J}\) and \(H_{max}=1.6R_{J}\) have been used, and for the peak number density \(n_{0}=\rho_{0}/\langle m\rangle\), values between \(n_{min}=500\) cm\({}^{-3}\) and \(n_{max}=3500\) cm\({}^{-3}\). With this approach, the sensitivity of the inversion towards the fitting parameters \(H\) and \(n_{0}\) as well as the correlation between them can be analyzed.
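A possible implementation sketch of this inversion step is given below (assumed data structures, not the published code): each observation provides a pre-traced field line from the footprint to Io's orbit, Equation (9) is evaluated by summing \(ds/v_{A}^{*}\) along it, and Equation (8) is minimized by the random parameter sweep described above.

```python
import numpy as np

AMU, MU0, C, R_J = 1.66054e-27, 4e-7 * np.pi, 2.998e8, 7.1492e7

def travel_time_model(field_line, n0_cm3, h_rj, mean_amu=22.0):
    """Eq. (9): integrate 1/v_A* from Io's orbit to Jupiter along one field line.
    `field_line` is assumed to hold arrays 'B' [T], 's' [m] (distance to the torus
    center along the line) and 'ds' [m] (segment lengths)."""
    rho = n0_cm3 * 1e6 * mean_amu * AMU * np.exp(-(field_line["s"] / (h_rj * R_J)) ** 2)
    v_a = field_line["B"] / np.sqrt(MU0 * rho)
    v_rel = v_a / np.sqrt(1.0 + (v_a / C) ** 2)
    return np.sum(field_line["ds"] / v_rel)

def misfit(field_lines, t_obs, eps_t, n0_cm3, h_rj):
    """Eq. (8) expressed as a reduced chi value."""
    t_mod = np.array([travel_time_model(fl, n0_cm3, h_rj) for fl in field_lines])
    return np.sqrt(np.mean(((t_obs - t_mod) / eps_t) ** 2))

def monte_carlo_sweep(field_lines, t_obs, eps_t, n_trials=5000, seed=0):
    """Random sweep of the (n0, H) parameter space; the best fit minimizes chi."""
    rng = np.random.default_rng(seed)
    n0 = rng.uniform(500.0, 3500.0, n_trials)   # peak number density [cm^-3]
    h = rng.uniform(0.4, 1.6, n_trials)         # scale height [R_J]
    chi = np.array([misfit(field_lines, t_obs, eps_t, n, s) for n, s in zip(n0, h)])
    return n0, h, chi
```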
## 3 Inversion Results
In a first step, the travel times are fitted with the peak density located at both the dipole and the JRM33 multipole centrifugal equator and compared to values in the literature. In a second step, the position of the torus is fitted separately in another inversion to evaluate whether the dipole or multipole centrifugal equator explains the data better.
### Best fit models
For the first inversion the peak density \(n_{0}\) is located at the JRM33 dipole and multipole centrifugal equator. The resulting leading angles are shown in Figure 6. For the dipole model, the values for peak density and scale height are \(n_{0}=1900\) cm\({}^{-3}\) and
\(H=1.01R_{J}\), while for the multipole model the values are \(n_{0}=2133\) cm\({}^{-3}\) and \(H=1.07R_{J}\), respectively. The two models do not differ much in travel times and therefore in leading angle. However, the misfit of \(\chi=0.58\) of the multipole best fit is considerably improved compared to the misfit of \(\chi=0.78\) of the dipole model. This is mostly due to some very low error observations of the southern footprint between \(50^{\circ}\) and \(100^{\circ}\) eastern longitude. For the southern footprint, the density model has a more consistent impact on the travel time and leading angle due to the longitudinally more homogeneous magnetic field. In Figure 7 the misfit for the whole Monte-Carlo inversion parameter domain is shown for both models. Since the errors of the observations are considerably large and comparable to the overall travel time (compare Figure 5), a large parameter space can fit the observations with a misfit of \(\chi<1\). This allows us to estimate an uncertainty for the best fit parameters. For the dipole model, we find \(\Delta n_{0}=321\) cm\({}^{-3}\) and \(\Delta H=0.13R_{J}\). For the multipole model, the uncertainties are larger due to the overall better fit, and we get \(\Delta n_{0}=413\) cm\({}^{-3}\) and \(\Delta H=0.17R_{J}\). We can further compare the best fit models to the values given by P. H. Phipps et al. (2018) for the warm torus and ribbon region and by Dougherty et al. (2017) and Bagenal (1994) for the vicinity of Io's orbit, shown as diamonds in Figure 7. Generally the values in the literature are higher in both peak density and scale height, but are mostly inside the \(\chi<1\) region for the multipole centrifugal equator. The results of the inversion overall show a good agreement with the literature, especially the results of the multipole model inversion.
To quantify the improvement of the multipole centrifugal equator, a Monte-Carlo test was performed. In this test, each data point has been randomized by adding Gaussian noise corresponding to its calculated error to its value. The number of data points that are fitted better by one model than by the other has been counted. This procedure has been repeated \(N=100000\) times. In the end, 91.4% of the randomized data points are fitted better by the multipole centrifugal equator and only 8.6% of the data points are fitted better by the dipole centrifugal equator model.
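The noise test can be sketched as follows (assumed arrays of observed travel times, their errors, and the two model predictions; not the original script):

```python
import numpy as np

def multipole_preference(t_obs, eps_t, t_dipole, t_multipole, n_trials=100_000, seed=0):
    """Fraction of randomized data points that lie closer to the multipole-model
    prediction than to the dipole-model prediction."""
    rng = np.random.default_rng(seed)
    better = 0
    for _ in range(n_trials):
        t_rand = t_obs + rng.normal(0.0, eps_t)   # add each point's own Gaussian error
        better += np.sum(np.abs(t_rand - t_multipole) < np.abs(t_rand - t_dipole))
    return better / (n_trials * len(t_obs))
```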
Figure 6: The best fit models for the northern (blue) and southern (red) leading angles for both, the dipole (solid line) and multipole (dashed) centrifugal equator model. The multipole model generally fits the data better.
### Position of the Io Plasma Torus
We conducted a study to investigate to what degree the JRM33 multipole moments influence the position of the Io plasma torus and therefore the density in Io's vicinity.
In this study, we first calculated the change in the position of the Io Plasma Torus with each additional degree of the Gauss coefficients of the JRM33 model, as can be seen in the upper left panel in Figure 8. From that, the variation of Io's relative position to the torus center due to each additional degree up to \(l=5\) has been calculated (blue on the upper right panel). We then used a torus density model according to Equation (4) with a peak density of \(\rho_{0}\) = 2000 cm\({}^{-3}\) and a scale height of \(H\) = \(1R_{J}\) to calculate the maximum density change in Io's vicinity due to each additional degree. As can be seen, the density changes by less than \(\Delta\rho=20\) cm\({}^{-3}\) for higher moments \(l>3\). We therefore conclude that the quadrupole moments are sufficient to describe the position of the torus.
To estimate the effect of the shift in position of the plasma torus due to the quadrupole moments on the plasma density in Io's vicinity, we calculated the density at Io's orbit for a dipole and a quadrupole model. The results are shown in the lower panel of Figure 8. The largest discrepancy between the two models is around \(\lambda_{III}\) = 180\({}^{\circ}\), where the density differs by about \(\Delta\rho\approx 250\) cm\({}^{-3}\) or \(\Delta\rho/\rho\approx 20\%\).
#### 3.2.1 Inversion of the Plasma Torus Position
To test whether the multipole centrifugal equator generally fits the data better, the position of the torus is also inverted. Since the data can already be fitted by a large parameter space for two parameters, we refrain from adding more inversion parameters. Instead, we use the values of peak density and scale height from the best fit models in the last section and use the amplitude \(\theta_{0}\) and phase \(\Delta\lambda\) of the pi-periodicity of the location of the torus corresponding to the quadrupole moments as new inversion parameters.
Figure 7: Misfit contour of the Monte Carlo inversion for the dipole (left) and multipole (right) centrifugal equator model. The peak density and scale height for the warm torus (purple) and ribbon (green) of the model of P. H. Phipps et al. (2018) as well as the model by Bagenal (1994) (yellow) and Dougherty et al. (2017) (red) are indicated as diamonds. The scale heights of the latter two are calculated with Equation (5).
The lateral displacement \(\theta\) of the torus relative to the rotational equator can be written as
\[\theta(\lambda_{III})=\theta_{D}(\lambda_{III})+\theta_{0}\sin(2\lambda_{III}+ \Delta\lambda), \tag{10}\]
where \(\theta_{D}\) is the tilt of the dipole centrifugal equator with \(\theta_{D}(196.38^{\circ})=-6.83^{\circ}\). The displacements from the dipole centrifugal equator resulting from the inversions are shown in Figure 9. The dipole model (\(n_{0}=1900\) cm\({}^{-3}\), \(H=1.01R_{J}\)) best fit parameters are \(\theta_{0}=1.13^{\circ}\) and \(\Delta\lambda=81^{\circ}\) with a misfit of \(\chi=0.61\), compared to the previous misfit of \(\chi=0.78\) with \(\theta_{0}=0^{\circ}\). The multipole model (\(n_{0}=2133\) cm\({}^{-3}\) and \(H=1.07R_{J}\)) best fit parameters are \(\theta_{0}=1.04^{\circ}\) and \(\Delta\lambda=62^{\circ}\) with a misfit of \(\chi=0.52\). Generally, the fit improves, though not significantly for the multipole model, where the assumed position of the torus already seems to be sufficient. The new best fit torus positions are comparable to the JRM33 multipole centrifugal equator position (blue line in Figure 9) in phase and amplitude. This and the significant decrease in misfit for the dipole centrifugal equator model indicate that the torus is indeed located at the centrifugal equator of the JRM33 magnetic field model rather than at a simple dipole centrifugal equator.
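For illustration, Equation (10) can be evaluated as in the sketch below (the cosine form of the dipole term \(\theta_{D}(\lambda_{III})\) is an assumption chosen to be consistent with \(\theta_{D}(196.38^{\circ})=-6.83^{\circ}\); the amplitude and phase are the multipole-model best fit values quoted above):

```python
import numpy as np

def torus_displacement_deg(lam3_deg, theta0=1.04, dlam=62.0,
                           dipole_tilt=6.83, dipole_lon=196.38):
    """theta(lambda_III) of Eq. (10): dipole tilt plus a pi-periodic quadrupole term."""
    lam = np.deg2rad(np.asarray(lam3_deg))
    theta_dipole = -dipole_tilt * np.cos(lam - np.deg2rad(dipole_lon))  # assumed form
    return theta_dipole + theta0 * np.sin(2.0 * lam + np.deg2rad(dlam))

lam = np.arange(0.0, 360.0, 1.0)
theta = torus_displacement_deg(lam)
print(theta.min(), theta.max())   # warped centrifugal equator, roughly +/- 7 deg
```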
## 4 Summary & Conclusion
We used Hubble Space Telescope observations of the Io Main Footprint as data to constrain a density model for the Io Plasma Torus. In this model we used the JRM33 magnetic field model by J. Connerney et al. (2022) to map the magnetic field lines connecting the footprints to Io's orbit and to calculate the leading angle and Alfven wave travel time. The travel time has then been used as data for a Monte-Carlo inversion to constrain the peak density and scale height of the torus. In the first two inversions the position of the plasma torus is fixed once at the dipole centrifugal equator and once at the multipole centrifugal equator of the JRM33 magnetic field model. The results show peak densities of \(n_{0}=(1900\pm 321)\) cm\({}^{-3}\) and \(n_{0}=(2133\pm 413)\) cm\({}^{-3}\) and scale heights of \(H=(1.01\pm 0.13)R_{J}\) and \(H=(1.07\pm 0.17)R_{J}\) for the dipole and multipole model, respectively. These values are in agreement with, albeit generally lower than, those of other models in the literature. Both models fit the data well. However, the misfit \(\chi=0.58\) of the multipole model is significantly lower than the misfit \(\chi=0.78\) of the dipole model. This agrees with a Monte Carlo test, where \(91.4\%\) of the data points are better fitted by the multipole model.
In a second set of inversions the position of the plasma torus is fitted. The amplitude and phase shift of the lateral displacement are used as inversion parameters while the scale height and peak density are kept fixed. The results show agreement with the predicted JRM33 multipole centrifugal equator location of the Io Plasma Torus.
We have shown that this method is suitable to constrain the peak density and scale height of the Io Plasma Torus and yields results comparable to literature values. We demonstrate quantitatively that the torus is warped along the multipole centrifugal equator and that the data cannot sufficiently be explained by a simple dipole centrifugal equator. The latitudinal shift of a multipole compared to a dipole centrifugal equator can be up to \(1.5^{\circ}\), which translates to a change of Io's relative position to the torus center of up to \(0.15R_{J}\approx 6R_{Io}\). In addition to the synodic period variation of \(\Delta\rho\approx 800\) cm\({}^{-3}\), Io is exposed to a half synodic density variation of \(\Delta\rho\approx 250\) cm\({}^{-3}\), which corresponds to a maximum relative change of \(\Delta\rho/\rho=20\%\). This needs to be included in high precision models of the Io plasma interaction to, for example, model the atmospheric sputtering processes or the evolution of the Io footprint brightness. The latter might be less faint near the minimum around 180 degrees compared to the minimum around 330 degrees (Wannawichian et al., 2010).
The method presented here uses the integrated travel times of the Alfven waves and is therefore able to constrain the mass density along the Io flux tube. However, the currently available data is not sufficient to distinguish between different species and scale heights of different populations.
Figure 8: The position of the centrifugal equator has been calculated with different degrees \(l\) of the JRM33 model. The position of the centrifugal equator relative to the rotational equator for dipole (l=1), quadrupole (l=2) and full JRM33 model (l=30) is shown on the right upper panel and compared to the model by P. Phipps and Bagenal (2021) (Equation (2)) at the distance of Io as shown by the purple dashed line. From the variation for different degrees of Io’s relative position to the torus center (blue in left panel), the maximum density variation due to the higher degrees in Io’s vicinity has been calculated (red in left upper panel) using a scale height density model according to Equation (4). As can be seen, the quadrupole moment of the JRM33 model is sufficient to calculate the position of the centrifugal equator at Io’s orbit. The lower panel shows the plasma number density at Io’s orbit for the dipole and quadrupole centrifugal equator model. A peak density of \(n_{0}=2000\) cm\({}^{-3}\) and a scale height of \(H=1R_{J}\) is used. The maximum difference between the two models is at \(\lambda_{III}=180^{\circ}\) at about \(\Delta\rho\approx 250\) cm\({}^{-3}\).
Figure 9: Best fit models for the torus positions for the best fit peak density and scale height of the dipole (red) and multipole (yellow) model inversions. Phase and amplitude of both best fit models are comparable to the location of the JRM33 multipole centrifugal equator, shown in blue. Therefore, the location of Io’s footprint clearly indicates a pi-periodicity in the Alfvén wave travel times and therefore in Io’s relative position to the torus center. A purely dipole centrifugal equator is not sufficient to explain the data.
Furthermore, the non-uniqueness of the inversion method hinders an interpretation regarding a more complex density model. Nevertheless, with additional observations and more accurate positions of the Io main and reflected footprints this method could provide further insights into the density structure along the Io flux tube. Additional data could be used to constrain longitudinal and time variability, and the density model could be adapted to incorporate the effect of different species and scale heights.
## 5 Open Research
The processed travel times according to Equation (7) and Figure 5, the used magnetic field mapping using the JRM33 model and the inversion results as shown in Figure 7 are available and published in Schlegel and Saur (2023).
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 884711).
|
2309.09211 | Neural Gradient Learning and Optimization for Oriented Point Normal
Estimation | We propose Neural Gradient Learning (NGL), a deep learning approach to learn
gradient vectors with consistent orientation from 3D point clouds for normal
estimation. It has excellent gradient approximation properties for the
underlying geometry of the data. We utilize a simple neural network to
parameterize the objective function to produce gradients at points using a
global implicit representation. However, the derived gradients usually drift
away from the ground-truth oriented normals due to the lack of local detail
descriptions. Therefore, we introduce Gradient Vector Optimization (GVO) to
learn an angular distance field based on local plane geometry to refine the
coarse gradient vectors. Finally, we formulate our method with a two-phase
pipeline of coarse estimation followed by refinement. Moreover, we integrate
two weighting functions, i.e., anisotropic kernel and inlier score, into the
optimization to improve the robust and detail-preserving performance. Our
method efficiently conducts global gradient approximation while achieving
better accuracy and generalization ability of local feature description. This
leads to a state-of-the-art normal estimator that is robust to noise, outliers
and point density variations. Extensive evaluations show that our method
outperforms previous works in both unoriented and oriented normal estimation on
widely used benchmarks. The source code and pre-trained models are available at
https://github.com/LeoQLi/NGLO. | Qing Li, Huifang Feng, Kanle Shi, Yi Fang, Yu-Shen Liu, Zhizhong Han | 2023-09-17T08:35:11Z | http://arxiv.org/abs/2309.09211v1 | # Neural Gradient Learning and Optimization for Oriented Point Normal Estimation
###### Abstract.
We propose Neural Gradient Learning (NGL), a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation. It has excellent gradient approximation properties for the underlying geometry of the data. We utilize a simple neural network to parameterize the objective function to produce gradients at points using a global implicit representation. However, the derived gradients usually drift away from the ground-truth oriented normals due to the lack of local detail descriptions. Therefore, we introduce Gradient Vector Optimization (GVO) to learn an angular distance field based on local plane geometry to refine the coarse gradient vectors. Finally, we formulate our method with a two-phase pipeline of coarse estimation followed by refinement. Moreover, we integrate two weighting functions, i.e., anisotropic kernel and inlier score, into the optimization to improve the robust and detail-preserving performance. Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability of local feature description. This leads to a state-of-the-art normal estimator that is robust to noise, outliers and point density variations. Extensive evaluations show that our method outperforms previous works in both unoriented and oriented normal estimation on widely used benchmarks. The source code and pre-trained models are available at [https://github.com/LeoQLi/NGLO](https://github.com/LeoQLi/NGLO).
Geometric Deep Learning, Point Clouds, Normal Estimation, Neural Gradient, Surface Reconstruction

Footnote †: The corresponding author is Yu-Shen Liu.
More importantly, the stability and effectiveness of the integrated algorithm cannot be guaranteed. In our experiments, we evaluate the combinations of different algorithms for unoriented normal estimation and normal orientation. A key observation is that, for the same normal orientation algorithm, integrating a better unoriented normal estimation algorithm does not lead to better orientation results. That is, using higher precision unoriented normals does not necessarily result in more accurate oriented normals using existing propagation strategies. In Fig. 2, we use a simple example to illustrate that judging whether to invert the direction of neighborhood normals based on a propagation rule will lead to unreasonable results. The propagation strategy is affected by the direction distribution of the unoriented normal vectors. Therefore, it is necessary to design a complete and unified pipeline for oriented normal estimation.
In a data-driven manner, the workflow of our proposed method is an inversion of the traditional pipeline (see Fig. 1). We start by solving for normals with consistent orientation but possibly moderate accuracy, and then we further refine these normals. We introduce _Neural Gradient Learning_ (NGL) and _Gradient Vector Optimization_ (GVO), defined by a family of loss functions that can be used with point cloud data with noise, outliers and point density variations, and efficiently produce highly accurate oriented normals for each point. Specifically, the NGL learns gradient vectors from a global geometry representation, while the GVO optimizes the vectors based on local geometric properties. A series of qualitative and quantitative evaluation experiments are conducted to demonstrate the effectiveness of the proposed method.
To summarize, our main contributions include:
* A technique of neural gradient learning, which can derive gradient vectors with consistent orientations from implicit representations of point cloud data.
* A gradient vector optimization strategy, which learns an angular distance field based on local geometry to further optimize the gradient vectors.
* We report the state-of-the-art performance for both unoriented and oriented normal estimation on point clouds with noise, density variations and complex geometries.
## 2. Related Work
### Unoriented Normal Estimation
The most widely used unoriented normal estimation method for point clouds is Principal Component Analysis (PCA) (Hoppe et al., 1992). Later, PCA variants (Alexa et al., 2001; Huang et al., 2009; Lange and Polthier, 2005; Mitra and Nguyen, 2003; Pauly et al., 2002), Voronoi-based paradigms (Alliez et al., 2007; Amenta and Bern, 1999; Dey and Goswami, 2006; Merigot et al., 2010), and methods based on complex surfaces (Aroudj et al., 2017; Cazals and Pouget, 2005; Guennebaud and Gross, 2007; Levin, 1998; Oztireli et al., 2009) have been proposed to improve the performance. These traditional methods (Cazals and Pouget, 2005; Hoppe et al., 1992) are usually based on geometric priors of the point cloud data itself, and require complex pre-processing and parameter fine-tuning according to different types of data. Recently, some studies have proposed using neural networks to directly or indirectly map high-dimensional features of point clouds into 3D normal vectors. For example, the regression-based methods directly estimate normals from structured data (Boulch and Marlet, 2016; Lu et al., 2020; Roveri et al., 2018) or unstructured point clouds (Ben-Shabat et al., 2019; Guerrero et al., 2018; Hashimoto and Saito, 2019; Li et al., 2022a, 2023b; Zhou et al., 2020, 2022, 2020b). In contrast, the surface fitting-based methods first employ a neural network to predict point weights, then they derive normal vectors through weighted plane fitting (Cao et al., 2021; Lenssen et al., 2020) or polynomial surface fitting (Ben-Shabat and Gould, 2020; Li et al., 2022b; Zhang et al., 2022; Zhou et al., 2023; Zhu et al., 2021) on local neighborhoods. In our experiments, we observe that regression-based methods train models more stably and perform optimization
Figure 1. For oriented normal estimation, previous methods usually conduct a two-stage pipeline, _i.e._, (1) unoriented normal estimation and (2) normal orientation, while our method achieves this through Neural Gradient Learning (NGL) and Gradient Vector Optimization (GVO). We introduce effective novel designs into our method that enable it to improve the SOTA results.
Figure 2. Different cases of flipping (or not) vector \(n_{2}\) based on vector \(n_{1}\). Given a reference vector \(n_{1}\), we propagate its orientation to vector \(n_{2}\). The classic criterion is that we flip the sign of \(n_{2}\) if \(n_{1}\cdot n_{2}<0\). We can observe that there are many wrong cases according to this naive rule. The blue semicircle denotes the angle range, and any vector \(n_{i}\) within it satisfies \(n_{1}\cdot n_{i}>0\). The surface is shown as a gray line and its ground-truth normal as a red arrow. We place the two normal vectors at the same point for better illustration. We only change \(n_{2}\) in each row and \(n_{1}\) in each column.
more efficiently without coupling the fitting step used in fitting-based methods. In contrast, our method finds the optimal point normal through a classification strategy.
### Consistent Normal Orientation
The normals estimated by the above methods do not preserve a consistent orientation since they only look for lines perpendicular to the surface. Based on local consistency strategy, the pioneering work (Hoppe et al., 1992) and its improved methods (Jakob et al., 2019; Schertler et al., 2017; Seversky et al., 2011; Wang et al., 2012; Xu et al., 2018) propagate seed point's normal orientation to its adjacent points via a Minimum Spanning Tree (MST). More recent work (Metzer et al., 2021) introduces a dipole propagation strategy across the partitioned patches to achieve global consistency. However, these methods are limited by error propagation during the orientation process. Some other methods show that normal orientation can benefit from reconstructing surfaces from unoriented points. They usually adopt different volumetric representation techniques, such as signed distance functions (Mello et al., 2003; Mullen et al., 2010), variational formulations (Alliez et al., 2007; Huang et al., 2019; Walder et al., 2005), visibility (Chen et al., 2010; Katz et al., 2007), isovalue constraints (Xiao et al., 2023), active contours (Xie et al., 2004) and winding-number field (Xu et al., 2023). The correctly-oriented normals can be achieved from their solved representations, but their normals are not accurate in the vertical direction. Furthermore, a few approaches (Guerrero et al., 2018; Hashimoto and Saito, 2019; Li et al., 2023; Wang et al., 2022) focus on using neural networks to directly learn a general mapping from point clouds to oriented normals. Different from the above methods, we solve the oriented normal estimation by first determining the global orientation and then improving its direction accuracy based on local geometry.
## 3. Preliminary
In general, the gradient of a real-valued function \(f(x,y,z)\) in a 3D Cartesian coordinate system (also called gradient field) is given by a vector whose components are the first partial derivatives of \(f\), _i.e._, \(\nabla f(x,y,z)\!=\!f_{x}\!\mathbf{i}+f_{y}\mathbf{j}+f_{z}\mathbf{k}\), where \(\mathbf{i},\mathbf{j}\) and \(\mathbf{k}\) are the standard unit vectors in the directions of the \(x,y\) and \(z\) coordinates, respectively. If the function \(f\) is differentiable at a point \(\mathbf{p}\) and suppose that \(\nabla f(\mathbf{p})\!\neq\!0\), then there are two important properties of the gradient field: (1) The maximum value of the directional derivative, _i.e._, the maximum rate of change of the function \(f\), is defined by the magnitude of the gradient \(\|\nabla f\|\) and occurs in the direction given by \(\nabla f\). (2) The gradient vector \(\nabla f\) is perpendicular to the level surface \(f(\mathbf{p})\!=\!0\).
Recently, deep neural networks have been used to reconstruct surfaces from point cloud data by learning implicit functions. These approaches represent a surface as the zero level-set of an implicit function \(f\), _i.e._,
\[\mathcal{S}=\left\{\mathbf{x}\in\mathbb{R}^{3}\mid f(\mathbf{x};\mathbf{\theta})=0\right\}, \tag{1}\]
where \(f\colon\mathbb{R}^{3}\!\to\!\mathbb{R}\) is a neural network with parameter \(\mathbf{\theta}\), such as multi-layer perceptron (MLP). Implicit function learning methods adopt either signed distance function (Park et al., 2019) or binary occupancy (Mescheder et al., 2019) as the shape representation. If the function \(f\) is continuous and differentiable, the formula of normal vector (perpendicular to the surface) at a point \(\mathbf{p}\) is \(\mathbf{n}_{\mathbf{p}}\!=\!\nabla f(\mathbf{p})/\|\nabla f(\mathbf{p})\|\), where \(\|\cdot\|\) means vector norm. Using neural networks as implicit representations of surfaces can benefit from their adaptability and approximation capability (Atzmon et al., 2019). Meanwhile, we can obtain the gradient \(\nabla f\) in the back-propagation process of training \(f\).
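As a minimal illustration of this preliminary (not the released NGLO code), the normalized gradient of a small implicit MLP can be obtained with automatic differentiation:

```python
import torch

class ImplicitMLP(torch.nn.Module):
    """A small scalar field f(x; theta); the actual network is deeper and uses a skip connection."""
    def __init__(self, width=256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, width), torch.nn.ReLU(),
            torch.nn.Linear(width, width), torch.nn.ReLU(),
            torch.nn.Linear(width, 1))

    def forward(self, x):
        return self.net(x)

def gradient_normal(f, x):
    """n = grad f(x) / ||grad f(x)||, obtained by back-propagation through f."""
    x = x.requires_grad_(True)
    g = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
    return torch.nn.functional.normalize(g, dim=-1)

f = ImplicitMLP()
queries = torch.rand(8, 3)
print(gradient_normal(f, queries).shape)   # (8, 3) unit vectors
```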
## 4. Method
As shown in Fig. 3, our method consists of two parts: (1) the neural gradient learning (\(P\!\to\!f\!\to\!\nabla f\)) to estimate inaccurate but correctly-oriented gradients, and (2) the gradient vector optimization (\(\nabla f\!\to\!g\!\to\!\mathbf{n}\)) to refine the coarse gradients to obtain accurate normals, which will be introduced in the following sections.
### Neural Gradient Learning
Consider a point set \(X\!=\!\{\mathbf{x}_{i}\}_{i=1}^{M_{1}}\) that is sampled from raw point cloud \(\mathbf{P}\) (possibly distorted) through certain probability distribution \(\mathcal{D}\), we explore training a neural network \(f\) with parameter \(\mathbf{\theta}\) to derive the gradient during the optimization. First, we introduce a loss function defined by the form of
\[\mathcal{L}(\mathbf{\theta})=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\;\mathcal{T}(F(x; \mathbf{\theta}),\mathcal{F}_{\mathbf{X}}(\mathbf{x}))\;, \tag{2}\]
where \(\mathcal{T}\colon\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) is a differentiable similarity function. \(F(\mathbf{x};\mathbf{\theta})\) is the learning objective to be optimized and \(\mathcal{F}_{\mathbf{X}}(\mathbf{x})\) is the distance measure with respect to \(X\). In this work, our insight is that incorporating neural gradients in a manner similar to (Atzmon and Lipman, 2020, 2021) can learn neural gradient fields with consistent orientations from various point clouds. To this end, we add the derivative data of \(f\), _i.e._,
\[F(\mathbf{x};\mathbf{\theta})=f(\mathbf{x};\mathbf{\theta})\cdot\mathbf{\nu}\;, \tag{3}\]
where \(\mathbf{\nu}=\nabla f(\mathbf{x};\mathbf{\theta})/\|\nabla f(\mathbf{x};\mathbf{\theta})\|\) is the normalized neural gradient. Eq. (3) incorporates an implicit representation and a gradient approximation with respect to the underlying geometry of \(X\).
We first show a special case of Eq. (2), which is given by
\[\mathcal{L}(\mathbf{\theta})=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\;\mathcal{T}(\mathbf{x }-f(\mathbf{x};\mathbf{\theta})\cdot\mathbf{\nu},\;\mathbf{p}). \tag{4}\]
Such definition of training objective has been used by surface reconstruction methods (Chibane et al., 2020; Ma et al., 2021) to learn
Figure 3. (a-c): The neural gradient learning function \(f\) takes a point cloud \(P\) as input and derives point-wise gradient \(\nabla f\) within the network based on neighboring regions of the surface. (d-f): The gradient vector optimization function \(g\) selects the optimal vector sample according to angular distance as the normal \(\mathbf{n}\).
signed or unsigned distance functions from noise-free data. Recall that the gradient will be the direction in which the distance value increases the fastest. These methods exploit this property to move a query position \(\mathbf{x}\) by distance \(f(\mathbf{x};\mathbf{\theta})\) along or against the gradient direction \(\mathbf{v}\) to its closest point \(\mathbf{p}\) sampled on the manifold. Specifically, \(f(\mathbf{x};\mathbf{\theta})\) is interpreted as a signed distance (Ma et al., 2021) or unsigned distance (Chibane et al., 2020). This way they can learn reasonable signed/unsigned distance functions from the input noise-free point clouds. In contrast, we are not looking to learn an accurate distance field to approximate the underlying surface, but to learn a neural gradient field with a consistent orientation from a variety of data, even in the presence of noise.
Next, we will extend Eq. (2) to a more general case for neural gradient learning. Given a point \(\mathbf{x}\), instead of using the unsigned distance in (Atzmon and Lipman, 2020) or its nearest sampling point (Chibane et al., 2020; Ma et al., 2021), we consider the mean vector of its neighborhood, that is
\[\mathcal{F}_{\mathbf{X}}(\mathbf{x})=\frac{1}{k}\sum_{i=1}^{k}\big{(}\mathbf{x}-\mathbf{N}_{i }^{k}(\mathbf{x},\mathbf{P})\big{)},\ \mathbf{x}\in\mathbf{X}\,, \tag{5}\]
where \(\mathbf{N}_{i}^{k}(\mathbf{x},\mathbf{P})\) denotes the \(k\) nearest points of \(\mathbf{x}\) in \(\mathbf{P}\). Intuitively, \(\mathcal{F}_{\mathbf{X}}(\mathbf{x})\in\mathbb{R}^{3}\) is a vector from the averaged point position \(\tilde{\mathbf{x}}=\sum_{i=1}^{k}\mathcal{N}_{i}^{k}(\mathbf{x},\mathbf{P})/k\) to \(\mathbf{x}\).
For the similarity measure \(\mathcal{T}\) of vector-valued functions, we adopt the standard Euclidean distance. Then, the loss in Eq. (2) for _Neural Gradient Learning_ (NGL) has the format
\[\mathcal{L}(\mathbf{\theta})=\left\lVert f(\mathbf{x};\mathbf{\theta})\cdot\mathbf{v}-\frac{1 }{k}\sum_{i=1}^{k}\big{(}\mathbf{x}-\mathbf{N}_{i}^{k}(\mathbf{x},\mathbf{P})\big{)}\right\rVert. \tag{6}\]
As illustrated in Fig. 3(b), our method not only matches the predicted gradient on the position of \(\mathbf{x}\), but also matches the gradient on the neighboring regions of \(\mathbf{x}\). This is important because our input point cloud is noisy and individual points may not lie on the underlying surface. Finally, the training loss is an aggregation of the objective for each neural gradient learning function \(\mathcal{L}(\mathbf{\theta})|_{\mathbf{x}_{i}}\) of \(\mathbf{x}_{i}\), _i.e._,
\[\mathcal{L}_{\text{NGL}}=\frac{1}{M_{1}}\sum_{i=1}^{M_{1}}\mathcal{L}(\mathbf{ \theta})|_{\mathbf{x}_{i}}\,\ \mathbf{x}_{i}\in\mathbf{X}. \tag{7}\]
For the distribution \(\mathcal{D}\), we make it concentrate in the neighborhood of \(\mathbf{x}\) in 3D space. Specifically, \(\mathcal{D}\) is set by uniform sampling points \(\mathbf{x}\) from \(\mathbf{P}\) and placing an isotropic Gaussian \(N(\mathbf{x},\sigma^{2})\) for each \(\mathbf{x}\). The distribution parameter \(\sigma\) depends on each point \(\mathbf{x}\) and is adaptively set to the distance from the 50th nearest point to \(\mathbf{x}\)(Atzmon and Lipman, 2020, 2021).
Our network architecture for neural gradient learning is based on the one used in (Atzmon and Lipman, 2020; Ma et al., 2021), which is composed of eight linear layers with ReLU activation functions (except the last layer) and a skip connection. After training, the network can derive pointwise gradients from the raw data \(\mathbf{P}\) (see 2D examples in Fig. 4).
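A compact sketch of the NGL objective (assumed tensor shapes and a brute-force \(k\)-nearest-neighbor search; not the released implementation) is given below; it combines Equations (5)-(7) for a batch of query points:

```python
import torch

def ngl_loss(f, x, P, k=8):
    """Eqs. (6)-(7): f is the implicit network, x an (M, 3) batch of noisy queries
    drawn around points of P, and P the (N, 3) raw point cloud."""
    x = x.requires_grad_(True)
    fx = f(x)                                                  # (M, 1) scalar values
    grad = torch.autograd.grad(fx.sum(), x, create_graph=True)[0]
    nu = torch.nn.functional.normalize(grad, dim=-1)           # unit neural gradient
    idx = torch.cdist(x, P).topk(k, largest=False).indices     # k nearest raw points
    target = x - P[idx].mean(dim=1)                            # Eq. (5): x minus neighbor mean
    return (fx * nu - target).norm(dim=-1).mean()              # averaged over the batch
```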
**Extension.** If we assume the raw data \(\mathbf{P}\) is noise-free, that is, the neighbors \(\mathbf{N}^{k}(\mathbf{x},\mathbf{P})\) are located on the surface, then the formula of Eq. (6) can take another form
\[\mathcal{L}(\mathbf{\theta})=\left\lVert\left(f(\mathbf{x};\mathbf{\theta})\cdot\mathbf{v}- \mathbf{x}\right)+\frac{1}{k}\sum_{i=1}^{k}\mathbf{N}_{i}^{k}(\mathbf{x},\mathbf{P})\right\rVert. \tag{8}\]
More particularly, if we set \(k=1\) and the nearest point of \(\mathbf{x}\) in \(\mathbf{P}\) be \(\mathbf{p}\), _i.e._, \(\mathbf{N}^{k=1}(\mathbf{x},\mathbf{P})=\mathbf{p}\), then the above formula is turned into the special case in Eq. (4). Specifically, the derived formula in Eq. (8) also distinguishes our method from the methods (Atzmon and Lipman, 2020, 2021; Chibane et al., 2020; Ma et al., 2021), since their objectives only consider the location of each clean point, while our proposed objective covers the neighborhood of each noisy point to approximate the surface gradients.
### Gradient Vector Optimization
A notable shortcoming of neural gradient learning is that the derived gradient vectors are inaccurate because the implicit function tries to approximate the whole shape surface instead of focusing on fitting local regions. Therefore, the learned gradient vectors are inadequate to be used as surface normals and need to be further refined. Inspired by the implicit surface representations, we define the expected normal as the zero level-set of a function
\[\mathbf{\mathcal{V}}=\big{\{}\mathbf{x}\in\mathbb{R}^{3},\mathbf{v}\in\mathbb{R}^{3}\mid g (\mathbf{x},\mathbf{v};\mathbf{\beta})=0\big{\}}\,, \tag{9}\]
where \(g\colon\mathbb{R}^{3}\times\mathbb{R}^{3}\to\mathbb{R}\) is a neural network with parameter \(\mathbf{\beta}\) that predicts (unsigned) angular distance field between the normalized gradient vector \(\mathbf{v}\) and the ground-truth normal vector \(\hat{\mathbf{n}}\) (see Fig. 5). Given appropriate training objectives, the zero level-set of \(g\) can be a vector cluster describing the normals of point cloud \(\mathbf{P}\). To this end, we introduce _Gradient Vector Optimization_ (GVO) defined by the form of a loss function
\[\mathcal{L}(\mathbf{\beta})=\mathbb{E}_{\mathbf{v}\sim\mathcal{D}^{\prime}}\ \mathcal{T}\big{(}g(\mathbf{x},\mathbf{v};\mathbf{\beta}),\ \langle\mathbf{v},\hat{\mathbf{n}}\rangle\big{)}, \tag{10}\]
where \(\mathcal{D}^{\prime}\) is a probability distribution based on an initial vector \(\mathbf{v}\in\mathbb{R}^{3}\). \(\langle\cdot\rangle\in[0,\pi]\) means the angular difference between two unit vectors. In contrast to the previous method (Li et al., 2023), we regress angles using weighted features of the approximated local plane instead of point features from PointNet (Qi et al., 2017). The motivation is that simple angle regression with \(g\) fails to be robust to noise or produce high-quality normals.
Figure 4. Our method can estimate gradient vectors (green rays) from point clouds (black dots) with different noise levels.
Given a neighborhood size \(m\), we can construct the input data as the nearest neighbor graph \(G=(\mathbf{N},\mathcal{E})\), where \((\mathbf{x},\mathbf{x}_{j})\in\mathcal{E}\) is a directed edge if \(\mathbf{x}_{j}\) is one of the \(m\) nearest neighbors of \(\mathbf{x}\). Let \(\mathbf{N}^{m}(\mathbf{x})=\{\mathbf{x}_{j}-\mathbf{x}\}_{j=1}^{m}\) be the centered coordinates of the points in the neighborhood. The standard way to solve for the unoriented normal at a point is to fit a plane to its local neighborhood (Levin, 1998), which is described as
\[\mathbf{n}_{i}^{*}=\underset{\mathbf{n}}{\text{argmin}}\sum_{\mathbf{x}_{i}^{\prime}\in\mathbf{N}^ {m}(\mathbf{x}_{i})}\left\|\mathbf{x}_{i}^{\prime}\cdot\mathbf{n}\right\|^{2}. \tag{11}\]
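For reference, the minimizer of Eq. (11) under \(\|\mathbf{n}\|=1\) is the eigenvector of the local scatter matrix with the smallest eigenvalue; a short NumPy sketch of this classical baseline (not part of our method) is:

```python
import numpy as np

def pca_normal(neighbors):
    """Unoriented PCA normal from centred neighbour coordinates of shape (m, 3).

    Minimizing sum_i |x_i . n|^2 subject to ||n|| = 1 yields the eigenvector of
    the 3x3 scatter matrix associated with the smallest eigenvalue.
    """
    cov = neighbors.T @ neighbors            # 3x3 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]                     # direction of least variance
```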
In practice, there are two main issues with using Eq. (11) (Lenssen et al., 2020): (i) it acts as a low-pass filter for the data and eliminates sharp details, (ii) it is unreliable if there is noise or outliers in the data. We will show that both issues can be resolved by integrating weighting functions into our optimization pipeline. In short, the preservation of detailed features is achieved by an anisotropic kernel that infers weights of point pairs based on their relative positions, while the robustness to outliers is achieved by a scoring mechanism that weights points according to inlier scores.
**Anisotropic Kernel**. For feature encoding, our extraction layer is formulated as
\[\mathbf{x}_{l}^{\prime}=\gamma\left(\mathbf{x}_{l},\ \beta\left(\text{MAX}\left\{\alpha(w_{j}\cdot\mathbf{x}_{j})\right\}_{j=1}^{m}\right)\right),\ l=1,\cdots,m^{\prime}, \tag{12}\]
where \(\text{MAX}(\cdot)\) indicates the feature maxpooling over the neighbors \(\mathbf{N}^{m}(\mathbf{x})=\{\mathbf{x}_{j}-\mathbf{x}\}_{j=1}^{m}\) of a center point \(\mathbf{x}\). \(m^{\prime}<m\) means that fewer neighbors are used in the next layer, and we usually set \(m^{\prime}\) to \(m/2\). \(\alpha\), \(\beta\) and \(\gamma\) are MLPs. They compose an anisotropic kernel that considers the full geometric relationship between neighboring points, not just their positions, thus providing features with richer contextual information. Specifically, \(w_{j}\) is a weight given by
\[w_{j}=\frac{d_{j}}{\sum_{i=1}^{m}d_{i}},\ d_{i}=\text{sigmoid}\big{(}\beta_{1 }-\beta_{2}\|\mathbf{x}_{i}-\mathbf{x}\|\big{)}, \tag{13}\]
where \(\beta_{1}\) and \(\beta_{2}\) are learnable parameters with the initial value set to \(1\). The weight \(w\) lets the kernel concentrate on the points \(\mathbf{x}_{i}\in\mathbf{N}^{m}(\mathbf{x})\) that are closer to its center \(\mathbf{x}\).
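A minimal sketch of the distance-based weights in Eq. (13) is shown below, assuming `neighbors` holds the centred coordinates \(\mathbf{x}_{j}-\mathbf{x}\); the MLPs \(\alpha\), \(\beta\), and \(\gamma\) of Eq. (12) are omitted.

```python
import torch

def neighbor_weights(neighbors, beta1, beta2):
    """Eq. (13): normalized, distance-based weights over the m neighbours.

    neighbors    -- (m, 3) centred neighbour coordinates x_j - x
    beta1, beta2 -- learnable scalar parameters, both initialized to 1
    """
    d = torch.sigmoid(beta1 - beta2 * neighbors.norm(dim=-1))  # (m,)
    return d / d.sum()  # weights concentrate on neighbours closer to the centre
```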
**Inlier Score**. Based on the neighbors \(\mathbf{N}^{m}(\mathbf{x})\) of \(\mathbf{x}\), the inlier score function \(s(\mathbf{x},\mathbf{v};\mathbf{\beta})\) is optimized by
\[\mathcal{L}_{1}(\mathbf{\beta})=\mathbb{E}_{\mathbf{v}\sim\mathcal{D}^{\prime}}\ \mathcal{T}_{1}\big{(}s(\mathbf{x}_{i},\mathbf{v};\mathbf{\beta}),\ \delta(\mathbf{x}_{i},\hat{\mathbf{n}})\big{)},\ \mathbf{x}_{i}\in\mathbf{N}^{m}(\mathbf{x}), \tag{14}\]
where \(\mathcal{T}_{1}\) is mean squared error. The function \(s\) assigns low scores to outliers and high scores to inliers. Correspondingly, \(\delta\) generates scores based on the distance between neighboring points \(\mathbf{x}_{i}\) and the local plane determined by the normal vector \(\hat{\mathbf{n}}\) at point \(\mathbf{x}\), that is
\[\delta(\mathbf{x}_{i},\hat{\mathbf{n}})=\exp\left(-\frac{(\mathbf{x}_{i}\cdot\hat{\mathbf{n}} )^{2}}{\rho^{2}}\right),\ \mathbf{x}_{i}\in\mathbf{N}^{m}(\mathbf{x})\, \tag{15}\]
where \(\rho=\max(0.05^{2},\ 0.3\sum_{i=1}^{m}(\mathbf{x}_{i}\cdot\hat{\mathbf{n}})^{2}/m)\) (Li et al., 2022a). The function \(s\) regresses the score of each point in the neighbor graph, and these scores are used to find the vector angles based on score-weighted gradient vector optimization
\[\mathcal{L}_{2}(\mathbf{\beta})=\mathbb{E}_{\mathbf{v}\sim\mathcal{D}^{\prime}}\ \mathcal{T}_{2}\big{(}s\odot g(\mathbf{x},\mathbf{v};\mathbf{\beta}),\ \langle\mathbf{v},\hat{\mathbf{n}}\rangle\big{)}, \tag{16}\]
where \(\mathcal{T}_{2}\) is mean absolute error. \(\odot\) denotes that the score function \(s\) is integrated into the feature encoding of learning angular distance field. The score and angle are jointly regressed by MLP layers based on the neighbor graph. In summary, our final training loss is
\[\mathcal{L}_{\text{GVO}}=\mathcal{L}_{1}(\mathbf{\beta})+\lambda\mathcal{L}_{2}( \mathbf{\beta})\, \tag{17}\]
where \(\lambda=0.5\) is a weighting factor.
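To make the objectives concrete, a simplified per-point sketch of Eqs. (14)-(17) is given below. It abstracts away the graph feature encoding and the \(\odot\) integration, takes the network outputs `pred_score` and `pred_angle` as given, and treats the paper's \(\rho\) as the squared scale appearing in the exponent of Eq. (15); batching over points and vector samples is omitted.

```python
import torch
import torch.nn.functional as F

def gvo_loss(pred_score, pred_angle, neighbors, v, n_gt, lam=0.5):
    """Sketch of the GVO training objective, Eqs. (14)-(17).

    pred_score -- (m,) predicted inlier scores for the neighbours of x
    pred_angle -- scalar tensor, predicted angular distance g(x, v)
    neighbors  -- (m, 3) centred neighbour coordinates of x
    v, n_gt    -- (3,) sampled vector and ground-truth unit normal at x
    """
    # Eq. (15): soft inlier target from point-to-plane distances
    dist2 = (neighbors @ n_gt) ** 2
    rho2 = torch.clamp(0.3 * dist2.mean(), min=0.05 ** 2)  # paper's rho, used as rho^2
    target_score = torch.exp(-dist2 / rho2)

    # Angular difference <v, n_gt> in [0, pi]
    cos = torch.dot(v, n_gt) / (v.norm() * n_gt.norm())
    target_angle = torch.acos(torch.clamp(cos, -1.0, 1.0))

    loss1 = F.mse_loss(pred_score, target_score)  # Eq. (14), T_1 = MSE
    loss2 = F.l1_loss(pred_angle, target_angle)   # Eq. (16), T_2 = MAE
    return loss1 + lam * loss2                    # Eq. (17), lambda = 0.5
```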
**Distribution \(\mathcal{D}^{\prime}\)**. This distribution is different during the training and testing phases. During training, we first uniformly sample \(M_{2}\) random vectors in 3D space for each point of the input point cloud.
Figure 5. _Left:_ illustration of the angular distance field of a vector \(\mathbf{n}\). _Right:_ given an initial vector \(\mathbf{v}_{0}\) and its vector samples in the unit sphere (black dots with a Gaussian distribution), our method will select vector \(\mathbf{v}_{1}\) rather than \(\mathbf{v}_{2}\) as a candidate since \(\mathbf{v}_{1}\) has a smaller angular distance \(\phi\) with respect to the target vector \(\mathbf{n}\).
Figure 6. The PGP curves of oriented normal on the FamousShape dataset. It depicts the percentage of good points (PGP) for a given angle threshold. Our method achieves the best value at most of the thresholds.
Then the network is trained to predict the angle of each vector with respect to the ground-truth normal. At test time, we establish an isotropic Gaussian \(N\big{(}\mathbf{v},(\eta\cdot 45^{\circ})^{2}\big{)}\) that forms a distribution about the initial gradient vector \(\mathbf{v}\) in the unit sphere, and then we obtain a set of \(M_{3}\) vector samples around \(\mathbf{v}\). As shown in Fig. 5, the trained network tries to find an optimal candidate as output from the vector samples according to the predicted angle.
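The test-time selection can be sketched as follows; how the angular spread \(\eta\cdot 45^{\circ}\) is mapped to perturbations of the 3D components before re-normalization is our own assumption, and the batched signature of `g` is illustrative.

```python
import torch

def refine_vector(g, x, v_init, m3=4000, eta=0.4):
    """Pick the candidate vector with the smallest predicted angular distance (Fig. 5)."""
    std = eta * torch.deg2rad(torch.tensor(45.0))           # spread of the Gaussian
    samples = v_init.unsqueeze(0) + std * torch.randn(m3, 3)
    samples = torch.cat([v_init.unsqueeze(0), samples], 0)  # keep the initial vector
    samples = samples / samples.norm(dim=-1, keepdim=True)  # project onto the unit sphere
    angles = g(x.expand(samples.shape[0], -1), samples)     # predicted angular distances
    return samples[angles.argmin()]                         # optimal candidate
```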
## 5. Experiments
**Implementation**. For NGL, the \(k\) in Eq. (5) is set to \(k=64\) and we select \(M_{1}=5000\) points from distribution \(\mathcal{D}\) as the input during training. For GVO, we train it only on the PCPNet training set (Guerrero et al., 2018) and use the provided normals to calculate vector angles. We select \(m=700\) neighboring points for each query point. For the distribution \(\mathcal{D}^{\prime}\), we set \(M_{2}=500\), \(M_{3}=4000\) and \(\eta=0.4\).
**Metrics**. We use the Root Mean Squared Error (RMSE) to evaluate the estimated normals and use the Percentage of Good Points (PGP) to show the error distribution (Li et al., 2022; Zhu et al., 2021).
### Evaluation
**Evaluation of Oriented Normal.** The baseline methods include PCPNet (Guerrero et al., 2018), DPGO (Wang et al., 2022), SHS-Net (Li et al., 2023) and different two-stage pipelines, which are built by combining unoriented normal estimation methods (PCA (Hoppe et al., 1992), AdaFit (Zhu et al., 2021), HSurf-Net (Li et al., 2022)) and normal orientation methods (MST (Hoppe et al., 1992), SNO (Schertler et al., 2017), ODP (Metzer et al., 2021)). We choose them as they are representative algorithms in this research field at present. The quantitative comparison results on datasets PCPNet (Guerrero et al., 2018) and FamousShape (Li et al., 2023) are shown in Table 1. It is clear that our method achieves large performance improvements across the vast majority of noise levels and density variations on both datasets. Through this experiment, we
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c|c} \hline \hline \multirow{2}{*}{Category} & \multicolumn{6}{c||}{**PCPNet Dataset**} & \multicolumn{6}{c}{**FamousShape Dataset**} \\ \cline{2-13} & \multicolumn{4}{c|}{Noise} & \multicolumn{2}{c|}{Density} & \multicolumn{2}{c|}{Average} & \multicolumn{2}{c|}{Noise} & \multicolumn{2}{c|}{Density} & \multirow{2}{*}{Average} \\ & None & 0.12\% & 0.6\% & 1.2\% & Stripe & Gradient & & & & 0.12\% & 0.6\% & 1.2\% & Stripe & Gradient \\ \hline PCA-MST (Hoppe et al., 1992) & 19.05 & 30.20 & 31.76 & 39.64 & 27.11 & 23.38 & 28.52 & 35.88 & 41.67 & **38.09** & 60.16 & 31.69 & 35.40 & 40.48 \\ PCA+SNO (Schertzler et al., 2017) & 18.55 & 21.61 & 30.94 & 39.54 & 23.00 & 25.46 & 26.52 & 32.25 & 39.39 & 41.80 & 61.91 & 36.69 & 35.82 & 41.31 \\ PCA+ODP (Metzer et al., 2021) & 28.96 & 25.86 & 34.91 & 51.52 & 28.70 & 23.00 & 32.16 & 30.47 & 31.29 & 41.65 & 84.00 & 39.41 & 30.72 & 42.92 \\ AdaFit (Zhu et al., 2021)+MST & 27.67 & 43.69 & 48.83 & 54.39 & 36.18 & 40.66 & 41.87 & 43.12 & 39.33 & 62.28 & 60.27 & 45.57 & 42.00 & 48.76 \\ AdaFit (Zhu et al., 2021)+SDO & 26.41 & 24.17 & 40.31 & 48.76 & 27.74 & 31.56 & 31.36 & 27.55 & 37.60 & 69.56 & 62.77 & 27.86 & 29.19 & 42.42 \\ AdaFit (Zhu et al., 2021)+ODP & 26.37 & 24.86 & 35.44 & 51.88 & 26.45 & 20.57 & 30.93 & 41.75 & 39.19 & 44.31 & 72.91 & 45.09 & 42.37 & 47.60 \\ Hsur-Net (Li et al., 2022)+MST & 29.82 & 44.49 & 50.47 & 55.47 & 40.54 & 43.51 & 43.99 & 54.02 & 42.67 & 63.87 & 65.91 & 52.52 & 53.96 & 56.24 \\ Hsur-Net (Li et al., 2022)+SDO & 39.44 & 32.34 & 44.08 & 51.71 & 33.46 & 40.49 & 38.74 & 41.62 & 41.06 & 67.41 & 62.04 & 45.59 & 43.83 & 50.26 \\ Hsur-Net (Li et al., 2022)+ODP & 26.91 & 28.45 & 35.57 & 51.75 & 26.91 & 20.16 & 31.07 & 43.77 & 43.74 & 46.61 & 72.00 & 45.09 & 43.98 & 49.37 \\ PCPNet (Guerrero et al., 2018) & 33.34 & 34.22 & 40.54 & 44.46 & 37.95 & 35.54 & 37.66 & 40.51 & 41.09 & 46.67 & 54.36 & 40.54 & 44.26 & 44.57 \\ DPGOT (Wang et al., 2022) & 23.79 & 25.19 & 35.66 & 43.89 & 28.99 & 29.39 & 31.14 & - & - & - & - & - & - \\ SHS-Net (Li et al., 2023) & **10.28** & 13.23 & **25.40** & 35.51 & **16.40** & 17.92 & 19.79 & 21.63 & 25.96 & 41.14 & 52.67 & **26.39** & 28.97 & 32.79 \\ Ours & 12.52 & **12.97** & 25.94 & **33.25** & 16.81 & **9.47** & **18.49** & **13.22** & **18.66** & 39.70 & **51.96** & 31.32 & **11.30** & **27.69** \\ \hline \hline \end{tabular}
\end{table}
Table 1. RMSE of oriented normals on datasets PCPNet and FamousShape. \(*\) means the source code is incomplete.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c|c} \hline \hline \multirow{2}{*}{Category} & \multicolumn{6}{c||}{**PCPNet Dataset**} & \multicolumn{6}{c}{**FamousShape Dataset**} \\ \cline{2-13} & \multicolumn{4}{c|}{Noise} & \multicolumn{2}{c|}{Density} & \multicolumn{2}{c|}{Average} & \multicolumn{2}{c|}{Noise} & \multicolumn{2}{c|}{Density} & \multirow{2}{*}{Average} \\ & None & 0.12\% & 0.6\% & 1.2\% & Stripe & Gradient & & & & & 0.12\% & 0.6\% & 1.2\% & Stripe & Gradient \\ \hline Jet (Cazals and Pouget, 2005) & 12.35 & 12.84 & 18.33 & 27.68 & 13.39 & 13.13 & 16.29 & 20.11 & 20.57 & 31.34 & 45.19 & 18.82 & 18.69 & 25.79 \\ PCA (Hoppe et al., 1992) & 12.29 & 12.87 & 18.38 & 27.52 & 13.66 & 12.81 & 16.25 & 19.90 & 20.60 & 31.33 & 45.00 & 19.84 & 18.54 & 25.87 \\ PCPNet (Guerrero et al., 2018) & 9.64 & 15.11 & 18.72 & 22.84 & 11.73 & 13.46 & **14.58** & 18.47 & 21.07 & 32.60 & 39.93 & 18.14 & 19.50 & 24.95 \\ Zhou et al. (Zhou et al., 2020) & 8.67 & 10.49 & 17.62 & 24.14 & 10.29 & 10.66 & 13.62 & - & - & - & - & - \\ Nest-Net (Benh-Shah et al., 2019) & 7.06 & 10.24 & 17.77
also find that combining a better unoriented normal estimation algorithm with the same normal orientation algorithm does not necessarily lead to better orientation results, _e.g._, PCA+MST _vs._ AdaFit+MST and PCA+SNO _vs._ HSurf-Net+SNO. The error distributions in Fig. 6 show that our method has the best performance at most of the angle thresholds.
We provide more experimental results on different datasets in the supplementary material, including comparisons with GCNO [22] on sparse data and more applications to surface reconstruction.
**Evaluation of Unoriented Normal**. In this evaluation, we ignore the orientation of normals and compare our method with baselines that are used for estimating unoriented normals, such as the traditional methods PCA (Hoppe et al., 1992) and Jet (Cazals and Pouget, 2005), the learning-based surface fitting methods AdaFit (Zhu et al., 2021) and GraphFit (Li et al., 2022), and the learning-based regression methods NeAF (Li et al., 2023b) and HSurf-Net (Li et al., 2022). The quantitative comparison results on datasets PCPNet (Guerrero et al., 2018) and FamousShape (Li et al., 2023) are reported in Table 2. We can see that our method has the best performance under most point cloud categories and achieves the best average result.
**Application**. We employ the Poisson reconstruction algorithm [14] to generate surfaces from the estimated oriented normals on the Paris-rue-Madame dataset [13], acquired from the real world using laser scanners. The reconstructed surfaces are shown in Fig. 7, where ours exhibits more complete and clear car shapes.
**Complexity and Efficiency**. We evaluate the learning-based oriented normal estimation methods on a machine equipped with an NVIDIA 2080 Ti GPU. In Table 3, we report the RMSE, number of learnable network parameters, and test runtime for each method on the PCPNet dataset. Our method achieves significant performance improvement with minimal parameters and relatively low runtime.
\begin{table}
\begin{tabular}{l l|c c c c|c c c||c c c c|c c c} \hline \hline \multirow{3}{*}{} & \multirow{3}{*}{Category} & \multicolumn{8}{c||}{**Unoriented Normal**} & \multicolumn{8}{c}{**Oriented Normal**} \\ \cline{3-14} & & \multicolumn{4}{c|}{Noise} & \multicolumn{2}{c|}{Density} & \multicolumn{2}{c||}{Average} & \multicolumn{2}{c||}{Noise} & \multicolumn{2}{c||}{Density} & \multicolumn{2}{c||}{Average} \\ & & None & 0.125 & 0.6\% & 1.2\% & Stripe & Gradient & \multicolumn{2}{c||}{None} & 0.12\% & 0.6\% & 1.2\% & Stripe & Gradient & \multicolumn{2}{c||}{Average} \\ \hline \multirow{4}{*}{**(a)**} & w/o NGL & 4.20 & 87.8 & 16.16 & 21.67 & 4.88 & 4.64 & 10.06 & 124.53 & 123.11 & 120.35 & 117.44 & 125.57 & 118.80 & 121.30 \\ & w/o GVO & 12.24 & 12.74 & 17.89 & 23.88 & 15.16 & 13.75 & 15.94 & 18.39 & 15.32 & 25.20 & 32.57 & 22.91 & 15.73 & 21.69 \\ & w/o inlier score & 4.26 & 8.94 & 16.11 & 21.70 & 5.26 & 5.00 & 10.21 & 12.78 & 13.25 & 25.99 & 33.43 & 17.30 & 9.82 & 18.76 \\ & w/o in kernel & 4.11 & 8.71 & 16.14 & 21.63 & 5.11 & 4.80 & 10.08 & 12.38 & 12.94 & 25.88 & 33.30 & 16.87 & 9.47 & 18.47 \\ \hline \multirow{4}{*}{**(b)**} & ZKOL (L1) & 4.09 & 8.69 & 16.13 & 21.65 & 4.80 & 4.57 & 9.99 & 17.27 & 12.27 & 35.58 & 37.95 & 11.26 & 9.28 & 20.60 \\ & \(\mathcal{L}_{\text{NGL}}\) (MSE) & 4.08 & 8.70 & 16.13 & 21.64 & 4.82 & 4.58 & 9.99 & 21.71 & 18.82 & 27.81 & 33.38 & 13.29 & 11.68 & 21.12 \\ & \(\mathcal{L}_{\text{GVO}}\)(\(A\)=0.2) & 4.12 & 8.75 & 16.16 & 21.74 & 5.09 & 4.71 & 10.10 & 12.60 & 12.99 & 25.98 & 33.34 & 16.90 & 9.57 & 18.56 \\ & \(\mathcal{L}_{\text{GVO}}\)(\(A\)=0.8) & 4.14 & 8.82 & 16.18 & 21.64 & 4.96 & 4.74 & 10.08 & 12.58 & 13.09 & 26.04 & 33.33 & 16.87 & 9.45 & 18.56 \\ \hline \multirow{4}{*}{(c)} & \(k\)=1 & 4.07 & 8.70 & 16.13 & 21.65 & 4.79 & 4.55 & 9.98 & 13.57 & 18.24 & 38.29 & 47.23 & 9.27 & 8.99 & 22.60 \\ & \(k\)=32 & 4.06 & 8.69 & 16.13 & 21.65 & 4.79 & 4.56 & 9.98 & 13.64 & 24.31 & 29.83 & 39.53 & 17.37 & 8.51 & 21.27 \\ & \(k\)=128 & 4.08 & 8.70 & 16.13 & 21.64 & 4.84 & 4.58 & 9.99 & 12.84 & 23.65 & 34.96 & 33.03 & 37.64 & 18.42 & 26.76 \\ \hline \multirow{4}{*}{**(d)**} & \(d_{o}\)=32th & 4.07 & 8.69 & 16.12 & 21.66 & 4.83 & 4.56 & 9.99 & 12.86 & 23.75 & 29.68 & 36.67 & 10.97 & 8.92 & 20.47 \\ & \(d_{o}\)=64th & 4.08 & 8.70 & 16.13 & 21.64 & 4.81 & 4.57 & 9.99 & 13.77 & 18.98 & 29.84 & 33.25 & 18.41 & 8.87 & 20.52 \\ \cline{1-1} & \(\eta\)=0.3 & 4.10 & 8.70 & 16.14 & 21.64 & 4.87 & 4.62 & 10.01 & 12.46 & 13.01 & 25.85 & 33.18 & 16.78 & 9.47 & 18.46 \\ \cline{1-1} & \(\eta\)=0.5 & 4.06 & 8.69 & 16.12 & 21.64 & 4.80 & 4.55 & 9.98 & 12.54 & 13.04 & 25.91 & 33.26 & 16.77 & 9.39 & 18.49 \\ \cline{1-1} & \(\mathcal{M}_{\text{L}}\)=3000 & 4.07 & 8.70 & 16.13 & 21.65 & 4.82 & 4.57 & 9.99 & 12.55 & 13.05 & 25.90 & 33.23 & 16.79 & 9.40 & 18.49 \\ \cline{1-1} & \(\mathcal{M}_{\text{L}}\)=5000 & 4.06 & 8.70 & 16.12 & 21.65 & 4.81 & 4.56 & 9.98 & 12.47 & 13.01 & 25.90 & 33.22 & 16.72 & 9.30 & 18.44 \\ \cline{1-1} & \(\mathcal{M}_{\text{L}}\)=5000 & 4.06 & 8.70 & 16.12 & 21.65 & 4.80 & 4.56 & 9.98 & 12.52 & 12.97 & 25.94 & 33.25 & 16.81 & 9.47 & 18.49 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Ablation studies with the metric of unoriented and oriented normal on the PCPNet dataset. Please see the text for more details.
Figure 8. Error maps of oriented normals. We integrate our NGL and GVO into other methods to estimate oriented normals. The mean value of RMSE is provided above each shape.
Figure 7. The top row shows the scene reconstructed from LiDAR data using our estimated normals, and below is a local region comparison of the different methods.
### Ablation Studies
Our method seeks to achieve better performance in both unoriented and oriented normal estimation. We provide the ablation results of our method in Table 4 (a)-(e), which are discussed as follows.
**(a) Component**. We remove NGL, GVO, inlier score and weight w of the anisotropic kernel, respectively. If NGL is not used, we optimize a randomly sampled set of vectors in the unit sphere for each point, but the optimized normal vectors face both sides of the surface, resulting in the worst orientations. Gradient vectors from NGL are inaccurate when used as normals without being optimized by GVO. The score and weight are important for improving performance, especially in unoriented normal evaluation.
**(b) Loss**. Replacing L2 distance in \(\mathcal{L}_{\textsc{NGL}}\) with L1 distance or MSE is not a good choice. We also alternatively set \(\lambda\) in \(\mathcal{L}_{\textsc{GVO}}\) to 0.2 or 0.8, both of which lead to worse results.
**(c) Size \(k\)**. For the neighborhood size in Eq. (5), we alternatively set \(k\) to 1, 32 or 128, none of which brings better oriented normal results.
**(d) Distribution \(\mathcal{D}\)**. We change the distribution parameter \(\sigma\) to the distance \(d_{\sigma}\) from the 32nd or 64th nearest point to \(\mathbf{x}\), whereas the results get worse.
**(e) Distribution \(\mathcal{D}^{\prime}\)**. We change the distribution parameter \(\eta\) to 0.3 or 0.5 and the vector sample size \(M_{3}\) to 3000 or 5000, respectively. The influence of these parameters on the results is relatively small. The larger sample size gives better results, but requires more time and memory.
**(f) Modularity**. In Fig. 8, we show that our NGL and GVO can be integrated into some other methods (PCPNet (Guerrero et al., 2018) and NeAF (Li et al., 2023b)) to estimate more accurate oriented normals. Note that NeAF cannot estimate oriented normals on its own. We can see that our NGL+GVO gives the best results.
## 6. Conclusion
In this work, we propose to learn neural gradients from point clouds for oriented normal estimation. We introduce _Neural Gradient Learning_ (NGL) and _Gradient Vector Optimization_ (GVO), defined by a family of loss functions. Specifically, we minimize the corresponding loss to let NGL learn gradient vectors from a global geometry representation, while GVO optimizes the vectors based on local surface properties. Moreover, we integrate two weighting functions, an anisotropic kernel and an inlier score, into the optimization to improve robustness and detail preservation. We provide extensive evaluation and ablation experiments that demonstrate the state-of-the-art performance of our method and the effectiveness of our designs. Future work includes improving the performance under high noise and density variation, and exploring more application scenarios of our algorithm.
## Acknowledgments
This work was supported by National Key R&D Program of China (2022YFC3800600), the National Natural Science Foundation of China (62272263, 62072268), and in part by Tsinghua-Kuaishou Institute of Future Media Data.
|
2309.10740 | ConsistencyTTA: Accelerating Diffusion-Based Text-to-Audio Generation
with Consistency Distillation | Diffusion models are instrumental in text-to-audio (TTA) generation.
Unfortunately, they suffer from slow inference due to an excessive number of
queries to the underlying denoising network per generation. To address this
bottleneck, we introduce ConsistencyTTA, a framework requiring only a single
non-autoregressive network query, thereby accelerating TTA by hundreds of
times. We achieve so by proposing "CFG-aware latent consistency model," which
adapts consistency generation into a latent space and incorporates
classifier-free guidance (CFG) into model training. Moreover, unlike diffusion
models, ConsistencyTTA can be finetuned closed-loop with audio-space text-aware
metrics, such as CLAP score, to further enhance the generations. Our objective
and subjective evaluation on the AudioCaps dataset shows that compared to
diffusion-based counterparts, ConsistencyTTA reduces inference computation by
400x while retaining generation quality and diversity. | Yatong Bai, Trung Dang, Dung Tran, Kazuhito Koishida, Somayeh Sojoudi | 2023-09-19T16:36:33Z | http://arxiv.org/abs/2309.10740v3 | # Accelerating Diffusion-Based Text-to-Audio Generation
###### Abstract
Diffusion models power a vast majority of text-to-audio (TTA) generation methods. Unfortunately, these models suffer from slow inference speed due to iterative queries to the underlying denoising network, thus unsuitable for scenarios with inference time or computational constraints. This work modifies the recently proposed consistency distillation framework to train TTA models that require only a single neural network query. In addition to incorporating classifier-free guidance into the distillation process, we leverage the availability of generated audio during distillation training to fine-tune the consistency TTA model with novel loss functions in the audio space, such as the CLAP score. Our objective and subjective evaluation results on the AudioCaps dataset show that consistency models retain diffusion models' high generation quality and diversity while reducing the number of queries by a factor of 400.
Yatong Bai\({}^{*,1,2}\), Trung Dang\({}^{2}\), Dung Tran\({}^{2}\), Kazuhito Koishida\({}^{2}\), Somayeh Sojoudi\({}^{1}\)
\({}^{1}\)University of California, Berkeley
\({}^{2}\)Applied Sciences Group, Microsoft Corporation
Diffusion models, Consistency models, Audio generation, Generative AI, Neural networks
## 1 Introduction
Text-to-audio (TTA) generation has recently gained significant popularity [1, 2, 3, 4, 5, 6, 7, 8, 9]. This task involves generating audio based on a user-provided textual prompt. TTA models have rapidly improved and demonstrated the ability to produce diverse, precise, and high-quality audio. Many existing TTA models are based on latent diffusion models (LDM) [10], which have gained popularity in various applications due to their superior generation quality. However, they suffer from slow inference speed as they require iterative queries to the underlying neural network. Such limitations pose challenges in scenarios with time or computation constraints.
This work proposes a novel approach to accelerate diffusion-based TTA models. The proposed method is based on consistency distillation (CD) [11], which distills a pre-trained diffusion model into a consistency model that only requires a single neural network query per generation. Our approach leverages classifier-free guidance (CFG) [12], which has been shown to significantly enhance text-conditioned generative model performance, by incorporating it into the CD process. We explore three different approaches for using CFG: direct guidance, fixed guidance, and variable guidance. To our knowledge, this is the first work to extend CD to CFG models.
Moreover, leveraging the generated audio that is only available during consistency distillation training, we propose fine-tuning the consistency TTA model with audio space loss functions to further improve the audio quality and the audio-text correspondence and use the CLAP score as an example loss function. In contrast, back-propagation from the audio is prohibitively expensive for diffusion models due to the recurrent diffusion process.
Our experiments on the AudioCaps dataset demonstrate that the single-step consistency model is comparable with the 400-step diffusion model across five objective metrics as well as subjective audio quality and audio-text correspondence. We encourage the reader to listen to our generated examples at bai-yt.github.io/consistency_tta/demo.
The paper is structured as follows. Section 2 reviews the related literature, including diffusion models and acceleration techniques. Section 3 outlines our proposed methods to accelerate diffusion-based TTA models. Section 4 discusses our experimental results. Additional discussions and details are presented in the appendix. Throughout this paper, vectors and matrices are denoted as bold symbols whereas scalars use regular symbols.
## 2 Background and Related Work
### Diffusion models
Diffusion models [13, 14], known for their diverse, high-quality generations, have rapidly gained popularity among various conditional and unconditional generation tasks in vision and audio fields [15, 16, 3, 17]. In the vision domain, while pixel-level diffusion methods (e.g., EDM [16]) perform well on small image sizes, generating larger images usually requires latent diffusion models (LDMs) [10], where the diffusion process takes place in a latent space. In the audio domain, generative model applications can be further categorized into speech, music, and in-the-wild audio generation. This paper considers the in-the-wild audio setting, where the goal is to generate diversified samples that cover a variety of real-world sound clips. While some works consider autoregressive models [8] or Mel-space diffusion [9], LDMs have emerged as the dominant approach for the TTA task [1, 2, 3, 4, 5, 6, 7].
The intuition of diffusion models is to gradually recover a clean sample from a noisy sample. During training, Gaussian noise is progressively added to a ground-truth sample \(\mathbf{z}_{0}\), forming a continuous diffusion trajectory. At the end of the trajectory, the noisy sample becomes indistinguishable from pure Gaussian noise. This trajectory is then discretized into \(N\) steps, where the noisy sample at each time step is denoted as \(\mathbf{z}_{n}\) for \(n=1,\dots,N\). In each training step, a random time step \(n\) is selected, and a Gaussian noise with variance depending on \(n\) is injected into the clean sample to produce \(\mathbf{z}_{n}\). A denoising neural network, often a U-Net [18], is optimized to recover the noise distribution from the noisy sample. During inference, Gaussian noise is used to initialize the last noisy sample \(\mathbf{\hat{z}}_{N}\), where \(\mathbf{\hat{z}}_{n}\) denotes the predicted sample at the time step \(n\). The diffusion model generates a clean sample by iteratively querying the denoising network step by step, producing the sequence \(\mathbf{\hat{z}}_{N-1},\dots,\mathbf{\hat{z}}_{0}\). The final \(\mathbf{\hat{z}}_{0}\) is used as the generated sample.
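The following is a minimal PyTorch-style sketch of one such training step for a text-conditioned denoising network, using the common \(\epsilon\)-prediction, variance-preserving parameterization; the schedule tensor `alphas_cumprod` and the U-Net interface are illustrative assumptions rather than the setup of any specific TTA model.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(unet, z0, text_emb, alphas_cumprod):
    """One denoising training step: corrupt z0 to a random step n, predict the noise.

    z0             -- (B, ...) clean (latent) samples
    text_emb       -- text conditioning embeddings
    alphas_cumprod -- (N,) cumulative schedule coefficients, one per time step
    """
    B = z0.shape[0]
    n = torch.randint(0, len(alphas_cumprod), (B,), device=z0.device)
    a = alphas_cumprod[n].view(B, *([1] * (z0.dim() - 1)))
    eps = torch.randn_like(z0)
    z_n = a.sqrt() * z0 + (1.0 - a).sqrt() * eps   # noisy sample at step n
    eps_pred = unet(z_n, n, text_emb)              # recover the injected noise
    return F.mse_loss(eps_pred, eps)
```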
### Accelerating diffusion model inference
Diffusion models suffer from high generation latency and expensive inference computation due to iterative queries to the denoising network. To this end, several methods have been proposed to reduce the number of model queries. Such methods are mostly presented for image generation tasks and can be grouped into two main categories: improved differential equation solvers and distillation methods.
Improved differential equation solvers can reduce the number of inference steps \(N\) of existing diffusion models without additional training. Examples include DDIM [19], Euler [20], Heun, DPM [21, 22], and PNDM [23]. The best solvers can reduce \(N\) to 10-50 from the hundreds required by vanilla inference using DDPM [14].
On the other hand, distillation methods, where a pre-trained diffusion model serves as the teacher and a student model is trained to simulate multiple teacher steps in a single step, have been shown to reduce the number of denoising steps to below 10. One representative method is progressive distillation (PD) [24], which iteratively halves the number of diffusion steps. While PD can reduce the number of steps to only a few, the single-step capability is unideal, and the repetitive distillation procedure can be time-consuming. To this end, consistency distillation [11] has been proposed. The training goal of CD is to reconstruct the noiseless image within a single step from an arbitrary step on the teacher model's diffusion trajectory. Note that both PD and CD were proposed for _unconditional_ image generation. For text-conditioned audio generation, there are additional considerations, which we discuss in Section 3.
### Classifier-free guidance
CFG [12] is a simple yet effective method for adjusting the text conditioning strength for guided generation problems, significantly improving the performance of existing diffusion-based TTA models. CFG obtains two noise estimations from the denoising network in the diffusion model - one with text conditioning (denoted as \(\mathbf{v}_{\text{cond}}\)) and one without (by masking the text embeddings, denoted as \(\mathbf{v}_{\text{uncond}}\)). The guided estimation, denoted by \(\mathbf{v}_{\text{cfg}}\), is obtained via
\[\mathbf{v}_{\text{cfg}}=w\cdot\mathbf{v}_{\text{cond}}+(1-w)\cdot\mathbf{v}_{\text{ uncond}}, \tag{1}\]
where the scalar \(w\geq 0\) is the guidance strength. When \(w\) is between 0 and 1, CFG interpolates the conditioned and unconditioned estimations. When \(w\) is greater than 1, CFG becomes an extrapolation. For example, for TANGO, \(w=3\) produces the best overall result [1].
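In code, Eq. (1) amounts to two queries of the same denoising network per step; a minimal sketch (with an assumed U-Net signature) is:

```python
def cfg_estimate(unet, z_n, n, text_emb, null_emb, w):
    """Classifier-free guidance, Eq. (1): combine conditioned and unconditioned outputs."""
    v_cond = unet(z_n, n, text_emb)    # estimation with text conditioning
    v_uncond = unet(z_n, n, null_emb)  # estimation with masked text embeddings
    return w * v_cond + (1 - w) * v_uncond
```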
Since CFG is external to the denoising network in diffusion models, it makes distilling guided models more complex than their unguided counterparts. The authors of [25] outlined a two-stage pipeline for performing PD on a CFG classifier. The first stage absorbs CFG into the denoising network by letting the student network take \(w\) as an additional input. The second stage performs conventional PD on top of the stage-1 student. During both training stages, the CFG strength \(w\) is randomized, and the resulting distilled network allows for selecting \(w\) during inference.
## 3 Consistency distillation for TTA
We select TANGO [1] as the distillation teacher model due to its high performance. However, we highlight that most of the innovations in this paper can also be applied to other diffusion-based TTA models.
### Overall setup
Similar to TANGO, our model has four components: a conditional U-Net, a text encoder that processes the textual prompt, a VAE encoder-decoder pair that converts the Mel spectrogram to and from the U-Net latent space, and a HiFi-GAN vocoder [26] that produces time-domain audio waveform from the Mel spectrogram. We only train the U-Net and freeze other components.
During training, the Mel spectrogram of the audio is processed by the VAE encoder to produce a latent representation, and the prompt is transformed by the text encoder into a text embedding. They are given to the conditional U-Net as the input and the condition. The VAE decoder and the HiFi-GAN are not used.
During inference, the text embedding is used to guide the U-Net to reconstruct a latent audio representation. The Mel spectrogram and waveform are recovered by the VAE decoder and the HiFi-GAN vocoder, respectively. The VAE encoder is not used.
### Consistency distillation
The goal of CD is to learn a student U-Net \(f_{\text{S}}(\cdot,\cdot,\cdot)\) from the diffusion U-Net module in the teacher TTA model \(f_{\text{T}}(\cdot,\cdot,\cdot)\). The architecture of \(f_{\text{S}}\) is the same as the \(f_{\text{T}}\), taking three inputs: the noisy latent representation \(\mathbf{z}_{n}\), the corresponding time step \(n\), and the text embedding \(\mathbf{e}_{\text{te}}\). Furthermore, the parameters in \(f_{\text{S}}\) are initialized using \(f_{\text{T}}\) information.
The goal for the student U-Net is to generate a realistic latent audio representation within a single forward pass, directly producing an estimated clean example \(\mathbf{\hat{z}}_{0}\) based on \(\mathbf{z}_{n}\), where \(n\in\{0,\dots,N\}\) is an arbitrary step along the diffusion trajectory [11, Algorithm 2]. The risk function to be minimized for achieving this goal is
\[\mathbb{E}_{\begin{subarray}{c}(\mathbf{z}_{0},\mathbf{e}_{\text{te}})\sim\mathcal{D }\\ n\sim\text{U}_{\text{lin}}(1,N)\end{subarray}}\Big{[}d\Big{(}f_{\text{S}}(\mathbf{ z}_{n},n,\mathbf{e}_{\text{te}}),f_{\text{S}}(\mathbf{\hat{z}}_{n-1},n-1,\mathbf{e}_{\text{te}}) \Big{)}\Big{]}, \tag{2}\]
where \(d(\cdot,\cdot)\) is a distance measurement, \(\mathcal{D}\) is the training dataset, \(\text{U}_{\text{lin}}(1,N)\) denotes the discrete uniform distribution supported over the set \(\{1,\dots,N\}\), and \(\mathbf{\hat{z}}_{n-1}=\text{solve}\circ f_{\text{T}}(\mathbf{z}_{n},n,\mathbf{e}_{\text{te}})\) is the teacher diffusion model's estimation for \(\mathbf{z}_{n-1}\). Here, \(\text{solve}\circ f_{\text{T}}\) denotes the composite function of the teacher denoising U-Net and the solver that converts the U-Net raw output to the estimation of the previous time step. We use the \(\ell_{2}\) distance in this latent space as \(d(\cdot,\cdot)\), with additional discussions in Appendix A.3. Intuitively, this risk measures the expected distance between the student's reconstructions from two adjacent time steps on the diffusion trajectory.
The authors of [11] used the Heun solver for querying the teacher diffusion model during distillation and adopted "Karras noise schedule", a discretization scheme that unevenly selects the time steps on the diffusion trajectory. In Section 4, we empirically investigate multiple solvers and noise schedules.
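A simplified sketch of one CD training step implementing the risk in (2) is given below. The EDM-style noise injection \(\mathbf{z}_{n}=\mathbf{z}_{0}+\sigma_{n}\boldsymbol{\epsilon}\), the abstract `teacher_solver_step` (the teacher U-Net composed with the ODE solver), and stopping gradients through the target branch are implementation assumptions rather than details stated in the text.

```python
import torch

def cd_training_step(student, teacher_solver_step, z0, text_emb, sigmas):
    """One consistency-distillation step for the risk in Eq. (2).

    student             -- f_S(z, n, e_text), predicts the clean latent
    teacher_solver_step -- maps (z_n, n, e_text) to the teacher's estimate of z_{n-1}
    sigmas              -- (N + 1,) noise scales of the discretized trajectory
    """
    N = len(sigmas) - 1
    n = torch.randint(1, N + 1, (1,)).item()            # n ~ Uniform{1, ..., N}
    z_n = z0 + sigmas[n] * torch.randn_like(z0)          # noisy latent at step n
    with torch.no_grad():
        z_prev = teacher_solver_step(z_n, n, text_emb)   # teacher estimate of z_{n-1}
        target = student(z_prev, n - 1, text_emb)        # reconstruction from step n-1
    pred = student(z_n, n, text_emb)                     # reconstruction from step n
    return (pred - target).pow(2).mean()                 # l2 distance d(., .)
```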
### Consistency distillation with classifier-free guidance
Since CFG is crucial to the conditional generation quality, we consider three methods for incorporating it into the distilled model.
**Direct Guidance** directly performs CFG on the consistency model output by applying (1). Since this method naively extrapolates or interpolates on the consistency model's \(\mathbf{z}_{0}\) prediction, the CFG operation will likely move the prediction outside the manifold of realistic latent representations.
**Fixed Guidance Distillation** aims to distill from the diffusion model coupled with CFG using a fixed guidance strength \(w\). Specifically, the training risk function is still (2), but \(\mathbf{\hat{z}}_{n-1}\) is replaced with the estimation after CFG. Now, \(\mathbf{\hat{z}}_{n-1}\) becomes \(\text{solve}\circ f_{\text{T}}^{\text{cfg}}(\mathbf{z}_{n},n,\mathbf{e}_{\text{te}},w)\), where the guided teacher output \(f_{\text{T}}^{\text{cfg}}\) is
\[f_{\text{T}}^{\text{cfg}}(\mathbf{z}_{n},n,\mathbf{e}_{\text{te}},w)=w\cdot f_{\text{T}}(\mathbf{z}_{n},n,\mathbf{e}_{\text{te}})+(1-w)\cdot f_{\text{T}}(\mathbf{z}_{n},n,\varnothing),\]
with \(\varnothing\) denoting the masked language token. Here, \(w\) should be fixed to the value corresponding to the best teacher generation quality.
**Variable Guidance Distillation** is similar to fixed guidance distillation, but with randomized guidance strength \(w\) during distillation, so that \(w\) can be adjusted during inference. To make the student network compatible with adjustable \(w\), we add a \(w\)-encoding condition branch to \(f_{\mathrm{S}}\) (which now has four inputs). We use Fourier encoding for \(w\) following [25] and merge the embedding into \(f_{\mathrm{S}}\) similarly to the time step embedding. Each training iteration samples a random guidance strength \(w\) via the uniform distribution supported on \([0,6)\).
The latter two methods are related to the two-stage distillation procedure outlined in [25], with details described in Appendix A.2.
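For illustration, a possible form of the \(w\)-conditioning used in variable guidance distillation is sketched below; the embedding dimension and frequency spacing are assumptions, since the text only specifies that \(w\) is Fourier-encoded and sampled uniformly from \([0,6)\).

```python
import torch

def w_embedding(w, dim=64):
    """Fourier-style encoding of the guidance strength w (illustrative frequencies)."""
    half = dim // 2
    freqs = torch.exp(torch.linspace(0.0, 4.0, half))  # log-spaced frequencies
    angles = w * freqs
    return torch.cat([angles.sin(), angles.cos()], dim=-1)

# During variable-guidance distillation, each iteration draws w ~ Uniform[0, 6)
# and feeds its encoding to the student U-Net alongside the time-step embedding.
w = 6.0 * torch.rand(1)
emb = w_embedding(w)
```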
### Min-SNR training loss weighting strategy
The literature has proposed improving diffusion models by using the truncated signal-to-noise ratio (SNR) to weight the training loss at each time step \(n\), and the Min-SNR strategy [27] is one of the latest examples. The specific calculation of Min-SNR depends on the parameterization of the diffusion model. Specifically, diffusion models can be trained to predict the clean example \(\mathbf{z}_{0}\), the additive noise \(\mathbf{\epsilon}\), or the noise velocity \(\mathbf{v}\). The Min-SNR weighting formulation is different for the three parameterizations.
This work investigates whether the Min-SNR strategy also improves CD. Since consistency models predict the clean sample \(\mathbf{z}_{0}\), we use the Min-SNR formulation for \(\mathbf{z}_{0}\)-predicting diffusion models, which is \(\omega(n)=\min\{\mathrm{SNR}(t_{n}),\gamma\}\), where \(\omega(n)\) is the loss weight for the \(n^{\text{th}}\) time step, \(\mathrm{SNR}(t)\) is the SNR at time \(t\), \(t_{n}\) is the time corresponding to the \(n^{\text{th}}\) time step, and \(\gamma\) is a constant defaulted to 5. For the Heun solver used in most of our experiments, \(\mathrm{SNR}(t)\) is the inverse of the additive Gaussian noise variance at time \(t\).
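In code, the weight reduces to a clipped SNR; for the Heun/EDM-style trajectory used here, with \(\sigma_{n}\) the additive noise standard deviation at step \(n\), this is simply:

```python
def min_snr_weight(sigma_n, gamma=5.0):
    """Min-SNR loss weight for a z0-predicting model: omega(n) = min(SNR(t_n), gamma),
    with SNR(t_n) = 1 / sigma_n^2 (inverse of the additive noise variance)."""
    return min(1.0 / sigma_n ** 2, gamma)
```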
### End-to-end fine-tuning with CLAP
Since our consistency TTA model produces audio in a single neural network query, we can optimize auxiliary losses operating in the audio space along with the latent-space CD loss to improve the audio quality and semantics. On the contrary, since a diffusion model has an iterative inference process, optimizing such a model by back-propagating from the audio resembles the training of a recurrent neural network, which is known to be expensive and challenging. This work uses the CLAP score [28] as an example of fine-tuning loss function. The CLAP score, denoted by \(\mathrm{CS}\), is defined as:
\[\mathrm{CS}(\hat{\mathbf{x}},\mathbf{x})=\max\left\{100\times\frac{\mathbf{e}_{\hat{\mathbf{x}}}\cdot\mathbf{e}_{\mathbf{x}}}{\|\mathbf{e}_{\hat{\mathbf{x}}}\|\cdot\|\mathbf{e}_{\mathbf{x}}\|},0\right\}, \tag{3}\]
where \(\mathbf{\hat{\mathbf{x}}}\) is the generated audio waveform, \(\mathbf{x}\) is the reference (ground-truth waveform or textual prompt), and \(\mathbf{e}_{\mathbf{\hat{x}}}\) and \(\mathbf{e}_{\mathbf{x}}\) are the corresponding embeddings extracted by the CLAP model.
We select the CLAP score due to its superior embedding quality arising from the diverse training tasks and datasets, as well as its consideration of audio-text correspondence. Since the CD training loss (2) does not use ground truth information, optimizing this score provides valuable feedback to the consistency model.
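Given pre-extracted CLAP embeddings, Eq. (3) is a clipped, rescaled cosine similarity; a minimal sketch is:

```python
import torch

def clap_score(e_gen, e_ref):
    """CLAP score of Eq. (3) from generation and reference embeddings (1-D tensors)."""
    cos = torch.dot(e_gen, e_ref) / (e_gen.norm() * e_ref.norm())
    return torch.clamp(100.0 * cos, min=0.0)
```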
## 4 Experiments
### Dataset and experiment settings
The experiments in this work use AudioCaps [29], a popular dataset for in-the-wild audio generation. AudioCaps is a collection of human-captioned YouTube audio, each instance having a length of at most ten seconds. Our AudioCaps copy contains 882 test instances and 45,260 training instances. Like several existing works [1, 3], the core generative U-Net of our models is trained only on the AudioCaps training set, leaving larger datasets for future work.
While we explicitly use TANGO [1] as the baseline, our methods apply to diffusion-based TTA models in general. We select FLAN-T5-Large [30] as the text encoder and the same checkpoint as [1]. For the VAE and the HiFi-GAN, we use the checkpoint pre-trained on AudioSet released by the authors of [3] as in [1]. For faster training and inference, we shrink the U-Net from 866M parameters used in [1] to 557M. All consistency models are distilled from this smaller TANGO model, which performs similarly to the checkpoint from [1] (Table 2). Additional details about our model, training setup, and evaluation are shown in Appendix A.3 and A.4.
### Objective evaluation results
Our objective evaluation considers five metrics: FAD, FD, KLD, CLAP\({}_{\mathrm{A}}\), and CLAP\({}_{\mathrm{T}}\). Specifically, FAD is the Frechet distance between generated and ground-truth audio embeddings extracted by the VGGish model [31], FD is the Frechet distance between the embeddings extracted by PANN [32], and KLD is Kullback-Leibler divergence between the PANN embeddings. CLAP\({}_{\mathrm{A}}\) and CLAP\({}_{\mathrm{T}}\) are the CLAP scores with respect to the ground-truth audio waveform and the textual prompt. We use the CLAP checkpoint from [33] trained on LAION-Audio-630k [33], AudioSet [34], and music data.
We first ablate the performance of the consistency TTA generation model under various training settings, with the results presented in Table 1. Note that "guided initialization" refers to initializing the consistency model with a guidance-aware diffusion model, whereas "unguided initialization" refers to initializing with the unmodified
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline \# queries (\(\downarrow\)) & Solver & Noise schedule & CFG \(w\) & Guidance method & Min-SNR & Initialization & FAD (\(\downarrow\)) & FD (\(\downarrow\)) & KLD (\(\downarrow\)) \\ \hline
1 & DDIM & Uniform & & & & & 13.48 & 45.75 & 2.409 \\ & Heun & Karras & 1 & - & & & 10.97 & 50.19 & 2.425 \\ \hline
2 & DDIM & Uniform & & & & & 8.565 & 38.67 & 2.015 \\ & Heun & Karras & & & & & 7.421 & 39.36 & 1.976 \\ \hline & & Karras & & & & & & \\ & & Uniform & & & & & & \\
1 & Heun & Uniform & & & & & & & \\ & & Uniform & & & & & & \\ \hline
1 & Heun & Uniform & & & & & & & \\ & & & & & & & & \\ & & & & & & & & \\ & & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablate various guidance weights, distillation techniques, solvers, noise schedules, training lengths, loss weights, and initialization. “CFG \(w\)” represents the guidance weight; “# queries” indicates the number of neural network queries during inference. U-Net modules have 557M parameters, except in variable guidance models (559M). Distillation runs are 40 epochs; inference uses FP32 precision.
TANGO teacher weights. Table 1 demonstrates that distilling with fixed or variable guidance significantly improves the performance over direct or no guidance. In terms of the teacher solver used during distillation, with \(N=18\) discretization steps as in [11], the more accurate Heun solver is advantageous over the simpler DDIM solver. Moreover, the uniform noise schedule is preferred over the Karras schedule (see Appendix A.1 for a detailed discussion). We also observe that the Min-SNR weights and the guided initialization improve the FD and FAD but slightly sacrifice the KLD.
Table 2 compares the consistency TTA models with the diffusion baseline models. On top of the best consistency model, we perform end-to-end CLAP fine-tuning, co-optimizing three loss components: the consistency loss (2), CLAP\({}_{\text{A}}\), and CLAP\({}_{\text{T}}\). Table 2 demonstrates that fine-tuning further improves all objective metrics except KLD. Furthermore, the gap between the best consistency and diffusion models is small for all quality metrics, with the FD and KLD even surpassing the reported numbers from [1] and [3].
Note that the diffusion baseline models use 200 steps following [3, 1], each step requiring two noise estimations due to CFG, amounting to 400 total network queries per generation. Thus, with minimal performance drop, the proposed consistency model reduces the number of U-Net queries by a factor of 400.
### Subjective evaluation results
Finally, we conduct subjective evaluations in two aspects: overall audio quality and audio-text correspondence. For each subject, we use 25 generated audio clips from the same set of prompts together with those from ground-truth samples. We instructed 20 evaluators to rate the audio clips on a scale of 1 to 5 for each aspect. Other details can be found in Appendix A.4. We further confirm that the consistency model produces audios close to those of the diffusion model in terms of subjective evaluation scores. Moreover, optimizing the CLAP scores improves the text-audio correspondence score, which supports our assumption that CLAP\({}_{\text{T}}\) provides closed-loop feedback to help align the generated audio with the prompt.
### Diversity of generated audio
We also observe that different random seeds, i.e., different initial Gaussian latent for the consistency TTA model, generate noticeably different audio, confirming that consistency models produce diverse generations like diffusion models. We present two example prompts from the CLAP-finetuned model in Table 4 to illustrate this diversity.
## 5 Conclusion
This work proposes an approach to accelerate diffusion-based TTA generation models hundreds of times based on consistency distillation. The delicate design of the distillation procedure emphasizing CFG achieves this vast acceleration with minimal generation quality reduction, enabling diverse and realistic in-the-wild audio generation within one neural network query. The differentiability of the resulting model allows for end-to-end fine-tuning, unlocking possibilities for further improving the training method of such models.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & U-Net & \# queries & CLAP & CFG & Human & Human & CLAP\({}_{\text{T}}\) & CLAP\({}_{\text{A}}\) & FAD & FD & KLD \\ & \# params & (\(\downarrow\)) & fine-tuning & \(w\) & Quality (\(\uparrow\)) & Corresp (\(\uparrow\)) & (\(\uparrow\)) & (\(\downarrow\)) & (\(\downarrow\)) & (\(\downarrow\)) & (\(\downarrow\)) \\ \hline Diffusion & 557M & 400 & ✗ & 3 & 4.136 & 4.064 & 24.57 & 72.79 & 1.908 & 19.57 & 1.350 \\ \hline Consistency & 559M & 1 & ✗ & 5 & **3.902** & 4.010 & 22.50 & 72.30 & 2.575 & 22.08 & **1.354** \\ \hline Ground-truth audio & - & - & - & - & 4.424 & **4.69** & **72.54** & **2.406** & **20.97** & 1.358 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Compare the human evaluation results of consistency and diffusion models. Bold numbers are defined same as Table 2.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & U-Net & \# params & CFG \(w\) & \# queries (\(\downarrow\)) & CLAP\({}_{\text{T}}\) (\(\uparrow\)) & CLAP\({}_{\text{A}}\) (\(\uparrow\)) & FAD (\(\downarrow\)) & PD (\(\downarrow\)) & KLD (\(\downarrow\)) \\ \hline AudioIDM-L reported in [3] & 739M & 2 & & & - & - & 2.08 & 27.12 & 1.86 \\ TANGO reported in [1] & 866M & 3 & & - & - & - & 1.59 & 24.53 & 1.37 \\ TANGO [1] tested by us & 866M & 3 & & 24.10 & 72.85 & 1.631 & 20.11 & 1.362 \\ Our TANGO model & 557M & 3 & & 24.57 & 72.79 & 1.908 & 19.57 & 1.350 \\ \hline Consistency model & & 3 & & 21.00 & 71.39 & 3.202 & 22.04 & 1.411 \\ without CLAP fine-tuning & 559M & 4 & 1 & 22.05 & 72.08 & 2.610 & 21.71 & 1.373 \\ \hline \multirow{2}{*}{Consistency model with CLAP fine-tuning} & & 5 & & 22.50 & 72.30 & 2.575 & 22.08 & **1.354** \\ \cline{2-10} & & 3 & & 24.44 & 72.39 & **2.182** & **20.44** & 1.368 \\ \cline{1-1} \cline{2-10} & & 4 & 1 & 24.69 & **72.54** & 2.406 & 20.97 & 1.358 \\ \cline{1-1} \cline{2-10} & & 5 & & **24.70** & 72.53 & 2.626 & 21.33 & 1.356 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Compare consistency models to the diffusion baselines. Distillation runs are extended to 60 epochs for better performance; CLAP-fine-tuning uses 10 additional epochs. All CD runs use the Heun teacher solver, uniform noise schedule, variable guidance distillation, guided initialization, Min-SNR loss weights, and BF16 inference precision. Bold numbers indicate the best results among consistency models. |
2307.16845 | The intrinsic X-ray luminosity distribution of an optically-selected
SDSS quasar population | In active galactic nuclei, the relationship between UV and X-ray luminosity
is well studied (often characterised by $\alpha_\text{ox}$) but often with
heterogeneous samples. We have parametrized the intrinsic distribution of X-ray
luminosity, $L_\text{X}$, for the optically-selected sample of SDSS quasars in
the Stripe 82 and XXL fields across redshifts 0.5-3.5. We make use of the
available XMM observations and a custom pipeline to produce Bayesian
sensitivity curves that are used to derive the intrinsic X-ray distribution in
a hierarchical Bayesian framework. We find that the X-ray luminosity
distribution is well described by a Gaussian function in
${\log_{10}}L_\text{X}$ space with a mean that is dependent on the
monochromatic 2500A UV luminosity, $L_{2500}$. We also observe some redshift
dependence of the distribution. The mean of the $L_\text{X}$ distribution
increases with redshift while the width decreases. This weak but significant
redshift dependence leads to $L_{2500}$-$L_\text{X}$ and
$L_{2500}$-$\alpha_\text{ox}$ relations that evolve with redshift, and we
produce a redshift- and $L_{2500}$-dependent $\alpha_\text{ox}$ equation.
Neither black hole mass nor Eddington ratio appear to be potential drivers of
the redshift evolution. | Amy L. Rankine, James Aird, Angel Ruiz, Antonis Georgakakis | 2023-07-31T17:06:55Z | http://arxiv.org/abs/2307.16845v2 | # The intrinsic X-ray luminosity distribution of an optically-selected SDSS quasar population
###### Abstract
In active galactic nuclei, the relationship between UV and X-ray luminosity is well studied (often characterised by \(\alpha_{\rm ox}\)) but often with heterogeneous samples. We have parametrized the intrinsic distribution of X-ray luminosity, \(L_{\rm X}\), for the optically-selected sample of SDSS quasars in the Stripe 82 and XXL fields across redshifts 0.5-3.5. We make use of the available XMM observations and a custom pipeline to produce Bayesian sensitivity curves that are used to derive the intrinsic X-ray distribution in a hierarchical Bayesian framework. We find that the X-ray luminosity distribution is well described by a Gaussian function in \(\log_{10}L_{\rm X}\) space with a mean that is dependent on the monochromatic 2500 A UV luminosity, \(L_{2500}\). We also observe some redshift dependence of the distribution. The mean of the \(L_{\rm X}\) distribution increases with redshift while the width decreases. This weak but significant redshift dependence leads to \(L_{2500}\)-\(L_{\rm X}\) and \(L_{2500}\)-\(\alpha_{\rm ox}\) relations that evolve with redshift, and we produce a redshift- and \(L_{2500}\)-dependent \(\alpha_{\rm ox}\) equation. The increasing average black hole mass with redshift in our sample points to black hole mass as a potential driver of the redshift evolution.
keywords: galaxies: active - X-rays: galaxies - ultraviolet: galaxies - galaxies: evolution - methods: statistical
## 1 Introduction
The energetic processes associated with the fuelling of Active Galactic Nuclei (AGN) produce radiation across the electromagnetic spectrum. An optically thick accretion disc is expected to emit thermally, resulting in a blackbody across the optical/UV (Shakura and Sunyaev, 1973). Meanwhile, the bulk of the X-ray emission is thought to be produced by inverse Compton scattering of accretion disc photons accelerated to X-ray energies in some form of corona following a power-law spectrum. The geometry of the corona is unclear. Various models exist to describe this corona: from a lamp-post geometry where the corona illuminates the disc from its position above the black hole (Fabian et al., 2017), to a slab corona that sandwiches the disc (Haardt and Maraschi, 1991). The presence of a 'soft excess' (i.e., X-ray emission \(\lesssim\)1 keV exceeding what would be expected from an extrapolated power-law) suggests an additional component to the X-ray production and is often attributed to an inner warm disc (Petrucci et al., 2018, 2020).
The relationship between the X-ray and UV luminosity has been known for some decades (Avni and Tananbaum, 1982, 1986) and parametrized as \(L_{\rm X}\propto L_{\rm UV}^{\gamma}\) with \(\gamma\sim 0.6\). The relationship is generally considered to be tight (Lusso and Risaliti, 2017), although some scatter is observed, motivating models where the processes involved in producing the X-ray and UV emission are dependent on some common parameter of the AGN (e.g., accretion rate, black hole mass; Lusso and Risaliti, 2017; Kubota and Done, 2018). There is little evidence of evolution with redshift (e.g., Vignali et al., 2003; Steffen et al., 2006; Just et al., 2007; Green et al., 2009; Lusso and Risaliti, 2017; Timlin et al., 2021); however, see Kelly et al. (2007) who do see some redshift-dependence of the relation. In fact, the lack of significant redshift evolution and the general tightness of the relation has led to claims that the correlation between the X-ray and UV luminosities (or more precisely the X-ray and UV fluxes) can be used to infer cosmological parameters (Salvestrini et al., 2019; Lusso et al., 2020).
The spectral index of a power-law between the UV luminosity and X-ray luminosity, specifically the monochromatic 2500 A and 2 keV luminosities (first introduced by Tananbaum et al., 1979), is denoted \(\alpha_{\rm ox}\) and is often used to parametrize the \(L_{2500}\)-\(L_{\rm X}\) relationship. The non-flat relationship between \(\alpha_{\rm ox}\) and \(L_{2500}\) shows that the X-ray luminosity increases less rapidly than the UV luminosity, suggesting that the spectral energy distribution (SED) becomes more disc-dominated. The physical driver of this relation is unclear, and revealing the true relation between \(L_{2500}\) and \(L_{\rm X}\) free from selection effects will be a step towards understanding the physical mechanism(s) that govern the relations.
The observational \(L_{2500}\)-\(L_{\rm X}\) and \(L_{2500}\)-\(\alpha_{\rm ox}\) relations are plagued by selection effects due to both the choice of the parent quasar sample and the limitations of the available X-ray data. Many studies make attempts to reduce the systematic biases that can be introduced. In particular, in a flux-limited sample the correlation between luminosity and \(z\) will invariably produce a redshift-dependent \(\alpha_{\rm ox}\). Attempts to reduce this effect include studying just the most luminous of sources across a wide redshift range (\(z\approx 1.5\)-4.5) with the downside that the sample sizes are small (Just et al., 2007); and adding a handful of faint \(z\sim 4\) AGN in order to remove the strong \(L_{2500}\)-\(z\) correlation (Kelly et al., 2007). While not making these particular choices at the sample selection stage, other studies have looked at the
observed relations across narrow luminosity bins to determine the extent of any redshift evolution (Vignali et al., 2003). Additionally, the X-ray non-detections must be treated with care. Vignali et al. (2003); Timlin III et al. (2021) include upper X-ray flux limits for their X-ray undetected quasars. Green et al. (2009) do also but down-weight the undetected objects in their analyses. Steffen et al. (2006) include (optically-selected) objects with targeted X-ray observations such that the fraction of sources requiring upper X-ray flux limits is low. Meanwhile, Lusso and Risaliti (2017) limit their sample to only sources that have X-ray detections.
In this paper, we develop and apply a Bayesian method to measure the intrinsic distribution of X-ray luminosities as a function of redshift and \(L_{2500}\) for the well-defined sample of optically-selected SDSS quasars, carefully considering the impact of X-ray flux limits. We will make use of XMM observations in the Stripe 82 and XXL fields, reducing all of the XMM data with a custom pipeline in order to accurately construct the sensitivity curves in a consistent manner. Our approach is designed to not only use the X-ray detected quasar population but also extract information from X-ray undetected quasars with X-ray emission within the noise of the available XMM-Newton observations. With accurate sensitivity curves we will be able to consider the \(L_{2500}\)-\(L_{\rm X}\) relation in a probabilistic way, thereby removing the need for upper X-ray flux limits. The choice of using the optically-selected SDSS quasar population will reduce biases otherwise brought about by the inclusion of X-ray selected or radio-selected objects, for example. We will also then be able to produce \(L_{2500}\)-\(L_{X}\) and \(L_{2500}\)-\(\alpha_{\rm ox}\) relations for a well-studied population of quasars and accurately determine any dependence on redshift.
We detail our sample selection criteria for the optically-selected sample and the careful reduction of the X-ray data, subsequent cross-matching, and calculations of luminosities in Section 2. In Section 3 we describe our Bayesian methodology for calculating the underlying distribution of X-ray luminosity as a function of UV luminosity and redshift. The intrinsic \(L_{2500}\)-\(L_{\rm X}\) relation produced by our best-fitting model is presented in Section 4 followed by the corrected \(L_{2500}\)-\(\alpha_{\rm ox}\) relation in Section 5. We briefly discuss our finding of an evolving \(L_{2500}\)-\(L_{\rm X}\) relation with redshift in Section 6.
Vacuum wavelengths are employed throughout the paper and we adopt a \(\Lambda\)CDM cosmology with \(h_{0}=0.71\), \(\Omega_{\rm M}=0.27\), and \(\Omega_{\Lambda}=0.73\) when calculating quantities such as quasar luminosities.
## 2 Data
We use X-ray data from XMM and UV/optical data from SDSS, both taken in the XXL and Stripe 82 fields. In brief, we are using the optically-selected quasars from SDSS DR16 (Lyke et al., 2020) at redshifts \(0.5<z<3.5\) across the two regions and have re-reduced the XMM data using the xmmpype custom pipeline outlined in Georgakakis and Nandra (2011). We crossmatch the XMM sources with the SDSS sources using Nway (Salvato et al., 2018). Detailed descriptions of each dataset are provided below; however, some readers may wish to peruse Table 1 and move on to Section 3.
### 2.1 Optical/UV
We use the SDSS DR16 quasar catalogue (Lyke et al., 2020) to create the optically-selected sample of AGN within the XXL and S82 fields. We filter the DR16 quasar catalogue with the multi-order coverage maps (MOCs) of the XXL and S82 XMM observations (see Section 2.2) in Aladin to select only the objects within SDSS that fall within the footprints of XXL and S82. The quasar catalogue is further limited to the optically-selected quasars which we define as the CORE sample from the BOSS and eBOSS targets (Myers et al., 2015). The CORE sample is produced by selecting objects with the following SDSS bitmasks activated: bit 40 (QSO_CORE_MAIN) of mask BOSS_TARGET1, bit 10 (QSO_EBOSS_CORE) of eBOSS_TARGET1, and bit 40 (QSO1_EBOSS_CORE) of eBOSS_TARGET1. With this selection, we aim to only include quasars that were selected and targeted based on their optical properties. In doing so, we avoid biasing our results by including, for example, the X-ray selected quasars in the XXL field which were observed as part of the large SDSS ancillary programme led by A. Georgakakis.
The optical sample contains both X-ray detected and undetected objects (see Section 2.2) with a total of 2292 quasars.
#### 2.1.1 Optical/UV properties
The SDSS spectra are reconstructed using the ICA technique outlined in Rankine et al. (2020) which essentially provides high S/N versions of the spectra over the restframe wavelength range 1260-3000 A from which the continuum luminosity at restframe 2500 A can be measured, \(L_{2500}\). The left-hand panel of Fig. 1 contains an example spectrum and reconstruction. \(L_{2500}\) is estimated by calculating the median flux in a 10 A window centred on 2500 A and converting to a luminosity. We correct the luminosities for Galactic dust extinction with the dustmaps Python module (Green, 2018) and the dust map of Schlegel et al. (1998) updated by Schlafly and Finkbeiner (2011) in tandem with the extinction module (Barbary, 2016) and the reddening curve of Fitzpatrick (1999), producing median \(E(B-V)=0.02\) for the XXL sample and 0.03 for S82 and S82X. 2500 A is redshifted out of the BOSS spectrograph at \(z\gtrsim 3.2\) (\(z\gtrsim 2.7\) for the SDSS spectrograph) which would ordinarily prevent the measurement of the 2500 A monochromatic luminosity of quasars above this redshift. However, reconstructing the spectra with the ICA technique which utilises the spectral information, including emission lines and the continuum shape, across the rest of the available spectrum above 1260 A allows the 2500 A luminosity to be estimated reliably. We checked the accuracy of extrapolating the reconstructions with a sample of quasar spectra in which 2500 A was present but only included the wavelength range 1260-2200 A in the fitting and found good agreement with the reconstructions that used the full available wavelength range between 1260-3000 A. See an example of this extrapolation in the right-hand panel of Fig. 1. Only 46 quasars of our optically-selected sample require extrapolation of the reconstructions.
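The \(L_{2500}\) measurement described above reduces to taking the median reconstructed flux in a 10 Å restframe window centred on 2500 Å and converting it to a monochromatic luminosity. A minimal sketch of that step is given below; the function and array names are ours, the reconstruction is assumed to be supplied as an observed-frame flux density in erg s\(^{-1}\) cm\(^{-2}\) Å\(^{-1}\), and the Galactic extinction correction applied in the text is omitted for brevity.

```python
import numpy as np
from astropy import constants as const
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this paper.
cosmo = FlatLambdaCDM(H0=71, Om0=0.27)

def l2500_from_reconstruction(rest_wave, flux_lambda, z, window=10.0):
    """Monochromatic luminosity at restframe 2500 A (erg/s/Hz).

    rest_wave   : restframe wavelengths in Angstrom
    flux_lambda : observed-frame flux density in erg/s/cm^2/A (assumed units)
    z           : redshift of the quasar
    """
    in_window = np.abs(np.asarray(rest_wave) - 2500.0) < window / 2.0
    f_lam = np.median(np.asarray(flux_lambda)[in_window]) * u.erg / u.s / u.cm**2 / u.AA

    # Convert F_lambda to F_nu at the observed-frame wavelength of restframe 2500 A.
    obs_wave = 2500.0 * (1.0 + z) * u.AA
    f_nu = (f_lam * obs_wave**2 / const.c).to(u.erg / u.s / u.cm**2 / u.Hz)

    # Monochromatic luminosity: L_nu(rest) = 4 pi D_L^2 F_nu(obs) / (1 + z).
    d_l = cosmo.luminosity_distance(z).to(u.cm)
    return (4.0 * np.pi * d_l**2 * f_nu / (1.0 + z)).to(u.erg / u.s / u.Hz)
```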
Uncertainties on \(L_{2500}\) are calculated by propagating the errors on the weights of the ICA spectral components produced during the reconstruction process. The median errors on \(L_{2500}\) for the subset of objects without restframe 2500 A in their spectra and \(L_{2500}\) was extrapolated from the reconstructions are \(\sim\)0.04 dex compared to \(\sim\)0.02 dex for the subset with restframe 2500 A which reflects the indirect measurement of \(L_{2500}\). Errors from the spectrum reconstructions will be much less than those from the spectrophotometry; however, we do not propagate the \(L_{2500}\) errors further, since the main source of uncertainty is the X-ray luminosities, and so do not make an attempt to quantify them here.
The left panel of Fig. 2 shows the distribution of \(L_{2500}\) versus redshift for the X-ray detected and undetected quasars. The CORE Stripe 82 and Stripe 82X samples contain very few quasars above \(z\sim 2.2\) compared to the XXL sample due to the differing SDSS selection between SDSS II and SDSS III/IV, with all of the CORE Stripe 82 and Stripe 82X quasars originating from SDSS II.
### 2.2 X-ray
We start from the 294 XMM pointings in the North field of XXL (Pierre et al., 2016). XMM-XXL North covers \(\sim\)25 deg\({}^{2}\) with an exposure time of 10 ks per XMM pointing.
Stripe 82 is an equatorial region of sky covering \(\sim\)300 deg\({}^{2}\) which has been repeatedly observed with SDSS. Approximately 28 deg\({}^{2}\) of Stripe 82 has been observed with XMM. This combines the 198 pointings from the Stripe 82X survey (S82X) at \(\sim\)5 ks per XMM pointing and 33 additional archival pointings (S82; 7-66 ks per pointing) extracted from the XMM archive (LaMassa et al., 2013; LaMassa et al., 2016).
Figure 1: Example reconstructions of quasar spectra using the ICA technique for restframe 1260–3000 Å (top) and the residuals (observed spectrum flux – reconstruction) normalised by the noise (bottom). The left panels contain a \(z=2.08\) quasar with full coverage of the 2500 Å region. The right panels demonstrate the extrapolation of the reconstruction for a \(z=3.47\) quasar without coverage of the 2500 Å region. The red shaded area is the 1-\(\sigma\) uncertainties on the reconstruction.
Figure 2: Optical/UV (left) and X-ray luminosities (right) as a function of redshift for the Stripe 82 (blue), Stripe 82X (orange) and XXL (green) samples. Median luminosity errors are presented in the legend. The 1-D redshift and luminosity distributions for the three samples are plotted above and to the right, respectively, of their corresponding axes. The \(L_{2500}\) panel contains both X-ray detected and undetected quasars, whereas the \(L_{X}\) panel contains only the X-ray detected subsample.
#### 2.2.1 Reduction
We use the xmmpype XMM pipeline, which is based on the methods and techniques described in Georgakakis and Nandra (2011). In brief, the pipeline creates images in the different energy bands, sources are detected and astrometric corrections are applied before X-ray fluxes are estimated and any optical counterparts to the X-ray sources are identified. One advantage of employing the pipeline is the greater accuracy of the sensitivity curves which are generated with a robust and well-quantified Bayesian approach (following the methods of Georgakakis et al., 2008). The sensitivity curves allow for an accurate characterisation of the selection function of a sample using analytic relations instead of cumbersome and computationally expensive simulations and can naturally account for non-detected sources. In particular, at faint fluxes the Bayesian sensitivity curves correctly account for the effects of Poisson statistics on the X-ray detection and photometry in the low-counts regime and the impact of Eddington bias. Figure 3 contains the area curves for the S82, S82X, and XXL fields in the full band. In general, at a given flux, the XXL sample is most sensitive, followed by the S82 archival pointings and finally the S82X survey. The nature of our investigations means that correcting for the X-ray detection probability is necessary and will be most significant at faint fluxes. Additionally, the pipeline coadds overlapping XMM observations to increase the X-ray depth. It is also designed for large-area serendipitous X-ray surveys which greatly facilitates the post-processing of the various products in the case of surveys that extend over large sky areas. We limit our sources to those detected in the full band (0.5-10 keV) where a detection is defined by a "false detection probability" \(p_{\rm false}<4\times 10^{-6}\), where \(p_{\rm false}\) is the probability of the observed counts (or higher) being produced purely by a fluctuation of the background. Column 1 of Table 1 lists the number of X-ray point sources resulting from the reduction of the XMM pointings, totalling 14 493 sources. Comparing to the S82 reductions of LaMassa et al. (2016), we find 5529 X-ray sources in the combined S82 regions, whilst LaMassa et al. (2016) produced a catalogue of 4668 sources with XMM detections in the full band. We find that the \(\log N\)-\(\log S\) relations of Georgakakis et al. (2008), the ExSeSS catalogue (Delaney et al., 2023), and the CDWFS (Masini et al., 2020) are in good agreement with those of our sample (see Fig. 4) providing confidence in the source detection and sensitivity maps of the xmmpype reductions.
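For a single aperture, the detection criterion above can be illustrated with the Poisson survival function: \(p_{\rm false}\) is the probability of obtaining the observed total counts or more from the expected background alone. A minimal illustration, with made-up count values (the pipeline's actual aperture photometry is more involved), is:

```python
from scipy.stats import poisson

n_obs = 12     # total counts in the aperture (hypothetical value)
b_exp = 2.3    # expected background counts in the same aperture (hypothetical value)

# Probability of >= n_obs counts arising purely from the background.
p_false = poisson.sf(n_obs - 1, b_exp)

is_detected = p_false < 4e-6   # threshold used for the full-band catalogue
print(f"p_false = {p_false:.2e}, detected: {is_detected}")
```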
#### 2.2.2 Crossmatching
We perform an initial search for possible optical counterparts in SDSS DR16 (Ahumada et al., 2020) with xmatch (Pineau et al., 2020) and a search radius of 40 arcsec around each X-ray source which yields an optical catalogue of 1 484 651 sources. We use Nway to match the X-ray observations to this catalogue with a 20 arcsec maximum radius. X-ray RA and Dec positional uncertainties were generated during the reduction with median uncertainties of \(\sim\)1.5 arcsec. We supply constant 0.1 arcsec positional uncertainties for the optical catalogue. We supply Nway with the total sky area of the reduced XMM observations - calculated from the multi-order coverage maps (MOCs) generated by xmmpype - and estimate the sky area of the input optical catalogue by creating a MOC with Aladin (Bonnarel et al., 2000) and a radius around each X-ray observation of 40 arcsec, producing a total area of 14.23 deg\({}^{2}\) once overlaps between 40-arcsec regions have been accounted for. Nway produces a matched catalogue containing all possible matches for each X-ray source and corresponding probabilities. \(p_{\rm any}\) is the probability that an X-ray source has a true counterpart in the provided catalogue and \(p_{i}\) is the probability that a particular match is the true counterpart. As such, a combination of \(p_{\rm any}\) and \(p_{i}\) and limits on each can be invoked to produce a final catalogue of robust optical counterparts of the X-ray sources. Nway calculates the average source density on the sky from the provided sky areas which leads to a scaling of the counterpart probabilities, \(p_{\rm any}\) and \(p_{i}\). Combining the XXL and Stripe 82 fields leads to an average sky density across the two fields which will affect the relative counterpart probabilities for sources in different fields. However, in our use case of Nway we do not use any absolute \(p_{\rm any}\) or \(p_{i}\) thresholds to determine the final matches; instead we are only ever comparing \(p_{\rm any}\) and \(p_{i}\) values between different objects across small physical scales; i.e., optical sources that are potential matches to the same X-ray source such that they are within the same region. As such, the scaling of the probabilities does not affect the final matching.
We include magnitude priors in the crossmatching to preferentially select counterparts with brighter optical magnitudes that are less likely to be spurious alignments and more likely to be the true counterparts to the X-ray sources. We utilise Nway's automatic prior generation to perform the matching with \(r\)-band and, separately, \(i\)-band information from SDSS as well as with \(r\)- and \(i\)-band in tandem. We also tried pre-determining the priors based on the \(r\)- and \(i\)-band
\begin{table}
\begin{tabular}{c c c c} \hline & Sources & Detected & Undetected \\ \hline S82 & 2393 & 226 & 366 \\ S82X & 3136 & 196 & 764 \\ XXL & 8964 & 348 & 392 \\ \hline \end{tabular}
\end{table}
Table 1: Number counts for final samples of X-ray detected and undetected sources for the S82, S82X and XXL fields. The first column contains the total number of point sources with detections in the full band extracted with xmmpype. The second and third columns contain the number of optically-selected quasars that have X-ray counterparts (are X-ray detected) and those that do not (undetected).
Figure 3: Sensitivity curves for the full band (0.5–10 keV) across the three regions in our sample.
magnitudes of the optical quasars in the optical catalogue compared to the non-quasar objects.
We make the final match selection by prioritising counterparts that are classed as AGN which we define as either having spCl (the spectroscopic class) as 'QSO' or if the object is found in the SDSS DR16 quasar catalogue (Lyke et al., 2020) having performed a simple 1-arcsec crossmatch between the DR16 and DR16Q catalogues. To implement the AGN-prioritisation, we take the possible matches from Nway, and inspect the match with the highest product of \(p_{\mathrm{any}}\) and \(p_{i}\) that is also an AGN. The product avoids multiple X-ray sources having the same AGN optical counterpart and gives priority to the X-ray source with the highest probability of having a counterpart in this optical catalogue (\(p_{\mathrm{any}}\)). Only using \(p_{i}\) would result in 195 X-ray sources having an optical match already associated with another X-ray source. Only if the \(p_{i}\) for this optical AGN is \(>0.01p_{i}\) of the original best match is the AGN selected as the counterpart. We perform a false-positive calibration by offsetting the X-ray positions and running Nway with this mock X-ray catalogue (and corresponding mock optical catalogue obtained with xmatch and a 40 arcsec search radius). The AGN number density on the sky is low such that for our AGN-prioritisation scheme a \(p_{\mathrm{any}}\) threshold of zero is sufficient to maintain a false-positive fraction \(<\)1 %.
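The AGN-prioritisation can be summarised as: rank the Nway candidates by \(p_{\rm any}\times p_i\), take the highest-ranked candidate flagged as an AGN, and adopt it only if its \(p_i\) exceeds 1 per cent of the \(p_i\) of the overall best match. A schematic per-source version of that logic is sketched below; the column names are hypothetical stand-ins for the Nway output, and the cross-source de-duplication that the \(p_{\rm any}\times p_i\) product also enables is not captured here.

```python
import pandas as pd

def select_counterpart(candidates: pd.DataFrame):
    """Pick the optical counterpart for one X-ray source.

    `candidates` is assumed to hold the Nway matches for a single X-ray source
    with (hypothetical) columns: 'p_any', 'p_i', 'is_agn'.
    Returns the index of the adopted AGN counterpart, or None if none is adopted.
    """
    if candidates.empty:
        return None

    best = candidates.loc[candidates['p_i'].idxmax()]   # overall best match
    agn = candidates[candidates['is_agn']]
    if agn.empty:
        return None                                      # no AGN candidate at all

    # Highest product of p_any and p_i among the AGN candidates.
    score = agn['p_any'] * agn['p_i']
    best_agn = agn.loc[score.idxmax()]

    # Adopt the AGN only if its p_i is > 1% of the overall best match's p_i.
    if best_agn['p_i'] > 0.01 * best['p_i']:
        return best_agn.name
    return None
```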
Ultimately, we obtain the highest completeness when including the \(r\)-band quasar-based magnitude prior; however, the QSO-prioritisation scheme leads to only a few X-ray matches changing depending on the prior used. In the final matched catalogue, 26 % of the X-ray sources have an optical counterpart identified as an AGN. Given that we are starting from an optically-selected subsample of the SDSS DR16 quasar catalogue, we limit the sample to the X-ray sources that have optical counterparts identified as AGN based on their inclusion in the DR16 quasar catalogue. Our final sample thus contains 2292 optically-selected AGN, 770 (34 %) of which are X-ray detected (see Table 1).
#### 2.2.3 X-ray properties
X-ray flux measurements for the full 0.5-10 keV band are calculated during the reduction with Galactic absorption taken into account (estimated from the H i maps of the LAB survey; Kalberla et al., 2005) but assume a photon index of \(\Gamma=1.4\). We are specifically selecting X-ray sources associated with (broad-line) quasars and so expect them to have unabsorbed X-ray spectra. We check this using the hardness ratios and confirm that they are, on average, consistent with \(\Gamma=1.9\) (see Fig. 5). We convert the flux measurements to \(\Gamma=1.9\) using conversion factors from WebPIMMS based on H i column densities of \(2\times 10^{20}\) and \(3\times 10^{20}\) cm\({}^{-2}\) for the XXL and S82 fields, respectively. Where available, redshifts from Rankine et al. (2020) which are based on an independent component analysis (ICA) of the optical spectra are used to calculate the X-ray luminosity, \(L_{\mathrm{X}}\); otherwise, the redshifts reported in Lyke et al. (2020) are used. We apply a K-correction assuming a photon index of \(\Gamma=1.9\). The X-ray luminosity distribution with redshift is plotted in the right-hand panel of Fig. 2. The sensitivity of the X-ray data is apparent in the lower bound on \(L_{\mathrm{X}}\) with redshift. The S82 and XXL samples are similar in their \(L_{\mathrm{X}}\)-\(z\) distributions; however, XXL extends to higher redshifts due to the SDSS selection.
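For reference, the conversion from an observed full-band flux to a restframe 0.5-10 keV luminosity with the standard power-law K-correction used here (\(\Gamma=1.9\)) can be written as \(L_{\rm X}=4\pi D_L^2\,f_{\rm X}\,(1+z)^{\Gamma-2}\). A short sketch, with an illustrative flux value and omitting the WebPIMMS-based count-rate-to-flux step, is:

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=71, Om0=0.27)

def full_band_luminosity(flux_cgs, z, gamma=1.9):
    """Restframe 0.5-10 keV luminosity from an observed 0.5-10 keV flux.

    flux_cgs : observed flux in erg/s/cm^2
    Assumes an unabsorbed power law so the K-correction is (1+z)**(gamma-2).
    """
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    k_corr = (1 + z) ** (gamma - 2)
    return 4 * np.pi * d_l**2 * flux_cgs * k_corr

# Illustrative value only:
print(np.log10(full_band_luminosity(1e-14, z=1.5)))
```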
Figure 4: Top: cumulative number counts as a function of full band XMM flux and comparison to ExSeSS (Delaney et al., 2023) and CDWFS (Masini et al., 2020) Bottom: differential number counts with the euclidean slope removed with comparison to the model of Georgakakis et al. (2008). Errors are Poisson errors based on the number of sources and scaled accordingly.
Figure 5: Hardness ratios versus redshift for our X-ray detected quasar sample. The median errors are plotted in the bottom right. The horizontal lines mark the average hardness ratios for \(\Gamma=1.4\), 1.6, and 1.9 assuming appropriate H i column densities of \(2\times 10^{20}\) and \(3\times 10^{20}\) cm\({}^{-2}\) and PN and MOS detectors.
## 3 Measurements of the intrinsic distribution of \(L_{\rm X}\) as a function of \(L_{2500}\) and redshift
We aim to arrive at a model that describes the distribution of X-ray luminosity as a function of UV luminosity and redshift. In Fig. 6 we plot the distribution of X-ray luminosity for our X-ray detected quasar sample in bins of \(L_{2500}\) and \(z\) (solid colour histograms). In what follows we will make use of the Bayesian sensitivity curves provided by xmmpype in order to account for the X-ray undetected quasar population and derive the underlying \(L_{\rm X}\) distribution function.
### 3.1 Completeness-corrected distribution at a given \(L_{\rm X}\)
We attempt to account for the undetected X-ray sources in each (\(L_{\rm X}\), \(L_{2500}\), \(z\)) bin by calculating the probability of a source having an X-ray luminosity \(L_{\rm X}\) given its \(L_{2500}\) and \(z\):
\[P(L_{\rm X}|L_{2500},z)=\frac{N_{\rm det}}{\sum_{i=1}^{N_{\rm tot}}p({\rm det }|L_{\rm X},z_{i})\,\Delta{\rm log}_{10}L_{\rm X}} \tag{1}\]
The numerator is the number of X-ray detected quasars in each (\(L_{\rm X}\), \(L_{2500}\), \(z\)) bin. The denominator takes into account the probability that quasar \(i\) with redshift \(z_{i}\) would be detected if it had an X-ray luminosity corresponding to the centre of the \(L_{\rm X}\) bin and is summed over all X-ray detected and undetected quasars in that (\(L_{2500}\), \(z\)) bin. \(\Delta{\rm log}_{10}L_{\rm X}\) is the width of the \(L_{\rm X}\) bin. The corrected counts in a given (\(L_{\rm X}\), \(L_{2500}\), \(z\)) bin can then be calculated by the following:
\[N_{\rm corr} =P(L_{\rm X}|L_{2500},z)\,N_{\rm tot}\,\Delta{\rm log}_{10}L_{ \rm X}\] \[=\frac{N_{\rm det}\,N_{\rm tot}}{\sum_{i=1}^{N_{\rm tot}}p({\rm det }|L_{\rm X},z_{i})}. \tag{2}\]
In the limit where \(p({\rm det}|L_{\rm X},z)=1\) for all quasars in a given \(L_{\rm X}\) bin (i.e. the X-ray data are sufficiently deep that any quasar with that \(L_{\rm X}\) should be detected), \(N_{\rm corr}=N_{\rm det}\) and thus corresponds to the "uncorrected" (solid) histograms in Figure 6. Thus, as expected, at the highest \(L_{\rm X}\) the corrected (open histograms/error bars) and uncorrected (solid histograms) estimates are consistent.
The corrected counts are plotted in Fig. 6 as coloured outlined histograms and Poisson errors are generated based on the \(N_{\rm det}\) in each bin and applying Gehrels' method for small number statistics (Gehrels, 1986). As expected, the correction is larger at low X-ray luminosities. The corrected \(L_{\rm X}\) distribution appears approximately Gaussian, with the centre, \(\mu\), and width, \(\sigma\), potentially varying with \(L_{2500}\) and \(z\); however, this approach can only correct bins with \(N_{\rm det}>0\), and significant binning is required. Additionally, while narrower bins lead to higher resolution, the uncertainties on \(L_{\rm X}\) are comparable to the \(L_{\rm X}\) bin width. In the following section we move on to using Maximum Likelihood Estimation (MLE) to arrive at a fully unbinned approach to determining the \(L_{\rm X}\) distribution as a function of \(L_{2500}\) and \(z\).
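A direct translation of equation 2 into code, for one (\(L_{2500}\), \(z\)) bin, is sketched below; `p_det(logLx, z)` stands in for the detection probability derived from the sensitivity curves and the variable names are illustrative.

```python
import numpy as np

def corrected_counts(logLx_det, z_all, logLx_bins, p_det):
    """Completeness-corrected counts per log10(L_X) bin (equation 2).

    logLx_det  : log10(L_X) of the X-ray *detected* quasars in this (L_2500, z) bin
    z_all      : redshifts of *all* quasars (detected + undetected) in the bin
    logLx_bins : bin edges in log10(L_X)
    p_det      : callable, p_det(logLx, z) -> detection probability from the area curves
    """
    n_det, _ = np.histogram(logLx_det, bins=logLx_bins)
    centres = 0.5 * (logLx_bins[:-1] + logLx_bins[1:])

    n_corr = np.zeros_like(centres)
    for k, lc in enumerate(centres):
        # Sum of detection probabilities over every quasar in the bin, each
        # evaluated as if it had L_X at the bin centre (denominator of eq. 2).
        denom = np.sum([p_det(lc, z) for z in z_all])
        n_corr[k] = n_det[k] * len(z_all) / denom if denom > 0 else np.nan
    return n_corr
```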
### 3.2 Maximum likelihood fitting
The observed and corrected distributions in Fig. 6 suggest that \(\log_{10}L_{\rm X}\) is normally distributed for quasars of a given \(L_{2500}\) and \(z\):
\[P(L_{\rm X}|L_{2500},z)=\frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\frac{(\log_{10}L_{\rm X}-\mu)^{2}}{2\sigma^{2}}\right], \tag{3}\]
with mean \(\mu\) and width \(\sigma\) both of which may depend on \(L_{2500}\) and/or \(z\). In this section, we will attempt to fit the X-ray luminosity distribution function from equation 3 via maximum likelihood estimation (MLE) and will investigate the requirement for \(L_{2500}\)- and \(z\)-dependence.
The log-likelihood (which we derive in Appendix A) is given by
\[\ln\mathcal{L}(\theta)=\sum_{i=1}^{N_{\rm det}}\ln\int_{40}^{\infty}P(L_{\rm X}|L_{2500_{i}},z_{i},\theta)\ P(N_{i}|N_{\rm exp})\ {\rm d}\log_{10}L_{\rm X}+\sum_{j=1}^{N_{\rm und}}\ln\int_{40}^{\infty}P(L_{\rm X}|L_{2500_{j}},z_{j},\theta)\ p(\overline{{\rm det}}|L_{\rm X},z_{j})\ {\rm d}\log_{10}L_{\rm X}, \tag{4}\]
where the first and second terms account for the X-ray detected and undetected quasar samples, respectively. Considering the X-ray detected term, \(P(L_{\rm X}|L_{2500},z_{i},\theta)\) is the probability of detected quasar \(i\) having an X-ray luminosity \(L_{\rm X}\) (drawn from the corresponding log-normal distribution) given its UV luminosity \(L_{2500_{i}}\) and redshift \(z_{i}\) calculated from equation 3 with parameters \(\theta=\mu,\sigma\). The \(P(N_{i}|N_{\rm exp})\) term takes into account the uncertainty on the measured X-ray luminosity of the quasar, which is described by a Poisson distribution:
\[P(N_{i}|N_{\rm exp})=\frac{N_{\rm exp}^{N_{i}}}{N_{i}!}\ e^{-N_{\rm exp}} \tag{5}\]
with \(N_{i}\), the total observed counts for quasar \(i\), and \(N_{\rm exp}\), the expected number of counts from a source with \(L_{\rm X}\) which is determined via
\[N_{\rm exp}=\frac{L_{\rm X}}{4\pi D_{L}^{2}\left(z_{i}\right)K_{\rm corr}\left( z_{i}\right)}\times{\rm ECF}_{i}\times{\rm EEF}\times t_{\rm exp_{i}}+B_{i}. \tag{6}\]
The energy conversion factor, ECF, exposure, \(t_{\rm exp}\), background counts, \(B_{i}\), and total counts, \(N_{i}\) are specific to each X-ray detection and calculated during the reduction. The encircled energy fraction, EEF, is 70 %. The luminosity distance, \(D_{L}(z_{i})\), and K-correction, \(K_{\rm corr}(z_{i})\), are calculated for the redshift \(z_{i}\) of the quasar. Since the X-ray luminosities (fluxes more precisely) and errors are calculated from a Poisson distribution by xmmpype, \(P(N_{i}|N_{\rm exp})\) will be maximal when the integration variable \(L_{\rm X}\) equals the estimated X-ray luminosity of quasar \(i\), \(L_{\rm X_{i}}\). We note that the maximum of \(P(N_{i}|N_{\rm exp})\) corresponds to our nominal best estimate of the X-ray luminosity, \(L_{\rm X_{i}}\), for a given detected quasar.
The X-ray undetected term, similarly to the detected term, depends on the probability of undetected quasar \(j\) having an X-ray luminosity \(L_{\rm X}\) given its UV luminosity \(L_{2500_{j}}\) and redshift \(z_{j}\), \(P(L_{\rm X}|L_{2500_{j}},z_{j},\theta)\). This probability is multiplied by the probability of quasar \(j\) remaining undetected if it were to have X-ray luminosity \(L_{\rm X}\):
\[p(\overline{{\rm det}}|L_{\rm X},z_{j})=1-p({\rm det}|L_{\rm X},z_{j}) \tag{7}\]
where \(p({\rm det}|L_{\rm X},z_{j})\) is calculated from the area curves (Fig. 3) via
\[p({\rm det}|L_{\rm X},z_{j})=\frac{{\rm Area}}{{\rm Total\ Area}}. \tag{8}\]
The integration limits are set as \(\log_{10}L_{\rm X}=40\) and \(\infty\). In practice, the upper limit is set by the maximum X-ray flux probed by the sensitivity curves (\(10^{-10}\,{\rm erg\,s^{-1}\,cm^{-2}}\)).
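To make the structure of equation 4 concrete, a numerical sketch of the two per-quasar terms is given below. The integration is a simple trapezoidal sum over \(\log_{10}L_{\rm X}\), `n_exp_from_Lx` stands in for equation 6 and `p_det` for the sensitivity-curve probability of equation 8; all function and variable names here are ours.

```python
import numpy as np
from scipy.stats import poisson, norm

log_lx_grid = np.linspace(40, 48, 400)   # integration grid in log10(L_X)

def ln_term_detected(mu, sigma, n_obs, n_exp_from_Lx):
    """First term of equation 4 for one detected quasar.
    n_exp_from_Lx : callable mapping log10(L_X) -> expected counts (equation 6)."""
    p_lx = norm.pdf(log_lx_grid, loc=mu, scale=sigma)
    p_counts = poisson.pmf(n_obs, n_exp_from_Lx(log_lx_grid))
    return np.log(np.trapz(p_lx * p_counts, log_lx_grid))

def ln_term_undetected(mu, sigma, p_det):
    """Second term of equation 4 for one undetected quasar.
    p_det : callable mapping log10(L_X) -> detection probability (equation 8)."""
    p_lx = norm.pdf(log_lx_grid, loc=mu, scale=sigma)
    p_miss = 1.0 - p_det(log_lx_grid)
    return np.log(np.trapz(p_lx * p_miss, log_lx_grid))
```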
### 3.3 Distribution of \(L_{\rm X}\) in fixed \(L_{2500}\) and redshift bins
We first aim to determine if and how the X-ray luminosity distribution changes as a function of \(L_{2500}\) and \(z\). We divide X-ray detected and undetected quasar samples between equally spaced redshift bins: \(0.5<z<1.5\), \(1.5<z<2.5\), and \(2.5<z<3.5\). We also split the samples across six \(L_{2500}\) bins such that there are approximately equal numbers of quasars in each bin. For each \((z_{k},L_{2500_{k}})\) bin we fit for \(\mu\) and \(\sigma\) by maximising the log-likelihood in equation 4 with the Python
package emcee (Foreman-Mackey et al., 2013). The best-fitting parameters are presented in Fig. 7 as circles. \(\mu\) is clearly dependent on \(L_{2500}\), with the mean \(L_{\rm X}\) increasing with increasing \(L_{2500}\) across all redshift bins. On the other hand, there is little evidence for an \(L_{2500}\)-dependent \(\sigma\) at any redshift, but it is possible that \(\sigma\) decreases as redshift increases, suggesting that the distribution of \(L_{\rm X}\) is narrower at greater redshifts. In light of these correlations, we remove the \(L_{2500}\) binning in the next section and model the relationship between \(L_{2500}\) and \(\mu\) (and \(\sigma\)) as linear.
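A minimal sketch of the per-bin emcee call is shown below. For it to run stand-alone, the full likelihood of equation 4 is replaced by a toy Gaussian likelihood on fake detected \(\log_{10}L_{\rm X}\) values; in the real fit, `ln_like` would combine the detected and undetected terms sketched above, and the flat prior ranges are our own illustrative choices.

```python
import numpy as np
import emcee
from scipy.stats import norm

# Toy stand-in for the full likelihood of equation 4.
rng = np.random.default_rng(42)
log_lx_det = rng.normal(44.5, 0.4, size=50)

def ln_like(theta):
    mu, sigma = theta
    return np.sum(norm.logpdf(log_lx_det, loc=mu, scale=sigma))

def ln_prob(theta):
    mu, sigma = theta
    if not (40 < mu < 48) or not (0 < sigma < 2):   # simple flat priors (our choice)
        return -np.inf
    return ln_like(theta)

ndim, nwalkers = 2, 32
p0 = np.column_stack([rng.uniform(44, 45, nwalkers), rng.uniform(0.3, 0.6, nwalkers)])

sampler = emcee.EnsembleSampler(nwalkers, ndim, ln_prob)
sampler.run_mcmc(p0, 2000, progress=False)
mu_fit, sigma_fit = np.median(sampler.get_chain(discard=500, flat=True), axis=0)
print(mu_fit, sigma_fit)
```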
### 3.4 \(L_{2500}\)-dependent distribution of \(L_{\rm X}\) in fixed redshift bins
When binning by \(L_{2500}\) the model parameter \(\mu\) (i.e. the average of the \(\log L_{X}\) distribution) appears to increase as \(L_{2500}\) increases. We model this dependence on \(L_{2500}\) for \(\mu\) and \(\sigma\) as linear with \(\log_{10}L_{2500}\):
\[\mu=m_{\mu}(\log_{10}L_{2500}-30)+c_{\mu}; \tag{9}\] \[\sigma=m_{\sigma}(\log_{10}L_{2500}-30)+c_{\sigma}.\]
We perform MLE on the \(z\)-binned data to constrain \(m_{\mu}\), \(m_{\sigma}\), \(c_{\mu}\), and \(c_{\sigma}\) (model (ii)), thus removing the need to bin our quasar sample according to \(L_{2500}\). It is not clear that \(\sigma\) varies with \(L_{2500}\), so we repeat the MLE for a model where \(\sigma\) does not depend on \(L_{2500}\); formally, \(m_{\sigma}=0\) and therefore \(\sigma=c_{\sigma}\) (model (iii)).
To compare the different models (with different numbers of free parameters) we will use the Akaike Information Criterion defined as
\[\mathrm{AIC}=2N_{\mathrm{dim}}-2\ln\hat{\mathcal{L}} \tag{10}\]
with \(N_{\mathrm{dim}}\) the number of free parameters and \(\hat{\mathcal{L}}\) the maximum of the likelihood function (Equation 4). The AIC penalises models with a large number of parameters and models with lower AICs are considered to better represent the data. To calculate the AICs of the models with binning, we treat the model as a piecewise function such that the maximum log-likelihood is the sum of the maximum log-likelihood over all \(z\) bins (and \(L_{2500}\) bins for model (i)) and \(N_{\mathrm{dim}}\) is the total number of parameters across all bins. Model (iii) is formally a better fit with a lower AIC than model (ii) (see Table 2) and so we plot the results of model (iii) in Fig. 7 as the straight lines and shaded regions. Within the errors the linear dependence on \(L_{2500}\) agrees with the \(L_{2500}\)-binned fits of model (i) (circles; Section 3.3).
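Equation 10 and the piecewise treatment of the binned models translate directly into code; the call below, with an illustrative maximum log-likelihood value, reproduces the AIC quoted for model (iii) in Table 2.

```python
def aic(n_dim, ln_like_max):
    """Akaike Information Criterion (equation 10)."""
    return 2 * n_dim - 2 * ln_like_max

# For the z-binned model (iii): 3 parameters in each of 3 redshift bins, with the
# maximum log-likelihood summed over bins (illustrative value chosen to match Table 2).
print(aic(n_dim=9, ln_like_max=-5033.885))   # -> 10085.77
```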
Across redshift bins, both the intercept of the \(\mu\) relation and \(\sigma\) change. At a given \(L_{2500}\), \(\mu\) increases as redshift increases and \(\sigma\) decreases. This can be seen more clearly in Fig. 8. The gradient of the \(\mu\)-\(L_{2500}\) relation appears to be relatively constant with redshift; however, \(c_{\mu}\) is clearly increasing as redshift increases and \(\sigma\) is decreasing. We thus move on to model the dependence of these parameters on redshift to arrive at a fully unbinned MLE in Section 3.5.
Figure 6: Distribution of \(L_{X}\) binned by \(L_{2500}\), \(z\). Each panel is a different \(L_{2500}\) (columns, increasing to right) and \(z\) (rows, increasing towards bottom) bin. The X-ray detected quasars are presented as the filled histograms. The binned corrected counts and associated Poisson errors are represented by the open histograms and error bars (see Section 3.1). The black histograms are a random sample drawn from the assumed Gaussian distribution of \(L_{X}\) with parameters determined by the maximum likelihood estimation with the best-fitting model (vii) (see Section 3.2). From both the binned corrected counts and the MLE results, it is clear that the X-ray detected sample is skewed towards the high \(L_{X}\) sources. In the majority of the \(L_{2500}\) and \(z\) bins the stacked \(L_{X}\) from the MLE results agrees with the stacked data (black and coloured vertical arrows with 1-\(\sigma\) error bars). Only the (\(L_{2500}\), \(z\), \(L_{X}\)) bins populated with X-ray detected quasars can be corrected via the binning method outlined in Section 3.1, providing motivation for the MLE detailed in Section 3.2).
### 3.5 Continuous model of the redshift evolution
In this section we arrive at a selection of models to describe the whole data sample in a continuous manner instead of discrete \(L_{2500}\) or \(z\) bins. We model any possible redshift evolution of \(\mu\) and \(\sigma\) via a linear dependence of \(m_{\mu}\), \(c_{\mu}\), and \(\sigma\) on \(z\). We perform the MLE with various models with different \(z\) dependencies, explicitly:
1. no redshift evolution, with parameters \(m_{\mu}\), \(c_{\mu}\), and \(\sigma\) [model (iv) in Table 2]. This is the equivalent of model (iii) in the limit of one redshift bin.
2. no redshift evolution, with parameters \(m_{\mu}\), \(c_{\mu}\), \(m_{\sigma}\), and \(c_{\sigma}\) [model (v)]. This is the equivalent of model (ii) but assuming a single, broad redshift bin.
3. only redshift evolution of \(\mu\), with gradient and intercept parameters for \(m_{\mu}(z)\) and \(c_{\mu}(z)\), and a constant \(\sigma\) [model (vi)].
4. redshift evolution of only \(c_{\mu}\) and \(\sigma\), with gradient and intercept parameters for \(c_{\mu}(z)\) and \(\sigma(z)\), and a constant \(m_{\mu}\) [model (vii)].
5. redshift evolution of \(m_{\mu}\), \(c_{\mu}\), and \(\sigma\), with gradient and intercept parameters for \(m_{\mu}(z)\), \(c_{\mu}(z)\), and \(\sigma(z)\) [model (viii)].
From the AIC values in Table 2, the model which best represents the data is model (vii) which allows for redshift evolution of \(c_{\mu}\) and \(\sigma\) parametrized by,
\[c_{\mu}=p_{\mu}z+k_{\mu}; \tag{11}\] \[\sigma=p_{\sigma}z+k_{\sigma}.\]
The grey lines and shaded regions in Fig. 8 are the best-fitting parameters for this model (also listed in Table 3). The \(z\)-binned \(m_{\mu}\) values from Section 3.4 are systematically higher than the continuous redshift modelling in this section. This is due to the distribution of objects within the relatively broad redshift bins and the intrinsic redshift evolution of \(\mu\) within such bins. The higher \(L_{2500}\) and \(L_{\rm X}\) sources within a given redshift bin are preferentially identified toward higher redshifts and thus in our binned results a steeper relation between \(L_{\rm X}\) and \(L_{2500}\) (i.e. a steeper \(m_{\mu}\)) is recovered to account for this redshift evolution. The intercept, \(c_{\mu}\), does not have such a
\begin{table}
\begin{tabular}{c c c c c c} \hline Model & Binning & Parameters & \(N_{\rm dim}\) & AIC & \(\Delta\)AIC \\ \hline (i) & \(z\), \(L_{2500}\) & (\(\mu\), \(\sigma\)) for each \(L_{2500}\) and \(z\) bin & 36 & 10131.86 & 99.20 \\ (ii) & \(z\) & (\(m_{\mu}\), \(c_{\mu}\), \(m_{\sigma}\), \(c_{\sigma}\)) for each \(z\) bin & 12 & 10090.76 & 58.10 \\ (iii) & \(z\) [constant \(\sigma\) with \(L_{2500}\)] & (\(m_{\mu}\), \(c_{\mu}\), \(\sigma\)) for each \(z\) bin & 9 & 10085.77 & 53.12 \\ (iv) & Unbinned, no \(z\) evolution & \(m_{\mu}\), \(c_{\mu}\), \(\sigma\) & 3 & 10145.90 & 113.25 \\ (v) & Unbinned, no \(z\) evolution & \(m_{\mu}\), \(c_{\mu}\), \(m_{\sigma}\), \(c_{\sigma}\) & 4 & 10145.40 & 112.74 \\ (vi) & Unbinned, \(z\) evolution & \(m_{\mu}(z)\), \(c_{\mu}(z)\), \(\sigma\) & 5 & 10061.18 & 28.53 \\
**(vii)** & **Unbinned, \(z\) evolution** & **\(m_{\mu}\), \(c_{\mu}(z)\), \(\sigma(z)\)** & **5** & **10032.65** & **0.00** \\ (viii) & Unbinned, \(z\) evolution & \(m_{\mu}(z)\), \(c_{\mu}(z)\), \(\sigma(z)\) & 6 & 10032.67 & 0.01 \\ \hline \end{tabular}
\end{table}
Table 2: The models fitted with MLE. Columns are, in order: the model number used to refer to it in the text; the binning required by the model and, for the completely unbinned models, whether or not there is redshift evolution; the parameters of the model; the number of dimensions of the model, which takes into account the number of redshift and \(L_{2500}\) bins; the AIC values; and \(\Delta\)AIC, the difference between the AIC for that model and the lowest AIC value. Model (vii), with the lowest AIC, is highlighted in bold.
\begin{table}
\begin{tabular}{c c} \hline Parameter & Value \\ \hline \(m_{\mu}\) & \(0.313^{+0.035}_{-0.033}\) \\ \(p_{\mu}\) & \(0.414^{+0.037}_{-0.033}\) \\ \(k_{\mu}\) & \(43.426^{+0.066}_{-0.062}\) \\ \(p_{\sigma}\) & \(-0.122^{+0.021}_{-0.021}\) \\ \(k_{\sigma}\) & \(0.657^{+0.061}_{-0.039}\) \\ \hline \end{tabular}
\end{table}
Table 3: Best fit parameter values and 1-\(\sigma\) uncertainties for model (vii) where \(\mu\)(\(L_{2500}\), \(z\)) = \(m_{\mu}\)(\(\log_{10}L_{2500}-30\)) + \(p_{\mu}z+k_{\mu}\) and \(\sigma\)(\(z\)) = \(p_{\sigma}z+k_{\sigma}\).
Figure 7: Model parameters \(\mu\) (top) and \(\sigma\) (bottom) from equation 3 and estimated via MLE as a function of \(L_{2500}\). The parameters for model (i), which requires running the MLE on data binned by \(L_{2500}\) and \(z\), are shown by the circles and 1-\(\sigma\) error bars. Colours represent \(z\) bins. Not all (\(z\), \(L_{2500}\)) bins contain data and so some bins are missing from the analysis. The lines and shaded regions correspond to the \(\mu\) and \(\sigma\) parameters obtained when modelling a linear dependence of \(\mu\) on \(L_{2500}\) and no \(L_{2500}\) dependence for \(\sigma\) (model (iii)). Significant trends with \(L_{2500}\) exist for \(\mu\) in all redshift bins and the results from models (i) and (iii) match well. Within the errors, models (i) and (ii) agree with an \(L_{2500}\)-independent \(\sigma\) but show a clear decrease of \(\sigma\) with increasing redshift.
strong dependence on the width of the redshift bin. Increasing the number of redshift bins by a factor of two removes this systematic bias but also reduces the number of objects in each bin and thus leads to greater statistical uncertainties in the parameters.
## 4 Underlying \(L_{2500}\)-\(L_{\rm X}\) distribution
With model (vii) in hand, for a given \(z\), as expected the peak of the intrinsic \(L_{\rm X}\) distribution increases as \(L_{2500}\) increases. Perhaps not as obvious is that for a given \(L_{2500}\), the intrinsic \(L_{\rm X}\) distribution shifts to higher \(L_{\rm X}\) as redshift increases (\(c_{\mu}\) increases since the gradient of \(c_{\mu}(z)\) is found to be positive) and also narrows (\(\sigma\) decreases since the gradient of \(\sigma(z)\) is negative).
We compare the underlying distribution of \(L_{\rm X}\) of our optically-selected quasar sample to the observed data and original binned corrections in Fig. 6. For each object in our sample with a given \(L_{2500}\) and redshift, detected or otherwise, we draw 100 samples from the distribution function (equation 3) with the best-fit parameters listed in Table 3, effectively creating a mock sample of \(L_{\rm X}\) measurements if there were no limitations in X-ray depth. In Fig. 6 we then normalise to the number of objects in each \(L_{2500}\) and redshift bin (black histograms). Unlike the binned corrections, we can infer the source counts in \(L_{\rm X}\) bins with zero observed sources. In the majority of \(L_{2500}\) and \(z\) bins the binned corrections and the MLE corrections agree. However, in some panels (e.g., second row, second to last column) the binned corrections are significantly lower than the MLE distribution. We attribute this to the Bayesian sensitivity curves, which appropriately account for Eddington bias, not being suited to the crude binned corrections carried out in Section 3.1 in bins where the majority of the sample lies around the flux limit. As noted previously, in all bins, the detected sources are only probing the high \(L_{\rm X}\) tail of the distribution.
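Generating the mock sample amounts to evaluating \(\mu(L_{2500},z)\) and \(\sigma(z)\) for each quasar with the best-fitting model (vii) parameters (Table 3) and drawing from the corresponding Gaussian in \(\log_{10}L_{\rm X}\); a short sketch, with our own function names and illustrative input values, is given below.

```python
import numpy as np

# Best-fitting model (vii) parameters from Table 3.
M_MU, P_MU, K_MU = 0.313, 0.414, 43.426
P_SIG, K_SIG = -0.122, 0.657

def mu_model(log_l2500, z):
    return M_MU * (log_l2500 - 30.0) + P_MU * z + K_MU

def sigma_model(z):
    return P_SIG * z + K_SIG

def draw_mock_log_lx(log_l2500, z, n_draws=100, seed=0):
    """Draw `n_draws` mock log10(L_X) values per quasar (arrays of L_2500 and z)."""
    rng = np.random.default_rng(seed)
    mu = mu_model(np.asarray(log_l2500), np.asarray(z))
    sigma = sigma_model(np.asarray(z))
    return rng.normal(mu[:, None], sigma[:, None], size=(mu.size, n_draws))

# Illustrative call for two quasars:
mock = draw_mock_log_lx([30.5, 31.2], [1.0, 2.5])
```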
As mentioned in Section 1, there is a well-known correlation between the optical and X-ray luminosities of AGN with \(L_{\rm X}\propto L_{\rm UV}^{\gamma}\) and \(\gamma\sim 0.6\). In this work, we have found that the X-ray luminosity distribution is a function of \(L_{2500}\) and redshift, with the peak of the distribution given by \(\mu(L_{2500},z)\). In the left panel of Fig. 9 we plot the peak of the \(L_{\rm X}\) distribution for a constant \(z=1,2,3\) where an increase in redshift produces a higher \(L_{\rm X}\) for a given \(L_{2500}\).
In order to check our results, we produce a stacked value of \(L_{\rm X}\) from the X-ray counts extracted at the positions of all of our quasars. We do so by calculating individual X-ray luminosities for each optical source and then produce a mean \(L_{\rm X}\). This produces an \(L_{\rm X}\) value (black square in the left panel of Fig. 9) that is higher than the centre of the contours due to the mode of a log-normal distribution (which the \(L_{\rm X}\) distribution is) being different from its mean. As a sanity check, calculating the average \(L_{\rm X}\) in the same way from the mock data used to produce the contours results in a higher \(L_{\rm X}\) value than the contours would suggest (red cross); however, it is consistent with the stacked \(L_{\rm X}\) from the data. We do the same in the middle panel of Fig. 9 in bins of \(L_{2500}\) and \(z\) to compare to the relations. Again, we find that the \(L_{\rm X}\) values from the stacked data are systematically higher than the relations and, although suffering from small-number statistics with this relatively high-resolution binning, the stacked mock data is in agreement. In fact, the stacked data in the highest redshift bin (red squares) appear to agree too well with the relations; however, this is due to the distribution of redshifts within this redshift bin. If instead we were to plot the relations for the mean redshift within each redshift bin, the red line would shift lower and the squares would be offset. The redshift dependence on the width of the \(L_{\rm X}\) distribution is also having an effect: at low redshifts where the distribution is wider, the discrepancy between the stacked data and the relation is greater. This in turn reduces the redshift dependence in the stacked data.
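The offset between a linear stack and the peak of the distribution follows directly from the log-normal form: if \(\log_{10}L_{\rm X}\sim\mathcal{N}(\mu,\sigma^{2})\), then \(\langle L_{\rm X}\rangle=10^{\mu}\exp[(\sigma\ln 10)^{2}/2]\), so a width of \(\sigma=0.5\) dex biases a linear mean roughly 0.29 dex above \(10^{\mu}\). A two-line numerical check, with illustrative values:

```python
import numpy as np

mu, sigma = 44.5, 0.5                      # illustrative values (dex)
samples = 10 ** np.random.default_rng(1).normal(mu, sigma, 10**6)
print(np.log10(samples.mean()) - mu)       # ~ sigma**2 * np.log(10) / 2 ~ 0.29 dex
```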
In order to compare more directly to the literature, we take the mock sample (red contours in the left-hand panel of Fig. 9) and fit a straight line, obtaining \(\gamma\simeq 0.62\), which is in agreement with the literature (right-hand panel of Fig. 9). It is not obvious that this best-fit line is in agreement with the contours; however, the median \(\log_{10}L_{\rm X}\) in \(L_{2500}\) bins is consistent with the \(\gamma\simeq 0.62\) relation (blue points and error bars in the right panel of Fig. 9).
## 5 Corrected \(\alpha_{\rm ox}\)
The spectral slope between the X-ray and optical, \(\alpha_{\rm ox}\), is often used as a means of describing the relationship between the X-ray and UV luminosities, and is calculated as follows,
\[\alpha_{\rm ox}=\frac{\log_{10}\left(L_{\rm 2keV}/L_{\rm 2500\AA}\right)}{ \log_{10}\left(\nu_{\rm 2keV}/\nu_{\rm 2500\AA}\right)} \tag{12}\]
where \(L_{\rm 2keV}\) is the monochromatic X-ray luminosity at \(2\,\rm keV\) and corresponding frequency \(\nu_{\rm 2keV}\). \(\nu_{\rm 2500\AA}\) is the frequency equivalent
Figure 8: Breakdown of the parameters \(\mu\) (top and middle) and \(\sigma\) (bottom) as a function of redshift for the \(z\)-dependent models. The top two panels are the gradient \(m_{\mu}\) and the value of \(\mu\) at \(\log_{10}(L_{2500})=30\). The squares are the parameter values used to produce the straight lines in Fig. 7 from model (iii). The grey lines and shaded regions are the parameter values obtained with model (vii), where redshift evolution is modelled as a linear dependence of \(c_{\mu}\) and \(\sigma\) on \(z\) with a constant \(m_{\mu}\). Higher resolution redshift binning...
to 2500 A. We calculate \(\alpha_{\rm ox}\) for our X-ray detected sample by converting the full band \(L_{\rm 0.5-10\,keV}\) into \(L_{\rm 2\,keV}\) and plot these values against \(L_{2500}\) in Fig. 10 colour-coded by redshift. The flux-limited nature of the parent sample is clear here in that the highest \(L_{2500}\) quasars are only found at high redshifts; however, the completeness curves generated from the sensitivity curves in Fig. 3 reveal that the completeness of our optically-selected sample drops significantly as X-ray luminosity decreases (\(\alpha_{\rm ox}\) decreases) across the full range of \(L_{2500}\) probed by our quasar sample.
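The conversion from the full-band luminosity to \(\alpha_{\rm ox}\) involves expressing the 0.5-10 keV power law as a monochromatic luminosity at 2 keV and inserting it into equation 12; a sketch of that calculation for \(\Gamma=1.9\) is shown below (the frequency ratio gives \(\log_{10}(\nu_{2\,\rm keV}/\nu_{2500})\simeq 2.606\); the input luminosities are illustrative only).

```python
import numpy as np

H_KEV = 4.135667696e-18               # Planck constant in keV s
NU_2KEV = 2.0 / H_KEV                 # frequency of 2 keV in Hz
NU_2500 = 2.998e18 / 2500.0           # c [Angstrom/s] / 2500 Angstrom -> Hz

def l_2kev_from_band(l_band, gamma=1.9, e_lo=0.5, e_hi=10.0):
    """Monochromatic L_nu at 2 keV (erg/s/Hz) from the 0.5-10 keV band luminosity,
    assuming an unabsorbed power law with photon index `gamma`."""
    # L_E = A * E^(1-gamma); integrate over the band to relate A to l_band.
    norm = l_band * (2.0 - gamma) / (e_hi**(2.0 - gamma) - e_lo**(2.0 - gamma))
    l_e_2kev = norm * 2.0**(1.0 - gamma)      # erg/s per keV at 2 keV
    return l_e_2kev * H_KEV                   # convert per-keV to per-Hz

def alpha_ox(l_2500, l_band, gamma=1.9):
    """Equation 12, with L_2500 in erg/s/Hz and l_band in erg/s."""
    return np.log10(l_2kev_from_band(l_band, gamma) / l_2500) / np.log10(NU_2KEV / NU_2500)

# Illustrative values only:
print(alpha_ox(l_2500=10**30.8, l_band=10**44.8))
```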
In what follows, we make use of the derived underlying X-ray luminosity distribution as a function of both \(L_{2500}\) and redshift to produce a corrected \(L_{2500}\)-\(\alpha_{\rm ox}\) relation. We calculate \(\alpha_{\rm ox}\) for the mock sample in Section 4 with equation 12 and produce the red contours in Fig. 10. The true underlying \(\alpha_{\rm ox}\) distribution suggests that we are missing the \(L_{2500}\)-moderate, \(L_{\rm X}\)-faint population which any \(\alpha_{\rm ox}\) relation should account for.
In order to check our results, we produce a stacked value of \(\alpha_{\rm ox}\). We take the stacked \(L_{\rm X}\) value from Section 4 for all of the quasars, take the logarithm of this value and convert to \(\alpha_{\rm ox}\) with the mean \(\log_{10}L_{2500}\) of our data. This produces an \(\alpha_{\rm ox}\) value (black square in Fig. 10) that is higher than the centre of the contours because the mean of the log-normal \(L_{\rm X}\) distribution that enters \(\alpha_{\rm ox}\) differs from its mode. Calculating the average \(\alpha_{\rm ox}\) in the same way from the mock sample used to produce the contours results in a higher \(\alpha_{\rm ox}\) value than the contours would suggest; however, it is consistent with the stacked \(\alpha_{\rm ox}\) from the data. We caution that simple linear stacked measurements used to infer relations with broad, log-normal shapes will not correspond to the peak (mode) of the distribution but will be biased high, as we have found here.
The relationship between the peak of the \(\alpha_{\rm ox}\) distribution and the \(L_{2500}\) and redshift is given by
\[\alpha_{\rm ox}(L_{2500},z)=a\log_{10}\left(\frac{L_{2500}}{\rm erg\,s^{-1}\, Hz^{-1}}\right)+bz+c, \tag{13}\]
where \(a=-0.264^{+0.013}_{-0.013}\), \(b=0.159^{+0.013}_{-0.013}\) and \(c=6.095^{+0.400}_{-0.395}\). In short, the relation is derived by converting the peak of the full-band \(L_{\rm X}\) (0.5-10 keV), given by model (vii) with the parameter values from Table 3, to the 2 keV monochromatic luminosity and substituting this in equation 12. The full derivation is presented in Appendix B. Thus far we have not considered whether the parameters of our model are independent; however, parameters \(a\) and \(c\) are correlated since both are functions of \(m_{\mu}\) and so the uncertainty on \(\alpha_{\rm ox}\) will include, at the very least, the covariance of \(a\) and \(c\). We consign the equation for \(\Delta\alpha_{\rm ox}\) and its derivation to Appendix B but note here that we assume the parameters of our model in Table 3 are independent (but
Figure 10: \(\alpha_{\rm ox}\) versus \(L_{2500}\) for our X-ray detected quasar sample with points colour-coded by redshift. The blue-to-yellow lines correspond to the completeness of our sample across \(L_{2500}\). The \(L_{2500}\) distribution is plotted above the axes. The red contours are the mock sample. The black square and error bars show the \(\alpha_{\rm ox}\) result from our XMM stacking analysis. The red cross is generated by stacking the random sample from the MLE results used to produce the contours.
Figure 9: The \(L_{2500}\)–\(L_{\rm X}\) plane with the X-ray detected sample plotted as black circles. Left: The solid blue and red lines are the peaks of the \(L_{\rm X}\) distribution function with constant \(z=1,2,3\) and the shaded areas are the 0.5-\(\sigma\) width of the distribution. The mock sample is represented by the red contours. The black square is the result from our XMM stacking analysis and red cross is the equivalent stacking of the mock sample. Middle: The same relations and data as in the left panel, now overplotted with the stacked data (squares) and stacked mock sample (crosses) in redshift and \(L_{2500}\) bins. Right: The orange line is produced by fitting a straight line to the mock sample, and the blue points and error bars are the median \(L_{\rm X}\) in \(L_{2500}\) bins and the 1-\(\sigma\) of the \(L_{\rm X}\) distributions, respectively.
see Appendix B and Fig. 11). We provide the posterior distributions of the parameters as supplementary data.
In Fig. 11 we plot \(\alpha_{\rm ox}\) from equation 13 for a constant \(z=1,2,3\) where an increase in redshift produces a vertical shift towards less-negative \(\alpha_{\rm ox}\) values (upwards on the plot). Our model that describes how the peak of the intrinsic distribution of \(\alpha_{\rm ox}\) depends on \(L_{2500}\) at different redshifts (accounting for X-ray sensitivity limits and the underlying redshift evolution of the relation between \(L_{2500}\) and \(L_{X}\) over this redshift range) produces significantly steeper relations (solid lines in Fig. 11) than most prior estimates that use X-ray upper-limits and often combine a wide redshift range (e.g., Just et al., 2007; Nanni et al., 2017; Timlin III et al., 2021, as shown by the dashed lines in Fig 11, right). For comparison to the literature, we use our mock sample that corrects for the X-ray incompleteness (but not the uneven sampling of the quasar samples in terms of \(L_{2500}\) and \(z\)) and fit a linear relation between \(L_{2500}\) and all of our mock \(\alpha_{\rm ox}\) values. Fitting the mock sample with a single linear relation produces an \(\alpha_{\rm ox}\) with a flatter slope that is in better agreement with the literature but has a lower normalisation. The black line is lower in normalisation for two reasons: i) it accounts for X-ray fainter sources that tend to be below the sensitivity limits, and ii) it tracks the quasar sample that is dominated by lower redshift (\(z\lesssim 2\)) sources, which we find to have lower \(\alpha_{\rm ox}\) (at a given \(L_{2500}\)).
## 6 Discussion
Using a sophisticated Bayesian framework, we have shown that the intrinsic distribution of X-ray luminosities of the SDSS quasar sample evolves with redshift, shifting toward higher \(L_{\rm X}\) at a given \(L_{2500}\) and with decreasing scatter in the distribution as redshift increases. Our finding is in disagreement with a number of prior works that do not find any evolution in this relation, albeit for distinct samples and without applying the sophisticated analysis techniques that we present here (see Section 1). However, Kelly et al. (2007) also find evolution of the \(\alpha_{\rm ox}\)-\(z\) relation in a sample of radio-quiet quasars across \(z=0.1\)-\(4.7\) with \(\alpha_{\rm ox}\) increasing as redshift increases (with \(\alpha_{\rm ox}\) depending linearly on the age of the Universe).
The purpose of this work is to derive the intrinsic distribution of \(L_{\rm X}\) as a function of \(L_{2500}\) and redshift that applies to the optically-selected SDSS quasar sample, specifically. While our finding of redshift evolution of the \(L_{2500}\)-\(L_{\rm X}\) relation is at odds with the consensus (see Section 1), we do not dwell on complex comparisons as our results apply to a specific (but well-defined) sample. However, one advantage of our work is that we have carefully considered the X-ray sensitivity limitations; not doing so would result in a different answer for the \(L_{2500}\)-\(L_{\rm X}\) relation (or \(\alpha_{\rm ox}\)). These relations are important for understanding the balance of energetic output coming from the corona versus the accretion disc, and in order to compare to physical models (e.g., Kubota and Done, 2018) one must account for the X-ray sensitivity limitations of the sample.
While we do not aim to come up with a detailed physical model to explain the observed redshift evolution, it is informative to look at the black hole properties of the optically-selected SDSS quasar sample across redshift and compare to the trends observed in Kubota and Done (2018). We estimate the black hole masses (BHM) and Eddington ratios (\(\lambda_{\rm Edd}\)) of our quasars using the ICA-based spectrum reconstructions (see Fig. 1 and Section 2.1), calculating BHMs from the full width at half maximum (FWHM) of the C iv \(\lambda 1550\) and Mg ii \(\lambda 2800\) emission lines, redshift-permitting, and the 1350 Å and 3000 Å monochromatic luminosities. For the C iv-derived BHMs we apply the relation of Coatman et al. (2017) which accounts for the non-virial component and subsequent asymmetry of the C iv emission line. The Mg ii BHMs are estimated with the Vestergaard and Osmer (2009) relation. We apply bolometric corrections of 5.15 and 3.81 to the monochromatic 1350 Å and 3000 Å luminosities, respectively, to estimate the bolometric luminosities (Shen et al., 2011). With the BHMs and bolometric luminosities in hand, the \(\lambda_{\rm Edd}\) are calculated. We calculate the median BHM and \(\lambda_{\rm Edd}\) in bins of redshift along with the standard error on the median values and the standard deviation of the distributions, using either the C iv- or Mg ii-derived values, or both values at redshifts where both lines are within the spectral window (see Fig. 12). We do not focus on the absolute values of the quantities but instead focus on the general trends of BHM and \(\lambda_{\rm Edd}\) with redshift, observing that both BHM and \(\lambda_{\rm Edd}\) increase as redshift increases.
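As an illustration of the quantities entering Fig. 12, a Mg ii-based estimate can be written down directly from the Vestergaard and Osmer (2009) scaling and the constant bolometric correction quoted above; the numerical inputs below are illustrative only, and single-epoch virial masses of this kind are typically quoted with of order 0.4 dex systematic uncertainty.

```python
import numpy as np

def log_mbh_mgii(fwhm_kms, log_l3000):
    """Vestergaard & Osmer (2009) Mg II black hole mass in log10(M_sun).
    log_l3000 : log10 of the monochromatic 3000 A luminosity, lambda*L_lambda, in erg/s."""
    return 6.86 + 2.0 * np.log10(fwhm_kms / 1000.0) + 0.5 * (log_l3000 - 44.0)

def log_edd_ratio(log_l3000, log_mbh, bol_corr=3.81):
    """Eddington ratio using L_bol = 3.81 * lambda*L_lambda(3000 A) (Shen et al. 2011)."""
    log_lbol = log_l3000 + np.log10(bol_corr)
    log_ledd = np.log10(1.26e38) + log_mbh        # Eddington luminosity in erg/s
    return log_lbol - log_ledd

# Illustrative values:
log_mbh = log_mbh_mgii(fwhm_kms=4000.0, log_l3000=45.5)
print(log_mbh, log_edd_ratio(45.5, log_mbh))
```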
Considering the Mg ii-derived BHM estimates, the average log(BHM) increases roughly linearly by \(\sim\)0.5 dex per unit \(z\). Similarly, log \(\lambda_{\rm Edd}\) increases by \(\sim\)0.4 dex per unit \(z\). Given that we find that the peak of the \(L_{\rm X}\) distribution increases by \(\sim\)0.4 dex per unit \(z\) (\(p_{\mu}\)) (for a given \(L_{2500}\)), both the BHM and \(\lambda_{\rm Edd}\) are viable drivers of the redshift evolution of the peak of the distribution. However, we caveat our \(\lambda_{\rm Edd}\) measurements with the fact that we have used only a constant bolometric correction applied to the UV luminosity, whilst the negative \(\alpha_{\rm ox}\) (i.e., non-flat relationship between \(L_{2500}\) and \(L_{\rm X}\)) indicates a small, luminosity-dependence to these corrections may be required. While this may alter the magnitude of the change in \(\lambda_{\rm Edd}\) over the sample it is unlikely to significantly alter the observed trend.
We also find that the width of the \(L_{\rm X}\) distribution decreases by \(\sim\)0.1 dex per unit \(z\) (as quantified by the \(p_{\sigma}\) parameter). In contrast, as redshift increases the uncertainty in the BHM measurements increases which produces an increase in the width of the BHM and \(\lambda_{\rm Edd}\) distributions. If BHM and/or \(\lambda_{\rm Edd}\) are driving the redshift evolution of the \(L_{2500}\)-\(L_{\rm X}\) relation then it is perhaps the case that the intrinsic BHM distribution is narrower at higher redshift but that the uncertainty (and resulting observational scatter) in our BHM measurements prevent this from being observed.
The spectral energy distribution (SED) model of Kubota and Done (2018) predicts that the X-ray luminosity scales linearly with BHM, after fixing \(L_{\rm X}=0.02L_{\rm Edd}\) motivated by the SED fits of a handful of AGN (but applied to a larger sample by Mitchell et al., 2023). In fact, Mitchell et al. (2023) explicitly show that \(\alpha_{\rm ox}\) (and therefore the \(L_{2500}\)-\(L_{\rm X}\) relation) has a dependence on both the BHM and Eddington-scaled accretion rate due to the relative contributions of the hot X-ray corona, the warm Compton region of the disc and the standard disc in their truncated disc model. The large uncertainties on our BHM and \(\lambda_{\rm Edd}\) measurements preclude a more detailed discussion on the effect of the BH properties but the general trends are at least an indication that the evolving population of (observed) BHs and specifically their _increasing_ masses (toward higher redshift) could be producing the observed redshift dependence of the \(L_{2500}\)-\(L_{\rm X}\) relation. Additionally, Kubota and Done (2018) observe that an increase in the Eddington-scaled accretion rate of 1 dex should produce a _decrease_ in log\(L_{\rm X}\) of 0.5 dex (for constant \(L_{2500}\)). In the future, it would be valuable to extend the Bayesian hierarchical modelling to consider the underlying Eddington-scaled accretion rate and black hole masses, enabling a more direct comparison with Kubota and Done (2018).
Quasar SEDs likely evolve with BHM and \(\lambda_{\rm Edd}\). This true evolution coupled with the flux-limited nature of SDSS leads to quasar samples that have masses and \(\lambda_{\rm Edd}\) distributions that appear to evolve with redshift. Thus, it is important to stress that any observed evolution with redshift of the \(L_{2500}\)-\(L_{\rm X}\) and \(L_{2500}\)-\(\alpha_{\rm ox}\) relations is likely due to the evolution of the quasar selection with redshift. However,
these measurements help provide insight into the physical origins of the observed trends. Modelling of the _optical_ selection effects will be the focus of a future work. Nevertheless, the changes, due to the optical selection, in the distribution of BHMs with redshift seem to match what might be expected from Kubota and Done (2018). Given this result, a single \(L_{2500}\)-\(\alpha_{\rm ox}\) relation with a constant slope across all redshifts is almost certainly not correct and so physical models should not be trying to reproduce a non-evolving \(\alpha_{\rm ox}\) relation.
## 7 Conclusions
We have carefully inferred the intrinsic X-ray luminosity distribution as a function of UV luminosity and redshift of the optically-selected SDSS quasars in the Stripe 82 and XXL fields using a sophisticated Bayesian hierarchical modelling approach. We have crossmatched the optical SDSS sample to the XMM point sources with Nway (Salvato et al., 2018). We have combined XMM-detected quasars with Bayesian sensitivity curves calculated with the custom xmmpype pipeline (Georgakakis and Nandra, 2011) in order to extract information from the X-ray undetected quasars. Our main findings are:
1. The xmmpype reductions produce \(\log N\)-\(\log S\) curves that are consistent with previous works (Fig. 4 and Section 2.2).
2. The distribution of \(L_{\rm X}\) at a given \(L_{2500}\) can be modelled as a Gaussian in \(\log_{10}L_{\rm X}\) with mean \(\mu\), which depends on \(\log_{10}L_{2500}\), and width \(\sigma\) (Section 3.2).
3. There is some redshift dependence of the \(L_{2500}\)-\(L_{\rm X}\) relation with \(\mu\) increasing with redshift. \(\sigma\), on the other hand, decreases as \(z\) increases (Section 3.5 and Fig. 8).
4. For a constant \(z\), our fitted \(L_{2500}\)-\(L_{\rm X}\) relation has a slope of \(\gamma\sim 0.3\). The slope in the observed relation of \(\gamma\sim 0.6\) found by previous works is reproduced when considering the joint redshift and \(L_{2500}\) distribution of the optically-selected SDSS quasar sample (Section 4 and Fig. 9).
5. Measurements from stacked X-ray data should be considered with caution when deriving quantities from a log-normal distribution (Sections 4 and 5).
6. Attempting to correct the X-ray luminosity distribution in \(L_{\rm X}\) bins to account for the undetected quasars can lead to underestimated source counts and is limited to only X-ray luminosity ranges that have been detected (Fig. 6 and Section 3.1). A more sophisticated estimation of the \(L_{\rm X}\) distribution is implemented via the Bayesian hierarchical modelling approach used throughout the rest of the paper.
7. We produce a relation to describe \(\alpha_{\rm ox}\) that is now a function of \(L_{2500}\) and redshift. When marginalising over redshift in our SDSS sample, the \(\alpha_{\rm ox}\) relation we recover has a slope consistent with the literature but with a lower normalisation (Section 5 and Fig. 11).
We have made the first steps to understand the intrinsic relationship between the X-ray and UV luminosity by considering the optically-selected SDSS quasar sample. The next step is to approach the problem from an X-ray selected sample in order to parametrize the optical selection. The X-ray selected sample from eROSITA (Merloni et al., 2012; Predehl et al., 2021) with follow-up spectroscopy from SDSS-V (Kollmeier et al., 2017) will be beneficial for this work and support a broader goal of obtaining a full characterisation of the UV and X-ray emission properties of the AGN population and the underlying physical structure of the accreting system that produce them.
## Acknowledgements
ALR would like to thank Aneesh Naik for helpful discussions surrounding the derivation of the likelihood function, and Chris Done for helpful comments on the discussion. The research leading to these
Figure 11: The same \(L_{2500}\)–\(\alpha_{\rm ox}\) space as Fig. 10 with the X-ray detected sample as black circles and mock sample described by the contours. Left: The solid blue and red lines are calculated via equation 13 with constant \(z=1,2,3\) and the shaded areas are the 0.5-\(\sigma\) widths of the relation. Right: The dashed lines are relations from the literature. The orange line is produced by fitting a straight line to the mock sample which is consistent with the median \(\alpha_{\rm ox}\) in \(L_{2500}\) bins (blue points and error bars).
results has received funding from the European Union's Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158). ALR and JA acknowledge support from a UKRI Future Leaders Fellowship (grant code: MR/T020989/1). AR acknowledges financial support by the European Union's Horizon 2020 programme "XMM2ATHENA" under grant agreement No 101004168. AG acknowledges support from the EU H2020-MSCA-ITN-2019 Project 860744 "BiD4BESt: Big Data applications for Black hole Evolution Studies" and the Hellenic Foundation for Research and Innovation (HFRI) project "4MOVE-U" grant agreement 2688. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
The results in this paper are based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions.
SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
## Data availability
The data underlying this article are all public and available via the XMM-Newton Science Archive ([https://www.cosmos.esa.int/web/xmm-newton/xsa](https://www.cosmos.esa.int/web/xmm-newton/xsa)) and the SDSS website ([https://www.sdss4.org/](https://www.sdss4.org/)). The emcee samples of the parameters for model (vii) are included in the article's online supplementary material. Additional data products generated for this article will be shared on request to the corresponding author.
|
2309.14526 | Multifractality for intermediate quantum systems | While quantum multifractality has been widely studied in the physics
literature and is by now well understood from the point of view of physics,
there is little work on this subject in the mathematical literature. I will
report on a proof of multifractal scaling laws for arithmetic \u{S}eba
billiards. I will explain the mathematical approach to defining the Renyi
entropy associated with a sequence of eigenfunctions and sketch how arithmetic
methods permit us to obtain a precise asymptotic in the semiclassical regime
and how this allows us to compute the fractal exponents explicitly. Moreover, I
will discuss how the symmetry relation for the fractal exponent is related to
the functional equation of certain zeta functions. | Henrik Ueberschaer | 2023-09-25T21:02:07Z | http://arxiv.org/abs/2309.14526v1 | # Multifractality for intermediate quantum systems
###### Abstract.
While quantum multifractality has been widely studied in the physics literature and is by now well understood from the point of view of physics, there is little work on this subject in the mathematical literature. I will report on a proof of multifractal scaling laws for arithmetic Seba billiards. I will explain the mathematical approach to defining the Renyi entropy associated with a sequence of eigenfunctions and sketch how arithmetic methods permit us to obtain a precise asymptotic in the semiclassical regime and how this allows us to compute the fractal exponents explicitly. Moreover, I will discuss how the symmetry relation for the fractal exponent is related to the functional equation of certain zeta functions.
## 1. Introduction
Many dynamical systems are in a state of transition between two regimes. In models of the brain, such as neural networks, the firing patterns of neurons may undergo a transition from isolated firing to avalanches of firing neurons. In the quantum physics of disordered electronic systems the system may be in an insulating or a conducting phase. The former phase corresponds to electronic states which are localized (no transport), whereas the latter phase corresponds to extended states (diffusive dynamics). The study of phase transitions and, in particular, the critical states at the transition between these different regimes, is central to understanding important phenomena such as the functioning of our brain or the properties of semi-conducting materials.
One of the key features of systems in a critical state is that they often display a self-similarity in a certain scaling regime which is so complex that it cannot be captured by a single fractal exponent but only by a continuous spectrum of fractal exponents. This phenomenon is known as multifractality.
Multifractality in quantum systems has been studied in the physics literature since the 1980s and has become an extremely active field in theoretical and experimental physics [2, 24, 16, 18, 8, 3, 7, 9]. However, the abundance of results in the physics literature is in stark contrast with a glaring absence of rigorous mathematical results. One of the key difficulties in obtaining
a mathematical proof is to formulate the problem in a concise mathematical way and to then develop the mathematical methods which permit its resolution.
In joint work with Keating, we recently proved the existence of multifractal eigenfunctions for arithmetic Seba billiards [14] as well as quantum star graphs [15]. The key idea which permitted this advance was an approach to associate a quantity, known as Renyi's entropy - in some sense a generalization of Shannon's entropy - with each eigenfunction. We were able to obtain asymptotic estimates of the Renyi entropy along a typical sequence of eigenfunctions. This permitted the derivation of explicit formulae for the fractal exponents and led to the derivation of a multifractal scaling law for this system.
Multifractal self-similarity typically emerges at the transition between two physical regimes. Examples of such intermediate quantum systems are disordered systems at the Anderson or Quantum Hall transitions from a localized to a delocalized phase [2, 24]. In the field of quantum chaos, pseudo-integrable systems [21] are intermediate between integrability and chaos in the sense that their dynamics in phase space is not constrained to tori but rather to handled spheres (e. g. rational polygonal billiards). One often includes in this class toy models of pseudointegrable dynamics such as parabolic automorphisms of the torus [19], quantum star graphs [5, 4, 13, 6] and Seba billiards (rectangular billiards with a Dirac delta potential) [25].
The morphology of eigenfunctions with multifractal self-similar structure is far more complex than being purely localized or delocalized. Numerical and experimental studies of a large class of quantum systems have resulted in numerous conjectures in the physics literature [18, 3, 7, 9] such as predictions of a symmetry relation for the fractal exponents \(D_{q}\) around the critical value \(q=1/4\).
## 2. The gap between localization and delocalization
Much of the mathematical literature on quantum chaos over the past 40 years has focused on the classification of limit measures which arise in the high frequency limit from eigenfunctions of quantized chaotic systems. One of the key results of the field is the Quantum Ergodicity Theorem which states that on a Riemannian manifold without boundary, whose geodesic flow is ergodic with respect to Liouville measure, a typical sequence of eigenfunctions gives rise to Liouville measure as the only semiclassical defect measure along this sequence.
Quantum Ergodicity (QE) was first proved in the 1980s by Zelditch and Colin de Verdiere [10, 28] who completed the earlier work of Snirelman [26]. QE was later generalized to manifolds with boundary by Gerard-Leichtnam [12] and Zelditch-Zworski [29]. The Quantum Unique Ergodicity (QUE) Conjecture put forward by Rudnick and Sarnak in 1994 [22] asserts that the only such measure should be the Liouville measure. Lindenstrauss [17]
proved this conjecture in 2006 for arithmetic hyperbolic surfaces and was awarded the Fields Medal for his work. Moreover, Anantharaman [1] ruled out localization on points or geodesic segments for Anosov manifolds. De Bievre-Faure-Nonnenmacher [11] demonstrated the existence of partially localized limit measures for the eigenstates of quantized hyperbolic automorphisms of tori with minimal periods.
While rigorous mathematical work has largely focused on the proof of localization and delocalization results for the probability densities which arise from quantum eigenfunctions (Q(U)E, Scarring, Anderson localization), the key feature of intermediate quantum systems is the multifractal self-similarity of their eigenfunctions. This feature today remains poorly understood from a mathematical point of view.
## 3. Multifractality for Quantum Billiards
Consider the Dirichlet problem for the positive Laplacian \(-\Delta=-\partial_{x}^{2}-\partial_{y}^{2}\) on a compact domain \(D\subset\mathbb{R}^{2}\) with piece-wise smooth boundary. We have discrete spectrum accumulating at infinity associated with eigenfunctions \(\psi_{j}\):
\[(\Delta+\lambda_{j})\psi_{j}=0,\quad\psi_{j}|_{\partial D}=0, \tag{3.1}\]
where
\[0=\lambda_{0}<\lambda_{1}\leq\cdots\leq\lambda_{j}\leq\cdots\to+\infty.\]
Our goal is to prove a multifractal scaling law for the eigenfunctions associated with a subsequence of eigenvalues \(\{\lambda_{j_{k}}\}_{k=0}^{\infty}\), as \(\lambda_{j_{k}}\to+\infty\). The general idea is to embed the domain in a rectangle and expand with respect to an eigenbasis of complex exponentials. A key point is that the scaling law should be independent of rotations and scaling of the rectangle in which the domain is embedded. The scaling parameter will then arise from the number of \(O(1)\) contributions in this expansion, as the eigenvalue tends to infinity.
We will illustrate this in detail in the case of toral Schrodinger operators. Let \(\mathbb{T}^{d}=\mathbb{R}^{d}/2\pi\mathbb{Z}^{d}\) and \(V\in C^{0}(\mathbb{T}^{d})\). We consider \(L^{2}\)-normalized solutions of the stationary Schrodinger equation on \(\mathbb{T}^{d}\):
\[(-\Delta+V)\psi_{\lambda}=\lambda\psi_{\lambda},\quad\|\psi_{\lambda}\|_{L^{2} (\mathbb{T}^{d})}=1 \tag{3.2}\]
We can expand the eigenfunctions into Fourier series
\[\psi_{\lambda}(x)=\frac{1}{(2\pi)^{d/2}}\sum_{\xi\in\mathbb{Z}^{d}}\hat{\psi}_{ \lambda}(\xi)e^{i\xi\cdot x}. \tag{3.3}\]
By Parseval's identity, we obtain a discrete probability measure on \(\mathbb{Z}^{d}\):
\[\mu_{\lambda}(\xi):=|\hat{\psi}_{\lambda}(\xi)|^{2}.\]
For \(q>1\), we define the moment sum
\[M_{q}(\mu_{\lambda})=\sum_{\xi\in\mathbb{Z}^{d}}\mu_{\lambda}(\xi)^{q}. \tag{3.4}\]
A fractal scaling law, in the semiclassical limit \(\lambda\to\infty\), is a power law
\[M_{q}(\mu_{\lambda})\sim N_{\lambda}^{(1-q)D_{q}}, \tag{3.5}\]
where \(N_{\lambda}\) denotes the number of \(O(1)\)-contributions in (3.4), as \(\lambda\to\infty\), and \(D_{q}\) denotes the fractal exponent.
We note that, in the case of the torus, the mass of the probability measure \(\mu_{\lambda}\) is concentrated on lattice points which lie inside a thin annulus of central radius \(\sqrt{\lambda}\) and whose width depends on the spectral parameter \(\lambda\) (cf. Figure 1). In the semiclassical limit, as \(\lambda\to\infty\), the number of \(O(1)\)-contributions will grow slowly (in fact on a logarithmic scale) with \(\lambda\). However, this number fluctuates a lot, as the number of lattice points in thin annuli is subject to strong fluctuations (this is due to the scale of the width being of much lower order than the error term in the Gauss circle law). In order to compute \(N_{\lambda}\), as a function of \(\lambda\) one must perform a spectral average. This is the first challenge, from a mathematical point of view, to be able to prove a fractal scaling law. As we will see below, for particularly simple choices of potential, where the measure \(\mu_{\lambda}\) takes a simple and explicit form, it is possible to perform this calculation. For generic potentials, it is expected to be a much more challenging task.
In order to compute the fractal exponent associated with a sequence of eigenfunctions, we introduce the Renyi entropy of the measure \(\mu_{\lambda}\):
\[H_{q}(\mu_{\lambda})=\frac{1}{1-q}\log M_{q}(\mu_{\lambda}),\quad q>1. \tag{3.6}\]
The Renyi entropy may be thought of as a generalization of the Shannon entropy which is familiar from information theory, in the sense that the
Figure 1. The measure \(\mu_{\lambda}\) is concentrated on lattice points which lie inside a thin annulus of central radius \(\sqrt{\lambda}\). The width of this annulus grows with \(\lambda\) on a logarithmic scale. The number of lattice points inside the annulus is subject to subtle fluctuations.
latter is recovered in the limit, as \(q\to 1\):
\[\lim_{q\to 1}H_{q}(\mu_{\lambda})=-\left[\frac{d}{dq}\log M_{q}(\mu_{\lambda}) \right]_{q=1}=-\sum_{\xi\in\mathbb{Z}^{d}}\mu_{\lambda}(\xi)\log\mu_{\lambda}( \xi).\]
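As a quick numerical sanity check of definition (3.6) and of the limit above (not part of the mathematical argument itself), the following Python sketch evaluates the moment sums \(M_q\) and the Renyi entropies \(H_q\) for an arbitrary discrete probability vector and compares \(H_q\), for \(q\) close to \(1\), with the Shannon entropy.

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy H_q of a discrete probability vector p (q != 1)."""
    p = np.asarray(p, dtype=float)
    M_q = np.sum(p ** q)              # moment sum M_q(mu)
    return np.log(M_q) / (1.0 - q)

p = np.array([0.5, 0.25, 0.125, 0.125])      # an arbitrary probability measure
shannon = -np.sum(p * np.log(p))             # Shannon entropy

for q in (2.0, 1.5, 1.01, 1.001):
    print(f"q = {q:6.3f}   H_q = {renyi_entropy(p, q):.4f}")
print(f"Shannon     H_1 = {shannon:.4f}")    # H_q -> H_1 as q -> 1
```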
Provided one can obtain an asymptotic for the Renyi entropy in the limit \(\lambda\to\infty\), possibly by restricting oneself to a subsequence of eigenvalues, and one tackles the problem of determining the scaling parameter (by averaging out the fluctuations mentioned above), then one might hope to be able to compute the fractal exponent \(D_{q}\) for \(q>1\).
For a generic choice of the potential \(V\) this problem can be very hard. To give an idea of the challenges involved: if one picks a potential modelling a disordered system in a scaling regime that corresponds to the thermodynamical limit (say taking a large torus and scaling back to the standard torus), the occurrence of multifractal scaling appears to be related to the onset of a phase transition between localized and delocalized regimes (in \(d\geq 3\)).
For a simple choice of potential, however, which allows for explicit expressions of the eigenfunctions, and, thus, the measure \(\mu_{\lambda}\) it is possible to overcome these challenges.
## 4. Multifractality for an arithmetic Seba billiard
In a 1990 paper [25] Petr Seba introduced rectangular billiards with a Dirac delta potential placed in the interior as a toy model for more complicated pseudo-integrable billiards whose dynamics is in some sense intermediate between integrable and chaotic dynamics. In this section we will consider a slightly modified version of this billiard, namely a square torus with a delta potential. We will refer to this as an arithmetic Seba billiard, because the Laplace spectrum is of arithmetic nature. It is given, up to a factor, by integers representable as a sum of two squares:
\[\sigma(-\Delta_{\mathbb{T}^{2}})=\{n=x^{2}+y^{2}\mid(x,y)\in\mathbb{Z}^{2}\}\]
We note that the Laplace eigenvalues have multiplicities which are given by the arithmetic function
\[r_{2}(n)=\#\{(x,y)\in\mathbb{Z}^{2}\mid n=x^{2}+y^{2}\} \tag{4.1}\]
which counts the number of lattice points on the circle of radius \(\sqrt{n}\).
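For illustration, \(r_2(n)\) can be evaluated by brute force over the lattice; the short sketch below (which plays no role in the proofs, where arithmetic estimates are used instead) reproduces the first few values.

```python
def r2(n):
    """Number of lattice points (x, y) in Z^2 with x^2 + y^2 = n."""
    count = 0
    r = int(n ** 0.5) + 1
    for x in range(-r, r + 1):
        y2 = n - x * x
        if y2 < 0:
            continue
        y = int(round(y2 ** 0.5))
        if y * y == y2:
            count += 2 if y > 0 else 1    # +y and -y give distinct points unless y = 0
    return count

# r2(0)=1, r2(1)=4, r2(2)=4, r2(3)=0, r2(5)=8, r2(25)=12, ...
print([r2(n) for n in range(11)])
```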
Employing self-adjoint extension theory one can show that the spectrum of the Seba billiard consists of two types of eigenvalues. There are old Laplace eigenvalues, with multiplicity reduced by \(1\), which correspond to co-dimension \(1\) subspaces of eigenfunctions which vanish at the position of the potential. There are also new eigenvalues, with multiplicity \(1\), corresponding to new eigenfunctions which feel the potential. These new eigenvalues interlace with the Laplace eigenvalues.
Moreover, self-adjoint extension theory yields explicit formulae for these new eigenfunctions which in turn give rise to an explicit expression for the
Fourier coefficients and, hence, the measure \(\mu_{\lambda}\):
\[\mu_{\lambda}(\xi)=\frac{(|\xi|^{2}-\lambda)^{-2}}{\sum_{\xi^{\prime}\in\mathbb{Z }^{2}}(|\xi^{\prime}|^{2}-\lambda)^{-2}}\]
Moreover, we note that \(\lambda\notin\sigma(-\Delta)\), because of the interlacing property of the new eigenvalues.
The moment sums associated with the measure \(\mu_{\lambda}\), for a new eigenvalue \(\lambda\), are of the form
\[M_{q}(\mu_{\lambda})=\frac{\zeta_{\lambda}(2q)}{\zeta_{\lambda}(2)^{q}}, \tag{4.2}\]
where we introduce the shifted zeta function
\[\zeta_{\lambda}(s)=\sum_{n\geq 0}\frac{r_{2}(n)}{|n-\lambda|^{s}},\quad\Re s>1. \tag{4.3}\]
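A minimal numerical sketch of these formulae is the following: it builds the measure \(\mu_\lambda\) on a finite lattice window (truncating the sum over \(\mathbb{Z}^2\) is the only approximation, and the chosen \(\lambda\) is an arbitrary non-integer placeholder rather than an actual new eigenvalue of the billiard) and evaluates the corresponding moment sums and Renyi entropies.

```python
import numpy as np

def mu_lambda(lam, R=60):
    """Truncated Seba-billiard measure mu_lambda(xi) on the lattice window |xi_1|, |xi_2| <= R."""
    xs = np.arange(-R, R + 1)
    X, Y = np.meshgrid(xs, xs)
    w = 1.0 / (X ** 2 + Y ** 2 - lam) ** 2   # unnormalised weights (|xi|^2 - lambda)^(-2)
    return w / w.sum()

lam = 17.3                                    # placeholder: new eigenvalues avoid integer |xi|^2
mu = mu_lambda(lam)

for q in (1.5, 2.0, 3.0):
    M_q = np.sum(mu ** q)
    H_q = np.log(M_q) / (1.0 - q)
    print(f"q = {q}:   M_q = {M_q:.3e}   H_q = {H_q:.3f}")
```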
### Weak coupling: a monofractal regime
It is instructive to look at the physically trivial case of weak coupling (fixed self-adjoint extension). In this regime, \(\lambda\) is typically close to a neighbouring Laplace eigenvalue \(m\) (cf. [23]). Let us denote by \(\Delta_{j}\) the distance between a new eigenvalue and the nearest Laplace eigenvalue.
For a given \(x\gg 1\), we define the mean distance up to threshold \(x\) as
\[\left\langle\Delta_{j}\right\rangle_{x}=\frac{1}{\#\{\lambda_{k}\leq x\}} \sum_{\lambda_{k}\leq x}\Delta_{k}. \tag{4.4}\]
In the case of the square torus, we have \(\left\langle\Delta\right\rangle_{x}=O((\log x)^{-1/2})\) (which is a special case of a more general estimate derived in [23]), where we note that in this case the average spacing of the Laplace eigenvalues is of order \(\sqrt{\log x}\) due to the multiplicities in the Laplace spectrum.
Thus, only one term (or one circle in the lattice with radius \(\sqrt{m}\)) contributes. The sum scales as follows along the subsequence of typical eigenvalues:
\[M_{q}(\mu_{\lambda})=\frac{\zeta_{\lambda}(2q)}{\zeta_{\lambda}(2)^{q}}\sim \frac{r_{2}(m)|m-\lambda|^{-2q}}{(r_{2}(m)|m-\lambda|^{-2})^{q}}=r_{2}(m)^{1-q}\]
The number of terms which contribute is simply the number of lattice points on the circle \(|\xi|^{2}=m\).
The Renyi entropy has asymptotics
\[H_{q}(\mu_{\lambda})\sim\frac{1}{1-q}\log(r_{2}(m)^{1-q})=\log r_{2}(m).\]
It can be shown that for a full-density subsequence of Laplace eigenvalues we have for any \(m\) in this subsequence
\[r_{2}(m)=(\log m)^{\frac{1}{2}\log 2+o(1)},\quad m\to+\infty.\]
Hence,
\[N_{\lambda}=(\log m)^{\frac{1}{2}\log 2}\]
which is known as the normal order of \(r_{2}\).
We note that the fluctuations of the arithmetic function \(r_{2}(n)\) are very subtle. It is a classical theorem of Landau from 1907 that the number of integers less than or equal to \(x\) which are representable as a sum of two squares grows like \(cx/\sqrt{\log x}\), which implies that on average the multiplicities are of order \(\sqrt{\log x}\). The smaller exponent \(\frac{1}{2}\log 2\) arises along a typical (as in full density) subsequence, because there is a very sparse subsequence where \(r_{2}(n)\) grows much faster (of order \(n^{o(1)}\) for some slowly decaying exponent function). Moreover, there are also sparse subsequences where \(r_{2}\) remains bounded.
From the Renyi entropy one can now readily obtain the fractal exponent
\[D_{q}=\lim_{\lambda\to\infty}\frac{H_{q}(\mu_{\lambda})}{\log N_{\lambda}}=1.\]
In particular, we note that the fractal exponent does not vary with \(q\), because, due to the weak coupling strength, only the nearest circle contributes.
### Strong coupling: a multifractal regime
The physically interesting regime requires a renormalization of the extension parameter in the semiclassical limit. This allows us to consider a stronger coupling strength. We can measure the strength of the perturbation by computing the mean distance between old and new eigenvalues. For a suitable renormalization one obtains
\[\langle\Delta_{j}\rangle_{x}=(\log x)^{\alpha+o(1)},\quad\alpha\in(-1/2,1/2],\]
where the exponent \(\alpha\) is a measure of the strength of the perturbation.
Because in such regimes the new eigenvalues lie farther away from the neighbouring Laplace eigenvalues (on the scale of the mean spacing of the eigenvalues), many more circles contribute. In fact, all lattice points in a thin annulus of central radius \(\sqrt{\lambda}\) must be taken into account.
We have the following theorem, proven jointly with Keating in [14], which computes the fractal exponents associated with a full density subsequence of new eigenvalues in a strong coupling regime. For a range of exponents \(q\) which depends on the coupling strength \(\alpha\) associated with the subsequence we derive an explicit formula for the fractal exponent which shows how it varies with \(q\), thereby proving multifractality.
**Theorem 4.1**.: _Let \(\Lambda\) be a sequence of new eigenvalues in a strong coupling regime such that \(\alpha(\Lambda)\in(\frac{1}{4},\frac{1}{2})\). There exists a full-density subsequence \(\Lambda^{\prime}\subset\Lambda\) such that for any \(q\) in the range_
\[\frac{1-\log 2}{2-4\alpha}<q\leq\frac{1}{2-4\alpha}\]
_we have the following formula for the fractal exponents associated with the sequence \(\Lambda^{\prime}\):_
\[D_{q}(\Lambda^{\prime})=\frac{1}{2\alpha}(1-\frac{1}{2q})\log 2 \tag{4.5}\]
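To see that formula (4.5) indeed defines a genuinely \(q\)-dependent, i.e. multifractal, spectrum of exponents, one may simply tabulate it over the admissible range of \(q\); the coupling exponent \(\alpha=0.4\) used below is an arbitrary choice within \((1/4,1/2)\).

```python
import numpy as np

alpha = 0.4                                    # coupling exponent in (1/4, 1/2)
q_min = (1 - np.log(2)) / (2 - 4 * alpha)      # lower end of the admissible range of q
q_max = 1 / (2 - 4 * alpha)                    # upper end of the admissible range of q

for q in np.linspace(q_min, q_max, 5):
    D_q = (1 - 1 / (2 * q)) * np.log(2) / (2 * alpha)   # formula (4.5) of Theorem 4.1
    print(f"q = {q:.3f}   D_q = {D_q:.3f}")
```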
### The ground state regime
Instead of studying a high frequency regime, where \(\lambda\to\infty\), one might as well consider a low frequency regime, where \(\lambda\to 0\). In this regime there is no relationship expected between the intermediate type of dynamics and the occurrence of multifractality. Rather, multifractality in such regimes is expected to occur for a much wider class of systems.
However, in the case of Seba billiards there is a very interesting link with Epstein's zeta function associated with quadratic forms. This link occurs for general tori not just arithmetic ones. We introduce the following modified version of the shifted zeta function above:
\[\zeta_{\lambda}^{*}(s)=\sum_{n\in\mathcal{N}}\frac{r_{Q}(n)}{|n-\lambda|^{s}},\quad\Re s>1, \tag{4.6}\]
where \(\mathcal{N}\) denotes the Laplace spectrum on a general unimodular rectangular torus, given by the set of values taken by the quadratic form \(Q(x,y)=a^{2}x^{2}+a^{-2}y^{2}\), \((x,y)\in\mathbb{Z}^{2}\), with \(a>0\). Moreover, \(r_{Q}\) denotes the representation number of \(Q\).
We introduce the modified moment sums \(M_{q}^{*}(\lambda)=\zeta_{\lambda}^{*}(2q)\) for \(q>1\). Note that we need to remove the first term, as this blows up in the limit \(\lambda\to 0\). We are, thus, interested in the fluctuations around this blow-up term which motivates the study of the modified moment sums.
For \(q>1\) we define the fractal exponents
\[D_{q}^{*}=\frac{d_{q}^{*}-qd_{1}^{*}}{q-1},\quad d_{q}^{*}=\lim_{\lambda\to 0 }\zeta_{\lambda}^{*}(2q)=\zeta_{Q}(2q), \tag{4.7}\]
where we denote Epstein's zeta function associated with the quadratic form \(Q\) as
\[\zeta_{Q}(s)=\sum_{(m,n)\in\mathbb{Z}^{2}\setminus\{0\}}Q(m,n)^{-s},\quad\Re s>1.\]
We also note that we have the following functional equation
\[\zeta_{Q}(1-s)=\varphi_{Q}(s)\zeta_{Q}(s)\]
where \(\varphi_{Q}\) denotes a certain meromorphic function associated with \(Q\).
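For illustration, \(\zeta_Q(s)\) can be evaluated directly by truncating the lattice sum, which gives numerical access to the quantities \(d_q^*=\zeta_Q(2q)\) entering the fractal exponents above; the aspect parameter \(a\) and the cutoff in the sketch below are arbitrary placeholder choices.

```python
import numpy as np

def epstein_zeta(s, a=1.3, R=200):
    """Truncated Epstein zeta of Q(x, y) = a^2 x^2 + a^-2 y^2 over 0 < max(|x|, |y|) <= R."""
    xs = np.arange(-R, R + 1)
    X, Y = np.meshgrid(xs, xs)
    Q = (a * X) ** 2 + (Y / a) ** 2
    Q[R, R] = np.inf                  # exclude the origin (x, y) = (0, 0)
    return np.sum(Q ** (-s))

for q in (1.0, 1.5, 2.0):             # d_q^* = zeta_Q(2q); the arguments 2q stay in Re s > 1
    print(f"zeta_Q({2 * q}) ~ {epstein_zeta(2 * q):.4f}")
```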
The first prediction of a symmetry relation for the fractal exponents of multifractal systems is due to Mirlin, Fyodorov, Mildenberger and Evers for the case of the Anderson model in [20]. The following symmetry relation was proved in [14].
**Theorem 4.2**.: _The fractal exponent \(D_{q}^{*}\) admits an analytic continuation to the full complex plane. It satisfies the following symmetry relation with respect to the critical point \(q=1/4\):_
\[D_{1/2-q}^{*}=\frac{1-q}{1/2+q}\left(D_{q}^{*}+\frac{\log\varphi_{Q}(2q)+(2q-1 /2)\log\zeta_{Q}(2)}{1-q}\right) \tag{4.8}\]
## 5. Outlook
Multifractal scaling is an important property of quantum systems which are intermediate between two physical regimes, and many important systems such as the Anderson model and pseudo-integrable quantum billiards fall into this category. However, understanding the rigorous mathematical underpinning of multifractality goes far beyond the study of intermediate quantum systems. In fact, multifractal scaling appears to be related to deep and important mathematical problems in a number of models in mathematical physics. Another highly interesting class of models is given by nonlinear partial differential equations such as the Euler and Navier-Stokes equations, which model the dynamics of incompressible fluids. In this case, it turns out that the occurrence of multifractal scaling is related to the deep and difficult question of the regularity or blow-up of solutions to these nonlinear PDEs, which is the subject of a forthcoming article [27].
|
2309.13249 | A Survey of Document-Level Information Extraction | Document-level information extraction (IE) is a crucial task in natural
language processing (NLP). This paper conducts a systematic review of recent
document-level IE literature. In addition, we conduct a thorough error analysis
with current state-of-the-art algorithms and identify their limitations as well
as the remaining challenges for the task of document-level IE. According to our
findings, labeling noises, entity coreference resolution, and lack of
reasoning, severely affect the performance of document-level IE. The objective
of this survey paper is to provide more insights and help NLP researchers to
further enhance document-level IE performance. | Hanwen Zheng, Sijia Wang, Lifu Huang | 2023-09-23T04:18:24Z | http://arxiv.org/abs/2309.13249v1 | # A Survey of Document-Level Information Extraction
###### Abstract
Document-level information extraction (IE) is a crucial task in natural language processing (NLP). This paper conducts a systematic review of recent document-level IE literature. In addition, we conduct a thorough error analysis with current state-of-the-art algorithms and identify their limitations as well as the remaining challenges for the task of document-level IE. According to our findings, labeling noises, entity coreference resolution, and lack of reasoning, severely affect the performance of document-level IE. The objective of this survey paper is to provide more insights and help NLP researchers to further enhance document-level IE performance.
## 1 Introduction
Natural language processing (NLP) drives the present wave of artificial intelligence (Vaswani et al., 2017; Dosovitskiy et al., 2021; Liu et al., 2021; Zhang et al., 2021; Zhang and Eskandarian, 2022). Information Extraction (IE) plays a vital role in all aspects of NLP by extracting structured information from unstructured texts (Lin et al., 2020; Wang et al., 2022). Document-level IE has witnessed significant progress, benefiting from the enormous data resources provided by the Internet and rapidly growing computational resources (Yao et al., 2019; Xu et al., 2021; Tong et al., 2022). However, several challenges persist within the realm of document-level IE research, such as entity coreference resolution, reasoning across long-span contexts, and a lack of commonsense reasoning. Furthermore, current document-level IE research predominantly focuses on restricted domains and languages (Zheng et al., 2019; Yang et al., 2018; Tong et al., 2022; Li et al., 2021), which poses difficulties for model comparison and hampers the generalizability of findings.
To fulfill the aforementioned challenges, this survey reviews recent document-level relation extraction **(doc-RE)** and document-level event extraction **(doc-EE)** models and datasets to inform and encourage researchers for multilingual and cross-domain studies. In addition, we conduct a thorough error analysis among existing models and discuss these errors. Finally, we summarize the current literature work and propose potential future improvements to document-level IE research. The contributions of this survey paper include:
* We systematically summarize and categorize the existing datasets and approaches for Doc-RE and Doc-EE.
* A thorough error analysis is conducted with current state-of-the-art (SOTA) algorithms.
* To identify the current model challenges and limitations, we analyze and discuss the errors and construct error statistics.
This survey aims to contribute to the NLP community by providing valuable insights into document-level IE tasks. Our analysis of errors encountered in this study will serve as a foundation for future advancements in document-level IE research, encouraging researchers to innovate and improve upon existing methodologies. It is our hope that these findings will contribute to a deeper understanding of document-level IE and stimulate further enhancements in this field of study.
## 2 Tasks Definition
### Event Extraction
Event extraction (Grishman, 1997; Chinchor and Marsh, 1998; Ahn, 2006) is a task to identify and classify event triggers and relevant participants from natural language text. Formally, given a document consisting of a set of sentences where each sentence consists of a sequence of words,
the objective of this task is to identify and extract the following components from a given document:
**Event Mention**, which refers to the phrases or sentences denoting an event; **Event Trigger**, typically in the form of a verb that signals the occurrence of an event; **Event Type**, indicating the predefined type of event specified by the dataset, such as Conflict-Attack; **Argument Mention**, comprising entity mentions that provide additional details on the event, such as who, what, when, where, and how the event occurred; **Argument Role**, representing the role or type of argument associated with the entity; and finally, **Event Record**, the entry in an event table, containing several arguments with argument roles.
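For illustration, the components defined above can be represented with simple data structures; the sentence, trigger, and role labels in the sketch below are invented and merely indicate the intended structure of an event record.

```python
# An illustrative (invented) event record assembled from the components defined above.
event_record = {
    "event_mention": "Rebels attacked the embassy in Bogota on Tuesday.",
    "event_trigger": "attacked",                   # the verb signalling the event occurrence
    "event_type": "Conflict-Attack",               # predefined event type
    "arguments": [
        {"mention": "Rebels",      "role": "Attacker"},
        {"mention": "the embassy", "role": "Target"},
        {"mention": "Bogota",      "role": "Place"},
        {"mention": "Tuesday",     "role": "Time"},
    ],
}
print(event_record["event_type"], "triggered by", event_record["event_trigger"])
```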
### Relation Extraction
The task of Relation Extraction involves predicting attributes and relationships between entities mentioned in a given document [23]. Given a document \(D\) with a set of sentences, we assume that \(D\) also contains a set of entities \(V=\{e_{i}\}_{i=1}^{N}\). For each entity \(e_{i}\), it might contain multiple entity mentions \(e_{i}=\{m_{j}\}_{j=1}^{M}\). The doc-RE task is to predict the relation types between an entity pair \((e_{s},e_{o})_{s,o\in\{1,\cdots,N\},s\neq o}\), where \(s\) stands for the subject and \(o\) stands for the object. It is possible for an entity pair to have multiple relations that require prediction, thereby rendering the task a multi-label classification problem.
More specifically, **Entity** refers to units such as _People_, _Geographic Entity_, _Location_, _Organization_, _Date_, and _Number_ within a text. **Entity Mention** refers to a phrase within a text that identifies a specific entity. For instance, "NYC" and "the big apple" are both entity mentions for "New York City". **Intra-sentence Relation** describes the relationship between entities within a single sentence, and the features within are often referred to as local features. On the other hand, **Inter-sentence Relation** refers to the relationship between entities across multiple sentences, and the features within are often referred to as global features.
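Analogously, a document-level RE instance can be viewed as a set of entities, each with possibly several mentions spread over different sentences, together with a multi-label target for each ordered entity pair; the document and relation label in the sketch below are invented and purely illustrative.

```python
# An illustrative (invented) document-level RE instance.
document = [
    "New York City announced a new transit plan.",         # sentence 0
    "The big apple expects the plan to cut congestion.",   # sentence 1
]
entities = {
    "e1": {"type": "Location", "mentions": [("New York City", 0), ("The big apple", 1)]},
    "e2": {"type": "Misc",     "mentions": [("a new transit plan", 0), ("the plan", 1)]},
}
# Multi-label classification target for the ordered pair (subject e2, object e1);
# the relation name is hypothetical, and the supporting evidence is inter-sentence.
labels = {("e2", "e1"): ["announced_by"]}
print(labels[("e2", "e1")])
```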
## 3 Datasets
Existing studies only evaluate their proposed approaches on restricted targeted domains or languages. As a result, it is challenging to compare the effectiveness of different methods under a more general scenario. In this section, we list all doc-EE and doc-RE datasets, to share all possible options with the research community.
### Doc-RE Datasets
For biomedical domain, **Drug-gene-mutation (DGM)**[10] contains 4,606 PubMed articles, which are automatically labeled via distant supervision. DGM annotations include three entity types: _drugs_, _genes_, and _mutations_, and three relation types, including _drug-gene-mutation_, _drug-mutation_, and _gene-mutation relations_. **GDA**[20] gene-disease association corpus contains 30,192 titles and abstracts from PubMed articles that have been automatically labeled for _genes_, _diseases_, and _gene-disease associations_ via distant supervision. **CDR**[15] is manually annotated for _chemicals_, _diseases_, and _chemical-induced disease (CID)_ relations by domain experts. It contains the titles and abstracts of 1,500 PubMed articles and is split into training, validation, and test sets equally.
Several Doc-RE datasets are constructed for other domains or languages. **DocRED**[21] is a human-annotated doc-RE dataset, that includes 132,375 entities and 56,354 relational facts annotated on 5,053 Wikipedia documents.
Figure 1: Examples of doc-EE and doc-RE.
DocRED is generated by mapping Wikidata triples, originating from a comprehensive knowledge base closely intertwined with Wikipedia, onto complete English Wikipedia documents to get entity annotations. **RE-DocRED**Tan et al. (2022) refines 4,053 documents in the DocRED dataset, aiming to resolve the problem of false negative samples. RE-DocRED increased the relation triples from 50,503 to 120,664 and decreased the _no_relation_ samples by \(3.1\%\) by adding the missing relation triples back to the original DocRED. **SciREX**Jain et al. (2020) is a document-level IE dataset that contains multiple IE tasks. It mainly focuses on doc-RE tasks, such as Binary and N-ary relation classification. It consists of both automatic and human-annotated articles in the computer science field. **HacRED**Cheng et al. (2021) is a Chinese doc-RE dataset collected from CN-DBpedia Xu et al. (2017) that focuses on hard cases, such as long text and long distance between argument pairs, containing distractors or multiple homogeneous entity mentions.
### Doc-EE Datasets
Doc-EE datasets are mainly collected from the news and financial domain. News is a large-scale accessible source of events like social emergencies and human life incidents, thus many datasets are created focusing on news events. Meanwhile, exploding volumes of digital financial documents, as a byproduct of continuous economic growth, have been created. Many datasets are created to help extract valuable structured information to detect financial risks or profitable opportunities. Statistics of the datasets for Doc-EE are summarized in Table 2.
For the news domain, **ACE-20052** is a sentence-level event extraction (SEE) dataset (Wang et al., 2022; 2023) but has been frequently used for comparison in doc-EE. Unlike ACE-2005, which contains 5 groups of events covering _justice_, _life_, _business events_, etc., **MUC-4**muc (1992) focuses on one specific event type, _attack_ events. MUC-4 contains 1,700 human-annotated news reports of terrorist attacks in Latin America collected by Federal Broadcast Information Services. More specifically, MUC-4 includes six incident types: _attack_, _kidnapping_, _bombing_, _arson_, _robbery_, and _forced work stoppage_, and four argument roles, including _individual perpetrator_, _organization perpetrator_, _physical target_, and _human target_. **WikiEvents**Li et al. (2021) follows the ontology from the KAIROS project3 for event annotation, which defines 67 event types in a three-level hierarchy. Researchers used the BRAT interface for online annotation of event mentions (triggers and arguments) and event coreference separately. **Roles Across Multiple Sentences (RAMS)**Ebner et al. (2020) is a crowd-sourced dataset with 9,124 event annotations on news articles from Reddit following the AIDA ontology. **DocEE** is the largest Doc-EE dataset to date. DocEE uses historical events and timeline events from Wikipedia as the candidate source to define 59 event types and 356 event argument roles. This dataset includes 27,485 document-level events and 180,528 event arguments that are manually labeled.
Footnote 2: [https://catalog.ldc.upenn.edu/LDC2006T06](https://catalog.ldc.upenn.edu/LDC2006T06)
For the financial domain, **DCFEE**Yang et al. (2018) comes from companies' official finance announcements and focuses on four event types: _Equity Freeze_, _Equity Pledge_, _Equity Repurchase_, and _Equity Overweight_. Data labeling was done through distant supervision. **ChFinAnn**Zheng et al. (2019) contains official disclosures such as annual reports and earnings estimates, obtained from the Chinese Financial Announcement (CFA). The dataset has five event types: _Equity Freeze_, _Equity Repurchase_, _Equity Underweight_, _Equity Overweight_ and _Equity Pledge_, with 35 different argu
| Dataset | Annotation | # Rel Types | # Rel Facts | # Train | # Dev | # Test |
| --- | --- | --- | --- | --- | --- | --- |
| DGM (Jia et al., 2019) | Distant Supervision | 1 | - | 32,040 | - | - |
| CDR (Luan et al., 2018) | Human-annotated | 1 | - | 1,500 | 500 | 500 |
| GDA (Wu et al., 2019) | Distant Supervision | 1 | - | 30,192 | 5,839 | 1,000 |
| DocRED (Yao et al., 2019) | Distant Supervision | 96 | 50,345 | 3,053 | 1,000 | 1,000 |
| Re-DocRED (Tan et al., 2022) | Combined | 96 | 120,664 | 3,053 | 500 | 500 |
| SciREX (Jain et al., 2020) | Human-annotated | 2 | - | 438 | 131 | 131 |
| HacRED (Cheng et al., 2021) | Combined | 26 | 65,225 | 9,231 | 1,500 | 1,500 |

Table 1: Statistics of Doc-RE datasets.
ment roles in total. In contrast to Doc-EE with one event in each document, 29.0% of the documents in ChFinAnn contain multiple events. **DuEE-Fin**(Zheng et al., 2019) is the largest human-labeled Chinese financial dataset. It is collected from real-world Chinese financial news and annotated with 13 event types. 29.2% of the documents contain multiple events and 16.8% of events consist of multiple arguments.
## 4 Evaluation Metrics
In document-level information extraction (IE), the primary evaluation metrics are Precision (P), Recall (R), and Macro-F1 score (Kowsari et al., 2019). Additionally, for doc-RE, Ign F1 is used as an evaluation metric (Yao et al., 2019). Ign F1 refers to the F1 score that excludes relational facts shared by the training and dev/test sets. This metric is important for evaluating the generalizability of the model, as it disregards triples that are already present in the annotated training dataset.
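The sketch below illustrates how such scores can be computed over sets of relation triples; it uses micro-averaged precision/recall and a simplified reading of Ign F1 (relational facts seen in training are removed from both the predicted and the gold sets), whereas the exact conventions are fixed by the official evaluation scripts of each benchmark. All triples shown are invented.

```python
def prf(pred, gold):
    """Micro precision, recall and F1 over sets of (head, relation, tail) triples."""
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold  = {("e1", "founded_by", "e2"), ("e1", "located_in", "e3"), ("e2", "born_in", "e3")}
pred  = {("e1", "founded_by", "e2"), ("e2", "born_in", "e3"), ("e2", "works_at", "e1")}
train = {("e1", "founded_by", "e2")}          # facts already present in the training set

print("F1     :", round(prf(pred, gold)[2], 3))
print("Ign F1 :", round(prf(pred - train, gold - train)[2], 3))
```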
## 5 Methods
The fundamental challenge in doc-RE and doc-EE is to express document content in a concise and effective way such that key information is maintained. Previous approaches usually resort to hierarchical, graph-based, or sequential structures. More recently, due to the emergence of powerful generative pre-trained language models (PLMs), generative models have also been introduced to address doc-IE tasks. A typology of existing doc-RE and doc-EE approaches categorized by model design is shown in Table 3.
### Doc-RE Approaches
**Multi-granularity-based Models.** The multi-granularity-based approach aims to emphasize the use of information from different granularities and the aggregation of global information. The standard procedure involves concatenating features from each level to complete the IE tasks. Jia et al. (2019) approaches document-level N-ary relation extraction using a multiscale representation learning method. This approach aggregates the representations of mentions and ensembles multiple sub-relations. The **HIN** (Hierarchical Inference Network) (Tang et al., 2020) uses Bi-LSTMs at the token, sentence, and document levels to extract features as sequences and weighs the overall features with the attention mechanism to obtain both local and global information. Multi-granularity-based designs employ two strategies: either they address intermediate tasks using various models, or they utilize the same model in a hierarchically ordered manner to independently tackle each subtask of information extraction, such as from sentence level to document level.
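As a rough structural sketch (not the authors' implementation), the PyTorch fragment below shows the kind of token-to-sentence-to-document hierarchy with attention pooling that multi-granularity models such as HIN build on; the dimensions are placeholder choices and the task-specific classifier head is omitted.

```python
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    """Additive attention pooling over a sequence of vectors."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                       # h: (batch, seq, dim)
        w = torch.softmax(self.score(h), dim=1)
        return (w * h).sum(dim=1)               # (batch, dim)

class HierEncoder(nn.Module):
    """Token-level BiLSTM -> sentence vectors -> sentence-level BiLSTM -> document vector."""
    def __init__(self, emb=100, hid=64):
        super().__init__()
        self.tok_lstm = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hid, hid, bidirectional=True, batch_first=True)
        self.tok_pool = AttnPool(2 * hid)
        self.doc_pool = AttnPool(2 * hid)

    def forward(self, doc):                     # doc: (n_sents, n_tokens, emb)
        tok_out, _ = self.tok_lstm(doc)         # (n_sents, n_tokens, 2*hid)
        sent_vecs = self.tok_pool(tok_out)      # (n_sents, 2*hid) sentence representations
        sent_out, _ = self.sent_lstm(sent_vecs.unsqueeze(0))
        return self.doc_pool(sent_out)          # (1, 2*hid) document representation

doc = torch.randn(4, 12, 100)                   # 4 sentences x 12 tokens x 100-dim embeddings
print(HierEncoder()(doc).shape)                 # torch.Size([1, 128])
```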
**Graph-based Models.** Graph-based models generally construct a graph with words, mentions, entities, or sentences as nodes and define different types of edges across the entire document, further predicting the relations by reasoning on the graph. The first work done on doc-RE using a graph-based method is **DISCREX** (Quirk and Poon, 2017), where a document graph is constructed with word nodes and edges representing intra- and inter-sentential relations including dependency, adjacency, and discourse relations. Peng et al. (2017) contributes a Graph-LSTMs model with a bidirectional LSTM consisting of two directed acyclic graphs (DAG), and edges representing relations between nodes. Song et al. (2018) further compares bidirectional graph LSTM with bidirectional DAG LSTM, finding that the former, which doesn't alter the input graph structure, exhibits superior performance. While such dependency graphs have rich structural information, the pruning strategy does not necessarily keep the rele
Table 2: Statistics of Doc-EE datasets (columns: Dataset, # Docs, # Events, # Event types, # Roles, # Arguments, Ratio).
earninglearning-learninglearning-learning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learning-learninglearning-learning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learning-learning-learninglearning-learninglearning-learning-learning-learninglearning-learning-learninglearning-learninglearning-learning-learninglearning-learning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learning-learning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learninglearning-learning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learninglearning-learning-learninglearning-learning-learninglearning-learning-learninglearning-learning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learninglearning-learning-learninglearning-learninglearning-learninglearning-learninglearning-learning-learninglearning-learninglearning-learning-learninglearning-learning-learninglearning-learning-learning-learning-learning-learning-learninglearning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-le
arning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning--learning-learning--learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning--learning-learning--learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning--learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning-learning-learning--learning-learning-learning-learning--learning-learning-learning-learning--learning-learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning-learning--learning-learning-learning-learning--learning-learning-learning-learning-learning-learning-learning-learning-learning-learning-learning--learning--learning-learning-learning--learning-learning-learning-learning-learning-learning--learning-learning-learning-learning--learning-learning-learning-learning-learning-learning--learning--learning-learning-learning-learning--learning-learning--learning--learning-learning-
-learning-learning-learning-learning--learning-)
**Graph-based Models.** **AGGCNs** Guo et al. (2019) proposes an end-to-end neural network that encodes the entire graph, using multi-head self-attention to learn edge weights based on paired relations and densely connected layers to glean global information. Sahu et al. (2019) designates words as individual nodes and establishes five types of edges to represent inter- and intra-sentence dependencies. The model then uses an edge-oriented GCNN to retain aggregated node representations.
**EoG** Christopoulou et al. (2019) is a pioneering graph-based model. It uses entities as nodes and forms unique edge representations through the paths between nodes to better capture the paired relations. To predict relations between entity pairs, EoG makes iterative inferences on the path between the entities and aggregates every edge to a direct entity-entity (EE) edge. Many papers adapted from EoG can be divided into two main categories: homogeneous and heterogeneous graphs. **LSR** Nan et al. (2020) uses graph structure as a latent variable to form a homogeneous graph. Unlike EoG, which uses a human-constructed graph, LSR learns structured attention to refine the graph dynamically and constructs latent structures based on the previous refinement. For heterogeneous graphs, different types of edges are defined, representing unique features, functions, and even dual graphs. **GLRE** Wang et al. (2020) utilizes a multi-layer R-GCN to learn entity global representations, which are used as queries in the multi-headed self-attention layer to learn entity local representations while using sentence-level information as the keys. **HeterGSAN** Xu et al. (2021) is a heterogeneous graph based on EoG that uses a GAT to encode the graph, relying more on related entity pairs' attention.
Dual graphs are normally used to capture hierarchical information. **GAIN** Zeng et al. (2020) utilized a heterogeneous mention-level graph to model interactions between the document and all mentions. **GEDA** Li et al. (2020) optimized entity representation with two attention layers and a heterogeneous GCN layer. **DHG** Zhang et al. (2020) contains two heterogeneous graphs: a structure modeling graph using words and sentences as nodes to better capture document structure information, and a relation reasoning graph using mentions and entities as nodes to perform multi-hop relation reasoning. **POR** Xu et al. (2023) is a path-retrieving method between pair entities based on the BFS algorithm, which extracts path features through an LSTM and combines them using the attention mechanism. **DRN** Xu et al. (2021) passes the encoded sentences and entities as a heterogeneous graph to a multi-layer GCN and, meanwhile, uses the self-attention mechanism to learn a more contextual document-level representation.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Task** & **Main Category** & **Sub Category** & **Approaches** \\ \hline
Doc-RE & Multi-granularity-based & Sentence-level \(\rightarrow\) Paragraph-level \(\rightarrow\) Document-level & Tang et al. (2020) \\
 & & Mention-level \(\rightarrow\) Entity-level & Jia et al. (2019) \\
 & Graph-based & Heterogeneous graph & Quirk and Poon (2017), Peng et al. (2017), Song et al. (2018), Guo et al. (2019), Sahu et al. (2019), Christopoulou et al. (2019), Wang et al. (2020), Xu et al. (2021), Zeng et al. (2020), Li et al. (2020), Zhang et al. (2020), Xu et al. (2023), Xu et al. (2021c) \\
 & & Homogeneous graph & Nan et al. (2020) \\
 & Sequence-based & Neural Networks & Xu et al. (2021), Zhang et al. (2021b) \\
 & & Attention/Transformer & Zhou et al. (2021), Tan et al. (2022) \\
 & Evidence-based & Path reasoning & Huang et al. (2021) \\
 & & Evidence retrieval & Xie et al. (2022), Xiao et al. (2022) \\ \hline
Doc-EE & Multi-granularity-based & Sentence-level \(\rightarrow\) Paragraph-level \(\rightarrow\) Document-level & Yang et al. (2018), Huang and Jia (2021) \\
 & Graph-based & Heterogeneous graph & Zheng et al. (2019), Xu et al. (2021), Zhu et al. (2022), Xu et al. (2022) \\
 & Sequence-based & Neural Networks & Huang and Peng (2021) \\
 & & Attention/Transformer & Yang et al. (2021), Liang et al. (2022) \\
 & Generation-based & -- & Li et al. (2021), Zeng et al. (2022) \\
 & Memory-based & -- & Du et al. (2022), Cui et al. (2022) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Typology of Doc-IE methods.
**Sequence-based Models.** Sequence-based models mostly rely on NN-based or Transformer-based architectures, which can model complex interactions among entities by implicitly capturing long-distance dependencies. **SSAN** Xu et al. (2021) integrates structural dependencies within and throughout the encoding stage of the network, not only enabling simultaneous context reasoning and structure reasoning but also efficiently modeling these dependencies in all network layers. **ATLOP** Zhou et al. (2021) simply applies BERT's own attention weights for localized context pooling, as well as a dynamic adaptive thresholding strategy, to ensure that each entity maintains the same representation and to balance the logits of positive and negative labels. **DocuNet** Zhang et al. (2021) divides model construction into three parts, leveraging a U-shaped semantic segmentation network to refine entity feature extraction. **KD** Tan et al. (2022) calculates self-attention in the vertical and horizontal directions of an \(n\times n\) two-hop attention entity pair table using axial attention. The logits of paired entity relations are ranked against the logits of the threshold classes individually, instead of ranking all positive logits together. Sequence-based approaches focus on capturing contexts and entity information via careful designs, either an adequate neural network structure or a novel loss function.
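To make the adaptive-thresholding idea concrete, the following is a simplified NumPy sketch (not the authors' implementation; the function and variable names are ours) in which each gold relation of an entity pair competes against a learnable threshold class TH, and TH in turn competes against all negative relations. At inference time a relation is predicted exactly when its logit exceeds the TH logit, which removes the need for a globally tuned threshold.

```python
import numpy as np

def adaptive_threshold_loss(logits, positive_idx, th_index=0):
    """Sketch of an adaptive-thresholding loss for one entity pair.
    logits       : 1-D array of per-relation logits, including a threshold class TH.
    positive_idx : indices of the gold relations for this pair (TH excluded).
    th_index     : position of the TH class inside `logits`."""
    logits = np.asarray(logits, dtype=float)

    def neg_log_softmax(subset, target):
        # -log softmax(logits[subset])[target], computed stably.
        z = logits[subset]
        m = z.max()
        return -(logits[target] - m - np.log(np.exp(z - m).sum()))

    negative_idx = [i for i in range(len(logits))
                    if i != th_index and i not in set(positive_idx)]

    # Each gold relation should score above the threshold class.
    loss = sum(neg_log_softmax(np.array(list(positive_idx) + [th_index]), r)
               for r in positive_idx)
    # The threshold class should score above every negative relation.
    loss += neg_log_softmax(np.array(negative_idx + [th_index]), th_index)
    return loss

# Example: five relation classes, index 0 is TH, classes 2 and 4 are gold.
print(adaptive_threshold_loss([0.1, -1.0, 2.0, -0.5, 1.5], positive_idx=[2, 4]))
```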
**Path (Evidence)-based Models.** Path-based models construct evidence paths and make relational decisions by reasoning over crucial information between entity pairs or sentences, instead of extracting features from the complete document. **THREE** Huang et al. (2021) presents three kinds of paths to find the supporting sentences: consecutive paths, multi-hop paths, and default paths for entity pairs. **EIDER** Xie et al. (2022) defines "evidence sentences" as a minimal set of sentences needed to predict the relations between certain pairs of entities in a document. **SAIS** Xiao et al. (2022) utilizes two intermediary phases to obtain evidence information: pooled evidence retrieval, which distinguishes entity pairs with and without supporting sentences, and fine-grained evidence retrieval, which produces more interpretable evidence specific to each relation of an entity pair. These papers typically utilize the supporting sentences from the DocRED dataset. When humans perform relation extraction on long spans of text, we read through the whole document and evaluate the sentences that are important for the task. The path-based approach is consistent with human perception and intuition, and has shown extraordinary performance.
### Doc-EE Approaches
**Multi-granularity-based Models.** **DCFEE** Yang et al. (2018) first designs a SEE component to obtain the event arguments and event trigger and splices them together to get the input for the second component, DEE. The DEE uses a convolutional neural network to concatenate the output of SEE and the vector representation of the current sentence. **SCDEE** Huang and Jia (2021) uses Graph Attention Networks (GAT) to transform vertex features, which are used to detect sentence communities and then obtain event types at the sentence level.
**Graph-based Models.** **Doc2EDAG** Zheng et al. (2019) first identifies all the entities in a document and uses a transformer to fuse information at the document level. When an event type is triggered, the model starts to generate an entity-based directed acyclic graph (EDAG) and treats the Doc-EE task as an event table-filling task. Following the order of roles in an event type, EDAG decides which entity node to expand and considers a path-expanding sub-task until the EDAG is fully recovered. **GIT** Xu et al. (2021) designs a heterogeneous graph with four types of edges between sentences and mentions. Based on detected event types, a tracker is designed to extract corresponding arguments by expanding a constrained event type tree while tracking and storing records in global memory. **PTPCG** Zhu et al. (2022) calculates the semantic similarity between entities to construct a pruned complete graph after event and argument detection. Pruning is done by deciding whether entity pairs retain an edge based on heuristics. **TSAR** Xu et al. (2022) leverages an AMR-guided interaction module to generate both global and local contextualized representations. A gate function is designed to decide the portion of the global and local representations used to predict the argument roles for potential spans.
**Sequence-based Models.** **DE-PPN** Yang et al. (2021) is an encoder-decoder doc-EE model which utilizes two transformers to identify sentence-level elements as the document encoder, and a multi-granularity decoder to decode event, role, and event-role in parallel. **ReDEE** Liang et al. (2022) is the first to use entity relation information for doc-EE tasks; it utilizes SSAN to extract relation triples and transfers them with entity and sentence dependency. **DEED** Huang and Peng (2021) is an end-to-end model that utilizes Deep Value Networks (DVN), a structured prediction algorithm that effectively bridges the disparity between ground truth and prediction. This model directly incorporates event trigger prediction into DVN, thereby efficiently capturing cross-event dependencies for document-level event extraction.
**Generative Models.** Generative models are commonly found in doc-EE and joint extraction. **Bart-Gen** Li et al. (2021) takes the document and event templates as input, and uses an encoder-decoder model to generate arguments to fill in the blanks in the templates based on the previous words in the sentence. **EA2E** Zeng et al. (2022) aims to achieve event-aware argument extraction by labeling arguments from nearby events in the document to enhance the context.
**Memory-based Models.** Du et al. (2022) introduces a memory-enhanced neural generation-based framework based on a sequence-to-sequence PLM. The memory stores gold-standard events and previously generated events of the same document, and the decoder retrieves event knowledge and decodes arguments dynamically based on the event dependency constraints. **HRE** Cui et al. (2022) emulates the human reading process by conducting a two-stage analysis: rough reading and elaborate reading. The initial rough reading detects the event type and saves it as memory tensors. Upon detection, elaborate reading extracts the complete event record with arguments and stores them in memory while updating with previous event type and argument memory tensors.
## 6 Discussion
We identified seven major types of errors in three existing doc-RE works based on the DocRED and Re-DocRED datasets, as well as in four doc-EE works based on the WikiEvents and ChFinAnn datasets. Examples and distributions of each type are shown in Tables 4 and 5 and Figures 2, 3, and 4.
**Entity coreference resolution.** Document-level texts contain a large number of recognized entities along with coreferential words such as them, he, which, etc. Entity coreference resolution errors happen when the model fails to resolve all mentions in a document that refer to the same entity.
**Reasoning error.** This type of error mainly relates to multi-hop logical reasoning. Document-level texts contain considerable amounts of information, so models may fail to give correct logical inferences based on the given information. Inferring from multi-hop information requires a model to have a high level of natural language understanding ability.
**Long-span.** Documents contain multiple sentences over a long span. This error happens when the model fails to capture the full context of a document or to use global information for inference.
**Commonsense knowledge.** This error occurs when models fail to correctly extract relations or events or assume the wrong semantics due to a lack of commonsense and background knowledge, which humans are able to learn or understand instinctively. Many datasets are specific to some domains, and in the absence of relevant background and domain-specific knowledge, models may reason inaccurately or misinterpret information.
**Relation transitivity error.** Documents tend to have many entities appearing in the same sentence or across sentences. Relation transitivity errors occur when a model fails to correctly infer a relation between two entities based on their individual relations with a third entity. Additionally, not all relations are transitive, so the model should correctly recognize when transitivity applies.
**Over-prediction error.** This error type refers to the spurious error (as presented in Table 4) where there is no ground truth relation between two entities but the model predicts a relation, and it can be caused by a number of reasons. For instance, when using large pre-trained language models to encode the documents, learned priors can cause models to make overconfident predictions.
In addition to the error types shared with Doc-RE, we observe two more types of errors based on the WikiEvents and ChFinAnn datasets.

Figure 2: Doc-RE error distribution in DocRED and Re-DocRED
**Multi-events error.** In Doc-EE tasks, documents contain multiple events that overlap or occur simultaneously, which requires the model to have sufficient training or advanced techniques to learn the inherent complexity of multi-event documents. In an event-trigger-annotated dataset such as WikiEvents, the model can fail at assigning arguments to the correct events or matching roles to arguments. In a trigger-not-annotated dataset like ChFinAnn, event detection errors may occur when models try to identify and differentiate distinct events within the document due to the complex contextual structure of each event.
**Other errors.** Models face other error types which are mainly associated with previous tasks like entity recognition, or which are caused by the different linguistic features and complexities of datasets. For example, nominal mention recognition and argument span mismatch errors are common in many works, particularly in generative methods.
**Noisy data.** This issue comprises natural language noise and labeling noise. Real-world documents contain noisy, unstructured, or poorly formatted content, causing difficulties in identifying entities and extracting relations. Natural language can be ambiguous or vague, leading to uncertainty in model inference. To overcome the cost of creating annotated datasets, researchers commonly apply automatic labeling strategies like distant supervision to generate large-scale training data. However, this leads to several problems due to noise and bias: nested entities (i.e., some entities can be embedded within other entities), false negative labels (i.e., entity pairs that are related in the text but are not labeled as such in the dataset), and missing ground truth labels.
Note that Doc-EE errors vary between ChFinAnn and WikiEvents, and a number of factors could lie behind the different error distributions. One crucial factor is the diversity in underlying statistics between datasets due to their distinct domains and languages. Compared to the news dataset WikiEvents, the Chinese financial dataset ChFinAnn requires less commonsense comprehension. Each dataset contains unique linguistic features and complexities. WikiEvents has annotated trigger words, and arguments tend to be near the trigger words, whereas ChFinAnn can have events spread across the entire document and is more likely to interfere with other events. Therefore, long-span and multi-events are major error types in ChFinAnn. Moreover, various model designs and approaches usually aim to address specific challenges and optimize performance on the respective dataset.

Table 4: Examples of the error types, showing the document text, ground truth (GT), and model prediction for each.
## 7 Remaining Challenges
Current difficulties can be broadly categorized into three areas. First, a lot of information is spread out over several sentences. Second, there might be several mentions pointing to the same entity throughout the entire document. Finally, some relations must be deduced from several sentences in order to be discovered. The first two issues have been addressed by existing approaches using attention mechanisms and graph construction, though multi-step reasoning techniques are less widely used. Progressively, more methods try to use evidence sentences or evidence paths to infer complicated relations. Nevertheless, models continue to struggle with commonsense and knowledge-based reasoning, since closely matching patterns are rarely available in the training set or even during pre-training. Additionally, creating annotated datasets for this task is time-consuming and expensive, which limits the amount of data available for training and evaluation. Domain-specific datasets differ from mainstream general-domain datasets, but they are necessary for identifying relations that are specific to certain domains, understanding domain-specific terminology, and handling the high variability of language used in different domains.

There are several promising future directions. First, it is beneficial to incorporate entity coreference systems into doc-IE models, which we believe will play an important role in resolving entity coreference resolution (ECR) and multi-hop reasoning errors. Second, more investigation is needed to design models with multi-hop reasoning capability. Finally, doc-EE and doc-RE can be complementary tasks to each other: the information produced by these two tasks can provide a more complete picture of the information given in the document.
## Limitations
We conduct a thorough error analysis of current state-of-the-art algorithms and identify the limitations of existing approaches as well as the remaining challenges for the task of document-level IE. However, how a system can effectively address these challenges and take appropriate action remains to be investigated. Moreover, since we exclusively analyze existing studies that mainly focus on news, financial, biomedical, and Wikipedia datasets in English and Chinese, we acknowledge that the challenges and conclusions drawn may not generalize to other domains, languages, or new datasets.
|
2309.04735 | Two-State Spin Systems with Negative Interactions | We study the approximability of computing the partition functions of
two-state spin systems. The problem is parameterized by a $2\times 2$ symmetric
matrix. Previous results on this problem were restricted either to the case
where the matrix has non-negative entries, or to the case where the diagonal
entries are equal, i.e. Ising models. In this paper, we study the
generalization to arbitrary $2\times 2$ interaction matrices with real entries.
We show that in some regions of the parameter space, it's \#P-hard to even
determine the sign of the partition function, while in other regions there are
fully polynomial approximation schemes for the partition function. Our results
reveal several new computational phase transitions. | Yumou Fei, Leslie Ann Goldberg, Pinyan Lu | 2023-09-09T09:44:36Z | http://arxiv.org/abs/2309.04735v2 | # Two-State Spin Systems with Negative Interactions+
###### Abstract
We study the approximability of computing the partition functions of two-state spin systems. The problem is parameterized by a \(2\times 2\) symmetric matrix. Previous results on this problem were restricted either to the case where the matrix has non-negative entries, or to the case where the diagonal entries are equal, i.e. Ising models. In this paper, we study the generalization to arbitrary \(2\times 2\) interaction matrices with real entries. We show that in some regions of the parameter space, it's #P-hard to even determine the sign of the partition function, while in other regions there are fully polynomial approximation schemes for the partition function. Our results reveal several new computational phase transitions.
## 1 Introduction
Spin systems are widely studied in statistical physics, probability theory and theoretical computer science. They can express many natural graph invariants such as the number of independent sets or the number of \(k\)-colorings, as well as spin models of statistical physics such as the Ising model or the Potts model.
### The Problem
The partition function of a \(q\)-state spin system can be parameterized by a symmetric matrix \(A\in\mathbb{R}^{q\times q}\). It associates with every graph \(G=(V,E)\) the real number
\[Z(G;A)=\sum_{\sigma\in[q]^{V}}\prod_{\{u,v\}\in E}A_{\sigma(u),\sigma(v)}.\]
**Remark 1**.: Throughout the paper, the word "graph" refers to undirected multigraph permitting self-loops and parallel edges.
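For small instances, the definition can be checked directly by brute force. The following sketch is illustrative only (it enumerates all \(q^{|V|}\) assignments, so it is exponential in \(|V|\), and the function names are ours); representing the multigraph as a list of edge pairs means self-loops and parallel edges need no special handling.

```python
from itertools import product

def partition_function(num_vertices, edges, A):
    """Brute-force Z(G; A): sum over all q^|V| spin assignments."""
    q = len(A)
    total = 0
    for sigma in product(range(q), repeat=num_vertices):
        weight = 1
        for (u, v) in edges:  # self-loops and parallel edges are just entries of the list
            weight *= A[sigma[u]][sigma[v]]
        total += weight
    return total

# Example: a triangle under the two-state matrix with beta = gamma = -2.
A = [[-2, 1], [1, -2]]
print(partition_function(3, [(0, 1), (1, 2), (0, 2)], A))  # prints -28
```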
Fixing a symmetric matrix \(A\), the complexity of exactly computing \(Z(G;A)\) given input \(G\) was studied and settled by [1] (for \(A\) with \(0/1\) entries), [1] (for \(A\) with nonnegative entries), [1] (for \(A\) with real algebraic entries), and [1] (for \(A\) with complex algebraic entries). They proved the remarkable "dichotomy theorem", which states that either computing \(Z(G;A)\)
can be done in polynomial time or it is #P-hard, and the class of tractable matrices \(A\), although lacking a simple explicit characterization, is polynomial-time decidable.
In this paper, we study the problem of _approximately_ computing \(Z(G;A)\). For simplicity of handling models of computation, we restrict our attention to rational numbers. We will deal exclusively with two-state spin systems (\(q=2\)), as they already appear challenging enough:
**Problem 1.1**.: For which symmetric matrices \(A=\begin{bmatrix}A_{00}&A_{01}\\ A_{10}&A_{11}\end{bmatrix}\in\mathbb{Q}^{2\times 2}\) is approximately computing \(Z(G;A)\) tractable?
If \(A_{01}=A_{10}=0\), it is easy to see that \(Z_{G}\) can be computed exactly in polynomial time (see also [1]). In the following, assume \(A_{01}=A_{10}\neq 0\), and we normalize the matrix \(A\) so that \(A_{01}=A_{10}=1\). Then \(A\) is given by two parameters, \(A_{00}=\beta\) and \(A_{11}=\gamma\). Whenever \(\beta\) and \(\gamma\) are fixed, we abbreviate \(Z(G;A)\) to \(Z_{G}\).
Problem 1.1 is well studied for nonnegative matrix entries. In the nonnegative quadrant \(\beta,\gamma\geq 0\), [11] gave an FPRAS for the "ferromagnetic" case \(\beta\gamma\geq 1\). The "antiferromagnetic" case \(\beta\gamma<1\) was later very much settled by a series of work [11, 12, 13, 14, 15, 16]. They proved a computational phase transition that coincides with the boundary of the "uniqueness region" (uniqueness of Gibbs measure on infinite regular trees). Their results in fact extend much beyond Problem 1.1: the computational phase transition for the anti-ferromagnetic case holds even when external fields are allowed.
However, much less is known about Problem 1.1 when \(\beta\) or \(\gamma\) is negative. The only existing results in this direction are about the Ising model, which means the special case \(\beta=\gamma\). Embedded in a broader study about Tutte polynomials, the following theorems from [11] and [11] classified the approximation complexity of Ising partition functions with negative \(\beta\):
**Proposition 1.1** (Corollary 28 of [11]).: Fix rational numbers \(\beta,\gamma\) such that \(\beta=\gamma\in(-1,0)\). It is #P-hard to determine the sign of the partition function \(Z_{G}\), given an input graph \(G\).
**Proposition 1.2** (Lemma 7 of [11]).: Fix rational numbers \(\beta,\gamma\) such that \(\beta=\gamma<-1\). Approximating the partition function \(Z_{G}\) for an input graph \(G\) is equivalent to approximately counting perfect matchings in general graphs in the sense that there are approximation-preserving reductions between these problems, implying that either both problems have an FPRAS or neither problem has an FPRAS. Whether approximately counting perfect matchings is tractable or not is a central open question in the area.
Note that at the point \((\beta,\gamma)=(-1,-1)\), \(Z_{G}\) can be computed exactly in polynomial time [11, Theorem 1.2].
### Our Results
In this paper, we explore Problem 1.1 in the case \(\min\{\beta,\gamma\}<0\). In Section 3, we will prove the following generalization of Proposition 1.1:
**Theorem 1.3**.: _Fix rational numbers \(\beta,\gamma\) such that \(\min\{\beta,\gamma\}<0\) and \(-2<\beta+\gamma<1\), but \((\beta,\gamma)\not\in\{(1,-1),(-1,1)\}\). It is #P-hard to determine the sign of the partition function \(Z_{G}\), given an input graph \(G\)._
Of course Theorem 1.3 has ramifications for the complexity of approximating \(Z_{G}\). In particular, an FPRAS for approximating \(Z_{G}\) gives a polynomial-time randomised algorithm for computing the
sign of \(Z_{G}\), which is not possible assuming that #P-hard problems cannot be solved in randomised polynomial time.
Note that when \((\beta,\gamma)\in\{(1,-1),(-1,1)\}\), \(Z_{G}\) can be computed exactly in polynomial time [10].
It is then of great interest to find whether the two lines \(\beta+\gamma=-2\) and \(\beta+\gamma=1\) are actual thresholds of approximation complexity. The following two theorems, both of which will be proved in Section 4, show that the former line is indeed an actual threshold:
**Theorem 1.4**.: _Fix rational numbers \(\beta,\gamma\) such that \(\beta\neq\gamma\) and \(|\beta+\gamma|>2\). For any positive integer \(\Delta\), there is an FPTAS for \(Z_{G}\), where \(G\) is an input graph of maximum degree no more than \(\Delta\) (without the bounded degree requirement, there is a quasi-polynomial time approximation scheme)._
**Theorem 1.5**.: _Fix rational numbers \(\beta,\gamma\) such that \(\beta\neq\gamma\) and \(|\beta+\gamma|\geq 2\). There is an FPRAS for \(Z_{G}\), where \(G\) is an input graph._
Note that Theorem 1.5 contains the boundary case \(|\beta+\gamma|=2\), which Theorem 1.4 doesn't. What's more, since Theorem 1.5 doesn't require the input graph to be bounded degree, it is not subsumed by Theorem 1.4 even for the range \(|\beta+\gamma|>2\).
The algorithm of Theorem 1.4 is based on the zero-freeness framework of [1] and Asano's contraction method [11], while the algorithm of Theorem 1.5 relies on the "windability" framework of [13] and a holographic transformation. The zero-freeness framework, achieving notable successes in problems with nonnegative parameters (e.g. [16]), applies naturally in the presence of mixed signs as well. In contrast, the "windability" framework, or more generally Markov-chain-based methods only make sense for problems with positive parameters. It is thus somewhat surprising that, via a holographic transformation, we are able to transform the problem into one with positive parameters and furthermore prove the rapid mixing of a Markov chain, for the _maximum possible_ parameter range based on a lower bound on \(|\beta+\gamma|\).
Now, the obvious challenge is to determine the approximation complexity in the remaining region, that is, for parameters \(\beta,\gamma\) such that \(\min\{\beta,\gamma\}<0\) and \(1\leq\beta+\gamma<2\). Unfortunately, we are unable to fully achieve this goal. Instead, we give some results that might provide some insights into this challenge (see Section 6 for more discussion).
**Theorem 1.6**.: _Let \(\beta,\gamma\) be real numbers such that \(\beta+\gamma\geq 1\). Then for any graph \(G\), the partition function \(Z_{G}\) is positive._
**Remark 2**.: For \(\beta+\gamma\leq-2\), it is easy to find a graph \(G\) such that \(Z_{G}<0\) (e.g. a single self-loop or a triangle). When \(-2<\beta+\gamma<1\) and \(\min\{\beta,\gamma\}<0\) and \((\beta,\gamma)\not\in\{(-1,1),(1,-1)\}\), Theorem 1.3 implies that \(Z_{G}\) is negative for some graph \(G\). When \((\beta,\gamma)\in\{(1,-1),(-1,1)\}\), \(Z_{G}\) is negative for \(G=K_{4}\) (the 4-clique). Combined with these observations, Theorem 1.6 completely determines the range of parameters \(\beta\) and \(\gamma\) for which the partition function \(Z_{G}\) is always nonnegative: the union of the half plane \(\beta+\gamma\geq 1\) and the first quadrant \(\beta,\gamma\geq 0\).
Theorem 1.6 suggests that approximating the partition function is unlikely to be #P-hard when \(\beta+\gamma\geq 1\), and hence the line \(\beta+\gamma=1\) is likely some threshold of approximation complexity.
The proof of Theorem 1.6 is by induction on the size of the graph and will be given in Section 5.1. In fact, such recursion methods have also been widely used to show zero-freeness of some partition functions on the complex plane (e.g. [12]), which in turn leads to deterministic approximation algorithms by the framework of [1]. For our partition function, we show in Section 5.2 that such recursions can be used to determine the largest zero-free disk around \(0\) for the range \(\{(\beta,\gamma):\gamma<0\text{ and }1\leq\beta+\gamma\leq 2\}\):
**Theorem 1.7**.: _Let \(\beta,\gamma\) be real numbers such that \(\gamma<0\) and \(1\leq\beta+\gamma\leq 2\). Then for any graph \(G\), the polynomial \(Z_{G}(x)\) as defined in Section 2.1 is zero-free on the disk \(\left\{z\in\mathbb{C}:|z|<\frac{\beta-1}{1-\gamma}\right\}\). Furthermore, \(\frac{\beta-1}{1-\gamma}\) is the maximum possible radius such that the zero-freeness holds for all graphs \(G\)._
Using the same type of recursion in a more sophisticated way, we are able to show that the partition function \(Z_{G}\) is efficiently computable if \(\beta+\gamma\) is sufficiently close to \(2\), by slightly extending the zero-free region of Theorem 1.7. This suggests the line \(\beta+\gamma=2\) is _not_ really a computational threshold:
**Theorem 1.8**.: _Let \(g:(1,+\infty)\to(0,1)\) be the following function:_
\[g(\beta)=\max\left\{\frac{\beta-2}{\beta^{2}-1},\frac{(\beta-1)^{2}}{\beta^{3 }+\beta^{2}-\beta}\right\}. \tag{1.1}\]
_Fix rational numbers \(\beta,\gamma\) such that \(\min\{\beta,\gamma\}<0\) and \(\beta+\gamma>2-g(\max\{\beta,\gamma\})\). For any positive integer \(\Delta\), there is an FPTAS for \(Z_{G}\), where \(G\) is an input graph of maximum degree no more than \(\Delta\) (without the bounded degree requirement, there is a quasi-polynomial time approximation scheme)._
Theorem 1.8 breaks the algorithmic barrier \(\beta+\gamma=2\) presented by Theorem 1.4 and shows that the line \(\beta+\gamma=2\) behaves in a completely different way from the line \(\beta+\gamma=-2\). The proof of Theorem 1.8 will be given in Section 5.3.
### More Related Work
Most of the literature studying 2-state spin systems is restricted to the case where the edge interactions \(\beta\) and \(\gamma\) and the vertex weights \(\lambda\) (i.e. external fields, see Section 2.1) are all nonnegative. But there are also some related lines of work where negative or even complex parameters have received more attention.
For instance, in the case of the Ising model, besides the results mentioned in Proposition 1.1 and Proposition 1.2, [10] studies the approximation complexity of \(Z(G;A)\), where \(A=\begin{bmatrix}\beta&1\\ 1&\beta\end{bmatrix}\) and \(\beta\) is any algebraic _complex_ number, partly motivated by the connection with quantum complexity classes.

Figure 1: An illustration of the complexity classification. The sky-blue dots \(\{(\beta,\gamma):\beta\gamma=1\}\cup\{(-1,1),(0,0),(1,-1)\}\) are where \(Z_{G}\) can be computed exactly in polynomial time [10, 10]. Sitting in the bottom-left corner of the first quadrant, the black region is where approximating the partition function is known to be NP-hard [13]. The dashed line stands for the uniqueness boundary for anti-ferromagnetic 2-spin systems. When \((\beta,\gamma)\) falls in the green regions, there is an FPTAS for \(Z_{G}\) on bounded degree graphs (due to Theorem 1.4 and [12]), and an FPRAS for \(Z_{G}\) on all graphs (due to Theorem 1.5 and [13]). The thin yellow strips to the left of the \(\beta+\gamma=2\) line are where an FPTAS for bounded degree graphs is given by Theorem 1.8, suggesting that \(\beta+\gamma=2\) is not a threshold. When \((\beta,\gamma)\) falls on the blue lines, there is an FPRAS for \(Z_{G}\) (the line \(\beta+\gamma=-2\) follows from Theorem 1.5, while the ray \(\beta=\gamma>1\) is due to [11]). In the red region, apart from the points \((-1,1)\) and \((1,-1)\), approximating \(Z_{G}\) is #P-hard (Theorem 1.3). On the orange line, approximating the partition function is equivalent to approximately counting perfect matchings [10].
Another line of research concerns the hard-core model (this corresponds to interactions \(\beta=1\) and \(\gamma=0\) with external fields). Regarding this model there has been much work on the complexity of approximating \(Z_{G}(\lambda)\) for bounded-degree graphs \(G\) varying parameter \(\lambda\in\mathbb{C}\)[11, 13, 14]. Here the study of the complexity of approximation is intimately related to the study of optimal zero-free regions of the polynomial \(Z_{G}(x)\)[1, 1]. The techniques used in Section 3 are directly analogous to those in [1] -- see Remark 5.
## 2 Preliminaries
As in Section 1.1, we consider a fixed symmetric matrix \(A=\begin{bmatrix}A_{00}&A_{01}\\ A_{10}&A_{11}\end{bmatrix}\in\mathbb{Q}^{2\times 2}\).
### Notations
For \(G=(V,E)\) and \(\boldsymbol{\lambda}\in\mathbb{R}^{V}\), let
\[Z_{G}(\boldsymbol{\lambda})=\sum_{\sigma\in\{0,1\}^{V}}\left(\prod_{\{u,v\}\in E }A_{\sigma(u),\sigma(v)}\prod_{v\in V}\lambda_{v}^{\sigma(v)}\right).\]
Here \(\boldsymbol{\lambda}\) is the vector of _external fields_. As a special case, we have \(Z_{G}=Z_{G}(\boldsymbol{1})\). By setting \(\lambda_{v}=x\) for all \(v\in V\), we get a univariate polynomial \(Z_{G}(x)\).
For \(v\in V\), let \([Z_{G,v}(\boldsymbol{\lambda})]\) be a \(2\times 1\) vector whose \(i\)-th coordinate is
\[\left[Z_{G,v}(\boldsymbol{\lambda})\right]_{i}=\sum_{\sigma\in\{0,1\}^{V}} \mathbbm{1}\{\sigma(v)=i\}\left(\prod_{\{u,v\}\in E}A_{\sigma(u),\sigma(v)} \prod_{v\in V}\lambda_{v}^{\sigma(v)}\right).\]
When \(\left[Z_{G,v}(\boldsymbol{\lambda})\right]_{0}\neq 0\), we define the ratio \(R_{G,v}(\boldsymbol{\lambda})=\left[Z_{G,v}(\boldsymbol{\lambda})\right]_{1}/ \left[Z_{G,v}(\boldsymbol{\lambda})\right]_{0}\).
For \(u,v\in V\), let \([Z_{G,u,v}(\boldsymbol{\lambda})]\) be a \(2\times 2\) matrix whose \((i,j)\) entry is
\[\left[Z_{G,u,v}(\boldsymbol{\lambda})\right]_{i,j}=\sum_{\sigma\in\{0,1\}^{V} }\mathbbm{1}\{\sigma(u)=i\}\mathbbm{1}\{\sigma(v)=j\}\left(\prod_{\{u,v\}\in E }A_{\sigma(u),\sigma(v)}\prod_{v\in V}\lambda_{v}^{\sigma(v)}\right).\]
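All of the quantities above are sums over spin assignments with some vertices pinned, so on small graphs they can be evaluated by the same kind of brute force. The sketch below (two-state only, illustrative; the function names are ours) computes \(Z_{G}(\boldsymbol{\lambda})\) subject to a set of pinned vertices, from which \([Z_{G,v}]_{i}\), \([Z_{G,u,v}]_{i,j}\) and the ratio \(R_{G,v}\) follow directly.

```python
from itertools import product

def pinned_partition(num_vertices, edges, A, lam, pin=None):
    """Z_G(lambda) restricted to assignments that agree with `pin` (a dict vertex -> spin).
    pin = {v: i} gives [Z_{G,v}(lambda)]_i; pin = {u: i, v: j} gives [Z_{G,u,v}(lambda)]_{i,j}."""
    pin = pin or {}
    total = 0
    for sigma in product((0, 1), repeat=num_vertices):
        if any(sigma[w] != s for w, s in pin.items()):
            continue
        weight = 1
        for (a, b) in edges:
            weight *= A[sigma[a]][sigma[b]]
        for w in range(num_vertices):
            if sigma[w] == 1:
                weight *= lam[w]
        total += weight
    return total

def ratio(num_vertices, edges, A, v):
    """R_{G,v} = [Z_{G,v}]_1 / [Z_{G,v}]_0 with all external fields equal to 1."""
    lam = [1] * num_vertices
    return (pinned_partition(num_vertices, edges, A, lam, {v: 1})
            / pinned_partition(num_vertices, edges, A, lam, {v: 0}))

# A single edge with beta = 2, gamma = 3 gives R_{G,v} = (1 + gamma)/(beta + 1) = 4/3.
print(ratio(2, [(0, 1)], [[2, 1], [1, 3]], 0))
```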
### #CSP and Holant Problems
The problem of computing the partition function of a spin system can be seen as an instance of #CSP problem with a single symmetric binary constraint function. In fact, we may identify the symmetric matrix \(A\) with the binary function \(\psi\) defined by \(\psi(i,j)=A_{ij}\). Then we can denote by #CSP(\(\{\psi\}\)) the problem of computing \(Z(G;A)\) given \(G\).
In Sections 4.3 and 4.4, we will utilize the connection between #CSP problems and Holant problems. A Holant instance is a graph \(G=(V,E)\) with a variable on each edge and a constraint on each vertex. The constraint on a vertex \(v\) is a function \(F_{v}:\{0,1\}^{J_{v}}\to\mathbb{C}\), where \(J_{v}\) is the set of edges incident to \(v\).
**Remark 3**.: Self-loops might bring in some ambiguity here. But in this paper, we don't consider self-loops in the context of Holant problems, as we're not going to need them.
Let \(\mathcal{F}\) be a class of constraint functions. A Holant problem \(\mathsf{Holant}(\mathcal{F})\) asks for computing the partition function
\[\sum_{\sigma\in\{0,1\}^{E}}\prod_{v\in V}F_{v}(\sigma|_{J_{v}})\]
on input \((G,(F_{v})_{v\in V})\), where each \(F_{v}\in\mathcal{F}\).
A particular kind of constraint function we will use in Sections 4.3 and 4.4 is the parity functions. For every positive integer \(d\), define \(\mathbf{Even}_{d},\mathbf{Odd}_{d}:\{0,1\}^{d}\to\{0,1\}\) by setting \(\mathbf{Even}_{d}(x_{1},\cdots,x_{d})=1\) if and only if \(x_{1}+\cdots+x_{d}\) is even, and \(\mathbf{Odd}_{d}(x_{1},\cdots,x_{d})=1\) if and only if \(x_{1}+\cdots+x_{d}\) is odd.
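As a quick illustration (the function names are ours, and the sketch is exponential in the number of edges), the Holant value with a parity constraint at every vertex can be evaluated by brute force over the edge variables; with \(\mathbf{Even}_{d}\) at every vertex it counts the edge subsets in which every vertex has even degree.

```python
from itertools import product

def holant_parity(num_vertices, edges, constraint):
    """Brute-force Holant value when vertex v carries Even_{deg(v)} or Odd_{deg(v)}.
    `edges` is a list of (u, v) pairs (no self-loops); constraint[v] is 'even' or 'odd'."""
    total = 0
    for assignment in product((0, 1), repeat=len(edges)):
        incident_sum = [0] * num_vertices
        for (u, v), x in zip(edges, assignment):
            incident_sum[u] += x
            incident_sum[v] += x
        if all((s % 2 == 0) == (constraint[v] == 'even')
               for v, s in enumerate(incident_sum)):
            total += 1
    return total

# On a triangle with Even at every vertex, only the empty edge set and the
# whole triangle satisfy all constraints, so the value is 2.
print(holant_parity(3, [(0, 1), (1, 2), (0, 2)], ['even'] * 3))
```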
## 3 #P-Hardness
Let's define the range of parameters
\[\Gamma=\{(\beta,\gamma)\in\mathbb{R}^{2}:(\beta>\gamma)\wedge(-2<\beta+\gamma <1)\wedge(\gamma<0)\}\setminus\{(1,-1)\},\]
which will appear many times in this section. Note that for \((\beta,\gamma)\in\Gamma\), \(\beta\gamma<(-2-\gamma)\gamma\leq 1\).
### Realizing Arbitrary Ratios
The starting point for proving the hardness result Theorem 1.3 is to show that the ratio \(R_{G,v}\) can take value in a dense subset of \(\mathbb{R}\).
**Definition 3.1**.: Given parameters \(\beta,\gamma\in\mathbb{R}\), we say that a real number \(r\) is realizable if there is a finite graph \(G\) and a vertex \(v\in V(G)\) such that \(\left[Z_{G,v}\right]_{0}\neq 0\) and \(R_{G,v}=r\).
**Lemma 3.2**.: _If \(r_{1},r_{2}\in\mathbb{R}\) are realizable under parameters \(\beta\) and \(\gamma\), then \(r_{1}r_{2}\) is also realizable._
Proof.: If \(R_{G_{1},v_{1}}=r_{1}\) and \(R_{G_{2},v_{2}}=r_{2}\), take \(G\) to be the "wedge sum" of \(G_{1}\) and \(G_{2}\), by first taking their disjoint union and then identifying \(v_{1}\) and \(v_{2}\) as a single vertex \(v\). Then
\[R_{G,v}=\frac{\left[Z_{G,v}\right]_{1}}{\left[Z_{G,v}\right]_{0}}=\frac{\left[Z_{G_{1},v_{1}}\right]_{1}\cdot\left[Z_{G_{2},v_{2}}\right]_{1}}{\left[Z_{G_{1},v_{1}}\right]_{0}\cdot\left[Z_{G_{2},v_{2}}\right]_{0}}=R_{G_{1},v_{1}}\cdot R_{G_{2},v_{2}}=r_{1}r_{2},\]
hence \(r_{1}r_{2}\) is realizable.
**Lemma 3.3**.: _If \(r\in\mathbb{R}\) is realizable and \(r\neq-\beta\), then \(\frac{1+\gamma r}{\beta+r}\) is also realizable._
Proof.: If \(R_{G,v}=r\), define a graph \(G^{\prime}\) with vertex set \(V(G)\cup\{u\}\) and edge set \(E(G)\cup\{\{u,v\}\}\), i.e. we attach a new edge to the vertex \(v\) in \(G\). Then
\[R_{G^{\prime},u}=\frac{\left[Z_{G^{\prime},u}\right]_{1}}{\left[Z_{G^{\prime},u}\right]_{0}}=\frac{\left[Z_{G^{\prime},u,v}\right]_{1,0}+\left[Z_{G^{\prime},u,v}\right]_{1,1}}{\left[Z_{G^{\prime},u,v}\right]_{0,0}+\left[Z_{G^{\prime},u,v}\right]_{0,1}}=\frac{\left[Z_{G,v}\right]_{0}+\gamma\cdot\left[Z_{G,v}\right]_{1}}{\beta\cdot\left[Z_{G,v}\right]_{0}+\left[Z_{G,v}\right]_{1}}=\frac{1+\gamma r}{\beta+r},\]
hence \(\frac{1+\gamma r}{\beta+r}\) is realizable.
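Both constructions are easy to check numerically on small examples. The sketch below (exact rational arithmetic; the helper name and the sample graph are ours) verifies the edge-attachment formula of Lemma 3.3 for one choice of \((\beta,\gamma)\); the wedge-sum identity of Lemma 3.2 can be checked with the same helper.

```python
from itertools import product
from fractions import Fraction

def pinned(num_vertices, edges, beta, gamma, pin):
    """Brute-force pinned partition function with A = [[beta, 1], [1, gamma]]."""
    A = [[beta, Fraction(1)], [Fraction(1), gamma]]
    total = Fraction(0)
    for sigma in product((0, 1), repeat=num_vertices):
        if any(sigma[w] != s for w, s in pin.items()):
            continue
        weight = Fraction(1)
        for (a, b) in edges:
            weight *= A[sigma[a]][sigma[b]]
        total += weight
    return total

beta, gamma = Fraction(1, 2), Fraction(-3, 2)

# G: a triangle with a self-loop at vertex 0; r = R_{G,0}.
edges = [(0, 1), (1, 2), (0, 2), (0, 0)]
r = pinned(3, edges, beta, gamma, {0: 1}) / pinned(3, edges, beta, gamma, {0: 0})

# G': attach a fresh vertex u = 3 to v = 0 by a single new edge, as in Lemma 3.3.
edges_prime = edges + [(3, 0)]
r_prime = pinned(4, edges_prime, beta, gamma, {3: 1}) / pinned(4, edges_prime, beta, gamma, {3: 0})
assert r_prime == (1 + gamma * r) / (beta + r)
```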
**Lemma 3.4**.: _Let \(\beta,\gamma\) be real numbers such that \((\beta,\gamma)\in\Gamma\). Then some real number in \((1,+\infty)\) is realizable._
Proof.: We divide the proof into the following cases:
Case 1: \(\beta+\gamma<0\) and \(\beta\neq 0\). Take \(V(G)=\{v\}\) and let \(E(G)\) consist of 2 self loops on \(v\). Then
\[R_{G,v}=\left(\frac{\gamma}{\beta}\right)^{2}=1+\frac{(\beta-\gamma)(-\beta- \gamma)}{\beta^{2}}>1.\]
Case 2: \(\beta=0\). Take \(V(G)=\{v_{1},v_{2}\}\) and \(E(G)=\{\{v_{1},v_{2}\},\{v_{2},v_{2}\}\}\). We have \(R_{G,v_{1}}=\frac{\beta+\gamma^{2}}{\beta^{2}+\gamma}=\gamma\). By applying Lemma 3.3, since \(\gamma\neq-\beta\), \((1+\gamma^{2})/(\beta+\gamma)\) is realizable. By applying Lemma 3.2, it follows that the real number \((1+\gamma^{2})^{2}/\gamma^{2}\) is realizable. Since \(-2<\gamma<0\), this quantity is at least 4.

Case 3: \(\beta+\gamma\geq 0\) and \(\gamma\neq-\beta^{2}\). Take \(V(G)=\{v_{1},v_{2},v_{3}\}\) and \(E(G)=\{\{v_{1},v_{2}\},\{v_{1},v_{3}\},\{v_{2},v_{2}\},\{v_{3},v_{3}\}\}\). Then
\[R_{G,v_{1}}=\left(\frac{\beta+\gamma^{2}}{\beta^{2}+\gamma}\right)^{2}=1+\frac {(\beta-\gamma)(1-\beta-\gamma)(\beta^{2}+\gamma^{2}+\beta+\gamma)}{(\beta^{2 }+\gamma)^{2}}>1.\]
Case 4: \(\beta+\gamma\geq 0\) and \(\gamma=-\beta^{2}\). Since \((\beta,\gamma)\neq(1,-1)\), it follows that \(-1<\gamma<0\) and \(0<\beta+\gamma<1\). Take \(V(G)=\{v_{1},v_{2},v_{3}\}\) and \(E(G)=\{\{v_{1},v_{2}\},\{v_{2},v_{3}\},\{v_{3},v_{3}\}\}\). We have
\[R_{G,v_{1}}=\frac{\beta^{2}+\gamma+\gamma(\beta+\gamma^{2})}{\beta(\beta^{2}+ \gamma)+\beta+\gamma^{2}}=\gamma.\]
By applying Lemma 3.3 since \(\gamma\neq-\beta\), \((1+\gamma^{2})/(\beta+\gamma)>1\) is realizable.
**Lemma 3.5**.: _Let \(\beta,\gamma\) be real numbers such that \((\beta,\gamma)\in\Gamma\). Then some real number in \((-1,0)\) is realizable._
Proof.: Notice that in this range \(-2<\beta+\gamma<2\beta\) and so \(\beta>-1\). Consider the following 2 cases:
Case 1: \(\gamma<-1\). Let \(G\) consists of a single edge \(\{v_{1},v_{2}\}\). We have
\[R_{G,v_{1}}=\frac{1+\gamma}{\beta+1}=-1+\frac{\beta+\gamma+2}{\beta+1}>-1,\]
and, since \(\gamma<-1\), \(R_{G,v_{1}}\) is also less than 0. So \(R_{G,v_{1}}\) gives a realizable ratio in \((-1,0)\).
Case 2: \(\gamma\geq-1\). From Lemma 3.4, and since we can take arbitrary powers due to Lemma 3.2, we know some real number \(r>-\frac{1}{\gamma}\) is realizable. Moreover, since \(\beta\gamma<1\), we have \(-\beta<-1/\gamma\), so \(r\neq-\beta\) and we can apply Lemma 3.3. Also \(\beta+r>\beta-1/\gamma>0\). Appying the lemma, we have
\[\frac{1+\gamma r}{\beta+r}=\gamma+\frac{1-\beta\gamma}{\beta+r}>\gamma\geq-1.\]
Since \(r>-\frac{1}{\gamma}\), the quantity \((1+\gamma r)/(\beta+r)\) is also less than 0. So \(\frac{1+\gamma r}{\beta+r}\) is a realizable ratio in \((-1,0)\).
**Proposition 3.6**.: Fix real parameters \(\beta,\gamma\) such that \((\beta,\gamma)\in\Gamma\). For any real numbers \(R\neq 0\) and \(\varepsilon>0\), some real number strictly between \(e^{-\varepsilon}R\) and \(e^{\varepsilon}R\) is realizable.
Proof.: We first assume that \(R>0\). Take a realizable ratio \(r_{0}\) greater than 1 (which exists by Lemma 3.4), raise to the \(k\)th power (applying Lemma 3.2) for some \(k\) that is sufficiently large that \(r_{0}^{k}>-\beta\), and then apply Lemma 3.3. This realizes a ratio
\[r_{1}=\frac{1+\gamma r_{0}^{k}}{\beta+r_{0}^{k}}=\gamma+\frac{1-\beta\gamma}{ \beta+r_{0}^{k}}.\]
Since \(\beta\gamma<1\), for a sufficiently large \(k\), the ratio \(r_{1}\) lies in the interval \((\gamma,e^{-\varepsilon}\gamma)\). Let
\[r_{2}=\frac{1+\gamma r_{0}^{k+1}}{\beta+r_{0}^{k+1}},\]
which satisfies \(\gamma<r_{2}<r_{1}<\gamma e^{-\varepsilon}\). Thus, \(r_{2}/r_{1}\in(1,e^{\varepsilon})\).
By Lemma 3.5, a number \(r\in(-1,0)\) can be realized. By Lemma 3.2, the number \(rr_{0}^{j}\) can be realized for any positive integer \(j\). We will take \(j\) large enough that \(|rr_{0}^{j}|>1/|r_{1}|>1/|r_{2}|\). By Lemma 3.2, the quantities \(R_{1}=r_{1}rr_{0}^{j}\) and \(R_{2}=r_{2}rr_{0}^{j}\) can be realized. These are in the range \((1,+\infty)\) and have \(R_{2}/R_{1}=r_{2}/r_{1}\in(1,e^{\varepsilon})\). Moreover, by Lemma 3.2, the quantity \(R_{3}=r^{2}\in(0,1)\) can be realized. To finish we will show that the multiplicative semigroup generated by \(\{R_{1},R_{2},R_{3}\}\) intersects \((e^{-\varepsilon}R,e^{\varepsilon}R)\). To see this, consider the following system of inequalities:
\[\begin{cases}R_{3}^{m}R_{1}^{n}\leq e^{-\varepsilon}R\\ R_{3}^{m}R_{2}^{n}\geq e^{\varepsilon}R.\end{cases} \tag{3.1}\]
If some positive integers \(m,n\) satisfy the above system of inequalities, then due to the fact that \(R_{2}/R_{1}\in(1,e^{\varepsilon})\), at least one term of the geometric progression
\[R_{3}^{m}R_{1}^{n},\quad R_{3}^{m}R_{1}^{n-1}R_{2},\quad\cdots,\quad R_{3}^{m} R_{1}R_{2}^{n-1},\quad R_{3}^{m}R_{2}^{n}\]
falls in the interval \((e^{-\varepsilon}R,e^{\varepsilon}R)\). What's more, as a product of realizable numbers, each term is realizable under the parameters \((\beta,\gamma)\). So it only remains to show that the system (3.1) has a positive integer solution.
In order to ensure a solution for \(n\), the requirements on \(m\) are
\[R_{3}^{m}R_{1}<e^{-\varepsilon}R\]
(this ensures a positive solution for \(n\)) and
\[\log_{R_{1}}\left(\frac{e^{-\varepsilon}R}{R_{3}^{m}}\right)-\log_{R_{2}} \left(\frac{e^{\varepsilon}R}{R_{3}^{m}}\right)\geq 1\]
(this ensures an integer solution for \(n\)). Using \(R_{2}>R_{1}\), the latter simplifies to
\[m\log\frac{1}{R_{3}}\geq\frac{\log R_{1}\cdot\log R_{2}+\varepsilon(\log R_{1 }+\log R_{2})}{\log R_{2}-\log R_{1}}-\log R. \tag{3.2}\]
Since \(R_{3}<1\), a sufficiently large integer \(m\) satisfies both requirements. This concludes the proof in the case \(R>0\).
In the case \(R<0\), pick any negative realizable ratio \(r\), as in Lemma 3.5. Since \(R/r>0\), we already know some real number in \((e^{-\varepsilon}R/r,e^{\varepsilon}R/r)\) is realizable. Multiplying it by \(r\) gives a realizable ratio in \((e^{\varepsilon}R,e^{-\varepsilon}R)\).
**Remark 4**.: In Proposition 3.6, we showed that the set of realizable ratios is dense in \(\mathbb{R}\). However, we didn't control the size of the graph used in the approximation. It's worth noting that the dependency of the size on the accuracy parameter \(\varepsilon\) is at least inverse linear: since \(R_{2}/R_{1}=r_{2}/r_{1}=1+O(\varepsilon)\), i.e. \(\log R_{2}-\log R_{1}=O(\varepsilon)\), by requirement (3.2) the integer \(m\) must be \(\Omega(\varepsilon^{-1})\). This turns out to be insufficient on its own for proving #P-hardness. In the following section, we will strengthen the dependency on \(\varepsilon\) to polylogarithmic. In other words, we will approximately realize any ratio \(R\) with exponential accuracy.
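The semigroup argument above is constructive, and the following sketch (floating-point, illustrative only; the function name and the sample numbers are ours) carries it out: it chooses \(m\) according to requirement (3.2), then walks the geometric progression until a term falls in the target window. In line with Remark 4, the chosen \(m\) grows like \(\varepsilon^{-1}\) as the gap \(\log(R_{2}/R_{1})\) shrinks with \(\varepsilon\).

```python
import math

def ratio_in_window(R1, R2, R3, R, eps):
    """Assumes 1 < R1 < R2 < R1 * e^eps, 0 < R3 < 1 and R > 0.  Returns (m, i, value)
    with value = R3^m * R1^(n - i) * R2^i lying in (R * e^-eps, R * e^eps),
    following the geometric-progression argument of Proposition 3.6."""
    a, b = math.log(R1), math.log(R2)
    target = math.log(R)
    # Requirement (3.2) on m, together with R3^m * R1 < e^-eps * R.
    rhs = (a * b + eps * (a + b)) / (b - a) - target
    m = 1
    while m * math.log(1 / R3) < rhs or m * math.log(R3) + a >= target - eps:
        m += 1
    # Largest n with R3^m * R1^n <= e^-eps * R.
    n = math.floor((target - eps - m * math.log(R3)) / a)
    for i in range(n + 1):
        value = (R3 ** m) * (R1 ** (n - i)) * (R2 ** i)
        if math.exp(-eps) * R < value < math.exp(eps) * R:
            return m, i, value
    raise AssertionError("unreachable when the preconditions hold")

print(ratio_in_window(R1=2.0, R2=2.2, R3=0.5, R=1e6, eps=0.2))
```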
### Exponential Accuracy
Actually, in addition to realizing with exponential accuracy, we must also _efficiently compute_ the graph \(G\) that realizes a given ratio. This means it's necessary to quantify the computational expense. Assume \(\beta\) and \(\gamma\) are fixed real numbers, and that the input parameters \(R\) and \(\varepsilon\) are both rational numbers written in standard fraction forms. By "polynomial-time algorithm" we mean the running time is polynomial in the number of bits in the representations of the input parameters. In particular, since \(\varepsilon\) is representable using \(\log(\varepsilon^{-1})\) bits, the running time is polynomial in \(\log(\varepsilon^{-1})\).
**Theorem 3.7**.: _Fix rational numbers \(\beta,\gamma\) such that \((\beta,\gamma)\in\Gamma\). There is a polynomial-time algorithm that, given as input rational numbers \(R>0\) and \(\varepsilon>0\), outputs a graph \(G\) and a vertex \(v\in V(G)\) such that \(\frac{[Z_{G,v}]_{1}}{[Z_{G,v}]_{0}}\in(e^{-\varepsilon}R,e^{\varepsilon}R)\)._
Proof.: The proof is somewhat lengthy, so we divide it into several parts:
**Part I: Preparations.** Our graph \(G\) will have a path as its backbone, with additional gadgets attached to its nodes (see Figure 2):
Formally, let \((v_{0},v_{1},\cdots,v_{n})\) be a path, and let \(G_{0},G_{1},\cdots,G_{n}\) be graphs realizing ratios \(x_{0},x_{1},\cdots,x_{n}\).
We form the graph \(G\) by attaching \(G_{0},\cdots,G_{n}\) to the corresponding nodes on the path. It follows easily from Lemma 3.3 and Lemma 3.2 that if we denote the Mobius transformation \(r\mapsto\frac{1+\gamma r}{\beta+r}\) by \(f\), the iteration
\[y_{n}=x_{n},\text{ and }y_{k}=x_{k}\cdot f(y_{k+1}),\text{ for }0\leq k\leq n-1 \tag{3.3}\]
gives \(y_{0}=\frac{[Z_{G,v_{0}}]_{1}}{[Z_{G,v_{0}}]_{0}}\). So it suffices to compute the gadget graphs \(G_{0},\cdots,G_{n}\) such that the ratios \((x_{0},\cdots,x_{n})\) they realize produce a \(y_{0}\in(e^{-\varepsilon}R,e^{\varepsilon}R)\).
Before describing the algorithm, we need to prepare four "landmarks" \(a,b,c,d\) on the real line, with \(\gamma<a<b<0\leq|\beta|<c<d\). We require that \(b=f(c)\), \(a=f(d)\), and \(\frac{b}{2d}<f^{\prime}(c)\). The existence of such rational numbers \(a,b,c,d\) can be shown easily. For example, we can take \(d=2c\) and let \(c\) be sufficiently large. Observe that as \(x\to+\infty\), the function value \(f(x)\) approaches \(\gamma\) from the right, and \(f^{\prime}(x)\) tends to \(0\) faster than \(1/x\). So eventually \(f^{\prime}(c)\) gets closer to \(0\) than \(b/2d\).
We also prepare a gadget graph \(H_{1}\) realizing a ratio \(h_{1}\in\left(\sqrt{b/a},1\right)\), and a graph \(H_{2}\) realizing a ratio \(h_{2}\in(-\infty,d/b)\). Their existence follows from Proposition 3.6.
Figure 2: Structure of the Graph \(G\)
**Part II: The Algorithm.** Until now, we have been describing information that doesn't depend on the input \((R,\varepsilon)\), and is thus hard-wired into our algorithm. Next we introduce the algorithm:
```
Input : \(R,\varepsilon\in\mathbb{Q}^{>0}\)
Output : A graph \(G\) and a vertex \(v_{0}\) of \(G\) such that \(\left[Z_{G,v_{0}}\right]_{1}/\left[Z_{G,v_{0}}\right]_{0}\in(e^{-\varepsilon}R,e^{\varepsilon}R)\).
\(k\gets 0\), \(R_{1}^{(0)}\gets Re^{-\varepsilon}\), \(R_{2}^{(0)}\gets Re^{\varepsilon}\)
while \(R_{2}^{(k)}/R_{1}^{(k)}<\sqrt{a/b}\) do
    Compute a graph \(G_{k}\) realizing a ratio \(x_{k}\in\left(-R_{2}^{(k)}/\sqrt{ab},R_{2}^{(k)}/a\right)\)   // Using \(H_{1},H_{2}\)
    \(R_{1}^{(k+1)}\gets f^{-1}(R_{1}^{(k)}/x_{k})\)
    \(R_{2}^{(k+1)}\gets f^{-1}(R_{2}^{(k)}/x_{k})\)
    \(k\gets k+1\)
Compute a graph \(G_{k}\) realizing a ratio \(x_{k}\in(R_{1}^{(k)},R_{2}^{(k)})\)   // Using \(H_{1},H_{2}\)
\(n\gets k\)
Form a graph \(G\) from \(G_{0},\cdots,G_{n}\) as in Figure 2
```
**Algorithm 1** Realize Arbitrary Ratio
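The following Python sketch mirrors only the control flow of Algorithm 1: the **Compute** steps are replaced by a hypothetical realize_in_interval stub that returns a midpoint instead of building a gadget graph, and the parameters \((\beta,\gamma)\) and the landmarks \(c,d\) below are illustrative values chosen to satisfy the landmark requirements numerically, without any claim that they lie in the range \(\Gamma\).

```python
import math

# Illustrative parameters with gamma < 0; these particular values are
# placeholders and are not claimed to lie in the range Gamma of the paper.
beta, gamma = 2.0, -0.5

def f(r):      # the Mobius transformation r -> (1 + gamma*r) / (beta + r)
    return (1 + gamma * r) / (beta + r)

def f_inv(y):  # its inverse
    return (1 - beta * y) / (y - gamma)

# Landmarks gamma < a < b < 0 <= |beta| < c < d with b = f(c), a = f(d) and
# b/(2d) < f'(c); the choice c = 50, d = 100 satisfies this for the values above.
c, d = 50.0, 100.0
b, a = f(c), f(d)

def realize_in_interval(lo, hi):
    """Stub for the 'Compute' step: the paper builds a gadget graph realizing
    a ratio in (lo, hi); here we simply return the midpoint of the interval."""
    return (lo + hi) / 2

def algorithm1(R, eps):
    """Mirror of Algorithm 1's control flow, returning the final ratio y_0."""
    R1, R2 = R * math.exp(-eps), R * math.exp(eps)
    xs = []
    while R2 / R1 < math.sqrt(a / b):
        x = realize_in_interval(-R2 / math.sqrt(a * b), R2 / a)
        xs.append(x)
        R1, R2 = f_inv(R1 / x), f_inv(R2 / x)
    xs.append(realize_in_interval(R1, R2))
    y = xs[-1]                       # iteration (3.3): y_n = x_n,
    for x in reversed(xs[:-1]):      #                  y_k = x_k * f(y_{k+1})
        y = x * f(y)
    return y

print(algorithm1(R=7.0, eps=1e-3))   # lands in (7*e^-0.001, 7*e^0.001)
```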
**Part III: Correctness.** To prove the correctness of Algorithm 1, we first prove two claims about the numbers \(x_{0},\cdots,x_{n}\) and \(R_{1}^{(0)},\cdots,R_{1}^{(n)},R_{2}^{(0)},\cdots,R_{2}^{(n)}\) computed in the course of the algorithm.
**Claim 1**.: \(0<R_{1}^{(0)}<R_{2}^{(0)}\) and for \(k\in\{1,2,\cdots,n\}\), \(c<R_{1}^{(k)}<R_{2}^{(k)}<d\).
Proof of Claim 1.: We perform induction on \(k\). The base case \(k=0\) is clear from the algorithm. Now assume the claim holds for \(k-1<n\). From the induction hypothesis and the choice of the ratio \(x_{k-1}\), we have \(x_{k-1}<R_{2}^{(k-1)}/a<0\). Since \(f^{-1}\) is decreasing on \((\gamma,+\infty)\), we have
\[R_{2}^{(k)}=f^{-1}\left(\frac{R_{2}^{(k-1)}}{x_{k-1}}\right)<f^{-1}\left( \frac{R_{2}^{(k-1)}}{R_{2}^{(k-1)}/a}\right)=f^{-1}(a)=d,\]
and by the while loop condition
\[R_{1}^{(k)}=f^{-1}\left(\frac{R_{1}^{(k-1)}}{x_{k-1}}\right)>f^{-1}\left( \frac{R_{2}^{(k-1)}/\sqrt{\frac{a}{b}}}{x_{k-1}}\right)>f^{-1}\left(\frac{R_{ 2}^{(k-1)}/\sqrt{\frac{a}{b}}}{-R_{2}^{(k-1)}/\sqrt{ab}}\right)=f^{-1}(b)=c.\]
From the induction hypothesis we also clearly have
\[R_{2}^{(k)}=f^{-1}\left(\frac{R_{2}^{(k-1)}}{x_{k-1}}\right)>f^{-1}\left( \frac{R_{1}^{(k-1)}}{x_{k-1}}\right)=R_{1}^{(k)}.\qed\]
**Claim 2**.: Running the iteration (3.3) on the ratios \((x_{0},\cdots,x_{n})\) produces a \(y_{0}\in(e^{-\varepsilon}R,e^{\varepsilon}R)\).
Proof of Claim 2.: Let the sequence \(\{y_{k}\}_{0\leq k\leq n}\) be as in the iteration (3.3). We inductively show that \(R_{1}^{(k)}<y_{k}<R_{2}^{(k)}\) holds for all \(0\leq k\leq n\). For \(k=n\), this is clear from the last **Compute** operation. Now assume \(k\in\{0,1,\cdots,n-1\}\) and the induction hypothesis holds for \(k+1\). Combining with Claim 1, we know that \(c<R_{1}^{(k+1)}<y_{k+1}<R_{2}^{(k+1)}<d\). Recall that \(c>|\beta|\), and this ensures that \(f\) is decreasing on \([c,+\infty)\). What's more, we know \(x_{k}<0\) from the proof of Claim 1. So
\[x_{k}\cdot f\left(R_{1}^{(k+1)}\right)<x_{k}\cdot f(y_{k+1})<x_{k}\cdot f\left( R_{2}^{(k+1)}\right),\]
i.e. \(R_{1}^{(k)}<y_{k}<R_{2}^{(k)}\). Now, taking \(k=0\) in the inductive hypothesis yields the claim.
The correctness of Algorithm 1 then follows directly from Claim 2 (see **Part I** of this proof).
**Part IV: Efficiency.** It remains to show the efficiency of our algorithm. The next claim serves to bound the number of **Compute** operations executed:
**Claim 3**.: For each \(k\in\{1,2,\cdots,n\}\), we have \(R_{2}^{(k)}/R_{1}^{(k)}>\left(R_{2}^{(k-1)}/R_{1}^{(k-1)}\right)^{2}\).
Proof of Claim 3.: We know from Algorithm 1 that \(R_{i}^{(k-1)}=x_{k-1}\cdot f\left(R_{i}^{(k)}\right)\), for \(i\in\{1,2\}\). So
\[\begin{split}\ln R_{2}^{(k-1)}-\ln R_{1}^{(k-1)}&= \ln\left(x_{k-1}\cdot f(R_{2}^{(k)})\right)-\ln\left(x_{k-1}\cdot f(R_{1}^{(k)})\right)\\ &=\ln\left(-f(R_{2}^{(k)})\right)-\ln\left(-f(R_{1}^{(k)})\right). \end{split} \tag{3.4}\]
On the interval \([c,d]\), the function \(x\mapsto\ln(-f(x))\) has derivative
\[\frac{f^{\prime}(x)}{f(x)}=\frac{\beta-1/\gamma}{(x+\beta)(x+1/\gamma)}\leq \frac{\beta-1/\gamma}{(c+\beta)(c+1/\gamma)}=\frac{f^{\prime}(c)}{f(c)},\]
where the inequality uses \(c+1/\gamma>0\) (this is because the definition of \(f\) ensures that \(c+1/\gamma=(\beta+c)f(c)\gamma^{-1}=(\beta+c)b\gamma^{-1}>0\)).
Consequently, since \(c<R_{1}^{(k)}<R_{2}^{(k)}<d\) by Claim 1, we have
\[\ln\left(-f(R_{2}^{(k)})\right)-\ln\left(-f(R_{1}^{(k)})\right)\leq\frac{f^{ \prime}(c)}{f(c)}\left(R_{2}^{(k)}-R_{1}^{(k)}\right). \tag{3.5}\]
On the other hand, the derivative of \(x\mapsto\ln x\) is at least \(1/d\) on the interval \([c,d]\). So by Claim 1 again
\[\ln R_{2}^{(k)}-\ln R_{1}^{(k)}\geq\frac{1}{d}\left(R_{2}^{(k)}-R_{1}^{(k)} \right). \tag{3.6}\]
Combining (3.4), (3.5) and (3.6), we have
\[\ln R_{2}^{(k-1)}-\ln R_{1}^{(k-1)}\leq\frac{f^{\prime}(c)}{f(c)}\cdot d\cdot \left(\ln R_{2}^{(k)}-\ln R_{1}^{(k)}\right)<\frac{1}{2}\left(\ln R_{2}^{(k)} -\ln R_{1}^{(k)}\right),\]
where the final inequality uses the crucial property \(b/(2d)<f^{\prime}(c)\), i.e. \(\frac{f^{\prime}(c)}{f(c)}\cdot d<\frac{1}{2}\).
From the while loop condition in Algorithm 1 we know that \(R_{2}^{(n-1)}/R_{1}^{(n-1)}<\sqrt{a/b}\), while from the initialization of variables, \(R_{2}^{(0)}/R_{1}^{(0)}=e^{2\varepsilon}\). So it follows from Claim 3 that \(\sqrt{a/b}>R_{2}^{(n-1)}/R_{1}^{(n-1)}>(e^{2\varepsilon})^{2^{n-2}}\), so \(n=O(\log(1/\varepsilon))\).
The operation **Compute** is executed \((n+1)\) times in Algorithm 1. Now that we have shown the number of **Compute** executions is polynomial in the input size, it suffices to show that each **Compute** operation takes polynomial time.
The very first execution of **Compute** is a little bit different, where we need to realize a ratio in \((-e^{\varepsilon}R/\sqrt{ab},e^{\varepsilon}R/a)\). Similar to the methods in Proposition 3.6, let \(s\) be a sufficiently large odd number such that \(h_{2}^{s}<-e^{\varepsilon}R/\sqrt{ab}\), and then there must be an integer \(t>0\) such that \(h_{2}^{s}h_{1}^{t}\) falls into the interval, since \(h_{1}\in\left(\sqrt{b/a},1\right)\). Both \(s\) and \(t\) are \(O(|\log R|)\) and take polynomial time to compute. (The rational number \(R\) is represented with at least \(\Omega(|\log R|)\) bits.)
For \(1\leq k<n\), we need to realize a ratio in \(\left(-R_{2}^{(k)}/\sqrt{ab},R_{2}^{(k)}/a\right)\) in the execution of **Compute** when the while loop is entered with value \(k\). By Claim 1 the interval is contained in \((-d/\sqrt{ab},0)\). So the same method above applies, this time with a constant running time.
Finally, we need to realize a ratio in \((R_{1}^{(n)},R_{2}^{(n)})\). By Claim 1 we have \(0<c<R_{1}^{(n)}<R_{2}^{(n)}<d\), and the while loop condition in Algorithm 1 ensures that \(R_{2}^{(n)}/R_{1}^{(n)}\geq\sqrt{a/b}\). Let \(s\) be a sufficiently large even number such that \(h_{2}^{s}>d\), and then there must be an integer \(t>0\) such that \(h_{2}^{s}h_{1}^{t}\) falls into the interval \(\left(R_{1}^{(n)},R_{2}^{(n)}\right)\), since \(h_{1}\in\left(\sqrt{b/a},1\right)\). Both \(s\) and \(t\) take constant time to compute.
For future reference, we want to be able to not only realize some ratio in a given interval, but also calculate exactly the ratio we realized. To avoid making the preceding theorem overly cumbersome, we state this as a separate proposition below. It is identical to Theorem 3.7 except for the addition of the final sentence, and the fact that we also allow \(R<0\).
**Proposition 3.8**.: Fix rational numbers \(\beta,\gamma\) such that \((\beta,\gamma)\in\Gamma\). There is a polynomial-time algorithm that, given as input rational numbers \(R\neq 0\) and \(\varepsilon>0\), outputs a graph \(G\) and a vertex \(v\in V(G)\) such that \(\frac{[Z_{G,v}]_{1}}{[Z_{G,v}]_{0}}\) is strictly between \(e^{-\varepsilon}R\) and \(e^{\varepsilon}R\). The algorithm also outputs \([Z_{G,v}]_{1}\) and \([Z_{G,v}]_{0}\).
Proof.: We keep the notation in the proof of Theorem 3.7 and give an additional procedure to calculate \([Z_{G,v_{0}}]\) on top of Algorithm 1.
First, we calculate the vector \([Z_{G_{k},v_{k}}]\) for each gadget graph \(G_{k}\) attached to the path. Since each \(G_{k}\) is a wedge sum of constant-sized gadget graphs, both \([Z_{G_{k},v_{k}}]_{0}\) and \([Z_{G_{k},v_{k}}]_{1}\) can be efficiently computed by multiplication (see Lemma 3.2).
We then compute \([Z_{G,v_{0}}]\) by the following recursive procedure:
\[B_{n}=[Z_{G_{n},v_{n}}]\,,\text{ and }B_{k}=[Z_{G_{k},v_{k}}]\circ\left( \begin{bmatrix}\beta&1\\ 1&\gamma\end{bmatrix}B_{k+1}\right),\text{ for }0\leq k\leq n-1,\]
where \(\circ\) is the entry-wise product of two 2-by-1 vectors. It's easy to see that the result of the recursion, the vector \(B_{0}\), is exactly \([Z_{G,v_{0}}]\).
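A minimal numerical sketch of this recursion, assuming the gadget signature vectors \([Z_{G_{k},v_{k}}]\) are already available as 2-vectors (the edge parameters and the vectors below are illustrative):

```python
import numpy as np

# Hypothetical edge-interaction parameters; illustrative values only.
beta, gamma = 2.0, -0.5
M = np.array([[beta, 1.0], [1.0, gamma]])

def combine_along_path(gadget_vectors):
    """Given the 2-vectors [Z_{G_k, v_k}] of gadgets G_0..G_n hanging off the
    path v_0 - v_1 - ... - v_n, compute [Z_{G, v_0}] via the recursion
    B_n = [Z_{G_n, v_n}],  B_k = [Z_{G_k, v_k}] o (M B_{k+1})."""
    B = np.asarray(gadget_vectors[-1], dtype=float)
    for vec in reversed(gadget_vectors[:-1]):
        B = np.asarray(vec, dtype=float) * (M @ B)   # o is the entrywise product
    return B

# Example with three gadget signature vectors (assumed known).
B0 = combine_along_path([[1.0, 0.5], [2.0, -0.3], [1.0, 1.0]])
print(B0, B0[1] / B0[0])   # [Z_{G, v_0}] and the realized ratio
```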
The case \(R<0\) is easy to cope with by attaching to \(v_{0}\) a gadget from Lemma 3.5.
**Remark 5**.: The results and proofs in this section are directly analogous to Proposition 15 of [1]. Most importantly, our graph \(G\) has the same path-iteration structure used in [1]. The main difference between our proof and the one in [1] is that, having no explicit "contraction maps" (see their Lemma 28) to rely on, our algorithm instead pivots on the landmarks \(a,b,c,d\) and especially on the property \(b/2d<f^{\prime}(c)\), which helps achieve a similar contraction effect (see Claim 3 in the proof of our Theorem 3.7).
### Simulating Ising Models
In order to present the reduction for proving Theorem 1.3, we need to be able to approximately realize a ferromagnetic Ising edge interaction using our interaction matrix \(\begin{bmatrix}\beta&1\\ 1&\gamma\end{bmatrix}\). This is formulated in the next lemma:
**Lemma 3.9**.: _Fix rational numbers \(\beta,\gamma\) such that \((\beta,\gamma)\in\Gamma\). There is a polynomial-time algorithm that, given as input rational numbers \(M^{*}>1\) and \(\varepsilon>0\), outputs a graph \(G\) and two vertices \(u,v\in V(G)\) such that_
\[[Z_{G,u,v}]=N\begin{bmatrix}M_{0}&1\\ 1&M_{1}\end{bmatrix},\]
_for some \(N>0\) and \(M_{0},M_{1}>M^{*}\) such that \(M_{1}/M_{0}\in(1,e^{\varepsilon})\). The algorithm also outputs the exact matrix \([Z_{G,u,v}]\) it realized._
Proof.: For technical reasons, we first assume \(\beta+\gamma\neq 0\). Let \(P\) be a length-2 path with endpoints \(u,v\). Formally, let its vertex set be \(\{u,w,v\}\) and its edge set be \(\{\{u,w\},\{w,v\}\}\). It's easy to calculate
\[[Z_{P,u,v}]=\begin{bmatrix}\beta^{2}+1&\beta+\gamma\\ \beta+\gamma&\gamma^{2}+1\end{bmatrix}.\]
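As an informal sanity check (not part of the proof), this 2-by-2 signature can be reproduced by brute-force enumeration over the middle vertex; the parameter values below are illustrative.

```python
from itertools import product

# Illustrative parameters; any rational beta, gamma would do here.
beta, gamma = 2.0, -0.5
A = [[beta, 1.0], [1.0, gamma]]   # the edge interaction matrix

def Z_P_uv():
    """Brute-force the 2x2 signature [Z_{P,u,v}] of the length-2 path u - w - v."""
    Z = [[0.0, 0.0], [0.0, 0.0]]
    for su, sw, sv in product((0, 1), repeat=3):
        Z[su][sv] += A[su][sw] * A[sw][sv]
    return Z

print(Z_P_uv())
# Expected: [[beta^2 + 1, beta + gamma], [beta + gamma, gamma^2 + 1]]
```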
Now let \(G\) be a parallel connection of \(2k\) paths like \(P\), with gadget graphs \(G_{1},G_{2}\) attached to the two endpoints.
The algorithm consists of the following steps:
* Compute the smallest integer \(k\) such that \(M\triangleq\left(\frac{(\beta^{2}+1)(\gamma^{2}+1)}{(\beta+\gamma)^{2}}\right) ^{k}>e^{\varepsilon}M^{*}\). Note that \[\frac{(\beta^{2}+1)(\gamma^{2}+1)}{(\beta+\gamma)^{2}}=1+\frac{(1-\beta\gamma )^{2}}{(\beta+\gamma)^{2}}>1.\]
* Compute a graph \(G_{1}\) that realizes a ratio \(R\) with \[\left(\frac{\beta^{2}+1}{\gamma^{2}+1}\right)^{k}<R<e^{\varepsilon/2}\left( \frac{\beta^{2}+1}{\gamma^{2}+1}\right)^{k},\] using the procedure in Theorem 3.7. Attach \(G_{1}\) to the vertex \(u\), and attach an isomorphic copy \(G_{2}\) to the vertex \(v\).
Since \(k=O(\log M^{*})\), the first step clearly runs in polynomial time. The guarantee of Theorem 3.7 tells us that the second step runs in \(\operatorname{poly}(k,\log\frac{1}{\varepsilon})\) time, which is again polynomial in the size of our inputs since \(k=O(\log M^{*})\). So we have verified the efficiency of the algorithm.
As to the correctness, it suffices to observe that
\[[Z_{G,u,v}]_{0,0}=(\beta^{2}+1)^{2k}\Big{(}\left[Z_{G_{1},u}\right]_{0}\Big{)}^{2}, \tag{3.7}\] \[[Z_{G,u,v}]_{1,0}=[Z_{G,u,v}]_{0,1}=(\beta+\gamma)^{2k}\Big{(}\left[Z_{G_{1},u}\right]_{0}\Big{)}^{2}R\triangleq N, \tag{3.8}\] \[[Z_{G,u,v}]_{1,1}=(\gamma^{2}+1)^{2k}\Big{(}\left[Z_{G_{1},u}\right]_{0}\Big{)}^{2}R^{2}. \tag{3.9}\]
It follows that
* \(M_{0}\triangleq\frac{[Z_{G,u,v}]_{0,0}}{N}=\frac{(\beta^{2}+1)^{2k}}{(\beta+\gamma )^{2k}R}>\frac{(\beta^{2}+1)^{k}(\gamma^{2}+1)^{k}}{e^{\varepsilon/2}(\beta+ \gamma)^{2k}}=Me^{-\varepsilon/2}>M^{*}\), and
* \(\frac{M_{1}}{M_{0}}\triangleq\frac{[Z_{G,u,v}]_{1,1}}{[Z_{G,u,v}]_{0,0}}=\frac{ (\gamma^{2}+1)^{2k}R^{2}}{(\beta^{2}+1)^{2k}}\in(1,e^{\varepsilon})\).
Finally, given equations (3.7), (3.8) and (3.9), and Proposition 3.8, we can easily compute the 2-by-2 matrix \([Z_{G,u,v}]\) exactly.
This concludes the proof of the lemma in the case \(\beta+\gamma\neq 0\). The case \(\beta+\gamma=0\) can be solved with a small tweak of the parameters. In fact, we can perturb the parameters using gadgets given by Proposition 3.6:
For each edge with an interaction matrix \(\begin{bmatrix}\beta&1\\ 1&\gamma\end{bmatrix}\), by attaching a gadget graph realizing a ratio \(r\) to each of its endpoints, we can turn the interaction matrix into \(\begin{bmatrix}\beta r^{-1}&1\\ 1&\gamma r\end{bmatrix}\), up to a normalization factor. For any pair \((\beta,\gamma)\) with \(\beta+\gamma=0\) and in the range \(\Gamma\), one can always use Proposition 3.6 to prepare a gadget graph that realizes a ratio \(r\in(1,1+\frac{1}{\beta})\) so that the perturbed parameter pair \((\beta r^{-1},\gamma r)\) still lies in the range \(\Gamma\) but is no longer on the line \(\{x+y=0\}\). So we have reduced the case \(\beta+\gamma=0\) to the case \(\beta+\gamma\neq 0\), which is already solved.
### Proof of Theorem 1.3
Having the crucial Proposition 3.8 and Lemma 3.9 in place, we are finally ready to prove Theorem 1.3. The proof follows the approach of [1], reducing from the following problem:
**Name** #Minimum Cardinality \((s,t)\)-Cut.
**Instance** A graph \(G=(V,E)\) and distinguished vertices \(s,t\in V\).
**Output** \(|\{S\subseteq E:S\) is a minimum cardinality \((s,t)\)-cut in \(G\}|\).
Proof of Theorem 1.3.: If \(\beta=\gamma\), then from \(\beta+\gamma>-2\) and \(\min\{\beta,\gamma\}<0\) it follows that \(\beta=\gamma\in(-1,0)\). The theorem then follows from Proposition 1.1. So we may assume that \(\beta\neq\gamma\). By symmetry between \(\beta\) and \(\gamma\), it's then without loss of generality to assume \(\gamma<\beta\). This places us in the range \(\Gamma\), and hence in particular, Proposition 3.8 and Lemma 3.9 apply.
We give a Turing reduction from #Minimum Cardinality \((s,t)\)-Cut, which was shown to be #P-hard by [11], to the problem of determining the sign of the partition function \(Z_{G}\).
Let \((G,s,t)\) be an instance of #Minimum Cardinality \((s,t)\)-Cut. Assume without loss of generality that \(G\) is connected. Let \(n=|V(G)|\) and \(m=|E(G)|\). Let \(k\) be the size of a minimum cardinality \((s,t)\)-cut in \(G\), and let \(C\) be the number of size-\(k\)\((s,t)\)-cuts, both of which are unknown. In order to compute \(C\), we will create a sequence of graphs based on \(G\), and feed them into the oracle that computes the sign of the partition function.
First, we run the procedure in Lemma 3.9, on input \(M^{*}=2^{5m}\) and \(\varepsilon=2^{-4m}\). This gives us a gadget graph \(H\), two distinguished terminals among its vertices, and rational numbers \(N,M_{0},M_{1}>0\), such that
1. \(2^{5m}<M_{0}<M_{1}<e^{2^{-4m}}M_{0}\), and
2. The graph \(H\) realizes an interaction matrix \(N\begin{bmatrix}M_{0}&1\\ 1&M_{1}\end{bmatrix}\) between its two terminals.
Create a graph \(G^{\prime}\) by replacing every edge \(\{u,v\}\in E(G)\) with a copy of the gadget graph \(H\). We claim that the 2-by-2 matrix \(\left[Z_{G^{\prime},s,t}\right]\) contains very accurate information about \(C\). For example, assume that in a spin configuration of \(G\), the source \(s\) is fixed to have spin 0 and the sink \(t\) to have spin 1. Let \(\Omega=\{\sigma\in\{0,1\}^{V(G)}:\sigma(s)=0,\sigma(t)=1\}\) be all possible spin configurations conditional on the spins of \(s\) and \(t\). Then a minimum-cardinality \((s,t)\)-cut is equivalent to a configuration \(\omega\in\Omega\) that minimizes the number of edges with differing spins on the endpoints. The set of configurations corresponding to minimum-cardinality \((s,t)\)-cuts is denoted by \(\Omega_{0}\); it has exactly \(C\) elements. We then have
\[\frac{\left[Z_{G^{\prime},s,t}\right]_{1,0}}{N^{m}} =\frac{1}{N^{m}}\sum_{\sigma\in\Omega}\prod_{\{u,v\}\in E}A_{ \sigma(u),\sigma(v)} \left(\text{where }A=N\begin{bmatrix}M_{0}&1\\ 1&M_{1}\end{bmatrix}\right)\] \[=\sum_{\sigma\in\Omega_{0}}\prod_{\{u,v\}\in E}\frac{A_{\sigma(u ),\sigma(v)}}{N}+\sum_{\sigma\in\Omega\setminus\Omega_{0}}\prod_{\{u,v\}\in E }\frac{A_{\sigma(u),\sigma(v)}}{N}\] \[\leq CM_{1}^{m-k}+2^{m}M_{1}^{m-k-1}\] \[\leq CM_{1}^{m-k}(1+2^{m}M_{1}^{-1})\] \[\leq(1+2^{-4m})CM_{1}^{m-k}.\]
On the other hand, we have the obvious lower bound \(\left[Z_{G^{\prime},s,t}\right]_{1,0}/N^{m}\geq CM_{0}^{m-k}\). Similarly we can obtain estimates for the other entries of \(\left[Z_{G^{\prime},s,t}\right]\):
\[\left[Z_{G^{\prime},s,t}\right]_{0,0}/N^{m},\left[Z_{G^{\prime},s,t}\right]_{1,1}/N^{m} \in\left(M_{0}^{m},(1+2^{-4m})M_{1}^{m}\right) \tag{3.10}\] \[\left[Z_{G^{\prime},s,t}\right]_{1,0}/N^{m},\left[Z_{G^{\prime},s,t}\right]_{0,1}/N^{m} \in\left(CM_{0}^{m-k},(1+2^{-4m})CM_{1}^{m-k}\right)\]
Since \(M_{1}/M_{0}\in(1,e^{2^{-4m}})\), the lower bounds match the upper bounds up to an exponentially small multiplicative factor. Thus, \(C\) and \(k\) are the crucial information determining the matrix \(\left[Z_{G^{\prime},s,t}\right]\). But in order to extract them exactly using the sign oracle, more work is needed.
Assume we can generate two gadget graphs \(H_{1},H_{2}\) that realize vertex activity vectors \(\begin{bmatrix}N_{1}\\ N_{1}h_{1}\end{bmatrix}\) and \(\begin{bmatrix}N_{2}\\ N_{2}h_{2}\end{bmatrix}\), respectively, for some \(N_{1}\), \(h_{1}\), \(N_{2}\) and \(h_{2}\). By attaching them to the distinguished vertices \(s\) and \(t\), respectively, we obtain a graph \(G^{\prime}_{H_{1},H_{2}}\) with a partition function
\[Z_{G^{\prime}_{H_{1},H_{2}}}=N_{1}N_{2}\Big{(}\left[Z_{G^{\prime},s,t}\right]_ {0,0}+h_{1}\left[Z_{G^{\prime},s,t}\right]_{1,0}+h_{2}\left[Z_{G^{\prime},s,t} \right]_{0,1}+h_{1}h_{2}\left[Z_{G^{\prime},s,t}\right]_{1,1}\Big{)}.\]
If the numbers \(N_{1},N_{2},h_{1},h_{2}\) are known and are nonzero, as is the case if we have generated \(H_{1}\) and \(H_{2}\) using Proposition 3.8, by feeding \(G^{\prime}_{H_{1},H_{2}}\) into the oracle for determining the sign of the partition function, we can determine the sign of the function
\[T(x,y):=\frac{1}{1+xy}\frac{\left[Z_{G^{\prime},s,t}\right]_{0,0}}{N^{m}}+ \frac{x}{1+xy}\frac{\left[Z_{G^{\prime},s,t}\right]_{1,0}}{N^{m}}+\frac{y}{1+ xy}\frac{\left[Z_{G^{\prime},s,t}\right]_{0,1}}{N^{m}}+\frac{xy}{1+xy}\frac{ \left[Z_{G^{\prime},s,t}\right]_{1,1}}{N^{m}}\]
at \(x=h_{1}\), \(y=h_{2}\). Our claim is that, by trying suitably generated \(H_{1}\)'s and \(H_{2}\)'s, from the values of all the \(\operatorname{sgn}(T(h_{1},h_{2}))\) we get, the number \(C\) can be determined exactly.
We proceed by a sandwiching argument. Define linear functions \(L(x)=M_{0}^{m}+(1+2^{-4m})CM_{1}^{m-k}x\) and \(U(x)=(1+2^{-4m})M_{1}^{m}+CM_{0}^{m-k}x\). Using the bounds (3.10), it's easy to see that for all \(x,y<0\)
\[L\left(\frac{x+y}{1+xy}\right)<T(x,y)<U\left(\frac{x+y}{1+xy}\right).\]
In particular, if the oracle tells us \(T(h_{1},h_{2})>0\) then we know that \(U(\frac{h_{1}+h_{2}}{1+h_{1}h_{2}})>0\), and otherwise we know \(L(\frac{h_{1}+h_{2}}{1+h_{1}h_{2}})<0\). Combining this observation with a standard binary search procedure, we can approximately determine the zeros of \(L\) and \(U\), where the information about \(C\) and \(k\) actually lies.
```
Input : Oracle access to the function \((H_{1},H_{2})\mapsto\operatorname{sgn}(T(h_{1},h_{2}))\)
Output : \(p,q<0\) such that \(L(q)<0\), \(U(p)>0\), and \(q/p<e^{2^{-4m}}\)
Initialize variables \(p\leftarrow-4\), \(q\leftarrow-M_{1}^{m}\)   // Clearly \(L(q)<0\) and \(U(p)>0\) hold
while \(q/p\geq\exp(2^{-4m})\) do
    Pick rational numbers \(r\in(p^{4/9}q^{5/9},p^{5/9}q^{4/9})\) and \(\varepsilon\leq(100|r|)^{-1}2^{-4m}\)
    Use Proposition 3.8 with inputs \(R_{1}=-1/2\) and \(\varepsilon\) to get a graph \(H_{1}\) which realizes a ratio \(h_{1}\in(e^{\varepsilon}R_{1},e^{-\varepsilon}R_{1})\)
    Use Proposition 3.8 with inputs \(R_{2}=\frac{2r+1}{2+r}\) and \(\varepsilon\) to get a graph \(H_{2}\) which realizes a ratio \(h_{2}\in(e^{-\varepsilon}R_{2},e^{\varepsilon}R_{2})\)   // By Lemma 3.10, \(\frac{h_{1}+h_{2}}{1+h_{1}h_{2}}\in(p^{1/3}q^{2/3},p^{2/3}q^{1/3})\)
    Use the oracle to get the sign of \(T(h_{1},h_{2})\)
    if \(T(h_{1},h_{2})>0\) then
        Then \(U(\frac{h_{1}+h_{2}}{1+h_{1}h_{2}})>0\), and let \(p\leftarrow\frac{h_{1}+h_{2}}{1+h_{1}h_{2}}\)   // \(L(q)<0\) and \(U(p)>0\) still hold
    else
        Then \(L(\frac{h_{1}+h_{2}}{1+h_{1}h_{2}})<0\), and let \(q\leftarrow\frac{h_{1}+h_{2}}{1+h_{1}h_{2}}\)   // \(L(q)<0\) and \(U(p)>0\) still hold
```
**Algorithm 2** Binary Search for Zero
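The following toy Python sketch shows the same contraction with two simplifications relative to Algorithm 2: the query point is the exact geometric mean of \(p\) and \(q\) rather than a gadget-realized approximation of a point in \((p^{4/9}q^{5/9},p^{5/9}q^{4/9})\), and the constants standing in for \(L\), \(U\) and \(T\) are small illustrative numbers rather than the \(2^{\Theta(m)}\)-sized quantities of the reduction.

```python
import math

# Toy stand-ins for the sandwich L(x) < T(x) < U(x) on x < 0; the constants
# below are illustrative and far smaller than the quantities in the reduction.
A_lo, A_hi = 1.0, 1.05      # stand-ins for M_0^m and (1 + 2^-4m) M_1^m
B_lo, B_hi = 0.02, 0.021    # stand-ins for C M_0^{m-k} and (1 + 2^-4m) C M_1^{m-k}

def L(x): return A_lo + B_hi * x
def U(x): return A_hi + B_lo * x
def T(x): return 0.5 * (L(x) + U(x))     # any T sandwiched between L and U on x < 0

def binary_search(tol=1e-6):
    """Mirror of Algorithm 2's invariant: keep q < p < 0 with L(q) < 0 and
    U(p) > 0, querying the sign of T at a geometric-mean point each round."""
    p, q = -4.0, -1000.0
    assert L(q) < 0 and U(p) > 0
    while q / p >= math.exp(tol):
        r = -math.sqrt(p * q)            # simplified midpoint (exact geometric mean)
        if T(r) > 0:
            p = r                        # then U(r) > 0, so the invariant is kept
        else:
            q = r                        # then L(r) < 0, so the invariant is kept
    return p, q

print(binary_search())   # p and q now tightly bracket the sign change of T
```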
The reason Algorithm 2 runs in polynomial time is as follows:
* In each iteration of the while loop, since \(\frac{h_{1}+h_{2}}{1+h_{1}h_{2}}\in(p^{1/3}q^{2/3},p^{2/3}q^{1/3})\) by Lemma 3.10, at the end of the iteration \(\ln(q/p)\) shrinks to at most \(2/3\) of its previous value.
* The initial value of \(\ln(q/p)\) is no more than \(m\ln(M_{1})\), which is at most polynomial in \(m\) since the algorithm of Lemma 3.9 outputs \(M_{1}\) in polynomial time given input \(M^{*}=2^{5m}\).
The outcome of Algorithm 2 is a pair \(p,q<0\) with \(q/p\in(1,e^{2^{-4m}})\) such that \(L(q)<0\) and \(U(p)>0\). Now, we have
\[U(p)>0\Rightarrow C<\frac{(1+2^{-4m})M_{1}^{m}}{(-p)M_{0}^{m-k}} \tag{3.11}\]
and
\[L(q)<0\Rightarrow C>\frac{M_{0}^{m}}{(1+2^{-4m})(-q)M_{1}^{m-k}}. \tag{3.12}\]
The ratio of the upper bound for \(C\) to the lower bound is at most
\[(1+2^{-4m})^{2}\cdot\frac{q}{p}\left(\frac{M_{1}}{M_{0}}\right)^{2m-k}\leq\exp \left(2\cdot 2^{-4m}+2^{-4m}+2m\cdot 2^{-4m}\right)<\frac{2^{m}}{2^{m}-1}.\]
This means that, for any given \(k\), there is at most one integer in \(\{1,2,\cdots,2^{m}\}\) that lies between the lower bound and the upper bound. Since \(C\) must be in \(\{1,2,\cdots,2^{m}\}\), if \(k\) is determined, we can obtain a unique solution for \(C\). But on the other hand, the fact \(M_{0},M_{1}>2^{4m}\) implies that there is at most one value of \(k\) that gives a solution \(C\) in the right range. Therefore, both \(k\) and \(C\) are efficiently computable using the bounds (3.11) and (3.12).
During the above proof of Theorem 1.3, we have made use of the following technical lemma:
**Lemma 3.10**.: _Let \(q<p\leq-4\) be real numbers. Suppose \(p^{4/9}q^{5/9}<r<p^{5/9}q^{4/9}\) and \(0<\varepsilon\leq(100|r|)^{-1}\cdot\min\{\ln(q/p),1\}\). If_
\[-\frac{1}{2}e^{\varepsilon}<h_{1}<-\frac{1}{2}e^{-\varepsilon}\]
_and_
\[\frac{2r+1}{2+r}e^{-\varepsilon}<h_{2}<\frac{2r+1}{2+r}e^{\varepsilon},\]
_then \(p^{1/3}q^{2/3}<\frac{h_{1}+h_{2}}{1+h_{1}h_{2}}<p^{2/3}q^{1/3}\)._
Proof.: Since \(r<p\leq-4\), we have \(R:=\frac{2r+1}{2+r}>2\). Note that it is guaranteed that \(\varepsilon\leq 1/100\). Using the inequalities \(e^{\varepsilon}\leq 1+2\varepsilon\), \(e^{-\varepsilon}\geq 1-\varepsilon\) and \(1+6\varepsilon\leq e^{6\varepsilon}\leq 1+12\varepsilon\), we have
\[h_{1}+h_{2}<Re^{\varepsilon}-\frac{1}{2}e^{-\varepsilon} =\left(R-\frac{1}{2}\right)e^{6\varepsilon}+R(e^{\varepsilon}-e^ {6\varepsilon})+\frac{1}{2}(e^{6\varepsilon}-e^{-\varepsilon})\] \[\leq\left(R-\frac{1}{2}\right)e^{6\varepsilon}+2(2\varepsilon-6 \varepsilon)+\frac{1}{2}(12\varepsilon+\varepsilon)\] \[\leq\left(R-\frac{1}{2}\right)e^{6\varepsilon}.\]
Using the estimates \(e^{\varepsilon}\leq 1+2\varepsilon\), \(e^{-\varepsilon}\geq 1-\varepsilon\) and \(1-6\varepsilon\leq e^{-6\varepsilon}\leq 1-3\varepsilon\), we have
\[h_{1}+h_{2}>Re^{-\varepsilon}-\frac{1}{2}e^{\varepsilon} =\left(R-\frac{1}{2}\right)e^{-6\varepsilon}+R(e^{-\varepsilon}- e^{-6\varepsilon})+\frac{1}{2}(e^{-6\varepsilon}-e^{\varepsilon})\] \[\geq\left(R-\frac{1}{2}\right)e^{-6\varepsilon}+2\left(- \varepsilon+3\varepsilon\right)+\frac{1}{2}(-6\varepsilon-2\varepsilon)\] \[=\left(R-\frac{1}{2}\right)e^{-6\varepsilon}.\]
It's also guaranteed that \(|r|\varepsilon<1/100\). Using the estimates \(e^{-2\varepsilon}\geq 1-2\varepsilon\) and \(e^{-4|r|\varepsilon}\leq 1-2|r|\varepsilon\), we have
\[-h_{1}h_{2}-1>\frac{R}{2}e^{-2\varepsilon}-1 =\left(\frac{R}{2}-1\right)\left(1-\frac{R}{R-2}(1-e^{-2 \varepsilon})\right)\] \[\geq\left(\frac{R}{2}-1\right)(1-|r|\cdot 2\varepsilon)\] \[\geq\left(\frac{R}{2}-1\right)e^{-4|r|\varepsilon}.\]
Using the estimates \(e^{2\varepsilon}\leq 1+4\varepsilon\) and \(e^{4|r|\varepsilon}\geq 1+4|r|\varepsilon\) we have
\[-h_{1}h_{2}-1<\frac{R}{2}e^{2\varepsilon}-1 =\left(\frac{R}{2}-1\right)\left(1+\frac{R}{R-2}(e^{2\varepsilon}- 1)\right)\] \[\leq\left(\frac{R}{2}-1\right)(1+|r|\cdot 4\varepsilon)\] \[\leq\left(\frac{R}{2}-1\right)e^{4|r|\varepsilon}.\]
In conclusion, we have
\[\frac{h_{1}+h_{2}}{1+h_{1}h_{2}}r^{-1}=\frac{h_{1}+h_{2}}{R-1/2}\left(\frac{h_ {1}h_{2}+1}{1-R/2}\right)^{-1}<\exp(6\varepsilon+4|r|\varepsilon)\leq\left( \frac{q}{p}\right)^{1/9}\]
and
\[\frac{h_{1}+h_{2}}{1+h_{1}h_{2}}r^{-1}=\frac{h_{1}+h_{2}}{R-1/2}\left(\frac{h_ {1}h_{2}+1}{1-R/2}\right)^{-1}>\exp(-6\varepsilon-4|r|\varepsilon)\geq\left( \frac{q}{p}\right)^{-1/9}.\]
Combining these with the assumption \(p^{4/9}q^{5/9}<r<p^{5/9}q^{4/9}\), we conclude that \(p^{1/3}q^{2/3}<\frac{h_{1}+h_{2}}{1+h_{1}h_{2}}<p^{2/3}q^{1/3}\).
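A numerical spot-check of the lemma at the extreme admissible values of \(h_{1}\) and \(h_{2}\) (the values of \(p\), \(q\) and the choice \(r=-\sqrt{pq}\) below are illustrative):

```python
import math

# Illustrative values (not from the paper): q < p <= -4, and r = -sqrt(pq)
# lies strictly inside (p^{4/9} q^{5/9}, p^{5/9} q^{4/9}).
p, q = -10.0, -200.0
r = -math.sqrt(p * q)
eps = min(math.log(q / p), 1.0) / (100 * abs(r))
R = (2 * r + 1) / (2 + r)

lo = -((-p) ** (1 / 3)) * ((-q) ** (2 / 3))     # p^{1/3} q^{2/3}
hi = -((-p) ** (2 / 3)) * ((-q) ** (1 / 3))     # p^{2/3} q^{1/3}
ok = True
for h1 in (-0.5 * math.exp(eps), -0.5 * math.exp(-eps)):
    for h2 in (R * math.exp(-eps), R * math.exp(eps)):
        val = (h1 + h2) / (1 + h1 * h2)
        ok = ok and (lo < val < hi)
print(ok)   # True: (h1+h2)/(1+h1*h2) stays in (p^{1/3} q^{2/3}, p^{2/3} q^{1/3})
```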
## 4 Approximation Schemes
In this section, we give the two approximation schemes promised in Theorem 1.4 and Theorem 1.5.
### Preliminaries for the FPTAS
The deterministic approximation scheme of Theorem 1.4 will mainly rely on the powerful zero-freeness framework. In particular, our main tool is the following lemma developed and proved in [1] and [1].
**Lemma 4.1**.: _Fix rational numbers \(\beta\) and \(\gamma\). Let \(U\) be an open set in the complex plane that contains the real interval \([0,\lambda]\) for some \(\lambda\in\mathbb{Q}^{+}\). Suppose that for all graphs \(G\) the polynomial \(Z_{G}(x)\) has no complex root in \(U\). Then for any positive integer \(\Delta\), there exists an FPTAS for \(Z_{G}(\lambda)\), where \(G\) is an input graph of maximum degree no more than \(\Delta\) (without the bounded degree requirement, there is a quasi-polynomial time approximation scheme)._
Our method for showing zero-freeness is the classical contraction method. It was first introduced in [1] to give a simple proof for the Lee-Yang circle theorem [13], and was further extended in [11]. These results have been used previously in the area of approximate counting, e.g., by Sinclair and Srivastava [14] and by Guo, Liao, Lu, and Zhang [15]. Note especially that [15] uses these results for approximation algorithms. We restate the theorem in [11] in the following form:
**Lemma 4.2**.: _For each \(i\in[m]\), let \(K_{i}\) be a subset of the complex plane \(\mathbb{C}\) that doesn't contain 0. Suppose the complex multi-affine polynomial_
\[P(z_{1},\cdots,z_{m})=\sum_{I\subseteq[m]}F(I)\prod_{i\in I}z_{i},\]
_where each \(F(I)\) is a complex coefficient, vanishes only when \(z_{i}\in K_{i}\) for some \(i\in[m]\). Write \([m]\) as a disjoint union of subsets \(I_{1},\cdots,I_{n}\). Then the complex multi-affine polynomial_
\[Q(w_{1},\cdots,w_{n}):=\sum_{J\subseteq[n]}F\left(\bigcup_{j\in J}I_{j}\right) \prod_{j\in J}w_{j}\]
_can vanish only when \(w_{j}\in(-1)^{|I_{j}|+1}\prod_{i\in I_{j}}K_{i}\) for some \(j\in[n]\), where the product is the Minkowski product of sets, meaning that \(\prod_{i\in I_{j}}K_{i}:=\Big{\{}\prod_{i\in I_{j}}x_{i}\mid\forall i\in I_{j},\;x_{i}\in K_{i}\Big{\}}\)._
The following corollary is all we need Lemma 4.2 for:
**Corollary 4.3**.: Fix real parameters \(\beta\) and \(\gamma\). Assume that the polynomial \(\gamma z_{1}z_{2}+z_{1}+z_{2}+\beta\) doesn't vanish when \(|z_{1}|,|z_{2}|<r\), for some \(r>0\). Then for any graph \(G\), the partition function \(Z_{G}(\boldsymbol{\lambda})\) doesn't vanish if \(|\lambda_{v}|<r^{\deg_{G}(v)}\) for all \(v\in V(G)\).
Proof.: Let \(G=(V,E)\) with \(|V|=n\). Without loss of generality, assume \(V=[n]\). To use Lemma 4.2, we first need to create a ground set \([m]\). For each edge \(e=\{u,v\}\in E\), let \(u_{e}\) and \(v_{e}\) be copies of the vertices \(u\) and \(v\), respectively. Then consider the ground set \(\bigcup_{e=\{u,v\}\in E}\{u_{e},v_{e}\}\), which has size \(m:=2|E(G)|\). Let
\[P(\boldsymbol{z})=\prod_{e=\{u,v\}\in E}\left(\gamma z_{u_{e}}z_{v_{e}}+z_{u _{e}}+z_{v_{e}}+\beta\right).\]
Let \(K=\{z\in\mathbb{C}:|z|\geq r\}\). The assumption in the statement of the corollary guarantees that \(P(\boldsymbol{z})\) vanishes only if some \(z_{i}\in K\).
We can write \(P(\boldsymbol{z})\) in the form from Lemma 4.2 by defining a coefficient \(F(I)\) for every subset \(I\) of the ground set. To do this, partition \(E\) into sets \(E_{0}\), \(E_{1}\), and \(E_{2}\), where \(E_{0}\) is the set of \(e=\{u,v\}\) such that \(u_{e}\) and \(v_{e}\) are both out of \(I\), \(E_{1}\) is the set of \(e=\{u,v\}\) with exactly one of \(u_{e},v_{e}\) in \(I\), and \(E_{2}\) is the set of \(e=\{u,v\}\) with both of \(u_{e}\) and \(v_{e}\) in \(I\). Then \(F(I)=\gamma^{|E_{2}|}\beta^{|E_{0}|}\).
Now for each \(v\in V\), let \(I_{v}\) be the set of all ground set elements corresponding to vertex \(v\). That is, \(I_{v}=\{v_{e}\mid e\in E,v\in e\}\). Consider the polynomial
\[Q(w_{1},\cdots,w_{n}):=\sum_{J\subseteq V}F\left(\bigcup_{v\in J}I_{v}\right)\prod_{v\in J}w_{v}.\]
We can think of the set \(J\) as the set of vertices with spin \(1\). Then \(Q(\boldsymbol{w})=Z_{G}(\boldsymbol{w})\). So Lemma 4.2 guarantees that \(Q(w_{1},\ldots,w_{n})\) vanishes only when, for some \(v\in V\), \(w_{v}\in(-1)^{|I_{v}|+1}\prod_{i\in I_{v}}K\). Since every element of this set has modulus at least \(r^{|I_{v}|}=r^{\deg_{G}(v)}\), the corollary follows.
### Proof of Theorem 1.4
In light of Corollary 4.3 and Lemma 4.1, it only remains to show zero-freeness for the single polynomial \(\gamma z_{1}z_{2}+z_{1}+z_{2}+\beta\).
**Lemma 4.4**.: _For real numbers \(\beta,\gamma\) such that \(\beta>\gamma\) and \(\beta+\gamma>2\), there exists \(r>1\) such that the polynomial \(\gamma z_{1}z_{2}+z_{1}+z_{2}+\beta\) doesn't vanish when \(|z_{1}|,|z_{2}|<r\)._
Proof.: Let \(D(0,r)\) denote the open disk \(\{z\in\mathbb{C}:|z|<r\}\). Let \(g:\mathbb{C}\cup\{\infty\}\to\mathbb{C}\cup\{\infty\}\) be the Mobius transformation \(z\mapsto-\frac{z+\beta}{\gamma z+1}\). Since \(g(z)\) is the unique solution to the equation \(\gamma z\cdot g(z)+z+g(z)+\beta=0\), it suffices to show for some \(r>1\) that \(g\) maps \(D(0,r)\) into \(D(0,r)^{c}\).
Note that since \(\beta,\gamma\in\mathbb{R}\), the transformation \(g\) maps \(\mathbb{R}\cup\{\infty\}\) into \(\mathbb{R}\cup\{\infty\}\). By conformality, \(g\) maps any circle centered on \(\mathbb{R}\cup\{\infty\}\) to a circle centered on \(\mathbb{R}\cup\{\infty\}\). In particular, \(g(D(0,r))\) is a disk centered on \(\mathbb{R}\cup\{\infty\}\). So \(g(D(0,r))\) and \(D(0,r)\) are disjoint as long as their intersections with \(\mathbb{R}\cup\{\infty\}\) are disjoint. It suffices to show that \(g\) maps the real interval \((-r,r)\) into \((-r,r)^{c}\), for some \(r>1\). By continuity of \(g\), it also suffices to show that \(g\) maps the interval \([-1,1]\) into \([-1,1]^{c}\).
Now take any real number \(z\in[-1,1]\). From \(\beta>\gamma\) and \(\beta+\gamma>2\) we know \(\beta>1\). We have
\[|g(z)|>1 \Leftrightarrow|z+\beta|/|\gamma z+1|>1\] \[\Leftrightarrow(z+\beta)^{2}>(\gamma z+1)^{2} \text{(since $\beta,\gamma,z\in\mathbb{R}$)}\] \[\Leftrightarrow\left(1-z\frac{\gamma-1}{\beta-1}\right)\left(1+z \frac{\gamma+1}{\beta+1}\right)>0 \text{(since $\beta>1$)}.\]
It follows from \(\beta>\gamma\) that \(\frac{\gamma-1}{\beta-1}<1\) and \(\frac{\gamma+1}{\beta+1}<1\), while it follows from \(\beta+\gamma>2\) that \(\frac{\gamma-1}{\beta-1}>-1\) and \(\frac{\gamma+1}{\beta+1}>-1\). So both \(|\frac{\gamma-1}{\beta-1}|\) and \(|\frac{\gamma+1}{\beta+1}|\) are less than \(1\). Since \(|z|\leq 1\), we have
\[1-z\frac{\gamma-1}{\beta-1}>0\text{ and }1+z\frac{\gamma+1}{\beta+1}>0.\]
This proves \(|g(z)|>1\) and hence \(g\) maps the interval \([-1,1]\) into \([-1,1]^{c}\).
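A quick numerical spot-check of the inequality \(|g(z)|>1\) on \([-1,1]\), with illustrative parameter values satisfying \(\beta>\gamma\) and \(\beta+\gamma>2\):

```python
import numpy as np

# Illustrative parameters with beta > gamma and beta + gamma > 2.
beta, gamma = 3.0, -0.5

def g(z):
    return -(z + beta) / (gamma * z + 1)

# |g(z)| should exceed 1 everywhere on the real interval [-1, 1].
zs = np.linspace(-1.0, 1.0, 10001)
print(np.min(np.abs(g(zs))))   # prints a value strictly greater than 1
```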
**Corollary 4.5**.: For real numbers \(\beta,\gamma\) such that \(\beta<\gamma\) and \(\beta+\gamma<-2\), there exists \(r>1\) such that the polynomial \(\gamma z_{1}z_{2}+z_{1}+z_{2}+\beta\) doesn't vanish when \(|z_{1}|,|z_{2}|<r\).
Proof.: By Lemma 4.4, the polynomial \((-\gamma)(-z_{1})(-z_{2})+(-z_{1})+(-z_{2})+(-\beta)\) doesn't vanish when \(|z_{1}|,|z_{2}|<r\). So its negation, \(\gamma z_{1}z_{2}+z_{1}+z_{2}+\beta\), doesn't vanish for \(|z_{1}|,|z_{2}|<r\) either.
Now we are ready to prove Theorem 1.4.
Proof of Theorem 1.4.: The range of parameters can be divided into \(4\) regions:
Case 1: \(\beta>\gamma\) and \(\beta+\gamma>2\). Combining Lemma 4.4 and Corollary 4.3, there is a disk \(D(0,r)\) containing \(1\) such that, for all graphs \(G\), the polynomial \(Z_{G}(x)\) doesn't vanish on \(D(0,r)\). An FPTAS is thus given by Lemma 4.1.
Case 2: \(\beta<\gamma\) and \(\beta+\gamma>2\). This case follows by symmetry from Case 1, as switching \(\beta\) and \(\gamma\) preserves \(Z_{G}\).
Case 3: \(\beta<\gamma\) and \(\beta+\gamma<-2\). In a similar way to Case 1, this case follows by combining Corollary 4.5, Corollary 4.3 and Lemma 4.1.
Case 4: \(\beta>\gamma\) and \(\beta+\gamma<-2\). This case follows by symmetry from Case 3.
### Preliminaries for the FPRAS
Our randomized approximation scheme for Theorem 1.5 closely resembles the one in [11]. The first main ingredient in [11] is the "subgraphs-world" transformation that reduces a spin system problem to a Holant problem. Here, we need to use a slightly generalized version of the subgraphs-world transformation. Though it has appeared in various forms in the literature (e.g. [12]), we introduce it here for the sake of completeness.
**Definition 4.6**.: For any function \(\psi:\{0,1\}^{2}\to\mathbb{R}\), define its Fourier transform to be the function \(\widehat{\psi}:\{0,1\}^{2}\to\mathbb{R}\) given by
\[\widehat{\psi}(a,b)=\frac{1}{4}\left(\psi(0,0)+(-1)^{b}\psi(0,1)+(-1)^{a}\psi( 1,0)+(-1)^{a+b}\psi(1,1)\right),\quad\forall a,b\in\{0,1\}.\]
Let \(\chi_{a,b}(x_{1},x_{2})=(-1)^{ax_{1}+bx_{2}}\), for \(a,b,x_{1},x_{2}\in\{0,1\}\). Then we have the identity
\[\psi=\sum_{a,b\in\{0,1\}}\widehat{\psi}(a,b)\cdot\chi_{a,b}.\]
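A small Python sketch of this Fourier transform and the inversion identity (the interaction values below are illustrative):

```python
import numpy as np
from itertools import product

def fourier_hat(psi):
    """The 2x2 table of psi_hat(a, b) from Definition 4.6."""
    hat = np.zeros((2, 2))
    for a, b in product((0, 1), repeat=2):
        hat[a, b] = 0.25 * sum((-1) ** (a * x1 + b * x2) * psi[x1, x2]
                               for x1, x2 in product((0, 1), repeat=2))
    return hat

# Illustrative nonnegative edge interaction [[beta, 1], [1, gamma]].
beta, gamma = 3.0, 1.5
psi = np.array([[beta, 1.0], [1.0, gamma]])
hat = fourier_hat(psi)
print(hat)   # equals (1/4) * [[b+g+2, b-g], [b-g, b+g-2]]

# Verify the inversion identity psi = sum_{a,b} psi_hat(a,b) * chi_{a,b}.
recon = np.zeros((2, 2))
for a, b in product((0, 1), repeat=2):
    for x1, x2 in product((0, 1), repeat=2):
        recon[x1, x2] += hat[a, b] * (-1) ** (a * x1 + b * x2)
print(np.allclose(recon, psi))   # True
```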
**Proposition 4.7**.: Let \(\psi:\{0,1\}^{2}\to\mathbb{Q}^{\geq 0}\). An FPRAS for \(\mathsf{Holant}\left(\{\widehat{\psi}\}\cup\{\mathbf{Even}_{k}:k\geq 1\}\right)\) implies an FPRAS for \(\#\mathsf{CSP}(\{\psi\})\).
Proof.: Let \(G=(V,E)\) be an instance of \(\#\mathsf{CSP}(\{\psi\})\). Let \(G^{\prime}=(V^{\prime},E^{\prime})\) be defined by
\[V^{\prime}=V\cup E\text{ and }E^{\prime}=\bigcup_{e=\{i,j\}\in E}\{\{i,e\},\{j,e \}\}.\]
For every vertex \(v\in V\subset V^{\prime}\), let \(F_{v}=\mathbf{Even}_{d}\), where \(d:=\deg_{G}v\). For every vertex \(e\in E\subset V^{\prime}\), let \(F_{e}=\widehat{\psi}\). In this way, we form a Holant instance \(\phi\) with base graph \(G^{\prime}\). We have
\[Z_{G} =\sum_{x\in\{0,1\}^{V}}\prod_{\{i,j\}\in E}\psi(x_{i},x_{j})\] \[=\sum_{x\in\{0,1\}^{V}}\prod_{\{i,j\}\in E}\sum_{a,b\in\{0,1\}} \widehat{\psi}(a,b)(-1)^{ax_{i}+bx_{j}}\] \[=\sum_{x\in\{0,1\}^{V}}\sum_{y\in\{0,1\}^{E^{\prime}}}\prod_{e=\{ i,j\}\in E}\widehat{\psi}(y_{i,e},y_{j,e})(-1)^{y_{i,e}x_{i}+y_{j,e}x_{j}}\] \[=\sum_{y\in\{0,1\}^{E^{\prime}}}\left(\prod_{e=\{i,j\}\in E} \widehat{\psi}(y_{i,e},y_{j,e})\right)\left(\sum_{x\in\{0,1\}^{V}}\prod_{e=\{i, j\}\in E}(-1)^{y_{i,e}x_{i}+y_{j,e}x_{j}}\right)\] \[=\sum_{y\in\{0,1\}^{E^{\prime}}}\left(\prod_{e=\{i,j\}\in E} \widehat{\psi}(y_{i,e},y_{j,e})\right)\left(\sum_{x\in\{0,1\}^{V}}\prod_{i\in V }(-1)^{x_{i}(\sum_{\{i,e\}\in E^{\prime}}y_{i,e})}\right)\] \[=\sum_{y\in\{0,1\}^{E^{\prime}}}\left(\prod_{e=\{i,j\}\in E} \widehat{\psi}(y_{i,e},y_{j,e})\right)\left(\prod_{i\in V}\left(1+(-1)^{(\sum_ {\{i,e\}\in E^{\prime}}y_{i,e})}\right)\right)\] \[=2^{|V|}\sum_{y\in\{0,1\}^{E^{\prime}}}\prod_{e=\{i,j\}\in E} \widehat{\psi}(y_{i,e},y_{j,e})\prod_{i\in V}\mathbf{Even}\left((y_{i,e})_{\{ i,e\}\in E^{\prime}}\right).\] \[=2^{|V|}[\![\phi]\!],\]
where \([\![\phi]\!]\) denotes the partition function of the Holant instance \(\phi\).
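As an informal numerical check of this identity (not part of the proof), the following sketch verifies \(Z_{G}=2^{|V|}[\![\phi]\!]\) by brute force on a triangle, with an illustrative nonnegative interaction:

```python
import numpy as np
from itertools import product

beta, gamma = 3.0, 1.5     # illustrative nonnegative interaction for #CSP({psi})
psi = np.array([[beta, 1.0], [1.0, gamma]])
# psi_hat as in Definition 4.6 (cf. the matrix computed in Lemma 4.13).
psi_hat = 0.25 * np.array([[beta + gamma + 2, beta - gamma],
                           [beta - gamma, beta + gamma - 2]])

V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]   # a triangle

# Left-hand side: Z_G by direct enumeration of spin configurations.
Z = sum(np.prod([psi[x[i], x[j]] for i, j in E])
        for x in product((0, 1), repeat=len(V)))

# Right-hand side: 2^{|V|} times the Holant sum over y in {0,1}^{E'},
# with psi_hat on each edge and an Even constraint at each original vertex.
holant = 0.0
for y in product((0, 1), repeat=2 * len(E)):
    w = 1.0
    for idx, (i, j) in enumerate(E):
        w *= psi_hat[y[2 * idx], y[2 * idx + 1]]   # the pair (y_{i,e}, y_{j,e})
    for v in V:
        s = sum(y[2 * idx + (0 if e[0] == v else 1)]
                for idx, e in enumerate(E) if v in e)
        w *= 1.0 if s % 2 == 0 else 0.0            # Even constraint at v
    holant += w

print(Z, 2 ** len(V) * holant, np.isclose(Z, 2 ** len(V) * holant))   # True
```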
In [10], the next step is to prove the rapid mixing of a Markov chain associated to the Holant problem and compute the partition function using an MCMC algorithm. But fortunately for us, we don't even need to define the Markov chain, as the powerful framework of [11] has reduced all these efforts to verifying some simple criteria:
**Definition 4.8**.: For any finite set \(J\) and any configuration \(x\in\{0,1\}^{J}\), define \(\mathcal{M}_{x}\) to be the set of partitions of \(\{i|x_{i}=1\}\) into pairs and at most one singleton. A function \(F:\{0,1\}^{J}\to\mathbb{Q}^{\geq 0}\) is **windable** if there exist values \(B(x,y,M)\geq 0\) for all \(x,y\in\{0,1\}^{J}\) and all \(M\in\mathcal{M}_{x\oplus y}\) satisfying:
1. \(F(x)F(y)=\sum_{M\in\mathcal{M}_{x\oplus y}}B(x,y,M)\) for all \(x,y\in\{0,1\}^{J}\), and
2. \(B(x,y,M)=B(x\oplus S,y\oplus S,M)\) for all \(x,y\in\{0,1\}^{J}\) and all \(S\in M\in\mathcal{M}_{x\oplus y}\).
Here \(x\oplus S\) denotes the vector obtained by changing \(x_{i}\) to \(1-x_{i}\) for the one or two elements \(i\) in \(S\).
**Lemma 4.9**.: _Any function \(\{0,1\}^{2}\to\mathbb{Q}^{\geq 0}\) is windable._
Proof.: The statement follows directly by combining Lemma 7 and Lemma 15 in [11].
**Definition 4.10**.: A function \(F:\{0,1\}^{J}\mapsto\mathbb{Q}^{\geq 0}\) is **strictly terraced** if
\[F(x)=0\implies F(x\oplus e_{i})=F(x\oplus e_{j})\qquad\text{ for all }x\in\{0,1\}^{J}\text{ and all }i,j\in J.\]
Here \(x\oplus e_{i}\) denotes the vector obtained by changing \(x_{i}\) to \(1-x_{i}\).
**Lemma 4.11** (Theorem 4 in [11]).: _If \(\mathcal{F}\) is a finite class of strictly terraced windable functions, then there is an FPRAS for \(\mathsf{Holant}(\mathcal{F})\)._
**Corollary 4.12**.: If \(\mathcal{F}\) is a finite class of strictly terraced windable functions, then there is an FPRAS for \(\mathsf{Holant}(\mathcal{F}\cup\{\mathbf{Even}_{k}:k\geq 1\})\).
Proof.: Since an \(\mathbf{Even}_{k}\) constraint can easily be realized using \((k-2)\) copies of \(\mathbf{Even}_{3}\) or \(\mathbf{Odd}_{3}\) constraints and \((k-3)\) additional variables, it suffices to show that there is an FPRAS for \(\mathsf{Holant}(\mathcal{F}\cup\{\mathbf{Even}_{3},\mathbf{Odd}_{3}\})\). Since \(\mathbf{Even}_{3}\) and \(\mathbf{Odd}_{3}\) are both windable (see [11, Lemma 17]) and strictly terraced, the claim follows from Lemma 4.11.
### Proof of Theorem 1.5
Now, it suffices to verify that certain constraint functions are windable and strictly terraced.
**Lemma 4.13**.: _For rational numbers \(\beta,\gamma\) such that \(\beta>\gamma\) and \(\beta+\gamma\geq 2\), the function \(\psi:\{0,1\}^{2}\to\mathbb{Q}\) defined by \(\begin{bmatrix}\psi(0,0)&\psi(0,1)\\ \psi(1,0)&\psi(1,1)\end{bmatrix}=\begin{bmatrix}\beta&1\\ 1&\gamma\end{bmatrix}\) satisfies the property that \(\widehat{\psi}\) is windable and strictly terraced._
Proof.: Since \(\begin{bmatrix}\widehat{\psi}(0,0)&\widehat{\psi}(0,1)\\ \widehat{\psi}(1,0)&\widehat{\psi}(1,1)\end{bmatrix}=\frac{1}{4}\begin{bmatrix} \beta+\gamma+2&\beta-\gamma\\ \beta-\gamma&\beta+\gamma-2\end{bmatrix}\), we have \(\widehat{\psi}(x)\geq 0\) for all \(x\in\{0,1\}^{2}\), and the only possibility of \(\widehat{\psi}(x)=0\) is when \(\beta+\gamma=2\) and \(x=(1,1)\). In that case, we have \(\widehat{\psi}(1,0)=\widehat{\psi}(0,1)=\frac{\beta-\gamma}{4}\). It follows that \(\widehat{\psi}\) is strictly terraced.
The windability of \(\widehat{\psi}\) follows from Lemma 4.9.
**Lemma 4.14**.: _For rational numbers \(\beta,\gamma\) such that \(\beta<\gamma\) and \(\beta+\gamma\leq-2\), the function \(\psi:\{0,1\}^{2}\to\mathbb{Q}\) defined by \(\begin{bmatrix}\psi(0,0)&\psi(0,1)\\ \psi(1,0)&\psi(1,1)\end{bmatrix}=\begin{bmatrix}\beta&1\\ 1&\gamma\end{bmatrix}\) satisfies the property that \(-\widehat{\psi}\) is windable and strictly terraced._
Proof.: Since \(\begin{bmatrix}-\widehat{\psi}(0,0)&-\widehat{\psi}(0,1)\\ -\widehat{\psi}(1,0)&-\widehat{\psi}(1,1)\end{bmatrix}=\frac{1}{4}\begin{bmatrix} -2-\beta-\gamma&\gamma-\beta\\ \gamma-\beta&2-\beta-\gamma\end{bmatrix}\), we have \(-\widehat{\psi}(x)\geq 0\) for all \(x\in\{0,1\}^{2}\), and the only possibility of \(-\widehat{\psi}(x)=0\) is when \(\beta+\gamma=-2\) and \(x=(0,0)\). In that case, we have \(-\widehat{\psi}(1,0)=-\widehat{\psi}(0,1)=\frac{\gamma-\beta}{4}\). It follows that \(-\widehat{\psi}\) is strictly terraced.
The windability of \(-\widehat{\psi}\) follows from Lemma 4.9.
Now we are ready to prove Theorem 1.5.
Proof of Theorem 1.5.: The range of parameters can be divided into \(4\) regions:
Case 1: \(\beta>\gamma\) and \(\beta+\gamma\geq 2\). Combining Lemma 4.13 and Corollary 4.12, there is an FPRAS for \(\mathsf{Holant}\left(\{\widehat{\psi}\}\cup\{\mathbf{Even}_{k}:k\geq 1\}\right)\), where \(\psi:\{0,1\}^{2}\to\mathbb{Q}\) is defined by \(\begin{bmatrix}\psi(0,0)&\psi(0,1)\\ \psi(1,0)&\psi(1,1)\end{bmatrix}=\begin{bmatrix}\beta&1\\ 1&\gamma\end{bmatrix}\). An FPRAS for \(\#\mathsf{CSP}(\{\psi\})\) is thus given by Proposition 4.7.
Case 2: \(\beta<\gamma\) and \(\beta+\gamma\geq 2\). This case follows by symmetry from Case 1, as switching \(\beta\) and \(\gamma\) preserves \(Z_{G}\).
Case 3: \(\beta<\gamma\) and \(\beta+\gamma\leq-2\). In a similar way to Case 1, this case follows by combining Lemma 4.14, Corollary 4.12 and Proposition 4.7.
Case 4: \(\beta>\gamma\) and \(\beta+\gamma\leq-2\). This case follows by symmetry from Case 3.
## 5 The Recursion Method
This section collects the proofs of Theorems 1.6 to 1.8. All three proofs are based on the recursion method, but they recurse with different subsets of the complex plane.
### Recursion with Real Intervals
Proposition 3.6 shows that when \(\gamma<0\) and the parameter point \((\beta,\gamma)\) lies slightly to the left of the line \(\beta+\gamma=1\), the realizable ratios are dense in \(\mathbb{R}\). The next lemma says that, if \((\beta,\gamma)\) lies to the right of the line \(\beta+\gamma=1\), the realizable ratios are bounded by an interval, even if certain external fields are allowed in the system. This is in stark contrast to Proposition 3.6, and indicates that some phase transition happens at the line \(\beta+\gamma=1\). As a byproduct, the next lemma also shows that the partition function is always positive, in contrast with Theorem 1.3.
**Lemma 5.1**.: _Let \(\beta,\gamma\) be real numbers such that \(\beta+\gamma\geq 1\) and \(\gamma<0\). Then for any graph \(G\), any external field \(\boldsymbol{\lambda}\in[\frac{\gamma}{\beta},1]^{V(G)}\) and any \(v\in V(G)\),_
1. _the ratio_ \(R_{G,v}(\boldsymbol{\lambda})\) _is well defined and falls in the interval_ \([\frac{\gamma}{\beta},1]\)_;_
2. _the partition function_ \(Z_{G}(\boldsymbol{\lambda})\) _is positive._
Proof.: Since \(\gamma<0\) and \(\beta\geq 1-\gamma>-\gamma\), we have \(\frac{\gamma}{\beta}\in(-1,0)\). Observe that a self-loop on a vertex \(v\) has the same effect as multiplying the local external field \(\lambda_{v}\) by \(\frac{\gamma}{\beta}\) and multiplying the partition function by \(\beta>0\). The new local external field would still be in \([\frac{\gamma}{\beta},1]^{V(G)}\) because \(\frac{\gamma}{\beta}\in(-1,0)\). So we may assume \(E(G)\) doesn't contain self-loops.
We then perform induction on \(|V(G)|+|E(G)|\) for the two statements together. The base case is where \(V(G)\) is a singleton \(\{v\}\) and \(E(G)=\emptyset\), in which case \(R_{G,v}(\boldsymbol{\lambda})=\lambda_{v}\) and \(Z_{G}(\boldsymbol{\lambda})=1+\lambda_{v}>0\). Now assume \(G\) is a graph such that \(|V(G)|+|E(G)|\geq 2\). Assume also that the induction hypotheses (1) and (2) hold for all graphs with a smaller combined number of vertices and edges. Consider a vertex \(v\in V(G)\). The induction step is to prove statements (1) and (2) for the pair \((G,v)\).
If no edge is incident to \(v\), let \(G-v\) be the graph obtained from \(G\) by deleting the vertex \(v\), and let \(\boldsymbol{\lambda}^{\prime}\) be \(\boldsymbol{\lambda}\) restricted on \(V(G)\setminus\{v\}\). We have \([Z_{G,v}(\boldsymbol{\lambda})]_{0}=Z_{G-v}(\boldsymbol{\lambda}^{\prime})>0\) from the induction hypothesis (2) applied on \(G-v\), so \(R_{G,v}\) is well-defined. Clearly \(R_{G,v}=\lambda_{v}\in[\frac{\gamma}{\beta},1]\) and \(Z_{G}(\boldsymbol{\lambda})=(1+\lambda_{v})Z_{G-v}(\boldsymbol{\lambda}^{ \prime})>0\), completing the induction step. In the following, we deal with the harder case where there is an edge \(\{u,v\}\) incident to \(v\).
Define \(G_{1}=G-\{u,v\}\), the graph obtained from \(G\) by deleting the edge \(\{u,v\}\). The following equations are immediate consequences:
\[[Z_{G,v}(\boldsymbol{\lambda})]_{0} =\beta\cdot\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,0}+\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,0},\] \[[Z_{G,v}(\boldsymbol{\lambda})]_{1} =\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,1}+\gamma\cdot\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,1}.\]
To shorten expressions, let \(A=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,0}\), \(B=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,0}\), \(C=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,1}\), and \(D=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,1}\). So the ratio \(R_{G,v}(\boldsymbol{\lambda})=\dfrac{\left[Z_{G,v}(\boldsymbol{ \lambda})\right]_{1}}{\left[Z_{G,v}(\boldsymbol{\lambda})\right]_{0}}\) can be written as \(\frac{C+\gamma D}{\beta A+B}\). The entire goal of the remaining proof is to use the induction hypothesis to show that
\[\beta A+B>0\quad\text{and}\quad\frac{\gamma}{\beta}\leq\frac{C+\gamma D}{\beta A +B}\leq 1. \tag{5.1}\]
Clearly proving (5.1) would imply that the first statement holds for the pair \((G,v)\). The second statement would also follow, since we would have \(Z_{G}(\boldsymbol{\lambda})=\beta A+B+C+\gamma D\geq(1+\frac{\gamma}{\beta})( \beta A+B)>0\).
To prove (5.1), we make use of the induction hypotheses in 6 different ways:
* First, since \(\left|V(G_{1})\right|+\left|E(G_{1})\right|=\left|V(G)\right|+\left|E(G) \right|-1\), the induction hypothesis can be applied to \(G_{1}\). Since \(R_{G_{1},v}(\boldsymbol{\lambda})=\dfrac{\left[Z_{G_{1},v}(\boldsymbol{ \lambda})\right]_{1}}{\left[Z_{G_{1},v}(\boldsymbol{\lambda})\right]_{0}}= \dfrac{C+D}{A+B}\), the induction hypothesis (1) gives \[\frac{\gamma}{\beta}\leq\frac{C+D}{A+B}\leq 1.\] (5.2)
* If we define \(\boldsymbol{\lambda^{\prime}}\in[\frac{\gamma}{\beta},1]^{V(G)}\) to be \(\lambda^{\prime}_{u}=\frac{\gamma}{\beta}\lambda_{u}\) and \(\lambda^{\prime}_{x}=\lambda_{x}\) for all \(x\neq u\), we would have \(R_{G_{1},v}(\boldsymbol{\lambda^{\prime}})=\dfrac{\left[Z_{G_{1},u,v}( \boldsymbol{\lambda^{\prime}})\right]_{0,1}+\left[Z_{G_{1},u,v}(\boldsymbol{ \lambda^{\prime}})\right]_{1,1}}{\left[Z_{G_{1},u,v}(\boldsymbol{\lambda^{ \prime}})\right]_{0,0}+\left[Z_{G_{1},u,v}(\boldsymbol{\lambda^{\prime}}) \right]_{1,0}}=\dfrac{C+\frac{\gamma}{\beta}D}{A+\frac{\gamma}{\beta}B}=\dfrac{ \beta C+\gamma D}{\beta A+\gamma B}\). The induction hypothesis (1) gives \[\frac{\gamma}{\beta}\leq\frac{\beta C+\gamma D}{\beta A+\gamma B}\leq 1.\] (5.3)
* Similarly we can define \(\boldsymbol{\lambda^{\prime}}\in[\frac{\gamma}{\beta},1]^{V(G)}\) to be \(\lambda^{\prime}_{v}=\frac{\gamma}{\beta}\lambda_{v}\) and \(\lambda^{\prime}_{x}=\lambda_{x}\) for all \(x\neq v\). The induction hypothesis on \(R_{G_{1},u}(\boldsymbol{\lambda^{\prime}})\) gives \[\frac{\gamma}{\beta}\leq\frac{\beta B+\gamma D}{\beta A+\gamma C}\leq 1.\] (5.4)
* Consider \(G_{2}=G/\{u,v\}\). This means contracting the edge \(\{u,v\}\): create a new vertex \(w\), delete \(\{u,v\}\) from \(E(G)\), and in every other member of \(E(G)\), substitute \(w\) for any appearance of \(u\) and \(v\) as endpoints. If we define \(\boldsymbol{\lambda^{\prime}}\in[\frac{\gamma}{\beta},1]^{V(G_{2})}\) to be \(\lambda^{\prime}_{w}=\lambda_{u}\lambda_{v}\) and \(\lambda^{\prime}_{x}=\lambda_{x}\) for all \(x\neq w\), it is clear that \(\left[Z_{G_{2},w}(\boldsymbol{\lambda^{\prime}})\right]_{0}=A\) and \(\left[Z_{G_{2},w}(\boldsymbol{\lambda^{\prime}})\right]_{1}=D\). Using the induction hypothesis on \(G_{2}\), we get \[\frac{\gamma}{\beta}\leq\frac{D}{A}\leq 1.\] (5.5)
* If we define \(\boldsymbol{\lambda^{\prime}}\in[\frac{\gamma}{\beta},1]^{V(G)}\) to be \(\lambda^{\prime}_{u}=0\) and \(\lambda^{\prime}_{x}=\lambda_{x}\) for all \(x\neq u\), this has the effect of "pinning" the spin on vertex \(u\) to \(0\). We thus have \(\left[Z_{G_{1},v}(\boldsymbol{\lambda^{\prime}})\right]_{0}=A\) and \(\left[Z_{G_{1},v}(\boldsymbol{\lambda^{\prime}})\right]_{1}=C\). Using the induction hypothesis (1) on \(G_{1}\), we have \(\frac{\gamma}{\beta}\leq\frac{C}{A}\leq 1\). Similarly, by pinning the spin of \(v\) to \(0\), we get \(\frac{\gamma}{\beta}\leq\frac{B}{A}\leq 1\).
* If we define \(\mathbf{\lambda^{\prime}}\) to be \(\lambda^{\prime}_{u}=\lambda^{\prime}_{v}=0\) and \(\lambda^{\prime}_{x}=\lambda_{x}\) for all \(x\neq u,v\), we can pin the spins of both \(u\) and \(v\) to \(0\). Then \(A=Z_{G_{1}}(\mathbf{\lambda^{\prime}})>0\), by the induction hypothesis (2) on \(G_{1}\). Therefore, using \(\frac{\gamma}{\beta}\leq\frac{B}{A}\leq 1\), we get \(A+B\geq(1+\frac{\gamma}{\beta})A>0\) and \(\beta A+\gamma B\geq(\beta+\gamma)A>0\). Similarly we also have \(\beta A+\gamma C>0\). In conclusion, \[A+B,\quad\beta A+\gamma B,\quad\beta A+\gamma C\text{ and }A\text{ are all positive.}\] (5.6)
We claim that given the condition (5.6), the inequalities (5.2), (5.3), (5.4) and (5.5) together imply (5.1). This is, in fact, purely elementary algebra. Recall that \(\beta\geq 1-\gamma>1\). So the first half of (5.1) is obvious: \(\beta A+B=(\beta-1)A+(A+B)\) is also positive. Now, using the four inequalities (5.2) through (5.5), we have
\[1-\frac{C+\gamma D}{\beta A+B}=\frac{A+B}{\beta A+B}\left(1-\frac{C+D}{A+B} \right)+\frac{\beta A}{\beta A+B}\left(\frac{D}{A}-\frac{\gamma}{\beta}\right) +\frac{(\beta+\gamma-1)A}{\beta A+B}\left(1-\frac{D}{A}\right)\geq 0, \tag{5.7}\]
and
\[\begin{split}\frac{C+\gamma D}{\beta A+B}-\frac{\gamma}{\beta}= \frac{(\beta^{2}+\gamma\beta+\gamma^{2})}{(\beta+\gamma)(\beta^{2}+\gamma^{2} )}\cdot\frac{\beta A+\gamma B}{\beta A+B}\left(\frac{\beta C+\gamma D}{\beta A +\gamma B}-\frac{\gamma}{\beta}\right)+\\ \frac{-\gamma\beta}{(\beta+\gamma)(\beta^{2}+\gamma^{2})}\cdot \frac{\beta A+\gamma C}{\beta A+B}\left(\frac{\beta B+\gamma D}{\beta A+ \gamma C}-\frac{\gamma}{\beta}\right)+\\ \frac{-\gamma(\beta+\gamma-1)}{(\beta+\gamma)}\cdot\frac{A}{\beta A +B}\left(1-\frac{D}{A}\right)\geq 0.\end{split} \tag{5.8}\]
This completes the proof of the second half of (5.1). Note that both (5.7) and (5.8) crucially rely on the condition \(\beta+\gamma\geq 1\).
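As an informal numerical check of statement (1) (not part of the proof), the following sketch brute-forces the ratio \(R_{G,v}(\boldsymbol{\lambda})\) on small random graphs with external fields drawn from \([\frac{\gamma}{\beta},1]\); the parameter values are illustrative.

```python
import itertools, random

# Illustrative parameters with gamma < 0 and beta + gamma >= 1.
beta, gamma = 2.0, -0.8

def ratio(V, E, lam):
    """Brute-force R_{G,v}(lambda) for v = 0: the partition function restricted
    to sigma(0) = 1 divided by the one restricted to sigma(0) = 0."""
    Z = [0.0, 0.0]
    for sigma in itertools.product((0, 1), repeat=len(V)):
        w = 1.0
        for (i, j) in E:
            w *= [[beta, 1.0], [1.0, gamma]][sigma[i]][sigma[j]]
        for v in V:
            w *= lam[v] ** sigma[v]
        Z[sigma[0]] += w
    return Z[1] / Z[0]

random.seed(1)
lo, hi = gamma / beta, 1.0
for _ in range(200):
    n = random.randint(2, 5)
    V = list(range(n))
    E = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.6]
    lam = [random.uniform(lo, hi) for _ in V]
    r = ratio(V, E, lam)
    assert lo - 1e-9 <= r <= hi + 1e-9
print("all ratios stayed in [gamma/beta, 1]")
```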
Theorem 1.6 then follows almost immediately:
Proof of Theorem 1.6.: If both \(\beta\) and \(\gamma\) are nonnegative, it is clear that \(Z_{G}>0\). So by symmetry we can assume \(\gamma\) is negative, and then the theorem follows from statement (2) of Lemma 5.1, since \(Z_{G}=Z_{G}(\mathbf{1})\).
### Recursion with Circular Regions
The one major range of parameters where approximation complexity is unsettled is where \(1\leq\beta+\gamma<2\) and (without loss of generality) \(\gamma<0\). As with the proof of Theorem 1.4, zero-freeness is still one of the most natural approaches to try, in terms of proving approximation efficiency.
However, in this range, the contraction method doesn't seem to apply easily. Instead, we will try to attack this range using another important method: induction on the number of vertices, whose power is already well demonstrated in Section 5.1, where we proved Theorem 1.6. In fact, Theorem 1.6 itself can be viewed as a "zero-freeness" result: it implies that the partition function \(Z_{G}(x)\) is zero-free on the real interval \([\frac{\gamma}{\beta},1]\). The only weakness is that, in order to make use of Lemma 4.1, we must exhibit a zero-free neighborhood of \([0,1]\) in _the complex plane_.
Unfortunately, as mentioned in Section 1.2, we are unable to do this for the whole range \(\{(\beta,\gamma):\gamma<0\text{ and }1\leq\beta+\gamma\leq 2\}\). In general, it appears challenging to prove optimal zero-free regions in the complex plane (c.f. [1, 1]). In this section, we will prove Theorem 1.7, which gives an optimal _circular_ zero-free region.
**Lemma 5.2**.: _Let \(\beta,\gamma\) be real numbers such that \(\gamma<0\) and \(1\leq\beta+\gamma\leq 2\). Let \(r=\frac{\beta-1}{1-\gamma}\in(0,1]\) and let \(K\) denote the region \(\{z\in\mathbb{C}:|z|\leq r\}\setminus\{-1\}\) (remark: when \(\beta+\gamma<2\), the exclusion of \(\{-1\}\) is redundant since \(r<1\)). Then for any external field \(\boldsymbol{\lambda}\in K^{V(G)}\) and any \(v\in V(G)\),_
1. _the ratio_ \(R_{G,v}(\boldsymbol{\lambda})\) _is well defined and falls in_ \(K\)_;_
2. _the partition function_ \(Z_{G}(\boldsymbol{\lambda})\) _is nonzero._
**Remark 6**.: The proof below follows the general structure of the proof of Theorem 1.6. The main differences are in the ways we use the induction hypotheses. Although the first half of the proofs are mostly identical, there are occasionally minor differences. So we still present the complete proof.
Proof of Lemma 5.2.: Recall that a self-loop on a vertex \(v\) has the same effect as multiplying the local external field \(\lambda_{v}\) by \(\frac{\gamma}{\beta}\), which doesn't change the fact that \(\boldsymbol{\lambda}\in K^{V(G)}\) because \(\frac{\gamma}{\beta}\in(-1,0)\). So we may assume \(E(G)\) doesn't contain self-loops.
We then perform induction on \(|V(G)|+|E(G)|\) for the two statements together. The base case is where \(V(G)\) is a singleton \(\{v\}\) and \(E(G)=\emptyset\), in which case \(R_{G,v}(\boldsymbol{\lambda})=\lambda_{v}\) and \(Z_{G}(\boldsymbol{\lambda})=1+\lambda_{v}\neq 0\) (this is where we need the exclusion of \(-1\)). Now assume \(G\) is a graph such that \(|V(G)|+|E(G)|\geq 2\). Assume also that the induction hypotheses (1) and (2) hold for all graphs with a smaller combined number of vertices and edges. Consider a vertex \(v\in V(G)\). The induction step is to prove statements (1) and (2) for the pair \((G,v)\).
If no edge is incident to \(v\), let \(G-v\) be the graph obtained from \(G\) by deleting the vertex \(v\), and let \(\boldsymbol{\lambda}^{\prime}\) be \(\boldsymbol{\lambda}\) restricted on \(V(G)\setminus\{v\}\). We have \([Z_{G,v}(\boldsymbol{\lambda})]_{0}=Z_{G-v}(\boldsymbol{\lambda}^{\prime})\neq 0\) from the induction hypothesis (2) applied on \(G-v\), so \(R_{G,v}\) is well-defined. Clearly \(R_{G,v}=\lambda_{v}\in K\) and \(Z_{G}(\boldsymbol{\lambda})=(1+\lambda_{v})Z_{G-v}(\boldsymbol{\lambda}^{ \prime})\neq 0\), completing the induction step. In the following, we deal with the harder case where there is an edge \(\{u,v\}\) incident to \(v\).
Define \(G_{1}=G-\{u,v\}\), the graph obtained from \(G\) by deleting the edge \(\{u,v\}\). We have:
\[[Z_{G,v}(\boldsymbol{\lambda})]_{0} =\beta\cdot\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,0}+\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,0},\] \[[Z_{G,v}(\boldsymbol{\lambda})]_{1} =\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,1}+\gamma\cdot\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,1}.\]
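These identities are obtained by splitting the configurations of \(G\) according to whether \(u\) is assigned \(0\) or \(1\): the deleted edge \(\{u,v\}\) contributes a factor \(\beta\) when both \(u\) and \(v\) are assigned \(0\), a factor \(\gamma\) when both are assigned \(1\), and a factor \(1\) otherwise.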
To shorten expressions, let \(A=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,0}\), \(B=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,0}\), \(C=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,1}\), and \(D=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,1}\). So the ratio \(R_{G,v}(\boldsymbol{\lambda})=\dfrac{\left[Z_{G,v}(\boldsymbol{\lambda}) \right]_{1}}{[Z_{G,v}(\boldsymbol{\lambda})]_{0}}\) can be written as \(\frac{C+\gamma D}{\beta A+B}\).
We make use of the induction hypotheses in the following 3 ways:
* If we define \(\boldsymbol{\lambda}^{\prime}\in K^{V(G)}\) to be \(\lambda_{v}^{\prime}=0\) and \(\lambda_{x}^{\prime}=\lambda_{x}\) for all \(x\neq v\), we would have \(R_{G_{1},u}(\boldsymbol{\lambda}^{\prime})=\dfrac{\left[Z_{G_{1},u}( \boldsymbol{\lambda}^{\prime})\right]_{1}}{\left[Z_{G_{1},u}(\boldsymbol{ \lambda}^{\prime})\right]_{0}}=\dfrac{B}{A}\). The induction hypothesis (1) gives \(\frac{B}{A}\in K\).
* If we define \(\boldsymbol{\lambda}^{\prime}\in K^{V(G)}\) to be \(\lambda_{u}^{\prime}=0\) and \(\lambda_{x}^{\prime}=\lambda_{x}\) for all \(x\neq u\), we would have \(R_{G_{1},v}(\boldsymbol{\lambda}^{\prime})=\dfrac{\left[Z_{G_{1},v}( \boldsymbol{\lambda}^{\prime})\right]_{1}}{\left[Z_{G_{1},v}(\boldsymbol{ \lambda}^{\prime})\right]_{0}}=\dfrac{C}{A}\). The induction hypothesis (1) gives \(\frac{C}{A}\in K\).
* Consider \(G_{2}=G/\{u,v\}\). This means contracting the edge \(\{u,v\}\): create a new vertex \(w\), delete \(\{u,v\}\) from \(E(G)\), and in every other member of \(E(G)\), substitute \(w\) for any appearance of \(u\) and \(v\) as endpoints. If we define \(\boldsymbol{\lambda}^{\prime}\in K^{V(G_{2})}\) to be \(\lambda_{w}^{\prime}=\lambda_{u}\) and \(\lambda_{x}^{\prime}=\lambda_{x}\) for all \(x\neq w\), using the induction hypothesis on \(G_{2}\), we get \(\dfrac{\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{1}}{\left[Z_{ G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{0}}\in K\). Now, we still have \(\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{0}=A\), but \(D=\lambda_{v}\cdot\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{1}\). So \(\frac{D}{A}=\lambda_{v}\dfrac{\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime}) \right]_{1}}{\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{0}}\in K\cdot K\).
Now we are ready to complete the induction step. Recall that \(\beta\geq 1-\gamma>1\). It follows from \(\frac{B}{A}\in K\) and \(-\beta\not\in K\) that \(\beta A+B\neq 0\). So \(R_{G,v}(\boldsymbol{\lambda})=\frac{C+\gamma D}{\beta A+B}\) is well defined. What's more,
\[r\cdot|\beta A+B| \geq r\beta|A|-r|B| \tag{5.9}\] \[\geq r\beta|A|-r^{2}|A| \text{(since $\frac{B}{A}\in K$)}\] (5.10) \[=r(1-\gamma r)|A| \text{(since $r=\frac{\beta-1}{1-\gamma}$)}\] \[\geq|C|+(-\gamma)|D| \text{(since $\frac{C}{A}\in K$ and $\frac{D}{A}\in K\cdot K$)}\] (5.11) \[\geq|C+\gamma D|, \tag{5.12}\]
so \(R_{G,v}(\boldsymbol{\lambda})=\frac{C+\gamma D}{\beta A+B}\in\{z\in\mathbb{C} :|z|\leq r\}\). The only possibility of \(R_{G,v}=-1\) is when \(r=1\) and all the inequalities above hold with equality. But then the equalities in (5.9) and (5.10) together imply that \(B=-rA=-A\), violating the induction hypothesis \(\frac{B}{A}\in K\). So we conclude that \(R_{G,v}\in\{z\in\mathbb{C}:|z|\leq r\}\setminus\{-1\}=K\), proving statement (1) for the pair \((G,v)\). Finally, from \(R_{G,v}=\frac{C+\gamma D}{\beta A+B}\neq-1\) it immediately follows that \(Z_{G}(\boldsymbol{\lambda})=\beta A+B+C+\gamma D=\left(1+\frac{C+\gamma D}{ \beta A+B}\right)(\beta A+B)\neq 0\), proving statement (2) for the pair \((G,v)\).
We also give a complementary result showing that the radius \(r=\frac{\beta-1}{1-\gamma}\) in Lemma 5.2 is optimal:
**Lemma 5.3**.: _Let \(\beta,\gamma\) be real numbers such that \(\gamma<0\) and \(1\leq\beta+\gamma\leq 2\). For any \(r>\frac{\beta-1}{1-\gamma}\), there exists a graph \(G\) such that the polynomial \(Z_{G}(x)\) has a root in the disk \(\{z\in\mathbb{C}:|z|<r\}\)._
Proof.: Consider the graph \(G_{n}\) with \(V(G_{n})=\{v_{0},v_{1},\cdots,v_{n}\}\) and
\[E(G_{n})=\{\{v_{0},v_{1}\},\{v_{0},v_{2}\},\cdots,\{v_{0},v_{n}\}\}\]
(a star graph). It's easy to compute that \(Z_{G_{n}}(x)=(\beta+x)^{n}+x(1+\gamma x)^{n}\). For any \(\frac{\beta-1}{1-\gamma}<r<\beta\), we have \(\frac{1-\gamma r}{\beta-r}>1\). So for sufficiently large \(n\), \(Z_{G_{n}}(-r)=(\beta-r)^{n}(1-r(\frac{1-\gamma r}{\beta-r})^{n})<0\). But \(Z_{G_{n}}(0)=\beta^{n}>0\), so it follows from the intermediate value theorem that the polynomial \(Z_{G_{n}}(x)\) has a root in the real interval \((-r,0)\).
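For completeness, the expression for \(Z_{G_{n}}\) follows by summing over the value assigned to the center vertex \(v_{0}\): if \(v_{0}\) is assigned \(0\), each leaf contributes a factor \(\beta+x\) (edge weight \(\beta\) if the leaf is assigned \(0\), and weight \(1\) together with the external field \(x\) if it is assigned \(1\)), while if \(v_{0}\) is assigned \(1\), the external field contributes \(x\) and each leaf contributes \(1+\gamma x\); hence
\[Z_{G_{n}}(x)=(\beta+x)^{n}+x(1+\gamma x)^{n}.\]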
Proof of Theorem 1.7.: The zero-freeness follows from statement (2) of Lemma 5.2, and the optimality of the radius follows from Lemma 5.3.
### Recursion with Uncentered Circular Regions
In this section, we give a proof to Theorem 1.8. The idea is similar to the proof of Theorem 1.7 in Section 5.2, but this time we recurse with a circular region that's _not_ centered at \(0\). The main new ingredient is that we treat isolated vertices and non-isolated vertices of the graph separately through casework.
**Lemma 5.4**.: _Let \(g:(1,+\infty)\to(0,1)\) be the following function:_
\[g(\beta)=\max\left\{\frac{\beta-2}{\beta^{2}-1},\frac{(\beta-1)^{2}}{\beta^{3} +\beta^{2}-\beta}\right\}. \tag{5.13}\]
_For fixed real parameters \(\beta,\gamma\) with \(\gamma<0\) and \(\beta+\gamma>2-g(\beta)\), there exists an open neighborhood \(U\) of \([\frac{\gamma}{\beta},1]\) on the complex plane and a closed disk \(K\subset\mathbb{C}\) such that for any graph \(G\) and any external field \(\boldsymbol{\lambda}\in U^{V(G)}\),_
1. _the ratio_ \(R_{G,v}(\boldsymbol{\lambda})\) _is well defined and falls in_ \(K\) _for any non-isolated vertex_ \(v\in V(G)\)_;_
2. _the partition function_ \(Z_{G}(\boldsymbol{\lambda})\) _is nonzero._
Before proving the lemma, we first need to specify the regions \(U\) and \(K\). Let \(U=\{z\in\mathbb{C}:\exists x\in[\frac{\gamma}{\beta},1]\text{ s.t. }|z-x|<\varepsilon\}\), where \(\varepsilon>0\) is a sufficiently small constant. Let \(K\) be the closed disk with the real interval \([a,b]\) as its diameter, where \(a\in(-1,0)\) and \(b\in(0,+\infty)\) are constants to be determined later. In fact, during the proof of the lemma, we will impose several requirements on \(a\) and \(b\), and in the end we will show that these requirements are jointly satisfiable in the parameter range \(\{(\beta,\gamma):\gamma<0\text{ and }\beta+\gamma>2-g(\beta)\}\).
Proof of Lemma 5.4.: Since \(\gamma<0\) and \(\beta\geq 2-\gamma-g(\beta)>-\gamma\), we have \(\frac{\gamma}{\beta}\in(-1,0)\). Given the choice of the region \(U\), it is clear that multiplying any component of \(\boldsymbol{\lambda}\) by \(\frac{\gamma}{\beta}\) doesn't change the fact that \(\boldsymbol{\lambda}\in U^{V(G)}\). So we may assume \(E(G)\) doesn't contain self-loops.
We then perform induction on \(|E(G)|\) for the two statements together. The base case is where \(E(G)=\emptyset\). In this case all vertices are isolated, and statement (1) holds vacuously. As for statement (2), we have \(Z_{G}(\boldsymbol{\lambda})=\prod_{v\in V(G)}(1+\lambda_{v})\neq 0\), since \(-1\not\in U\) provided that \(\varepsilon\) is sufficiently small. Now assume \(G\) is a graph with \(|E(G)|\geq 1\). Assume also that the induction hypotheses (1) and (2) hold for all graphs with a smaller number of edges. Consider a non-isolated vertex \(v\in V(G)\). The induction step is to prove statements (1) and (2) for the pair \((G,v)\).
Let vertex \(u\) be a neighbor of \(v\). We first note that statement (1) implies (2): granted (1), we have \(Z_{G}(\boldsymbol{\lambda})=[Z_{G,v}(\boldsymbol{\lambda})]_{0}\cdot(1+R_{G,v })\neq 0\), since \(-1\not\in K\) due to the requirement \(a\in(-1,0)\). In the following, we prove statement (1) by dividing into \(4\) cases.
Case 1: \(\deg_{G}v=1\) and \(\deg_{G}u=1\). In this case, the edge \(\{u,v\}\) is itself a connected component of \(G\). Let \(H\) be the graph obtained from \(G\) by deleting the vertices \(u\) and \(v\). Let \(\boldsymbol{\lambda^{\prime}}\) be the restriction of \(\boldsymbol{\lambda}\) to \(V\setminus\{u,v\}\). We have \([Z_{G,v}(\boldsymbol{\lambda})]_{0}=(\beta+\lambda_{u})Z_{H}(\boldsymbol{ \lambda^{\prime}})\neq 0\), by the induction hypothesis on \(H\) and that \(-\beta\not\in U\). This means \(R_{G,v}\) is well defined. We then have \(R_{G,v}=\dfrac{1+\gamma\lambda_{u}}{\beta+\lambda_{u}}\lambda_{v}\in f(U)\cdot U\), where \(f\) denotes the Mobius transformation \(r\mapsto\frac{1+\gamma r}{\beta+r}\) and \(f(U)\cdot U\) denotes the Minkowski product of \(f(U)\) with \(U\). In order to ensure statement (1), we want \(f(U)\cdot U\subset K\). Since \(U\) is taken to be an arbitrarily small neighborhood of the real interval \(J:=[\frac{\gamma}{\beta},1]\), the requirement on \(K\) is simply \(f(J)\cdot J\subset\operatorname{int}(K)\) (the interior of \(K\)). Using the parameter relations \(\beta+\gamma>1\) and \(\gamma<0\), it's easy to verify that \(f(J)\cdot J\) is contained in the interval \(\left(\frac{\gamma}{\beta},\frac{\beta+\gamma^{2}}{\beta^{2}+\gamma}\right]\). So \(f(J)\cdot J\subset\operatorname{int}(K)\) holds if \(a\) and \(b\) satisfy the following two requirements:
\[a \leq\frac{\gamma}{\beta}, \tag{5.14}\] \[b >\frac{\beta+\gamma^{2}}{\beta^{2}+\gamma}. \tag{5.15}\]
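We note in passing that the right-hand side of (5.15) is exactly \(f\!\left(\frac{\gamma}{\beta}\right)\):
\[f\!\left(\frac{\gamma}{\beta}\right)=\frac{1+\gamma\cdot\frac{\gamma}{\beta}}{\beta+\frac{\gamma}{\beta}}=\frac{\beta+\gamma^{2}}{\beta^{2}+\gamma},\]
which is the largest value of \(f\) on \(J\), since \(f\) is decreasing on \((-\beta,+\infty)\).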
Case 2: \(\deg_{G}v\geq 2\) and \(\deg_{G}u=1\). Let \(H\) be the graph obtained from \(G\) by deleting the vertex \(u\) and the edge \(\{u,v\}\). Let \(\boldsymbol{\lambda^{\prime}}\) be the restriction of \(\boldsymbol{\lambda}\) to \(V\setminus\{u\}\). Since \(v\) is not an isolated vertex in \(H\), we may use the induction hypothesis (1) on the pair \((H,v)\) and get \(R_{H,v}\in K\). Now \([Z_{G,v}]_{0}=(\beta+\lambda_{u})[Z_{H,v}]_{0}\neq 0\), by the induction hypothesis on \((H,v)\) and that \(-\beta\not\in U\). This means \(R_{G,v}\) is well-defined. We then have
\[R_{G,v}=\frac{[Z_{G,v}]_{1}}{[Z_{G,v}]_{0}}=\frac{[Z_{H,v}]_{1}+\lambda_{u}\gamma[Z_{H,v}]_{1}}{\beta[Z_{H,v}]_{0}+\lambda_{u}[Z_{H,v}]_{0}}=f(\lambda_{u})\cdot R_{H,v}\in f(U)\cdot K.\]
In order to ensure statement (1), we want \(f(U)\cdot K\subset K\). We impose the requirement
\[-a<b\leq 1. \tag{5.16}\]
The requirements (5.14) and (5.16) together imply that
\[\left\{z\in\mathbb{C}:|z|\leq\left|\frac{\gamma}{\beta}\right|\right\}\subseteq K \subseteq\{z\in\mathbb{C}:|z|\leq 1\}.\]
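To spell out the first containment: \(K\) is the closed disk centered at \(\frac{a+b}{2}\) with radius \(\frac{b-a}{2}\), so for every \(z\) with \(|z|\leq|a|=-a\) we have, using \(a+b>0\) from (5.16),
\[\left|z-\frac{a+b}{2}\right|\leq|z|+\frac{a+b}{2}\leq-a+\frac{a+b}{2}=\frac{b-a}{2},\]
and \(\left|\frac{\gamma}{\beta}\right|\leq|a|\) by (5.14). The second containment holds because the point of \(K\) farthest from the origin is \(b\leq 1\).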
In particular, we have \(\frac{\gamma}{\beta}\cdot K\subset K\) and by convexity of \(K\), also \(J\cdot K\subset K\) (recall that \(J:=[\frac{\gamma}{\beta},1]\)). Now, since \(U\) is an arbitrarily small neighborhood of \(J\), we have \(f(U)\subset f(J)+W\) (the Minkowski sum of sets), where \(W\) is an arbitrarily small neighborhood of \(0\). Using the parameter relations \(\beta+\gamma>1\) and \(\gamma<0\), it's easy to verify there exists a constant \(0<c<1\) such that \(f(J)\subset c\cdot J\). We thus have
\[f(U)\cdot K\subset(f(J)+W)\cdot K\subset c\cdot J\cdot K+W\cdot K\subset c \cdot K+W\cdot K\subset K.\]
For the last inclusion, note that since \(K\) is convex, \(c\cdot K+(1-c)\cdot K=K\), so it suffices to let \(W\) be sufficiently small such that \(W\cdot K\subset(1-c)\cdot K\).
Case 3: \(\deg_{G}v=1\) and \(\deg_{G}u\geq 2\). Let \(H\) be the graph obtained from \(G\) by deleting the vertex \(v\) and the edge \(\{u,v\}\). Let \(\boldsymbol{\lambda}^{\prime}\) be the restriction of \(\boldsymbol{\lambda}\) to \(V\setminus\{v\}\). Since \(u\) is not an isolated vertex in \(H\), we may use the induction hypothesis (1) on the pair \((H,u)\) and get \(R_{H,u}\in K\). Now \([Z_{G,v}]_{0}=\beta[Z_{H,u}]_{0}+[Z_{H,u}]_{1}=[Z_{H,u}]_{0}\cdot(\beta+R_{H,u })\neq 0\), by the induction hypothesis on \((H,u)\) and that \(-\beta\not\in K\). This means that \(R_{G,v}\) is well defined. We then have
\[R_{G,v}=\frac{[Z_{G,v}]_{1}}{[Z_{G,v}]_{0}}=\frac{\lambda_{v}[Z_{H,u}]_{0}+ \lambda_{v}\gamma[Z_{H,u}]_{1}}{\beta[Z_{H,u}]_{0}+[Z_{H,u}]_{1}}=f(R_{H,u}) \cdot\lambda_{v}\in f(K)\cdot U.\]
In order to ensure statement (1), we want \(f(K)\cdot U\subset K\). By the conformality of Mobius transformations, the map \(f\) preserves orthogonality with the real line. \(f\) is also decreasing as a real function on \((-\beta,+\infty)\). It follows that \(f(K)\) is the disk with the real interval \([f(b),f(a)]\) as its diameter. So if we impose the requirements
\[a<f(b)\text{ and }f(a)<b, \tag{5.17}\]
then \(f(K)\subset\operatorname{int}(K)\). Let \(0<c<1\) be a constant so that \(f(K)\subset c\cdot K\) and let \(W=\{z\in\mathbb{C}:|z|\leq\varepsilon\}\) so that \(U=J+W\). We thus have
\[f(K)\cdot U\subset c\cdot K\cdot(J+W)\subset c\cdot K\cdot J+c\cdot W\cdot K \subset c\cdot K+c\cdot W\cdot K\subset K.\]
For the last inclusion, since \(K\) is convex, \(c\cdot K+(1-c)\cdot K=K\), so it suffices to let \(W\) be sufficiently small such that \(c\cdot W\cdot K\subset(1-c)\cdot K\).
Case 4: \(\deg_{G}v\geq 2\) and \(\deg_{G}u\geq 2\). Let \(G_{1}\) be the graph obtained from \(G\) by deleting the edge \(\{u,v\}\). To shorten expressions, let \(A=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,0}\), \(B=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,0}\), \(C=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{0,1}\), and \(D=\left[Z_{G_{1},u,v}(\boldsymbol{\lambda})\right]_{1,1}\). Since neither \(u\) nor \(v\) is an isolated vertex in \(G_{1}\), we may use the induction hypothesis in similar ways as we did in Lemma 5.2:
* If we define \(\boldsymbol{\lambda}^{\prime}\in U^{V(G)}\) to be \(\lambda^{\prime}_{v}=0\) and \(\lambda^{\prime}_{x}=\lambda_{x}\) for all \(x\neq v\), we would have \(R_{G_{1},u}(\boldsymbol{\lambda}^{\prime})=\frac{\left[Z_{G_{1},u}(\boldsymbol{\lambda}^{\prime})\right]_{1}}{\left[Z_{G_{1},u}(\boldsymbol{\lambda}^{\prime})\right]_{0}}=\frac{B}{A}\). The induction hypothesis (1) gives \(\frac{B}{A}\in K\).
* If we define \(\boldsymbol{\lambda}^{\prime}\in U^{V(G)}\) to be \(\lambda^{\prime}_{u}=0\) and \(\lambda^{\prime}_{x}=\lambda_{x}\) for all \(x\neq u\), we would have \(R_{G_{1},v}(\boldsymbol{\lambda}^{\prime})=\frac{\left[Z_{G_{1},v}(\boldsymbol{\lambda}^{\prime})\right]_{1}}{\left[Z_{G_{1},v}(\boldsymbol{\lambda}^{\prime})\right]_{0}}=\frac{C}{A}\). The induction hypothesis (1) gives \(\frac{C}{A}\in K\).
* Consider \(G_{2}=G/\{u,v\}\). This means contracting the edge \(\{u,v\}\): create a new vertex \(w\), delete \(\{u,v\}\) from \(E(G)\), and in every other member of \(E(G)\), substitute \(w\) for any appearance of \(u\) and \(v\) as endpoints. If we define \(\boldsymbol{\lambda}^{\prime}\in U^{V(G_{2})}\) to be \(\lambda^{\prime}_{w}=\lambda_{u}\) and \(\lambda^{\prime}_{x}=\lambda_{x}\) for all \(x\neq w\), using the induction hypothesis on \(G_{2}\), we get \(\frac{\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{1}}{\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{0}}\in K\). Now, we still have \(\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{0}=A\), but \(D=\lambda_{v}\cdot\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{1}\). So \(\frac{D}{A}=\lambda_{v}\frac{\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{1}}{\left[Z_{G_{2},w}(\boldsymbol{\lambda}^{\prime})\right]_{0}}\in U\cdot K\).
To analyze the set \(U\cdot K\), let \(W=\{z\in\mathbb{C}:|z|\leq\varepsilon\}\), and recall that \(J\) denotes the interval \([\frac{\gamma}{\beta},1]\). We have the estimation
\[U\cdot K=(J+W)\cdot K\subset J\cdot K+W\cdot K=K+W\cdot K\subset K+W.\]
The last inclusion is due to the requirement (5.16). Now, \(\frac{C+\gamma D}{A}\in K+\gamma U\cdot K\subset K+\gamma K+\gamma W\). Note that the Minkowski sum of two disks is again a disk, and in particular, \(K+\gamma K\) is the closed disk with the real interval \([a+\gamma b,b+\gamma a]\) as its diameter. So
\[\left|\frac{C+\gamma D}{A}\right|\leq\max\{|a+\gamma b|,|b+\gamma a|\}+| \gamma|\cdot\varepsilon.\]
It follows from \(\frac{B}{A}\in K\) and \(-\beta\not\in K\) that \(\beta A+B\neq 0\), and hence \(R_{G,v}=\frac{C+\gamma D}{\beta A+B}\) is well-defined. What's more, \(\left|\frac{\beta A+B}{A}\right|\geq\min_{z\in K}|\beta+z|=\beta+a\). Finally, we arrive at the estimation
\[\left|\frac{C+\gamma D}{\beta A+B}\right|=\left|\frac{C+\gamma D}{A}\right|/ \left|\frac{\beta A+B}{A}\right|\leq\frac{\max\{|a+\gamma b|,|b+\gamma a|\}+ |\gamma|\cdot\varepsilon}{\beta+a}.\]
We impose the final and the most crucial requirement:
\[\max\{|a+\gamma b|,|b+\gamma a|\}<|a|\cdot(\beta+a). \tag{5.18}\]
This says that when \(\varepsilon\) is sufficiently small, \(|R_{G,v}|=\left|\frac{C+\gamma D}{\beta A+B}\right|<|a|\). Since the disk \(\{z\in\mathbb{C}:|z|\leq|a|\}\) is contained in \(K\) (by requirement (5.16)), it follows that \(R_{G,v}\in K\), concluding statement (1) and the entire induction step for the pair \((G,v)\).
What remains is to show that all the requirements we imposed on the constants \(a\) and \(b\) during the course of the proof are jointly satisfiable in the parameter range \(\{(\beta,\gamma):\gamma<0\text{ and }\beta+\gamma>2-g(\beta)\}\). We defer this work to the next lemma.
**Lemma 5.5**.: _Let \(g:(1,+\infty)\to(0,1)\) be the function defined in (5.13). For fixed real parameters \(\beta,\gamma\) with \(\gamma<0\) and \(\beta+\gamma>2-g(\beta)\), there are constants \(a\in(-1,0)\) and \(b\in(0,\infty)\) that satisfy the requirements (5.14), (5.15), (5.16), (5.17) and (5.18)._
Proof.: Let functions \(g_{1},g_{2},g_{3}:(1,+\infty)\to\mathbb{R}\) be defined by
\[g_{1}(\beta):=\frac{\beta-2}{\beta^{2}-1},\quad g_{2}(\beta):=\frac{(\beta-1)^{ 2}}{\beta^{3}+\beta^{2}-\beta}\text{ and }g_{3}(\beta):=\frac{\beta^{3}+\beta^{2}-\beta}{\beta^{3}+\beta^{2}-\beta-1}.\]
So \(g(\beta)=\max\{g_{1}(\beta),g_{2}(\beta)\}\). Also observe that \(g_{3}(\beta)>1\) for all \(\beta\in(1,+\infty)\). What's more, by direct computation, we have the relation
\[3-\beta-g_{1}(\beta)=g_{3}(\beta)\cdot\left(3-\beta-g_{2}(\beta)\right). \tag{5.19}\]
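For the reader's convenience, (5.19) can be checked directly: using \(\beta^{3}+\beta^{2}-\beta-1=(\beta+1)(\beta^{2}-1)\), both sides of (5.19) are equal to
\[\frac{-\beta^{4}+2\beta^{3}+3\beta^{2}-\beta-1}{(\beta+1)(\beta^{2}-1)}.\]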
We divide the proof of the lemma into two cases.
Case 1: \(\gamma\leq-1\). In this case, for \(\beta\) ranging in \((1,+\infty)\), by (5.19) and \(g_{3}(\beta)>1\),
\[3-\beta-g_{2}(\beta)<1+\gamma\quad\Rightarrow\quad 3-\beta-g_{1}(\beta)<1+\gamma.\]
Therefore
\[\beta+\gamma>2-\max\{g_{1}(\beta),g_{2}(\beta)\}\quad\Leftrightarrow\quad \beta+\gamma>2-g_{1}(\beta).\]
We choose \(a=\frac{\gamma}{\beta}\) and \(b=\max\left\{\frac{\beta+\gamma^{2}}{\beta^{2}+\gamma},-a\right\}+\varepsilon= \max\{f(a),-a\}+\varepsilon\), where \(\varepsilon>0\) is a sufficiently small constant. The requirements (5.14), (5.15), (5.16) and (5.17) are easy to verify. Since \(b>-a\) and \(\gamma\leq-1\), the left hand side of (5.18) simplifies to \(-a-\gamma b\), and the right hand side simplifies to \(-a(\beta+a)\). Since the \(\varepsilon\) in the formula for \(b\) is arbitrarily small, (5.18) reduces to
\[-\frac{\gamma}{\beta}-\gamma\max\left\{f\left(\frac{\gamma}{\beta}\right),- \frac{\gamma}{\beta}\right\}<-\frac{\gamma}{\beta}\left(\beta+\frac{\gamma}{ \beta}\right),\]
which simplifies to two inequalities:
\[\frac{\gamma}{\beta^{2}(\beta^{2}+\gamma)}(\beta-\gamma)(\beta^{2}-1)\left( \beta+\gamma-2+\frac{\beta-2}{\beta^{2}-1}\right)<0,\]
and
\[\frac{\gamma}{\beta}\left(\beta+\gamma-1+\frac{\gamma}{\beta}\right)<0.\]
The first of the two inequalities is clearly satisfied since we know that \(\beta+\gamma>2-g_{1}(\beta)\). The second is satisfied because
\[\beta+\gamma-1+\frac{\gamma}{\beta} >\beta+\gamma-2+\frac{2-g_{1}(\beta)}{\beta} \text{(since $\beta+\gamma>2-g_{1}(\beta)$)}\] \[>\beta+\gamma-2+\frac{1}{\beta} \text{(since $g_{1}(\beta)<1$)}\] \[>\beta+\gamma-2+\frac{\beta-2}{\beta^{2}-1}\] \[>0 \text{(since $\beta+\gamma>2-g_{1}(\beta)$)}.\]
Case 2: \(\gamma>-1\). In this case, for \(\beta\) ranging in \((1,+\infty)\), by (5.19) and \(g_{3}(\beta)>1\),
\[3-\beta-g_{1}(\beta)<1+\gamma\quad\Rightarrow\quad 3-\beta-g_{2}(\beta)<1+\gamma.\]
Therefore
\[\beta+\gamma>2-\max\{g_{1}(\beta),g_{2}(\beta)\}\quad\Leftrightarrow\quad \beta+\gamma>2-g_{2}(\beta).\]
We choose \(a=-\frac{1}{\beta}\) and \(b=f(a)+\varepsilon\), where \(\varepsilon>0\) is a sufficiently small constant. (5.14), (5.15), (5.16) and (5.17) are easy to verify. Since \(b>-a\) and \(\gamma>-1\), the left hand side of (5.18) simplifies to \(b+\gamma a\), and the right hand side simplifies to \(-a(\beta+a)\). Since the \(\varepsilon\) in the formula for \(b\) is arbitrarily small, (5.18) reduces to
\[f\left(-\frac{1}{\beta}\right)-\gamma\cdot\frac{1}{\beta}<-\frac{\gamma}{\beta }\left(\beta-\frac{1}{\beta}\right),\]
which simplifies to
\[\frac{\beta^{3}+\beta^{2}-\beta}{\beta^{2}(\beta^{2}-1)}\left(\beta+\gamma-2+ \frac{(\beta-1)^{2}}{\beta^{3}+\beta^{2}-\beta}\right)>0.\]
It is satisfied since \(\beta+\gamma>2-g_{2}(\beta)\).
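As a purely illustrative sanity check, and not part of the argument, one can also evaluate the five requirements numerically for a concrete admissible pair of parameters. The short Python sketch below does this for Case 2 of Lemma 5.5 with the explicit choices \(a=-\frac{1}{\beta}\) and \(b=f(a)+\varepsilon\); the sample values \(\beta=2\), \(\gamma=-0.05\), \(\varepsilon=10^{-6}\), as well as all function names, are our own choices introduced only for this illustration.

```python
def f(r, beta, gamma):
    """The Mobius transformation r -> (1 + gamma*r) / (beta + r)."""
    return (1 + gamma * r) / (beta + r)


def check_requirements(beta, gamma, eps=1e-6):
    """Evaluate requirements (5.14)-(5.18) with a = -1/beta, b = f(a) + eps (Case 2)."""
    assert -1 < gamma < 0, "this sketch only covers Case 2 of Lemma 5.5"
    g1 = (beta - 2) / (beta ** 2 - 1)
    g2 = (beta - 1) ** 2 / (beta ** 3 + beta ** 2 - beta)
    assert beta + gamma > 2 - max(g1, g2), "parameters outside the range of Theorem 1.8"

    a = -1 / beta
    b = f(a, beta, gamma) + eps

    return {
        "(5.14)": a <= gamma / beta,
        "(5.15)": b > (beta + gamma ** 2) / (beta ** 2 + gamma),
        "(5.16)": -a < b <= 1,
        "(5.17)": a < f(b, beta, gamma) and f(a, beta, gamma) < b,
        "(5.18)": max(abs(a + gamma * b), abs(b + gamma * a)) < abs(a) * (beta + a),
    }


if __name__ == "__main__":
    # Sample parameters: beta = 2, gamma = -0.05 satisfy beta + gamma > 2 - g(beta) = 1.9.
    for req, ok in check_requirements(beta=2.0, gamma=-0.05).items():
        print(req, "holds" if ok else "FAILS")
```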
Now we are ready to prove Theorem 1.8.
Proof of Theorem 1.8.: By symmetry between \(\beta\) and \(\gamma\), we may assume \(\gamma<0\) and thus \(\beta+\gamma>2-g(\beta)\). The theorem now follows by combining Lemma 4.1 with the statement (2) in Lemma 5.4.
## 6 Concluding Remarks
The obvious problem left open by this work is to fully classify the complexity of approximating \(Z_{G}\) in the parameter range \(1\leq\beta+\gamma<2\) and (without loss of generality) \(\gamma<0\). Observe that there is an NP-hard region in this range: when \((\beta,\gamma)\) is sufficiently close to \((1,0)\), by a \(2\)-thickening (i.e. replacing every edge by \(2\) parallel edges) we get a reduction from the same problem at \(A=\begin{bmatrix}\beta^{2}&1\\ 1&\gamma^{2}\end{bmatrix}\), which lies in the region of "non-uniqueness" and is known to be NP-hard by [13]. However, this only gives us a small bounded region of NP-hardness, since the region of non-uniqueness is bounded (for a rough image, see Figure 1).
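To make the thickening step concrete (writing the dependence on the edge parameters explicitly, a notational convention we adopt only for this remark): doubling an edge multiplies its weight contribution, so for every graph \(G\) and external field \(x\) one has
\[Z_{G^{(2)}}(x;\beta,\gamma)=Z_{G}(x;\beta^{2},\gamma^{2}),\]
where \(G^{(2)}\) denotes the graph obtained from \(G\) by replacing every edge with \(2\) parallel edges; hardness at the parameters \((\beta^{2},\gamma^{2})\) therefore transfers to \((\beta,\gamma)\).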
Theorem 1.8 shows that in the other direction, there also exists some tractable region in the range \(\{(\beta,\gamma):\gamma<0\text{ and }1\leq\beta+\gamma\leq 2\}\). Although the region where tractability is proved extends to infinity, it is rather thin (having width \(g(\beta)\approx 0.1\) for small \(\beta\)) and its width tends to zero as \(\beta\to+\infty\) (we have \(g(\beta)=O(1/\beta)\)). Is it possible to prove larger tractable regions?
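To substantiate these estimates on the width: both \(g_{1}(\beta)=\frac{\beta-2}{\beta^{2}-1}\) and \(g_{2}(\beta)=\frac{(\beta-1)^{2}}{\beta^{3}+\beta^{2}-\beta}\) are \(\Theta(1/\beta)\) as \(\beta\to+\infty\), so indeed \(g(\beta)=O(1/\beta)\); for instance, \(g(2)=\max\{0,\frac{1}{10}\}=\frac{1}{10}\).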
**Problem 6.1**.: Does there exist some \(\varepsilon>0\) such that approximating \(Z_{G}\) is tractable whenever \(\min\{\beta,\gamma\}<0\) and \(\beta+\gamma>2-\varepsilon\)?
Possibly the best hope for a complete classification of approximation complexity in the range \(\{(\beta,\gamma):\gamma<0\text{ and }1\leq\beta+\gamma\leq 2\}\) is to extend the uniqueness line in the positive quadrant to the negative regime.
**Problem 6.2**.: Is there a natural extension of the uniqueness/non-uniqueness phase transition to the case where \(\min\{\beta,\gamma\}<0\)?
Note that our method for proving Theorem 1.5 is to transform the problem to another problem with exclusively nonnegative parameters and use the techniques developed specifically for nonnegative problems. Interestingly, Theorem 1.6 shows that the partition function is always positive in the range \(\beta+\gamma\geq 1\). This points to another direction: can we reduce the problem to an "intrinsically positive" one?
**Problem 6.3**.: Is it possible to transform the problem of computing \(Z_{G}\) in the range \(\beta+\gamma\geq 1\) to a problem with only nonnegative parameters, like the way we did in Section 4.3?
## Acknowledgements
We thank Mingji Xia for many very helpful conversations about this work. |
2309.14532 | Towards Ivanov's meta-conjecture for geodesic currents | Given a closed, orientable surface $S$ of negative Euler characteristic, we
study two automorphism groups: $Aut(\mathscr{C})$ and $Aut(\mathcal{ML})$,
groups of homeomorphisms that preserve the intersection form in the space
$\mathscr{C}$ of geodesic currents and the space $\mathcal{ML}$ of measured
laminations. We prove that except in a few special cases, $Aut(\mathcal{ML})$
is isomorphic to the extended mapping class group. This theorem is a special
case of Ivanov's meta-conjecture. We investigate this question for
$Aut(\mathscr{C})$.
We also answer a question of Leininger about whether closed curves can be
strongly simple intersection equivalent and not length equivalent by providing
an infinite family of counterexamples. These examples demonstrate the
difficulty in proving Ivanov's conjecture for $Aut(\mathscr{C})$. | Meenakshy Jyothis | 2023-09-25T21:14:16Z | http://arxiv.org/abs/2309.14532v2 | # Ivanov's meta-conjecture in the context of measured laminations
###### Abstract.
Given a closed, genus \(g\) surface \(S\), we consider \(Aut(\mathcal{ML})\), the group of homeomorphisms of \(\mathcal{ML}\) that preserve the intersection number. We prove that except in a few special cases, \(Aut(\mathcal{ML})\) is isomorphic to the extended mapping class group. The theorem is a special case of Ivanov's _meta conjecture_, which states that any "sufficiently rich" object naturally associated to a surface has automorphism group isomorphic to the extended mapping class group. Some of the results in this paper can be generalized to the setting of geodesic currents. To illustrate the challenges encountered when extending the theorem to the context of currents, we construct an infinite family of pairs of closed curves that have the same simple marked length spectra and the same self intersection number.
## 1. Introduction
Let \(S\) be a closed, orientable, finite type surface of genus \(g\geq 2\) and let \(Mod^{\pm}(S)\) denote the extended mapping class group of \(S\). Ivanov's meta conjecture states that when \(g\geq 3\), every object naturally associated to a surface \(S\) that has a "sufficiently rich" structure has \(Mod^{\pm}(S)\) as its group of automorphisms, and makes a similar claim for when \(g=2\) [20].
In his seminal work, Ivanov proved that the automorphism group of the curve complex of \(S\) is \(Mod^{\pm}(S)\)[20]. Ivanov's theorem inspired a number of results in the following years ([17], [18],[19], [2]). Many of these results considered different complexes associated to a surface and showed that their automorphism group is \(Mod^{\pm}(S)\), and their proofs use Ivanov's original theorem.
In this paper, we consider the space of measured laminations on a surface, \(\mathcal{ML}(S)\). Given a hyperbolic structure on \(S\), a measured lamination is a closed subset of \(S\) foliated by geodesics, equipped with a transverse measure. We aim to prove Ivanov's meta conjecture for a specific automorphism group of \(\mathcal{ML}(S)\).
Weighted simple closed multicurves are dense in \(\mathcal{ML}(S)\). By work of Kerckhoff, the geometric intersection number on simple closed curves extends to an 'intersection form' \(i(.,.)\) on \(\mathcal{ML}(S)\). In particular, the intersection form is a continuous bilinear map \(i(.,.):\mathcal{ML}(S)\times\mathcal{ML}(S)\rightarrow\mathbb{R}\), such that for any two simple closed curves \(\gamma\) and \(\delta\), the intersection form \(i(\gamma,\delta)\) agrees with their geometric intersection number.
In this paper we will be considering the automorphism group \(Aut(\mathcal{ML})\) that consists of homeomorphisms on \(\mathcal{ML}(S)\) that preserve the intersection form. Let
\[Aut(\mathcal{ML})=\{\phi:\mathcal{ML}(S)\rightarrow\mathcal{ML}(S)\text{ homeo }|i(\lambda,\mu)=i(\phi(\lambda),\phi(\mu))\}\]
We show that in most cases \(Aut(\mathcal{ML})\) is isomorphic to the extended mapping class group.
**Theorem 1.1**.: _Let \(S_{g}\) be a closed, orientable, finite type surface of genus \(g\geq 2\) and let \(Aut(\mathcal{ML})\) denote the group of homeomorphisms on \(\mathcal{ML}(S_{g})\) that preserve the intersection form. Then, for all \(g\neq 2\)_
\[Aut(\mathcal{ML})\cong Mod^{\pm}(S_{g})\]
_For the surface of genus 2,_
\[Aut(\mathcal{ML})\cong Mod^{\pm}(S_{2})/H\]
_Where, \(H\) is the order two subgroup generated by the hyperelliptic involution._
**Remark 1.2**.: _The above theorem is equivalent to the statement that \(Aut(\mathcal{ML})\) is isomorphic to the automorphism group of the curve complex for all surfaces \(S_{g}\) with \(g\geq 2\)._
### The case of geodesic currents
The space of geodesic currents on a surface, \(\mathcal{C}(S)\), is an extension of \(\mathcal{ML}\) that contains weighted _non-simple_ closed curves as a dense subset in the same way as \(\mathcal{ML}\) contains weighted simple closed multicurves as a dense subspace. By work of Bonahon, the intersection form also extends to \(\mathcal{C}(S)\)[1]. The space of geodesic currents comes equipped with the weak* topology.
Let \(Aut(\mathcal{C})\) denote the group of homeomorphisms on \(\mathcal{C}(S)\) that preserve the intersection form. It is already known that \(Mod^{\pm}(S)\) embeds in \(Aut(\mathcal{C})\). For \(g\geq 3\), the theorem in this paper also gives us a surjection.
\[Mod^{\pm}(S)\hookrightarrow Aut(\mathcal{C})\stackrel{{ f}}{{ \longrightarrow}}Aut(\mathcal{ML})\stackrel{{\cong}}{{ \longrightarrow}}Mod^{\pm}(S)\]
We are interested to know if Ivanov's theorem holds for \(Aut(\mathcal{C})\).
After we finished writing, we discovered that Ken'ichi Ohshika and Athanase Papadopoulos also prove that \(Aut(\mathcal{ML})\) is isomorphic to \(Mod^{\pm}(S)\) [2]. However, they use quite different techniques from ours. In particular, many of the results in this paper also apply to the general setting of geodesic currents. But Ivanov's meta conjecture for \(Aut(\mathcal{C})\) does not follow immediately from these results. One reason why this is not immediate is that it is hard to show that the surjective map \(f:Aut(\mathcal{C})\twoheadrightarrow Aut(\mathcal{ML})\) is also injective. In fact, we construct an infinite family of pairs of closed curves (\(\gamma_{n}\), \(\gamma_{n}^{\prime}\)) with the same self intersection number and the same simple marked length spectra (defined in Section 2.2). The kernel of \(f\) could contain automorphisms of currents that map \(\gamma_{n}\) to \(\gamma_{n}^{\prime}\).
**Theorem 1.3**.: _Let \(S\) be a surface of genus at least 2. We can find infinitely many pairs of closed curves \(\gamma_{n}\) and \(\gamma_{n}^{\prime}\) on \(S\) such that \(\gamma_{n}\) and \(\gamma_{n}^{\prime}\) have the same self intersection number and the same simple marked length spectra._
There are results similar to Theorem 1.3 in the literature. Two non-isotopic closed curves \(\alpha\) and \(\beta\) are said to be \(k\)-equivalent if they intersect all the closed curves with self intersection \(k\) the same number of times [15]. For any given \(k>0\), Parlier and Xu construct closed curves \(\alpha\) and \(\beta\) that are not \(k\)-equivalent, but are \(k^{\prime}\)-equivalent for any \(k^{\prime}<mk^{2}\) different from \(k\) [11]. In particular, the curves \(\alpha\) and \(\beta\) constructed have the same simple marked length spectra. However, the curves \(\alpha\) and \(\beta\) have different self intersection numbers. This means that any map \(\phi\) in the kernel of \(f\) cannot map \(\alpha\) to \(\beta\). For the purpose of this paper, we are interested in pairs of closed curves that share the same self intersection number and the same simple marked length spectrum.
### Plan of the paper
The paper is organized as follows. Section 2 provides background on measured laminations and geodesic currents. In Section 3 we prove some properties of elements in \(Aut(\mathcal{ML})\). The proof of Theorem 1.1 follows immediately from these properties. In particular, in Section 3 we prove that any \(\phi\in Aut(\mathcal{ML})\) is linear, when \(\mathcal{ML}\) is viewed as a subset of \(\mathcal{C}(S)\). In this section, we also show that any such \(\phi\) maps simple closed curves to weighted simple closed curves. As a consequence of these properties, we get that any \(\phi\in Aut(\mathcal{ML})\) maps simple closed curves to simple closed curves, and therefore has an action on the curve complex. In Section 4, we prove Theorem 1.1 using Ivanov's original theorem and a density argument. In Section 5, we discuss some obstructions that arise when we try to generalize Theorem 1.1 to the setting of currents. In this section we construct examples of pairs of closed curves that have the same self intersection number and simple marked length spectrum.
### Acknowledgements
The author would like to thank her advisor Eugenia Sapir for her support throughout this project and for the many insightful conversations that greatly contributed to this work. The author would also like to thank Didac Martinez Granado for bringing Ken'ichi Ohshika's and Athanase Papadopoulos' work and Hugo Parlier's and Binbin Xu's work to her attention.
## 2. Background
Let \(S\) be a surface equipped with a complete hyperbolic metric, so that \(S=\mathbb{H}^{2}/\Gamma\) for some \(\Gamma\leq\) PSL(2,\(\mathbb{R}\)). We can identify the universal cover of the surface \(S\) with \(\mathbb{H}^{2}\). The spaces \(\mathcal{ML}(S)\) and \(\mathcal{C}(S)\), which were discussed in the introduction, are independent of the choice of hyperbolic metric on \(S\) [10]. However, in order to define a geodesic current or a measured lamination we will need to fix a hyperbolic metric.
Throughout the paper, subsurfaces of \(S\) and closed curves will be considered up to isotopy. The complexity of a closed surface of genus \(g\) is defined to be \(3g-3\).
### Laminations:
A _geodesic lamination_\(\lambda\) is a closed subset of \(S\) foliated by simple, complete geodesics. An example of a geodesic lamination is a multicurve consisting of pairwise disjoint, simple closed geodesics on \(S\). Another fundamental example arises from considering a set of disjoint geodesics in \(\mathbb{H}^{2}\) that are invariant under \(\Gamma\), whose union is closed. Projecting this set onto S gives us a geodesic lamination. We say a geodesic lamination is _minimal_ if it contains no proper non-empty sublamination.
A _complementary region_ of a geodesic lamination \(\lambda\) is a connected component of the open complement \(S\backslash\lambda\). A geodesic lamination that _fills_ a subsurface \(S^{\prime}\) of \(S\) intersects every essential, non-peripheral simple closed curve on \(S^{\prime}\). The complementary regions of such a filling lamination are either ideal polygons on \(S^{\prime}\) or crowns homotopic to a boundary curve of \(S^{\prime}\).
Some geodesic laminations can be equipped with a transverse measure, which assigns a positive measure to each arc \(\tau\) that intersects the lamination \(\lambda\) transversely. This measure is invariant under homotopy transverse to \(\lambda\) and is supported on \(\tau\cap\lambda\). A _measured lamination_ is a geodesic lamination with a transverse measure that has full support. Abusing notation, we will use \(\lambda\) to denote the transverse measure. It is important to note that a measured lamination \(\lambda\) can come equipped with
different transverse measures. To avoid confusion between measures and their supports we have used notation more carefully later in this paper. A description of this can be found under 'Notational Conventions' in Section 2.3.
### Geodesic currents:
Measured laminations can be thought of as a subset of a larger space of measures called geodesic currents, defined as follows. Let \(\mathcal{G}\) be the set of all unparameterized, unoriented complete geodesics in \(\mathbb{H}^{2}\). Any geodesic in \(\mathbb{H}^{2}\) is determined by its pair of endpoints on the boundary of \(\mathbb{H}^{2}\). Hence there is a natural bijection \(\mathcal{G}\cong(S^{1}\times S^{1}\setminus\Delta)/\sim\), where \(\Delta\) is the diagonal in \(S^{1}\times S^{1}\) and the equivalence relation \(\sim\) identifies the coordinates \((a,b)\) and \((b,a)\). We assign \(\mathcal{G}\) the topology of \((S^{1}\times S^{1}\setminus\Delta)/\sim\). The fundamental group \(\pi_{1}(S)\) embeds as a subgroup \(\Gamma\) of PSL(2,\(\mathbb{R}\)). The isometry group PSL(2,\(\mathbb{R}\)) of \(\mathbb{H}^{2}\) acts naturally on \(\mathcal{G}\). A geodesic current is a \(\Gamma\)-invariant, locally finite, positive, Borel measure on \(\mathcal{G}\). The space of all geodesic currents endowed with the weak* topology is denoted by \(\mathcal{C}(S)\).
The set of closed curves on \(S\) embeds into the space of currents. To see this consider a closed curve \(\gamma\) on \(S\). The lifts of \(\gamma\) are a discrete subset of \(\mathcal{G}\). The Dirac measure on this discrete set is a geodesic current, and by abuse of notation we will denote the geodesic current by \(\gamma\) as well. Since positive measures are closed under addition and scalar multiplication by positive reals, weighted multicurves are also examples of geodesic currents.
By work of Bonahon, it is known that weighted closed curves are dense in \(\mathcal{C}(S)\) [1]. Bonahon also shows that the geometric intersection number \(i(.,.)\) defined on pairs of closed curves extends bilinearly and continuously to an intersection form on pairs of currents. For any two currents \(\mu\) and \(\nu\), the intersection form \(i(\mu,\nu)\) is defined as \(\mu\times\nu(\mathcal{J}/\Gamma)\), where \(\mathcal{J}\) is the subset of \(\mathcal{G}\times\mathcal{G}\) consisting of all pairs of distinct geodesics in \(\mathbb{H}^{2}\) that intersect.
Any geodesic current \(\mu\) satisfying \(i(\mu,\mu)=0\) is supported on a \(\Gamma\)-invariant geodesic lamination on \(\mathbb{H}^{2}\), and \(\mu\) induces a transverse measure on its support. Because of this, \(\mathcal{ML}\) embeds into \(\mathcal{C}(S)\).
For a geodesic current \(\mu\), one can consider the intersection of \(\mu\) with every closed curve on \(S\). The collection \(\{i(\mu,\gamma)\}\), indexed by the closed curves \(\gamma\) on \(S\), is the marked length spectrum of \(\mu\). Otal proved that every geodesic current \(\mu\) is uniquely determined by its marked length spectrum [10]. The simple marked length spectrum of \(\mu\) is the collection \(\{i(\mu,s)\}\), indexed by the simple closed curves \(s\) on \(S\).
### Notational Convention:
For a geodesic current \(\mu\), consider the support of \(\mu\) in \(\mathcal{G}\) and take its projection onto \(S\). We will denote this set by \(|\mu|\). We will often use this notation to avoid confusion between the current and its support. For instance, the geodesic current corresponding to a closed curve \(\gamma\) will be denoted by \(\gamma\), but the curve itself will be denoted by \(|\gamma|\). Similarly, the transverse measure on a measured lamination will be denoted by \(\lambda\), and the lamination itself will be denoted by \(|\lambda|\). For example, if a measured lamination is not uniquely ergodic and supports two distinct measures we will denote them using distinct symbols \(\lambda\) and \(\eta\). But, we will have \(|\lambda|=|\eta|\).
### Curve Complex and Ivanov's theorem:
The curve complex, \(CC\) on a surface \(S\) is a simplicial complex whose vertices correspond to isotopy classes of essential simple closed curves on
\(S\), and whose edges correspond to pairs of simple closed curves that have geometric intersection number zero.
Let us denote the vertex set of the curve complex by \(V(S)\). An automorphism of the curve complex is a bijection on \(V(S)\) that takes simplices to simplices. We will use \(Aut(CC)\) to denote the automorphism group of the curve complex on a surface \(S\). Ivanov's theorem states that \(Aut(CC)\cong Mod^{\pm}(S)\) for surfaces of genus at least three. For \(S_{2}\), the closed, orientable surface of genus \(2\), \(Aut(CC)\cong Mod^{\pm}(S_{2})/H\), where \(H\) denotes the subgroup generated by the hyperelliptic involution.
## 3. Properties of automorphisms that preserve the intersection form
In this section we prove two properties of elements in \(Aut(\mathcal{ML})\) that play a key role in proving \(Aut(\mathcal{ML})\cong Mod^{\pm}(S)\). Namely, we show that any \(\phi\in Aut(\mathcal{ML})\) is linear and that any such \(\phi\) maps simple closed curves to simple closed curves.
Most of the content in this section remains true in the setting of geodesic currents. In fact, the statements of Propositions 3.1, 3.3 and 3.13 hold true for any \(\phi\in Aut(\mathcal{C})\). The remarks in this section concern how the results generalize to the space of currents.
### Linearity
**Proposition 3.1**.: _Let \(\phi\in Aut(\mathcal{ML})\) and let \(\lambda,\nu\in\mathcal{ML}\) such that \(\lambda+\nu\in\mathcal{ML}\). Let \(c\) be a positive real number. Then_
\[\phi(\lambda+\mathrm{c}\nu)=\phi(\lambda)+\mathrm{c}\phi(\nu)\]
Proof.: Observe that \(Aut(\mathcal{ML})\) is a group, and if \(\phi\) preserves the intersection form then \(\phi^{-1}\) preserves the intersection form as well. Now, for any simple closed curve \(\gamma\), we have the following equality
\[i(\gamma,\phi(\lambda+\mathrm{c}\nu)) =i(\phi^{-1}(\gamma),\lambda+\mathrm{c}\nu)\] \[=i(\phi^{-1}(\gamma),\lambda)+\mathrm{c}i(\phi^{-1}(\gamma),\nu)\] \[=i(\gamma,\phi(\lambda))+\mathrm{c}i(\gamma,\phi(\nu))\] \[=i(\gamma,\phi(\lambda))+i(\gamma,\mathrm{c}\phi(\nu))\] \[=i(\gamma,\phi(\lambda)+\mathrm{c}\phi(\nu))\]
This implies \(\phi(\lambda+\mathrm{c}\nu)\) and \(\phi(\lambda)+\mathrm{c}\phi(\nu)\) have the same simple marked length spectrum, and therefore \(\phi(\lambda+\mathrm{c}\nu)=\phi(\lambda)+\mathrm{c}\phi(\nu)\)[12].
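In particular, taking \(\nu=\lambda\) shows that \(\phi((1+c)\lambda)=(1+c)\phi(\lambda)\) for every \(c>0\), and hence \(\phi(c\lambda)=c\phi(\lambda)\) for every \(c>0\): for \(c>1\) this is the previous identity, and for \(0<c<1\) one applies it to the lamination \(c\lambda\) with the scalar \(\frac{1}{c}>1\), obtaining \(\phi(\lambda)=\frac{1}{c}\phi(c\lambda)\).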
**Remark 3.2**.: _If \(\gamma\) is allowed to be any closed curve, then the same proof can be used to see that any \(\phi\in Aut(\mathcal{C})\) is linear. In this case, we will be looking at the marked length spectrum of a current instead of its simple marked length spectrum. However, from Otal's work on currents it is known that a geodesic current is uniquely determined by its marked length spectrum [10]._
### Mapping simple closed curves to weighted simple closed curves
**Proposition 3.3**.: _Automorphisms of the space of measured laminations that preserve the intersection form map simple closed curves to weighted simple closed curves._
We prove this proposition after proving the lemmas below.
**Lemma 3.4**.: _Let \(\lambda_{1}\) and \(\lambda_{2}\) be any two geodesic currents satisfying \(|\lambda_{1}|=|\lambda_{2}|\). Then for any geodesic current \(\mu\), \(i(\lambda_{1},\mu)\neq 0\Longleftrightarrow i(\lambda_{2},\mu)\neq 0\)._
_In particular, if \(\lambda_{1}\) and \(\lambda_{2}\) are two measured laminations satisfying \(|\lambda_{1}|=|\lambda_{2}|\), then for any measured lamination \(\mu\), \(i(\lambda_{1},\mu)\neq 0\Longleftrightarrow i(\lambda_{2},\mu)\neq 0\)._
Proof.: Let \(\mathcal{J}\) be the set defined in Section 2.2. The support of \(\lambda_{1}\times\mu\) in \(\mathcal{J}\) is the set of ordered pairs of geodesics \((g_{1},g_{2})\) such that \(g_{1}\in|\lambda_{1}|\) and \(g_{2}\in|\mu|\) [1]. That means the support of \(\lambda_{1}\times\mu\) in \(\mathcal{J}\) is the same as the support of \(\lambda_{2}\times\mu\) in \(\mathcal{J}\). Therefore, the supports of \(\lambda_{1}\times\mu\) and \(\lambda_{2}\times\mu\) in \(\mathcal{J}/\Gamma\) are the same. But then, \(i(\lambda_{1},\mu)\neq 0\) implies \(i(\lambda_{2},\mu)\neq 0\) and vice versa.
The next lemma was previously known, see for example [10]. But we include the proof here for completeness.
**Lemma 3.5**.: _Let \(\lambda\) be a minimal measured lamination that fills a subsurface \(S^{\prime}\) of \(S\) and let \(\mu\) be any other measured lamination with non-empty support satisfying:_
1. \(|\mu|\) _is contained in the interior of_ \(S^{\prime}\)_._
2. \(i(\lambda,\mu)=0\)__
_Then \(|\mu|=|\lambda|\)._
Proof.: Let \(\omega\) be a measured lamination consisting of the leaves which are common to \(|\lambda|\) and \(|\mu|\). Since \(\lambda\) is minimal, \(|\omega|\) has to be either empty or all of \(|\lambda|\).
**Case 1.** If \(\omega\) is empty, then the geodesics in \(|\lambda|\) and \(|\mu|\) neither coincide nor intersect. Also, observe that \(|\mu|\) cannot intersect any of the boundary curves of \(S^{\prime}\). This implies that, in the universal cover, any lift of \(|\mu|\) has to live in the complementary regions formed by lifts of \(|\lambda|\) and the lifts of the boundary curves of \(S^{\prime}\). As \(\lambda\) is filling, such complementary regions will either be ideal polygons in \(\mathbb{H}^{2}\) bounded by geodesics in the lift of \(|\lambda|\) or they will be crowns bounded by both the geodesics in the lifts of \(|\lambda|\) and the geodesics in the lifts of boundary curves of \(S^{\prime}\) (see Figure 1).
Figure 1: Two types of complementary regions of \(|\lambda|\). Here, \(\gamma\) is a peripheral curve of \(S^{\prime}\)
Any leaf in \(\mu\) can only go from one ideal vertex of such a complementary region to another. But any such leaf is an isolated open leaf and cannot be in the support of a measured lamination [1]. This contradicts our assumption that \(\mu\) is a measured lamination with non-empty support.
**Case 2.** Now, if \(|\omega|\) is all of \(|\lambda|\), we get \(|\lambda|=|\omega|\subseteq|\mu|\). Let \(\mu_{\omega}\) denote the restriction of \(\mu\) to \(|\omega|\). Notably, \(|\omega|\) is closed. Using the decomposition of laminations into minimal components, we can write \(\mu=\mu_{\omega}+\mu^{\prime}\), where \(\mu^{\prime}\) is the measured lamination carrying the portion of \(\mu\) disjoint from \(|\omega|\). Observe that both conditions (1) and (2) in the statement of the lemma are fulfilled by \(\mu^{\prime}\). As we have already established in the proof of Case 1, such a \(\mu^{\prime}\) must have empty support. Consequently, we conclude that \(|\mu|\) is equal to \(|\lambda|\).
Before proving more results, we want to define the set \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\), a generalized version of the set \(\mathcal{E}_{\lambda}\) constructed in [1][1]. For a measured lamination \(\lambda\), the set \(\mathcal{E}_{\lambda}\) is defined as the set of all closed geodesics \(c\) on \(S\) that satisfy the following two conditions.
1. \(i(\lambda,c)=0\)
2. For every closed curves \(c^{\prime}\) with \(i(c,c^{\prime})\neq 0\), \(i(\lambda,c^{\prime})\neq 0\).
If \(\lambda\) is a measured lamination that fills a subsurface \(S^{\prime}\) of \(S\), then \(\mathcal{E}_{\lambda}\) consists of all simple closed geodesics that form the boundary of \(S^{\prime}\) [1]. We will call such curves peripheral curves of \(S^{\prime}\).
The set \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) generalizes this construction to measured laminations and is defined as follows. For a measured lamination \(\lambda\), the set \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) is the set of all _measured laminations_\(\mu\) on \(S\) that satisfy the following two conditions.
1. \(i(\lambda,\mu)=0\)
2. For every measured lamination \(\kappa\) with \(i(\mu,\kappa)\neq 0\), \(i(\lambda,\kappa)\neq 0\).
We make the following claims about \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\).
**Claim 3.6**.: \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) _only consists of pairwise disjoint measured laminations._
Proof.: Any two measured laminations that intersect cannot simultaneously satisfy both conditions in the definition of \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\). Let \(\mu_{1}\) and \(\mu_{2}\) be two measured laminations such that \(\mu_{1}\in\mathcal{E}_{\lambda}^{\mathcal{ML}}\) and \(i(\mu_{1},\mu_{2})\neq 0\). Then by the second condition in the definition of \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\), we get \(i(\lambda,\mu_{2})\neq 0\). Any such \(\mu_{2}\) fails to belong to \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\).
**Claim 3.7**.: _The set \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) is closed under addition and scalar multiplication by positive reals._
Proof.: The proof readily follows from the bilinearity of the intersection form.
Now, we will discuss what elements are contained in the set \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) for some specific cases of \(\lambda\).
**Lemma 3.8**.: _If \(\lambda\) is a simple closed curve, then \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) consists of all the positive scalar multiples of \(\lambda\)._
\[\mathcal{E}_{\lambda}^{\mathcal{ML}}=\{c\lambda:c\in\mathbb{R}_{+}\}.\]
Proof.: It is easy to see that all measured laminations \(c\lambda\) with \(c\in\mathbb{R}_{+}\) belong to \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\).
The set \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) does not contain any other simple closed curve as they all fail to satisfy the second condition in the definition of \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\). For every simple closed curve \(|\gamma|\neq|\lambda|\), we can find a simple closed curve that intersects \(|\gamma|\) and not \(|\lambda|\).
The set \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) also cannot contain any measured lamination that fills a subsurface \(S^{\prime}\) of \(S\). Any subsurface \(S^{\prime}\) that \(\mu\) fills will have complexity at least one. This means that we can find a curve \(|\gamma|\) intersecting \(\mu\) and contained entirely in \(S^{\prime}\). Now, if \(\mu\) satisfies \(i(\lambda,\mu)=0\), then the curve \(|\lambda|\) lies entirely outside of \(S^{\prime}\) and has zero intersection with \(\gamma\). Therefore, \(\mu\) fails to satisfy the second condition in the definition of \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\).
**Lemma 3.9**.: _If \(\lambda\) is a minimal measured lamination that fills a subsurface \(S^{\prime}\) of \(S\), then the set \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) is the smallest set that is closed under addition and scalar multiplication by positive reals and that contains:_
1. _peripheral curves that form the boundary of the subsurface_ \(S^{\prime}\)_,_
2. _measured laminations supported on_ \(|\lambda|\)_._
Proof.: The proof is divided into two parts. The first part shows that peripheral curves of \(S^{\prime}\) are the simple closed curves in \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\). The second part focuses on measured laminations supported on \(|\lambda|\).
**Peripheral curves of \(\mathbf{S^{\prime}}\):** To see that the peripheral curves of \(S^{\prime}\) are the only simple closed curves in \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\), consider the complementary subsurface \(S\backslash S^{\prime}\). We will denote this subsurface by \(S^{\prime\prime}\). If \(S^{\prime\prime}\) is a pair of pants, then it does not contain any other simple closed curve. And if it is a surface of higher complexity, then for every simple closed curve \(\delta\) in the interior of \(S^{\prime\prime}\) we can find another simple closed curve in the interior of \(S^{\prime\prime}\) that also intersects \(\delta\). So, simple closed curves in the interior of \(S^{\prime\prime}\) cannot satisfy both conditions of \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\). Also, any closed curve contained entirely in \(S^{\prime}\) will intersect \(\lambda\) and therefore cannot be contained in \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\).
Figure 2. Example of a lamination \(k\) that intersects \(\gamma\) but not \(\lambda\).
Now, we show that all the peripheral curves of \(S^{\prime}\) belong to \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\). Consider \(\gamma\), a peripheral curve of \(S^{\prime}\). As \(\gamma\in\mathcal{E}_{\lambda}\), we know that \(i(\lambda,\gamma)=0\) and that if \(\gamma\) intersects any closed curve then \(\lambda\) intersects the same closed curve as well. It is left to check whether all the measured laminations that intersect \(\gamma\) also intersect \(\lambda\). To see that, consider the lifts of \(\gamma\) and \(|\lambda|\) in \(\mathbb{H}^{2}\). The complementary region bounded by these geodesics will give us a crown as in Figure 2.
If there existed a lamination \(k\) that intersects \(\gamma\) but not \(\lambda\), then the lift of \(k\) would contain a leaf that has an endpoint at an ideal vertex of the crown, as in Figure 2. But any such leaf would be an isolated leaf and cannot be in the support of a measured lamination [1]. Therefore, no such \(k\) exists.
**Measured laminations supported on \(|\lambda|\):** If \(\mu\) is a measured lamination that fills a subsurface \(S^{\prime\prime}\), and if \(S^{\prime\prime}\setminus S^{\prime}\) is non-empty, then \(\mu\) will intersect a closed curve that does not intersect \(\lambda\). This implies that any filling measured lamination in \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) must have its support inside \(S^{\prime}\). But since these measured laminations also do not intersect \(\lambda\), by Lemma 3.5 their support must be \(|\lambda|\).
Since the criteria we used for defining \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) depend only on the intersection form, \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) itself depends only on the support \(|\lambda|\). That is, if \(|\lambda_{1}|=|\lambda_{2}|\), then \(\mathcal{E}_{\lambda_{1}}^{\mathcal{ML}}=\mathcal{E}_{\lambda_{2}}^{\mathcal{ML}}\). However, the converse is not true in general.
**Lemma 3.10**.: _Let \(\lambda\) be a minimal measured lamination and \(\mu\) be any measured lamination. Then \(\mathcal{E}_{\lambda}^{\mathcal{ML}}=\mathcal{E}_{\mu}^{\mathcal{ML}}\) if and only if \(\mu=\lambda^{\prime}+c_{1}\gamma_{1}+c_{2}\gamma_{2}+\cdots+c_{n}\gamma_{n}\); where \(\lambda^{\prime}\) is a measured lamination supported on \(|\lambda|\), \(\gamma_{i}\) is a boundary curve of the subsurface filled by \(\lambda\) and \(c_{i}\geq 0\) for all \(i\)._
_Consequently, if \(\lambda\) and \(\mu\) are minimal measured laminations, then \(\mathcal{E}_{\lambda}^{\mathcal{ML}}=\mathcal{E}_{\mu}^{\mathcal{ML}}\) if and only if \(|\lambda|=|\mu|\)._
Proof.: If \(\lambda\) is a measure supported on a simple closed curve, then by Lemma 3.8
\[\mathcal{E}_{\lambda}^{\mathcal{ML}}=\{c\lambda:c\in\mathbb{R}_{+}\}= \mathcal{E}_{\mu}^{\mathcal{ML}}\]
if and only if \(|\lambda|=|\mu|\).
Now, assume \(\lambda\) is a minimal lamination that fills a subsurface \(S^{\prime}\) of \(S\) and \(\mu=\lambda^{\prime}+c_{1}\gamma_{1}+c_{2}\gamma_{2}+\cdots+c_{n}\gamma_{n}\); where \(\gamma_{i}^{\prime}s\) are the boundary curves of \(S^{\prime}\) and \(|\lambda^{\prime}|=|\lambda|\). For any measured lamination \(\delta\),
\[i(\mu,\delta)=i(\lambda^{\prime},\delta)+c_{1}i(\gamma_{1},\delta)+\cdots+c_{ n}i(\gamma_{n},\delta).\]
Any measured lamination that has zero intersection with \(\mu\) must have a zero intersection with \(\lambda^{\prime}\), and therefore a zero intersection with \(\lambda\). If \(\delta\) belonged to \(\mathcal{E}_{\mu}^{\mathcal{ML}}\), then for any measured lamination \(c\),
\[i(\delta,c)\neq 0\Rightarrow i(\mu,c)\neq 0.\]
This would mean at least one of the summand in
\[i(\mu,c)=i(\lambda^{\prime},c)+c_{1}i(\gamma_{1},c)+\cdots+c_{n}i(\gamma_{n},c)\]
must be non zero. Since \(\lambda^{\prime}\) and the peripheral curves \(\gamma_{i}\) all belong in \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\), any one of those summand being non zero implies \(i(\lambda,c)\neq 0\). This gives us,
\[\mathcal{E}_{\mu}^{\mathcal{ML}}\subseteq\mathcal{E}_{\lambda}^{\mathcal{ML}}\]
To see the other inclusion, observe that \(\mathcal{E}_{\mu}^{\mathcal{ML}}\) contains measured laminations that are supported on its minimal components, \(|\lambda|\), \(|\gamma_{1}|\), \(|\gamma_{2}|\),... and \(|\gamma_{n}|\). But, from Lemma 3.9 we get
\[\mathcal{E}_{\lambda}^{\mathcal{ML}}\subseteq\mathcal{E}_{\mu}^{\mathcal{ML}}.\]
To see the converse, assume that \(\mathcal{E}_{\mu}^{\mathcal{ML}}=\mathcal{E}_{\lambda}^{\mathcal{ML}}.\) The lamination \(\mu\) cannot have a minimal component \(\lambda_{1}\) that fills some subsurface of \(S\) but is not supported in \(|\lambda|\). If it did, then \(\lambda_{1}\) would be contained in \(\mathcal{E}_{\mu}^{\mathcal{ML}}\), but not in \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\). For the same reason, \(|\mu|\) cannot contain any simple closed curves that are not boundary curves of the subsurface filled by \(\lambda\). This means that \(|\lambda|\) and \(|\mu|\) can only differ by the simple closed curves that form the boundary of the subsurface filled by \(\lambda\).
**Remark 3.11**.: _The construction \(\mathcal{E}_{\lambda}\) can be generalized even further to the setting of geodesic currents. We will denote it by \(\mathcal{E}_{\lambda}^{c}\), and it is defined as follows: If \(\lambda\) is a geodesic current_
\[\mathcal{E}_{\lambda}^{c}:=\begin{cases}\mu\in\mathcal{C}:&i(\lambda,\mu)=0 \text{ and }\\ &i(\lambda,k)\neq 0\text{ for every geodesic current }k\text{ with }i(\mu,k)\neq 0.\end{cases}\]
_It can be shown that \(\mathcal{E}_{\lambda}^{c}\) only consists of measured laminations. In fact, for a minimal measured lamination \(\lambda\), the sets \(\mathcal{E}_{\lambda}^{c}\) and \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) are the same. For a closed curve \(\gamma\), the sets \(\mathcal{E}_{\gamma}^{c}\) and \(\mathcal{E}_{\gamma}^{\mathcal{ML}}\) are the same as well. For this reason, replacing \(\mathcal{E}_{\lambda}^{\mathcal{ML}}\) by \(\mathcal{E}_{\lambda}^{c}\) and \(\mathcal{E}_{\mu}^{\mathcal{ML}}\) by \(\mathcal{E}_{\mu}^{c}\) in the statement of Lemma 3.10 still gives us a true statement. The proof of this statement closely follows the proof of Lemma 3.10._
In the following lemma we prove that, for any measured lamination \(\lambda\), the operations \(\mathcal{E}_{-}^{\mathcal{ML}}\) and \(\phi\) commute for each \(\phi\in Aut(\mathcal{ML})\).
**Lemma 3.12**.: _Let \(\lambda\) be a measured lamination and \(\phi\in Aut(\mathcal{ML})\). Then \(\phi(\mathcal{E}_{\lambda}^{\mathcal{ML}})=\mathcal{E}_{\phi(\lambda)}^{ \mathcal{ML}}.\)_
Proof.: Let \(\mu\) be a measured lamination. It is enough to prove that \(\mu\in\mathcal{E}_{\phi(\lambda)}^{\mathcal{ML}}\) if and only if \(\phi^{-1}(\mu)\in\mathcal{E}_{\lambda}^{\mathcal{ML}}\).
Observe that, \(i(\mu,\phi(\lambda))=0\Leftrightarrow i(\phi^{-1}(\mu),\lambda)=0.\)
Let \(\mu\in\mathcal{E}_{\phi(\lambda)}^{\mathcal{ML}}\). This gives, \(i(\mu,c^{\prime})\neq 0\Rightarrow i(\phi(\lambda),c^{\prime})\neq 0\) for any measured lamination \(c^{\prime}.\) But now,
\[i(\phi^{-1}(\mu),c^{\prime})\neq 0\ \Rightarrow\ i(\mu,\phi(c^{\prime}))\neq 0\ \Rightarrow\ i(\phi(\lambda),\phi(c^{\prime}))\neq 0\ \Rightarrow\ i(\lambda,c^{\prime})\neq 0.\]
So, \(\phi^{-1}(\mu)\in\mathcal{E}_{\lambda}^{\mathcal{ML}}\). A very similar argument will give the other implication, that is, if \(i(\phi^{-1}(\mu),c^{\prime})\neq 0\Rightarrow i(\lambda,c^{\prime})\neq 0\), then \(i(\mu,c^{\prime})\neq 0\Rightarrow i(\phi(\lambda),c^{\prime})\neq 0.\) This gives us, \(\mu\in\mathcal{E}_{\phi(\lambda)}^{\mathcal{ML}}\Leftrightarrow\phi^{-1}( \mu)\in\mathcal{E}_{\lambda}^{\mathcal{ML}}\Leftrightarrow\mu\in\phi( \mathcal{E}_{\lambda}^{\mathcal{ML}}).\)
**Remark 3.13**.: _The same proof can be used to show that for a geodesic current \(\lambda\) and \(\phi\in Aut(\mathcal{C})\), \(\phi(\mathcal{E}_{\lambda}^{c})=\mathcal{E}_{\phi(\lambda)}^{c}\)._
**Lemma 3.14**.: _Automorphisms of measured laminations that preserve the intersection form map minimal measured laminations to minimal measured laminations._
Proof.: Let \(\lambda\) be a minimal measured lamination on \(S\) and let \(\phi\in Aut(\mathcal{ML})\). Let
\[\phi(\lambda)=\lambda_{1}+\lambda_{2}+\cdots+\lambda_{n}\]
be the decomposition of the measured lamination \(\phi(\lambda)\) into minimal measured laminations. Since \(\phi^{-1}\in Aut(\mathcal{ML})\) is linear, we get
\[\lambda=\phi^{-1}(\lambda_{1})+\phi^{-1}(\lambda_{2})+\cdots+\phi^{-1}( \lambda_{n})\]
Thus, the support \(|\phi^{-1}(\lambda_{i})|\) is contained in \(|\lambda|\) for all \(i\). But since \(\lambda\) is minimal, \(|\phi^{-1}(\lambda_{i})|=|\lambda|\) for all \(i\). Therefore, for any two minimal components \(\lambda_{i}\) and \(\lambda_{j}\)
\[|\phi^{-1}(\lambda_{i})|=|\phi^{-1}(\lambda_{j})|\ \Rightarrow\ \mathcal{E}_{\phi^{-1}(\lambda_{i})}^{\mathcal{ML}}=\mathcal{E}_{\phi^{-1}(\lambda_{j})}^{\mathcal{ML}}\ \Rightarrow\ \phi\big(\mathcal{E}_{\phi^{-1}(\lambda_{i})}^{\mathcal{ML}}\big)=\phi\big(\mathcal{E}_{\phi^{-1}(\lambda_{j})}^{\mathcal{ML}}\big)\ \Rightarrow\ \mathcal{E}_{\lambda_{i}}^{\mathcal{ML}}=\mathcal{E}_{\lambda_{j}}^{\mathcal{ML}}\]
Since \(\lambda_{i}\) and \(\lambda_{j}\) are minimal laminations, by Lemma 3.10 we can conclude that \(|\lambda_{i}|=|\lambda_{j}|\). Therefore, all the minimal components of \(\phi(\lambda)\) have the same support and hence \(\phi(\lambda)\) is a minimal measured lamination.
**Remark 3.15**.: _Any \(\phi\in Aut(\mathcal{C})\) also maps minimal measured laminations to minimal measured laminations. To see this replace the set \(\mathcal{E}_{-}^{\mathcal{ML}}\) in the above proof with \(\mathcal{E}_{-}^{c}\)._
_A similar alteration to the proof of Proposition 3.3 will show that any \(\phi\in Aut(\mathcal{C})\) maps simple closed curves to weighted simple closed curves._
### Proof of Proposition 3.3:
Proof.: Let \(\gamma\) be a simple closed curve on \(S\). Then,
\[\mathcal{E}_{\gamma}^{\mathcal{ML}}=\{c\gamma:c\in\mathbb{R}_{+}\}.\]
This implies,
\[\phi(\mathcal{E}_{\gamma}^{\mathcal{ML}})=\{c\phi(\gamma):c\in\mathbb{R}_{+}\}\]
and so,
\[\mathcal{E}_{\phi(\gamma)}^{\mathcal{ML}}=\{c\phi(\gamma):c\in\mathbb{R}_{+}\}. \tag{1}\]
From Lemma 3.14, we know that \(\phi(\gamma)\) is minimal. By Lemma 3.8 and Lemma 3.9, \(\mathcal{E}_{\phi(\gamma)}^{\mathcal{ML}}=\{c\phi(\gamma):c\in\mathbb{R}_{+}\}\) if and only if \(|\phi(\gamma)|\) satisfies one of the following two scenarios:
1. \(|\phi(\gamma)|\) is a simple closed curve.
2. \(|\phi(\gamma)|\) is a uniquely ergodic minimal measured lamination that fills the entire surface \(S\).
Assume case 2. Now, consider any simple closed curve \(\alpha\) in \(S\) such that \(i(\gamma,\alpha)=0\). This gives us \(i(\phi(\gamma),\phi(\alpha))=0\). Since \(\phi(\gamma)\) is a minimal lamination that fills all of \(S\), Lemma 3.5 implies that \(|\phi(\gamma)|=|\phi(\alpha)|\).
Let us now pick another simple closed curve \(\beta\), such that \(i(\gamma,\beta)=0\) and \(i(\alpha,\beta)\neq 0\). This implies, \(i(\phi(\gamma),\phi(\beta))=0\) and \(i(\phi(\alpha),\phi(\beta))\neq 0\). But earlier we concluded that \(|\phi(\gamma)|=|\phi(\alpha)|\). By Lemma 3.4, this is a contradiction.
The only possible case is 1: \(|\phi(\gamma)|\) is a simple closed curve. Therefore, \(\phi(\gamma)\) is a weighted simple closed curve.
### Preserving weights of simple closed curves
We can now go one step further, and show that any \(\phi\in Aut(\mathcal{ML})\) also preserves the weights of simple closed curves. That is, if \(\phi\in Aut(\mathcal{ML})\), then \(\phi\) maps a simple closed curve with unit weight to a simple closed curve with unit weight. In order to prove this, we consider the action of \(\phi\) on the curve complex on \(S\).
**Lemma 3.16**.: _For any \(\phi\in Aut(\mathcal{ML})\), we can find \(\phi^{\prime}\in Aut(\mathcal{ML})\) so that for any simple closed curve \(\gamma\), there is some \(k\) so that \(\phi^{\prime}\circ\phi(\gamma)=k\gamma\)._
Proof.: Let \([\gamma]\) denote the vertex in the curve complex that corresponds to the simple closed curve \(\gamma\). Consider a \(\phi\in Aut(\mathcal{ML})\). Let's say \(\phi(\gamma)=k\gamma^{\prime}\), where \(k\) depends on the choice of \(\gamma\). We define the action of \(\phi\) on the curve complex by
\[\phi^{*}([\gamma]):=\left[\gamma^{\prime}\right].\]
This action is well defined as \(\phi\) preserves disjointness: if \(i(\gamma_{1},\gamma_{2})=0\), then \(i(\phi(\gamma_{1}),\phi(\gamma_{2}))=0\).
By Ivanov's theorem we can find an \(f\in Mod^{\pm}(S)\) such that its action on the curve complex agrees with \(\phi^{*}\). That is
\[f^{*}([\gamma])=\phi^{*}([\gamma])\text{, for any simple closed curve }\gamma.\]
But then \((f^{-1})^{*}([\gamma^{\prime}])=[\gamma]\). This means that the action of \(f^{-1}\) on \(\mathcal{ML}(S)\) maps \(\gamma^{\prime}\) to \(\gamma\). Let us use \(\phi^{\prime}\) to denote the action of \(f^{-1}\) on \(\mathcal{ML}(S)\). We have,
\[\phi^{\prime}(\gamma^{\prime})=\gamma\]
The map \(\phi^{\prime}\) satisfies the desired relation
\[\phi^{\prime}\circ\phi(\gamma)=\phi^{\prime}(k\gamma^{\prime})=k\gamma\]
**Proposition 3.17**.: _Automorphisms of measured laminations that preserve the intersection form map simple closed curves to simple closed curves._
Proof.: Let \(\phi\in Aut(\mathcal{ML})\). From Proposition 3.3, we know that \(\phi\) maps simple closed curves to weighted simple closed curves. By Lemma 3.16, we can find a \(\phi^{\prime}\in Aut(\mathcal{ML})\) such that \(\phi^{\prime}\circ\phi(\gamma)=k\gamma\) for all simple closed curves \(\gamma\), with \(k\) depending on \(\gamma\). We will now show that the weight \(k\) has to be one for every simple closed curve.
Fix a simple closed curve \(\gamma_{1}\) on our surface. Find simple closed curves \(\gamma_{2},\gamma_{3}\) so that \(\gamma_{1},\gamma_{2}\) and \(\gamma_{3}\) all pairwise intersect. All of this can be summarised by the following notation:
\[\phi^{\prime}\circ\phi(\gamma_{i})=k_{i}\gamma_{i}\text{ for all }i\in\{1,2,3\}\]
and
\[i(\gamma_{i},\gamma_{j})\neq 0\text{ for all }i,j\in\{1,2,3\}\]
Observe that
\[i(\gamma_{i},\gamma_{j})=i(\phi^{\prime}\circ\phi(\gamma_{i}),\phi^{\prime} \circ\phi(\gamma_{j}))=i(k_{i}\gamma_{i},k_{j}\gamma_{j})=k_{i}k_{j}\cdot i( \gamma_{i},\gamma_{j})\]
for all \(i,j=1,2,3\). This implies
\[k_{1}k_{2}=k_{1}k_{3}=1\ \Rightarrow\ k_{1}(k_{2}-k_{3})=0\ \Rightarrow\ k_{1}=0\text{ or }k_{2}=k_{3}.\]
Since \(k_{1}k_{2}=1\), \(k_{1}\) cannot equal \(0\). This gives us \(k_{2}=k_{3}\). Furthermore, \(k_{2}k_{3}=1\) and the weights \(k_{2}\) and \(k_{3}\) are both positive. This proves that \(k_{2}=k_{3}=1\) and therefore \(k_{1}=1\).
Since the choice of the simple closed curve \(\gamma_{1}\) was arbitrary, it follows that \(\phi^{\prime}\circ\phi(\gamma)=\gamma\) for every simple closed curve \(\gamma\). The proof of Lemma 3.16 establishes that \(\phi^{\prime}\) is induced by an element of \(Mod^{\pm}(S)\), and hence \(\phi^{\prime}\) maps unit-weight simple closed curves to unit-weight simple closed curves. Consequently, \(\phi(\gamma)=(\phi^{\prime})^{-1}(\gamma)\) is a simple closed curve for every simple closed curve \(\gamma\).
## 4. Proof of the theorem
**Theorem 1.1**.: _Let \(S_{g}\) be a closed, orientable, finite type surface of genus \(g\geq 2\) and let \(Aut(\mathcal{ML})\) denote the group of homeomorphisms on \(\mathcal{ML}(S_{g})\) that preserve the intersection form. Then_
\[Aut(\mathcal{ML})\cong Mod^{\pm}(S_{g})\]
_for all \(g\neq 2\). For the surface of genus 2,_
\[Aut(\mathcal{ML})\cong Mod^{\pm}(S_{2})/H\]
_Where, \(H\) is the order two subgroup generated by the hyperelliptic involution._
Proof.: Let us denote the automorphism group of the curve complex on a surface \(S_{g}\) by \(Aut(CC)\). From Ivanov's theorem we know that \(Aut(CC)\cong Mod^{\pm}(S_{g})\), when \(g\neq 2\). And for the surface of genus 2, \(Aut(CC)\cong Mod^{\pm}(S_{2})/H\). We will show that \(Aut(\mathcal{ML})\cong Aut(CC)\).
Consider the map \(\psi:Aut(\mathcal{ML})\to Aut(CC)\) that maps automorphisms on \(\mathcal{ML}\) to its action on the curve complex.
\[\psi(\phi)=\phi^{*}\]
To see that \(\psi\) is injective, consider any element \(\phi\) in the kernel of \(\psi\). By Proposition 3.17 any such \(\phi\) fixes all the simple closed curves. Because \(\phi\) is linear by Proposition 3.1, it also fixes all the weighted multicurves. But weighted multicurves are dense in \(\mathcal{ML}(S_{g})\). Therefore, \(\phi\) is the identity on \(\mathcal{ML}\), and consequently \(\psi\) is injective.
Observe that \(Mod^{\pm}(S_{g})\) has a natural action on \(\mathcal{ML}(S_{g})\) that also preserves the intersection form. Hence every \(f\in Mod^{\pm}(S_{g})\) induces an element \(\widetilde{f}\in Aut(\mathcal{ML})\) whose action on the curve complex is \(f^{*}\). This shows that \(\psi\) is surjective.
## 5. Obstruction when generalizing to currents
For a surface \(S\) of genus at least 3, Theorem 1.1 gives us a surjection from \(Aut(\mathcal{C})\) to \(Mod^{\pm}(S)\).
\[Aut(\mathcal{C})\stackrel{{ f}}{{\longrightarrow}}Aut( \mathcal{ML})\cong Aut(CC)\cong Mod^{\pm}(S)\]
We are interested in knowing whether the map \(f\) is injective. The kernel of \(f\) consists of maps in \(Aut(\mathcal{C})\) that induce the identity map on \(Aut(CC)\). In other words, any \(\phi\) in the kernel of \(f\) fixes all simple closed curves. Now, if \(f\) is injective, then the kernel of \(f\) will only contain the identity map on currents. This means that \(f\) being injective would imply the following statement: any automorphism of currents that preserves the intersection form and fixes all simple closed curves is the identity on currents.
Clearly, if \(\phi\) is in the kernel of \(f\) then it preserves the simple marked length spectrum of any current \(\mu\). However, this is not enough to prove \(f\) is injective. In fact, in this section we construct an infinite family of closed curves that have the same self intersection number and simple marked length spectrum. Let \(\gamma\) and \(\delta\) be two such closed curves. It is not immediate why there cannot exist a \(\phi\in ker(f)\) that maps \(\gamma\) to \(\delta\). The remaining part of this section will be focused on constructing these examples.
**Theorem 1.3**.: _Let \(S\) be a surface of genus at least 2. We can find infinitely many pairs of closed curves \(\gamma_{n}\) and \(\gamma_{n}^{\prime}\) on \(S\) such that \(\gamma_{n}\) and \(\gamma_{n}^{\prime}\) have the same self intersection number and the same simple marked length spectra._
Proof.: Let \(P\) be a pair of pants on \(S\) and let \(\partial P\) be its boundary consisting of oriented curves \(X,Y\) and \(Z\) labeled below. We consider on \(P\) the family of closed curves as given in Figure 3.
Figure 3: Closed curve with number of half twists \(a=15\), \(b=11\) and \(c=3\). The figure shows the curve \(\gamma_{(15,11,3)}\) intersecting itself minimally.
For each triple \((a,b,c)\) where \(a\), \(b\), \(c\) are positive odd integers with \(a\geq b\geq c\) we consider the closed curve \(\gamma_{(a,b,c)}\). The path of the curve \(\gamma_{(a,b,c)}\) is described as follows: (1) One full twist about \(X\), (2) \(b\) half twists about \(Y\), (3) one full twist about \(Z\), (4) \(a\) half twists about \(Y\) in the orientation opposite to \(Y\), (5) one full twist about \(Z\) in the orientation opposite to \(Z\), (6) \(c\) half twists about \(Y\) in the orientation opposite to \(Y\), and then the curve closes. Any two curves in this family are identical, except for the number of half twists \(a\), \(b\) and \(c\).
The curve \(\gamma_{(a,b,c)}\) is in the minimal position that realizes its self intersection as long as \(a\geq b\geq c\). Counting the number of intersections we get
\[i(\gamma_{(a,b,c)},\gamma_{(a,b,c)})=\Big{(}\frac{a-1}{2}\Big{)}+3\Big{(}\frac {b-1}{2}\Big{)}+5\Big{(}\frac{c-1}{2}\Big{)}+5\]
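For instance, substituting the half-twist numbers of the curve in Figure 3 into this formula gives
\[i\big{(}\gamma_{(15,11,3)},\gamma_{(15,11,3)}\big{)}=\frac{15-1}{2}+3\cdot\frac{11-1}{2}+5\cdot\frac{3-1}{2}+5=7+15+5+5=32.\]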
Any two curves \(\gamma_{(a,b,c)}\) and \(\gamma_{(a^{\prime},b^{\prime},c^{\prime})}\) with the same self intersection number will satisfy
\[\Big{(}\frac{a-1}{2}\Big{)}+3\Big{(}\frac{b-1}{2}\Big{)}+5\Big{(}\frac{c-1}{2} \Big{)}+5=\Big{(}\frac{a^{\prime}-1}{2}\Big{)}+3\Big{(}\frac{b^{\prime}-1}{2} \Big{)}+5\Big{(}\frac{c^{\prime}-1}{2}\Big{)}+5\]
This is
\[(a-a^{\prime})+3(b-b^{\prime})+5(c-c^{\prime})=0. \tag{2}\]
Now, we will show that for \(\gamma_{(a,b,c)}\) and \(\gamma_{(a^{\prime},b^{\prime},c^{\prime})}\) to have the same simple marked length spectra it is enough that the following equation is satisfied:
\[(a-a^{\prime})+(b-b^{\prime})+(c-c^{\prime})=0 \tag{3}\]
A simple closed curve in \(S\) intersects \(\gamma_{(a,b,c)}\) or \(\gamma_{(a^{\prime},b^{\prime},c^{\prime})}\) if and only if it passes through \(P\). Any such simple closed curve will intersect \(P\) as an essential arc with its end points in \(\partial P\). For any essential arc \(s\) in \(P\) with its end points in \(\partial P\), we want:
\[i(s,\gamma_{(a,b,c)})=i(s,\gamma_{(a^{\prime},b^{\prime},c^{\prime})})\]
Here, we let \(i(\cdot,\cdot)\) denote the minimum intersection between \(\gamma_{(a,b,c)}\) and any arc in the free homotopy class of \(s\), where the endpoints of \(s\) remain in \(\partial P\) throughout the homotopy; the endpoints of \(s\) need not be fixed. Figure 4 shows all possible essential simple arcs on \(P\) up to homotopy. It is straightforward to see that all three arcs \(s_{1}\), \(s_{2}\) and \(s_{3}\) intersect \(\gamma_{(a,b,c)}\) minimally in Figure 4.
Figure 4: Arcs \(s_{1}\), \(s_{2}\) and \(s_{3}\) are homotoped to intersect \(\gamma_{(a,b,c)}\) a minimal number of times.
Counting the number of intersections, we get
\[i(\gamma_{(a,b,c)},s_{1})=\left(\frac{a-1}{2}\right)+\left(\frac{b-1}{2}\right)+\left(\frac{c-1}{2}\right)+2 \tag{4}\]
\[i(\gamma_{(a,b,c)},s_{2})=a+b+c \tag{5}\]
\[i(\gamma_{(a,b,c)},s_{3})=\left(\frac{a-1}{2}\right)+\left(\frac{b-1}{2}\right)+\left(\frac{c-1}{2}\right)+3 \tag{6}\]
Replacing \(a\), \(b\), \(c\) in the above equations by \(a^{\prime}\), \(b^{\prime}\) and \(c^{\prime}\) respectively gives us the intersection numbers between \(\gamma_{(a^{\prime},b^{\prime},c^{\prime})}\) and the arcs. The condition
\[i(s_{j},\gamma_{(a,b,c)})=i(s_{j},\gamma_{(a^{\prime},b^{\prime},c^{\prime})})\]
for \(j=1,2\) and \(3\) is equivalent to equation (3).
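Indeed, writing out the equality for each arc makes this explicit: for \(j=1\) (and likewise \(j=3\)), equations (4) and (6) give
\[\frac{a-1}{2}+\frac{b-1}{2}+\frac{c-1}{2}=\frac{a^{\prime}-1}{2}+\frac{b^{\prime}-1}{2}+\frac{c^{\prime}-1}{2}\ \Longleftrightarrow\ (a-a^{\prime})+(b-b^{\prime})+(c-c^{\prime})=0,\]
and for \(j=2\), equation (5) gives \(a+b+c=a^{\prime}+b^{\prime}+c^{\prime}\), which is again equation (3).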
For \(k\) an even positive integer and \(t\) an odd integer greater than or equal to \(3\), the choice
\[c=t,\qquad c^{\prime}=k+t,\]
\[b=4k+t,\qquad b^{\prime}=2k+t,\]
\[a=6k+t,\qquad a^{\prime}=7k+t\]
satisfies equations (2) and (3). It also satisfies \(a\geq b\geq c\) and \(a^{\prime}\geq b^{\prime}\geq c^{\prime}\). The conditions on \(k\) and \(t\) ensure that the numbers of half twists are odd and at least \(3\). This gives us an infinite collection of pairs of curves \(\gamma_{(a,b,c)}\) and \(\gamma_{(a^{\prime},b^{\prime},c^{\prime})}\) that have the same self intersection number and the same simple marked length spectra.
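As a quick check, substituting these values into equations (2) and (3) gives
\[(a-a^{\prime})+3(b-b^{\prime})+5(c-c^{\prime})=(-k)+3(2k)+5(-k)=0,\]
\[(a-a^{\prime})+(b-b^{\prime})+(c-c^{\prime})=(-k)+(2k)+(-k)=0,\]
so the two curves in each pair indeed have the same self intersection number and the same simple marked length spectrum.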
|
2309.09205 | MFRL-BI: Design of a Model-free Reinforcement Learning Process Control
Scheme by Using Bayesian Inference | Design of process control scheme is critical for quality assurance to reduce
variations in manufacturing systems. Taking semiconductor manufacturing as an
example, extensive literature focuses on control optimization based on certain
process models (usually linear models), which are obtained by experiments
before a manufacturing process starts. However, in real applications,
pre-defined models may not be accurate, especially for a complex manufacturing
system. To tackle model inaccuracy, we propose a model-free reinforcement
learning (MFRL) approach to conduct experiments and optimize control
simultaneously according to real-time data. Specifically, we design a novel
MFRL control scheme by updating the distribution of disturbances using Bayesian
inference to reduce their large variations during manufacturing processes. As a
result, the proposed MFRL controller is demonstrated to perform well in a
nonlinear chemical mechanical planarization (CMP) process when the process
model is unknown. Theoretical properties are also guaranteed when disturbances
are additive. The numerical studies also demonstrate the effectiveness and
efficiency of our methodology. | Yanrong Li, Juan Du, Wei Jiang | 2023-09-17T08:18:55Z | http://arxiv.org/abs/2309.09205v1 | # MFRL-BI: Design of a Model-free Reinforcement Learning Process
###### Abstract
Design of process control scheme is critical for quality assurance to reduce variations in manufacturing systems. Taking semiconductor manufacturing as an example, extensive literature focuses on control optimization based on certain process models (usually linear models), which are obtained by experiments before a manufacturing process starts. However, in real applications, pre-defined models may not be accurate, especially for a complex manufacturing system. To tackle model inaccuracy, we propose a model-free reinforcement learning (MFRL) approach to conduct experiments and optimize control simultaneously according to real-time data. Specifically, we design a novel MFRL control scheme by updating the distribution of disturbances using Bayesian inference to reduce their large variations during manufacturing processes. As a result, the proposed MFRL controller is demonstrated to perform well in a nonlinear chemical mechanical planarization (CMP) process when the process model is unknown. Theoretical properties are also guaranteed when disturbances are additive. The numerical studies also demonstrate the effectiveness and efficiency of our methodology.
_Keywords_: model-free reinforcement learning; process control; Bayesian inference; design of experiments.
## 1 Introduction
### Background and motivations
Process control is critical to maintain the stability of manufacturing processes and guarantee the quality of final products, especially when a manufacturing process is complex. For example, in a semiconductor manufacturing process, two types of factors influence the stability of the manufacturing system. First, internal factors, arising from manufacturing equipment and environments, mainly refer to process dynamics and disturbances during the manufacturing process (Tseng and Chen, 2017). Second, external factors
refer to control recipes designed by the manufacturer, which aim to compensate for disturbances and adjust the system output to its desired target.
Traditional run-to-run (R2R) control schemes can be divided into two phases. In Phase I, a process model is specified to describe the relationship between control input and process output through domain knowledge, design of experiments (DOE), or response surface methodology (RSM), followed by control recipe optimizations in Phase II (Tseng et al., 2019). A detailed literature review is provided in Section 1.2. However, in practical applications, when manufacturing processes are too complex to be described by specific models accurately, traditional R2R controllers may encounter significant challenges in accurate quality control. For example, the chemical mechanical planarization (CMP) process is one of the most important steps in semiconductor manufacturing, used to remove excess materials from the surface of silicon wafers. In the literature, CMP processes are often controlled with explicit assumptions of process models (Del Castillo and Yeh, 1998). However, such models cannot fully capture the relationship between system outputs, control recipes, and disturbances, thereby leading to unavoidable model errors, which affect the accuracy of control optimization.
To tackle model inaccuracy in complex manufacturing processes, model-free reinforcement learning (MFRL) approaches (Recht, 2019) have been developed to learn manufacturing environments from real-time experimental data and directly search optimal control recipes without process model assumptions. Therefore, MFRL provides unprecedented opportunities for control optimization, especially in complex manufacturing processes. However, current MFRL approaches need to be improved as disturbances are hidden unstable factors that affect system outputs significantly (Nian et al., 2020). Taking the CMP process as an example, Figure 1 illustrates the system outputs based on the MFRL controller in Recht (2019) (defined as a basic MFRL controller). In the basic MFRL controller, the effects of disturbances are ignored and control recipes are directly optimized based on system outputs. As shown in Figure 1, compared with the case without control, the basic MFRL controller can roughly keep the system output close to the target level. However, the controlled process still experiences significant deviations during some periods, which leads to ineffective control. Therefore, it is highly desirable to design a new control methodology that improves the basic MFRL controller by updating the real-time distributions of disturbances to reduce these variations.
### Literature review
In this subsection, we review different process control methods for complex manufacturing systems, especially for semiconductor manufacturing. Since the control mechanism or process model is important for controller design (Bastiaan, 1997), we classify the literature into two main categories based on whether the process model is available/predefined or not: (1) model-based controllers and (2) data-driven or model-free controllers.
Both linear and nonlinear process models have been considered in existing process control methodologies. Extensive pioneer works considered linear process models with disturbances that follow different stochastic time series. For example, Ingolfsson and Sachs (1993) analyzed the stability and sensitivity of the exponentially weighted moving average (EWMA) controller in compensating for the integrated moving average (IMA) disturbance process. Ning et al. (1996) formulated the process model as a linear transfer function with time-dependent drifts and developed a time-based EWMA controller. Tsung and Shi (1999) designed a proportional-integral-derivative (PID) controller for linear process models with autoregressive moving average (ARMA) disturbances and integrated the PID-based control scheme with statistical process control. Chen and Guo (2001) proposed an age-based double EWMA controller, which performs better than the EWMA controller in dealing with time-dependent drifts. Tseng et al. (2003) designed a new controller to improve the traditional EWMA controller by optimizing its discount factor and defined it as the variable-EWMA (VEWMA) controller, which has great performance in linear process models with ARIMA disturbance. Tseng et al. (2007) showed that the VEWMA controller has better performance than double EWMA numerically. He et al. (2009)
Figure 1: An example of basic MFRL controller in a CMP process
proposed a new controller named general harmonic rule (GHR) and theoretically proved its performance for a wide range of stochastic disturbances.
Besides linear process models, nonlinear process models are also widely studied. Hankinson et al. (1997) introduced a polynomial function to approximate a process model in deep reactive ion etching. Del Castillo and Yeh (1998) reviewed different polynomial process models for approximation of the CMP process and proposed adaptive R2R controllers according to these polynomial models. Kazemzadeh et al. (2008) extended the EWMA and VEWMA controllers in quadratic process models. In addition to polynomial models, more complicated nonlinear process models are introduced by differential equations. For example, Bibian and Jin (2000) considered a digital control problem in a second-order system and proposed two practical control schemes to deal with the time delay. Chen et al. (2012) focused on the deterministic as well as stochastic process models with measurement delay and proposed a new controller that integrates deterministic and stochastic components with applications in chemical vapor deposition (CVD) processes. In summary, model-based controllers depend crucially on explicit process formulations and are suitable for cases where the focused process models are well-validated.
When an explicit process model is not available, data-driven or model-free controllers are directly designed based on historical or offline data. For example, neural networks (NN) are widely used to approximate the unknown process model according to control recipes and system outputs. Park et al. (2005) approximated the real process model by an NN and designed an NN-based controller to reduce overlay misalignment errors significantly in semiconductor manufacturing processes. Wang and Chou (2005) proposed a neural-Taguchi-based control strategy to reach the desired material removal rate through an NN-simulated CMP process. Chang et al. (2006) developed a virtual metrology system using different NNs to describe the process model and optimized the control recipes accordingly. Liu et al. (2018) summarized NN-based controllers in their review paper and emphasized the related practical issues such as nonstationary control results and poor interpretations. Therefore, when controlling dynamic manufacturing systems characterized by unstable disturbances, existing NN-based approaches also encounter challenges in accurately approximating the manufacturing process.
Compared with NN-based control methods, reinforcement learning (RL) is another efficient data-driven control method to learn system dynamics and optimize control recipes by interacting with real-time system states. Given the definition of system state, control policy, and cost or reward function, RL can optimize control recipes based on real-time system states (Wang et al., 2018). For example, Recht (2019) introduced two basic policy-based algorithms for MFRL methods, policy gradient and pure random search (PRS). The policy gradient method optimizes control strategies based on the distribution of system outputs (Li et al., 2023), while the PRS method is more general and directly optimizes control strategies by stochastic gradient descent. However, as pointed out by Nian et al. (2020), these MFRL controllers cannot be directly applied in complex manufacturing systems due to large variations caused by unknown process models and unstable disturbances. Therefore, Khamaru et al. (2021) explored an effective variance reduction method based on an instance-dependent function in Q-learning.
In summary, the above data-driven methods share a common limitation: the resulting variations are relatively large. As process models are unknown, hidden unstable disturbances are hard to recognize, which makes it difficult to optimize control recipes that compensate for them. To tackle these challenges, in this article we design a new process control scheme that improves the basic MFRL controller (e.g., the PRS-based MFRL controller) by updating the distribution of disturbances through Bayesian inference. We define it as a model-free reinforcement learning controller with Bayesian inference (MFRL-BI).
As disturbances can be reflected by system outputs, we use Bayesian inference to update the real-time distribution and integrate it into current MFRL control schemes. Figure 2 illustrates the difference between the control schemes of existing R2R and the proposed MFRL-BI controllers in terms of process assumptions and control optimization. Following the design steps of process control scheme in Figure 2 (Del Castillo and Hurwitz, 1997), we divide the MFRL-BI controller into two phases: the optimization phase for controller learning (Phase I) and the application phase in online manufacturing (Phase II). In Phase I, we design experiments by virtual metrology (VM) to provide extensive data (Chang et al., 2006; Kang et al., 2009) for searching control recipes using MFRL algorithms. Considering the fact that disturbance can be inferred by system outputs, we update its distribution through Bayesian inference using real-time outputs. Finally, the input control recipes, system outputs, and disturbance inference data are collected and used for online control in Phase II.
The main contributions of our work are summarized as follows: (1) a new model-free control scheme called MFRL-BI is proposed for efficient variation reduction by updating disturbance processes through Bayesian inference. (2) The corresponding algorithms of the MFRL-BI controller that combine Bayesian inference with the current PRS-based MFRL methodology are presented. (3) The proposed MFRL-BI controller is theoretically shown to guarantee optimality asymptotically.
The remainder of this paper is organized as follows. Section 2 introduces the basic MFRL methodology in an R2R control scheme. Section 3 provides the design procedure of the MFRL-BI control scheme and interprets the related theoretical principles in Phases I and II. Section 4 demonstrates the performance of our method numerically and compares it with the DOE-based automatic process controller (APC) with the application in a nonlinear CMP process control. Finally, Section 5 concludes the paper with remarks on future research directions.
## 2 Basic MFRL Controller
In this section, we first present formulations of the process control problem in Section 2.1, and then discuss the methodology and corresponding algorithms of the basic MFRL in Section 2.2.
### Process control formulation
We consider a multiple input-multiple output (MIMO) R2R process control problem that aims to reduce variations in a manufacturing system. At run \(t\in\{1,2,...T\}\), a control recipe \(\mathbf{u}_{t}\in\mathbb{R}^{m\times 1}\) is optimized to keep the system output \(\mathbf{y}_{t}\in\mathbb{R}^{n\times 1}\) close to its target level \(\mathbf{y}^{\star}\in\mathbb{R}^{n\times 1}\), where \(T\) is the total number
Figure 2: Difference between existing R2R and MFRL-BI control schemes
of runs. \(m\) and \(n\) are the dimensions of input control recipes and system outputs, respectively. The squared errors of process outputs are used to measure the control cost (Wang and Han, 2013). Furthermore, as control actions also bring extra costs in the manufacturing process, the cost function at run \(t\) is:
\[C_{t}(\boldsymbol{y}_{t},\boldsymbol{u}_{t})=(\boldsymbol{y}_{t}-\boldsymbol{ y}^{*})^{T}\boldsymbol{Q}(\boldsymbol{y}_{t}-\boldsymbol{y}^{*})+\boldsymbol{u}_{t}^{T} \boldsymbol{Ru}_{t}, \tag{1}\]
where \(\boldsymbol{Q}\) and \(\boldsymbol{R}\) are positive definite weighted matrices. According to Del Castillo and Hurwitz (1997), the system output \(\boldsymbol{y}_{t}\) is affected by the control recipes \(\boldsymbol{u}_{t}\) as well as disturbances in manufacturing environments. Therefore, we define the underlying truth of the unknown process model as \(\boldsymbol{y}_{t}=h(\boldsymbol{u}_{t},\boldsymbol{d}_{t})\), where \(\boldsymbol{d}_{t}\in\mathbb{R}^{n\times 1}\) is the disturbance at run \(t\). Combining with the cost function in Equation (1), we have the process control problem in \(T\) runs as:
\[\min_{\{\boldsymbol{u}_{1},\boldsymbol{u}_{2},...,\boldsymbol{u}_ {T}\}}E_{\{\boldsymbol{d}_{1},\boldsymbol{d}_{2},...,\boldsymbol{d}_{T}\}} \left[\sum\nolimits_{t=1}^{T}\left((\boldsymbol{y}_{t}-\boldsymbol{y}^{*})^{ T}\boldsymbol{Q}(\boldsymbol{y}_{t}-\boldsymbol{y}^{*})+\boldsymbol{u}_{t}^{T} \boldsymbol{Ru}_{t}\right)\right]\] \[\text{s.t. }\boldsymbol{y}_{t}=h(\boldsymbol{u}_{t},\boldsymbol{d}_{t}). \tag{2}\]
Note that the process model \(h(\boldsymbol{u}_{t},\boldsymbol{d}_{t})\) is general and not specified.
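To make the objective concrete, the sketch below evaluates the per-run quadratic cost of Equation (1) in Python. The weighting matrices `Q` and `R` and the sample output/recipe values are illustrative assumptions (only the target vector follows the CMP example in Section 4.1); the true process model \(h(\cdot)\) remains unspecified.

```python
import numpy as np

def run_cost(y, u, y_star, Q, R):
    """Per-run cost C_t(y_t, u_t) = (y_t - y*)^T Q (y_t - y*) + u_t^T R u_t, Equation (1)."""
    e = y - y_star
    return float(e @ Q @ e + u @ R @ u)

# Illustrative values (assumptions, not taken from the paper's experiments).
Q = np.eye(2)                       # weight on output deviations
R = 0.01 * np.eye(3)                # weight on control effort
y_star = np.array([2200.0, 400.0])  # target levels used in the CMP example (Section 4.1)

print(run_cost(y=np.array([2190.0, 415.0]), u=np.zeros(3), y_star=y_star, Q=Q, R=R))
```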
In semiconductor manufacturing, it is widely recognized that process disturbances come from manufacturing systems or environments, both of which are independent of control recipes. Meanwhile, the effects of control recipes and disturbances are additive in a process model (Box and Kramer, 1992; Zhong et al, 2010; Wang and Han, 2013). Therefore, we have Assumption 2.1.
**Assumption 2.1**: _The manufacturing process outputs can be separated into two additive parts related to control recipes and disturbances respectively, i.e.,_
\[\boldsymbol{y}_{t}=h(\boldsymbol{u}_{t},\boldsymbol{d}_{t})=g(\boldsymbol{u}_ {t})+\boldsymbol{d}_{t}. \tag{3}\]
_where \(g(\boldsymbol{u}_{t})\) and \(\boldsymbol{d}_{t}\) are assumed to be independent._
In semiconductor manufacturing systems, disturbance processes exhibit general autocorrelations due to manufacturing environments such as aging effects (Del Castillo and Hurwitz, 1997). Therefore, in a manufacturing cycle from runs 1 to \(T\), the disturbance \(\boldsymbol{d}_{t}\) can be inferred from its historical trajectory \(\boldsymbol{D}_{t-1}=[\boldsymbol{d}_{1},\boldsymbol{d}_{2},...\, \boldsymbol{d}_{t-1}]\). We define the conditional probability density function of the disturbance at run \(t\) as \(p(\boldsymbol{d}_{t}|\boldsymbol{D}_{t-1})\) with mean vector \(\boldsymbol{\mu}_{t}\) and covariance matrix \(\boldsymbol{\Sigma}_{t}\).
For control recipes to compensate for the disturbances, as shown in Equation (3), their effects on the system output are modeled by a function \(g(\cdot)\), which is often assumed to be a linear function in the literature (Chen and Guo, 2001; Tseng et al., 2003; 2007). Considering this potential inaccuracy, we relax the formulation assumptions on \(g(\cdot)\) in our model. Although the effects of control recipes and disturbances on the system output are separated according to Assumption 2.1, there still exists a significant challenge in quantifying these effects, as \(g(\cdot)\) is unknown and \(\mathbf{d}_{t}\) cannot be observed directly.
### Methodology of basic MFRL with PRS
In the control methodology of a basic MFRL controller, the expectation of control cost over disturbances \(\mathbf{d}_{t}\) is minimized by optimizing control recipe \(\mathbf{u}_{t}\). Due to the unknown process model \(g(\cdot)\), the cost function is also an unknown function over \(\mathbf{u}_{t}\). According to Recht (2019), the objective function in Equation (2) can be reformulated as \(J(\mathbf{u})=\mathbf{E}_{(\mathbf{d}_{1},\mathbf{d}_{2},...,\mathbf{d}_{T})}[\sum_{t=1}^{T}C_ {t}(\mathbf{y}_{t}(\mathbf{u}_{t},\mathbf{d}_{t}),\mathbf{u}_{t})]\), where \(\mathbf{u}=[\mathbf{u}_{1},...,\mathbf{u}_{t},...\mathbf{u}_{T}]\). Before optimizing the function \(J(\mathbf{u})\), suppose the following assumption holds.
**Assumption 2.2**: _The function \(J(\mathbf{u})=\mathbf{E}_{(\mathbf{d}_{1},\mathbf{d}_{2},...,\mathbf{d}_{T})}[\sum_{t=1}^{T}C_ {t}(\mathbf{y}_{t}(\mathbf{u}_{t},\mathbf{d}_{t}),\mathbf{u}_{t})]\) achieves a minimum at an unknown point \(\mathbf{u}^{*}\)._
To minimize \(J(\mathbf{u})\), the basic MFRL controller in Recht (2019) uses a PRS-based method to optimize the control recipes by stochastic gradient descent (SGD). If Assumptions 2.1 and 2.2 hold, the optimization problem in Equation (2) can be solved via the SGD algorithm as follows.
**SGD Algorithm**: _There are two steps in the SGD algorithm for the basic MFRL controller. First, the gradient of \(J(\mathbf{u})\) is approximated by a finite difference along a direction \(\mathbf{\epsilon}\), where \(\mathbf{\epsilon}\in\mathbb{R}^{m\times T}\) is a random vector whose entries are 0 or 1. Then, we can write the gradient of \(J(\mathbf{u})\) as:_
\[\nabla_{\mathbf{u}}J(\mathbf{u})=\frac{J(\mathbf{u}+\mathbf{\iota}\mathbf{\epsilon})-J(\mathbf{u}-\mathbf{ \iota}\mathbf{\epsilon})}{2\iota}\mathbf{\epsilon}, \tag{4}\]
_where \(\iota\to 0\) and \(\mathbf{u}\mp\iota\mathbf{\epsilon}\) denote the neighborhood of the control strategy \(\mathbf{u}\). Second, the control recipe moves along the gradient descent direction with step size \(\alpha\). If \(\mathbf{u}^{[k]}\) is used to denote the value of control recipes in the \(k\)th iteration, we have_
\[\mathbf{u}^{[k+1]}=\mathbf{u}^{[k]}-\alpha\nabla_{\mathbf{u}}J(\mathbf{u}^{[k]}). \tag{5}\]
_These two steps are executed alternately until \(\mathbf{u}\) converges (i.e., the difference between successive iterated values of \(\mathbf{u}^{[k+1]}\) and \(\mathbf{u}^{[k]}\) is smaller than a pre-defined threshold \(\eta\))._
Following the SGD algorithm, Algorithm 1 presents the aforementioned control search procedure to minimize the unknown function \(J(\cdot)\).
```
Function: \(\text{MFRL\_PRS}(\cdot)\)
Input: hyper-parameters \(\mathbf{\epsilon}\), \(\iota\), \(\alpha\), \(\eta\)
Initialize: \(k=0\), control recipe \(\mathbf{u}^{[0]}\)
Repeat:
    Execute two control strategies \(\mathbf{u}^{[k]}+\iota\mathbf{\epsilon}\) and \(\mathbf{u}^{[k]}-\iota\mathbf{\epsilon}\)
    \(\nabla_{\mathbf{u}}J(\mathbf{u}^{[k]})=\frac{J\big{(}\mathbf{u}^{[k]}+\iota\mathbf{\epsilon}\big{)}-J\big{(}\mathbf{u}^{[k]}-\iota\mathbf{\epsilon}\big{)}}{2\iota}\mathbf{\epsilon}\)
    \(\mathbf{u}^{[k+1]}=\mathbf{u}^{[k]}-\alpha\nabla_{\mathbf{u}}J(\mathbf{u}^{[k]})\)
    \(k\gets k+1\)
Until \(\big{\|}\mathbf{u}^{[k]}-\mathbf{u}^{[k-1]}\big{\|}<\eta\)
\(\widehat{\mathbf{u}}=\mathbf{u}^{[k]}\)
Output: \(\widehat{\mathbf{u}}\)
```
**Algorithm 1** MFRL with PRS Algorithm
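For illustration, a minimal Python sketch of Algorithm 1 is given below. It treats \(J(\cdot)\) as a black box that can only be evaluated by running (or simulating) the process and applies the two-sided finite-difference update of Equations (4)-(5). The function names, hyper-parameter values, and the synthetic quadratic cost used in the demonstration are assumptions for illustration only, not part of the paper's experiments.

```python
import numpy as np

def mfrl_prs(J, u0, iota=1e-2, alpha=1e-3, eta=1e-8, max_iter=5000, rng=None):
    """PRS-based MFRL (Algorithm 1): minimize a black-box cost J(u) by
    stochastic finite-difference gradient descent, Equations (4)-(5)."""
    rng = np.random.default_rng(rng)
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        eps = rng.integers(0, 2, size=u.shape).astype(float)  # random 0/1 direction
        if not eps.any():                                      # skip a degenerate all-zero direction
            continue
        grad = (J(u + iota * eps) - J(u - iota * eps)) / (2.0 * iota) * eps  # Eq. (4)
        u_new = u - alpha * grad                                             # Eq. (5)
        if np.linalg.norm(u_new - u) < eta:                    # convergence check
            return u_new
        u = u_new
    return u

# Toy demonstration with a synthetic quadratic cost (an assumption, not the CMP model).
target = np.array([1.0, -2.0, 0.5])
J = lambda u: float(np.sum((u - target) ** 2))
u_hat = mfrl_prs(J, u0=np.zeros(3), rng=0)
print(u_hat)   # approaches `target` as iterations accumulate
```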
According to the asymptotic analysis of SGD algorithm in Kiefer and Wolfowitz (1952), if disturbances satisfy the condition \(\mathbf{E}(\mathbf{d}_{\mathbf{t}})=\mathbf{0}\), the control recipe searched in Algorithm 1 will converge to the optimal value. However, in practice, the disturbance process is not stable, its fluctuations and drifts are inevitable and may even increase as time goes by. For example, in CMP process in Figure 1, the basic MFRL controller encounters large variations, as it focuses on minimizing the expected control cost \(J(\mathbf{u})\) but ignores the variations and drifts of disturbance \(\mathbf{d}_{\mathbf{t}}\). To overcome this limitation, we propose the MFRL-BI controller to further reduce the variations of system outputs by dynamically updating the distribution of disturbances in Section 3.
## 3 The MFRL-BI Controller
In this section, the MFRL-BI controller is proposed to improve the performance of the basic MFRL controller by updating the distribution of disturbances via Bayesian inference. Following Figure 2, we introduce the methodologies of the proposed MFRL-BI controller in two phases in Sections 3.1 and 3.2, respectively. As shown in Figure 3, in Phase I, control recipes are searched in the inner loop using the MFRL algorithm with PRS. After taking the convergent control recipe, the distribution of the disturbance is updated in the outer loop. Meanwhile, the control recipes, system outputs, and estimated disturbances are collected, which are used for online control optimization in Phase II.
As introduced in Section 2.2, disturbances are unobservable; we define the prior distribution of \(\mathbf{d_{t}}\) conditional on its trajectory as
\[\mathbf{d_{t}}|\mathbf{D_{t-1}}\mathbf{\sim}p(\mathbf{\mu_{t}},\mathbf{\Sigma_{t}}), \tag{6}\]
where \(p(\cdot)\) is the probability distribution function. The observations of system output \(\mathbf{y_{t}}\) can reflect the disturbance process and be used to update the posterior distribution of \(\mathbf{d_{t}}\). However, \(\mathbf{y_{t}}\) is also affected by the control recipe \(\mathbf{u_{t}}\), which brings challenges for disturbance inference. Therefore, in Figure 3, we separate the effects of \(\mathbf{d_{t}}\) and \(\mathbf{u_{t}}\), and make inference of \(\mathbf{d_{t}}\) in the outer loop and optimization of \(\mathbf{u_{t}}\) in the inner loop.
Figure 3: The methodology of the MFRL-BI controller
Specifically, to separate the effects of \(\mathbf{d}_{t}\) and \(\mathbf{u}_{t}\), we reformulate the process model in Equation (3) as \(\mathbf{y}_{t}=g(\mathbf{u}_{t})+\mathbf{d}_{t}=g(\mathbf{u}_{t})+\mathbf{\mu}_{t}+\mathbf{\delta}_{t}\), where \(\mathbf{\mu}_{t}\) is the mean vector of \(\mathbf{d}_{t}\) and \(\mathbf{\delta}_{t}=\mathbf{d}_{t}-\mathbf{\mu}_{t}\) is a random vector with \(E(\mathbf{\delta}_{t})=\mathbf{0}\). Since the process model \(g(\mathbf{u}_{t})\) is unknown, the variability of the control recipe searched via Algorithm 1 using PRS is unavoidable, especially when the number of iterations is limited and the step size is fixed (Kiefer and Wolfowitz, 1952). We use \(\mathbf{v}_{t}=\mathbf{\hat{u}}_{t}-\mathbf{u}_{t}^{*}\) to denote this variability, where \(\mathbf{\hat{u}}_{t}\) is the control recipe searched by PRS and \(\mathbf{u}_{t}^{*}\) is the underlying optimal control recipe. In summary, we reformulate the optimization problem in Equation (2) as follows at each run \(t\):
\[\min_{\mathbf{u}_{t}}\mathbf{E}_{\mathbf{\delta}_{t},\mathbf{v}_{t}}[C_{t}( \mathbf{y}_{t},\mathbf{u}_{t})]\] \[\text{s.t. }\mathbf{y}_{t}=g(\mathbf{u}_{t})+\mathbf{\mu}_{t}+\mathbf{\delta}_{t}. \tag{7}\]
By incorporating the constraints into the objective function, we have:
\[\mathbf{E}_{\mathbf{\delta}_{t},\mathbf{v}_{t}}[C_{t}(\mathbf{y}_{t},\mathbf{u}_{t})]=tr(\mathbf{ Q}\mathbf{\Sigma}_{t})+\mathbf{E}_{\mathbf{v}_{t}}[(g(\mathbf{u}_{t})+\mathbf{\mu}_{t}-\mathbf{y}^{*}) ^{T}\mathbf{Q}(g(\mathbf{u}_{t})+\mathbf{\mu}_{t}-\mathbf{y}^{*})]+\mathbf{u}_{t}^{T}\mathbf{Ru}_{t}. \tag{8}\]
Detailed derivations are presented in Appendix A. For convenience, we define the function \(M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) given the distribution of disturbances as:
\[M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\coloneqq\mathbf{E}_{\mathbf{v}_{t}}[(g(\mathbf{u}_{t})+\bm {\mu}_{t}-\mathbf{y}^{*})^{T}\mathbf{Q}(g(\mathbf{u}_{t})+\mathbf{\mu}_{t}-\mathbf{y}^{*})]+\mathbf{u} _{t}^{T}\mathbf{Ru}_{t}, \tag{9}\]
Then the total cost can be divided into two parts: \(M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) and \(tr(\mathbf{Q}\mathbf{\Sigma}_{t})\). This separation allows us to optimize \(M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) by MFRL algorithm with PRS and update the value of \(tr(\mathbf{Q}\mathbf{\Sigma}_{t})\) and \(\mathbf{\mu}_{t}\) by Bayesian inference. The methodology and corresponding algorithms of control optimization and disturbance inference in Phase I will be elaborated in Section 3.1.
### Control optimization in Phase I
To separate the effects of \(\mathbf{u}_{t}\) and \(\mathbf{d}_{t}\), we divide the control process at each run into two steps: (i) at the beginning of run \(t\), given the prior distribution of \(\mathbf{d}_{t}\), control recipe \(\mathbf{u}_{t}\) is searched to minimize the control cost \(M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\); (ii) the posterior distribution of \(\mathbf{d}_{t}\) is updated when the system output \(\mathbf{y}_{t}\) is observed and the prior distribution of \(\mathbf{d}_{t+1}\) is inferred according to the posterior distribution of \(\mathbf{d}_{t}\). These two steps correspond to the inner and outer loops in Figure 3, respectively, and are presented as follows.
### A. Inner loop: search for control recipes
In this part, we design an experiment searching for control recipes to minimize the expected control cost \(M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\). According to its definition in Equation (9), we can separate \(M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) as:
\[M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\coloneqq H(\mathbf{u}_{t}|\mathbf{\mu}_{t})+\mathbf{u}_{t}^{T} \mathbf{Ru}_{t}, \tag{10}\]
where \(H(\mathbf{u}_{t}|\mathbf{\mu}_{t})=\mathbf{E}_{\mathbf{v}_{t}}[(g(\mathbf{u}_{t})+\mathbf{\mu}_{t}-\mathbf{y}^{*})^{T}\mathbf{Q}(g(\mathbf{u}_{t})+\mathbf{\mu}_{t}-\mathbf{y}^{*})]\). As \(\mathbf{u}_{t}^{T}\mathbf{Ru}_{t}\) is a deterministic convex function of \(\mathbf{u}_{t}\), only the gradient of \(H(\cdot)\) needs to be estimated, and we have \(\nabla_{\mathbf{u}_{t}}M(\mathbf{u}_{t}|\mathbf{\mu}_{t})=\nabla_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})+2\mathbf{Ru}_{t}\). Before searching for \(\mathbf{u}_{t}\), we suppose that \(H(\cdot)\) also satisfies Assumption 2.2, i.e., \(H(\cdot)\) is an unknown function that has a minimum at an unknown point \(\mathbf{\tilde{u}}_{t}\) (\(\mathbf{\tilde{u}}_{t}=arg\min_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\)). Then, similar to the basic MFRL controller, we implement Algorithm 1 to optimize the unknown function \(M(\cdot)\) using PRS. Particularly, to further guarantee the stability of control recipes and reduce the variability of \(\mathbf{v}_{t}\), after the convergence of \(\mathbf{u}_{t}\) based on Algorithm 1, we execute another \(N\) iterations of control recipes, which are denoted as \(\mathbf{\tilde{u}}_{t}(1)\) to \(\mathbf{\tilde{u}}_{t}(N)\). The final recipe is chosen as the mean of control recipes after convergence (i.e., \(\mathbf{\overline{u}}_{t}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{\tilde{u}}_{t}(i)\)). Algorithm 2 presents the details of the control optimization in the MFRL-BI controller.
```
Function: Control_Search
Input: parameter \(\mathbf{\mu}_{t}\), hyper-parameters \(\mathbf{\epsilon}\in\mathbb{R}^{m\times 1},\ \alpha,\ N,\ \iota\)
Output: \(\mathbf{\overline{u}}_{t}\)
Initialize: control recipe \(\mathbf{u}_{t}^{[0]}\)
Calculate \(\mathbf{\tilde{u}}_{t}(1)\) using Algorithm 1 based on function \(M(\cdot\,|\mathbf{\mu}_{t})\)
For \(i=1\) to \(N-1\) do
    Execute control strategies \(\mathbf{\tilde{u}}_{t}(i)+\iota\mathbf{\epsilon}\) and \(\mathbf{\tilde{u}}_{t}(i)-\iota\mathbf{\epsilon}\)
    \(\nabla_{\mathbf{u}_{t}}M(\mathbf{\tilde{u}}_{t}(i)|\mathbf{\mu}_{t})=\frac{H(\mathbf{\tilde{u}}_{t}(i)+\iota\mathbf{\epsilon}|\mathbf{\mu}_{t})-H(\mathbf{\tilde{u}}_{t}(i)-\iota\mathbf{\epsilon}|\mathbf{\mu}_{t})}{2\iota}\mathbf{\epsilon}+2\mathbf{R}\mathbf{\tilde{u}}_{t}(i)\)
    \(\mathbf{\tilde{u}}_{t}(i+1)=\mathbf{\tilde{u}}_{t}(i)-\alpha\nabla_{\mathbf{u}_{t}}M(\mathbf{\tilde{u}}_{t}(i)|\mathbf{\mu}_{t})\)
End for
\(\mathbf{\overline{u}}_{t}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{\tilde{u}}_{t}(i)\)
```
**Algorithm 2** Control optimization given disturbance distribution
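The following Python sketch mirrors the structure of Algorithm 2: a PRS search on \(M(\mathbf{u}_{t}|\mathbf{\mu}_{t})=H(\mathbf{u}_{t}|\mathbf{\mu}_{t})+\mathbf{u}_{t}^{T}\mathbf{R}\mathbf{u}_{t}\), followed by averaging \(N\) post-convergence iterates to damp the variability \(\mathbf{v}_{t}\). Because the true \(g(\cdot)\) is unknown in the paper's setting, the \(H\) and \(g\) used in the demonstration are stand-in assumptions, as are the hyper-parameter values.

```python
import numpy as np

def control_search(H, R, mu_t, u0, iota=1e-2, alpha=1e-3, n_search=3000, N=50, rng=None):
    """Sketch of Algorithm 2: PRS search on M(u|mu_t) = H(u|mu_t) + u^T R u,
    then average N further iterates to reduce the recipe variability v_t."""
    rng = np.random.default_rng(rng)

    def prs_step(u):
        eps = rng.integers(0, 2, size=u.shape).astype(float)
        if not eps.any():                       # avoid a degenerate all-zero direction
            eps = np.ones_like(u)
        dH = (H(u + iota * eps, mu_t) - H(u - iota * eps, mu_t)) / (2.0 * iota) * eps
        return u - alpha * (dH + 2.0 * R @ u)   # gradient of M = grad H + 2 R u

    u = np.asarray(u0, dtype=float)
    for _ in range(n_search):                   # inner PRS search (plays the role of Algorithm 1)
        u = prs_step(u)

    iterates = []
    for _ in range(N):                          # N extra iterates after (approximate) convergence
        u = prs_step(u)
        iterates.append(u.copy())
    return np.mean(iterates, axis=0)            # final recipe, the average \bar{u}_t

# Stand-in process model and H (assumptions; the true g(.) is unknown in the paper).
g = lambda u: np.array([u[0] + 2.0 * u[1], u[2] - u[1]])
H = lambda u, mu: float(np.sum((g(u) + mu - np.array([1.0, 0.0])) ** 2))
u_bar = control_search(H, R=0.01 * np.eye(3), mu_t=np.array([0.2, -0.1]),
                       u0=np.zeros(3), rng=1)
print(u_bar)
```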
Algorithm 2 has two procedures: first, control recipes are searched to minimize the cost function \(M(\cdot)\) given the distribution of disturbances. Second, after the convergence of control recipes, we use another \(N\) samples to reduce the variations of control resulting from stochastic gradient approximation for the unknown function \(H(\cdot)\). To further examine the properties of searched control recipes in Algorithm 2, we make two assumptions about function \(H(\cdot)\) as in Mandt et al. (2017).
**Assumption 3.1**:: _The stochastic gradient in Algorithm 2 can be expressed as the underlying truth gradient value plus a random gradient noise. The noise can be approximated as Gaussian, whose variance is independent of control recipes. i.e., \(\nabla_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\approx\nabla_{\mathbf{u}_{t}}H^{*}( \mathbf{u}_{t}|\mathbf{\mu}_{t})+\mathbf{\varepsilon}\) and \(\nabla_{\mathbf{u}_{t}}M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\approx\nabla_{\mathbf{u}_{t}}M^{*}( \mathbf{u}_{t}|\mathbf{\mu}_{t})+\mathbf{\varepsilon}\), where \(\nabla_{\mathbf{u}_{t}}H^{*}(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) and \(\nabla_{\mathbf{u}_{t}}M^{*}(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) denote the underlying truth gradients of functions \(H(\cdot)\) and \(M(\cdot)\), respectively. It is obvious that \(\nabla_{\mathbf{u}_{t}}M^{*}(\mathbf{u}_{t}|\mathbf{\mu}_{t})=\nabla_{\mathbf{u}_{t}}H^{*}( \mathbf{u}_{t}|\mathbf{\mu}_{t})+2\mathbf{R}\mathbf{u}_{t}\) according to their definition. \(\mathbf{\varepsilon}\) follows a multi-normal distribution with zero mean vector and covariance matrix \(\mathbf{\Sigma}_{\mathbf{\varepsilon}}\)._
**Assumption 3.2**:: _The finite-difference equation of control iterations can be approximated by the stochastic differential equation. Specifically, the difference equation between two successive control iterations searched by Algorithm 2 (\(\Delta\mathbf{u}_{t}=-\alpha\nabla_{\mathbf{u}_{t}}M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\)) can be approximated by \(\mathrm{d}\mathbf{u}_{t}=-\alpha\nabla_{\mathbf{u}_{t}}M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\mathrm{ d}t\). Combined with Assumption 3.1, we have \(\mathrm{d}\mathbf{u}_{t}=-\alpha\nabla_{\mathbf{u}_{t}}M^{*}(\mathbf{u}_{t}|\mathbf{\mu}_{t}) \mathrm{d}t+\alpha\mathbf{B}\mathrm{d}W_{t}\), where \(\mathbf{B}^{T}\mathbf{B}=\mathbf{\Sigma}_{\mathbf{\varepsilon}}\) and \(W_{t}\) is a standard Wiener process._
According to Assumptions 3.1 and 3.2 on the unknown functions \(H(\cdot)\), Theorem 1 shows the theoretical property of the searched control recipes in Algorithm 2.
**Theorem 1**: _The searched control recipe using Algorithm 2 is asymptotically optimal._
The proof is provided in Appendix B.
Theorem 1 guarantees the asymptotic optimality of Algorithm 2 when process models are unknown for complex manufacturing processes in general. Specifically, if the function \(H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) can also be approximated by its second-order Taylor expansion, more theoretical properties are obtained related to the closed-form solution (Proposition 1), the stochastic searching process (Theorem 2), and the stationary distribution (Theorem 3) of the control recipes.
**Proposition 1**: _If function \(H(\mathbf{u_{t}}|\mathbf{\mu_{t}})\) has a minimum at an unknown point \(\widetilde{\mathbf{u}}_{t}\), i.e., \(\widetilde{\mathbf{u}}_{t}\coloneqq arg\min\limits_{\mathbf{u}_{t}}H(\mathbf{u_{t}}|\mathbf{\mu_{ t}})\), the optimal control recipe to minimize the cost \(C_{t}\) is \(\mathbf{u_{t}^{*}}=(\mathbf{G^{T}Q}\mathbf{G}+\mathbf{R})^{-1}\mathbf{G^{T}Q}\mathbf{G}\widetilde{\mathbf{u} }_{t}\), where \(\mathbf{G}=\begin{bmatrix}\frac{\partial g_{1}}{\partial\widetilde{u}_{1}}&\cdots &\frac{\partial g_{1}}{\partial\widetilde{u}_{m}}\\ \vdots&\ddots&\vdots\\ \frac{\partial g_{n}}{\partial\widetilde{u}_{1}}&\cdots&\frac{\partial g_{n} }{\partial\widetilde{u}_{m}}\end{bmatrix}_{n\times m}\) is the gradient matrix of function \(g(\cdot)\)._
The proof is provided in Appendix C.
**Theorem 2**: _The control search process for \(\mathbf{u_{t}^{*}}\) in Algorithm 2 can be approximated by an Ornstein-Uhlenbeck process, i.e., \(d\mathbf{u_{t}}=\mathbf{\Psi}(\mathbf{u_{t}^{*}}-\mathbf{u}_{t})\mathrm{d}t+\mathbf{\sigma} \mathrm{d}W_{t}\), where \(\mathbf{\Psi}=2\alpha[\mathbf{G^{T}Q}\mathbf{G}+\mathbf{R}]\), \(\mathbf{\sigma}=\alpha\mathbf{B}\) and \(\mathbf{B^{T}B}=\mathbf{\Sigma}_{\varepsilon}\)._
The proof is provided in Appendix D.
**Theorem 3**: _The stationary distribution of the control recipe searched in Algorithm 2 can be approximated by a multi-normal distribution, which is expressed as_
\[\mathbf{u_{t}}\!\sim\!\!MN\left(\mathbf{u_{t}^{*}},\tfrac{1}{2}\mathbf{\sigma}^{T}\mathbf{ \Psi}^{-1}\mathbf{\sigma}\right)\!, \tag{11}\]
_where \(\mathbf{\Psi}=2\alpha[\mathbf{G^{T}Q}\mathbf{G}+\mathbf{R}]\) and \(\mathbf{\sigma}=\alpha\mathbf{B}\)._
The proof is provided in Appendix E.
In summary, Theorem 1 guarantees the control searched in Algorithm 2 can converge to the underlying optimal one in general. Specifically, if the unknown function \(H(\cdot)\) can be approximated by its second-order Taylor expansion, Theorems 2 and 3 propose the explicit formulations of the search process and stationary distribution of control recipes, respectively. Furthermore, from the distribution of control recipes in Equation (11), we find that smaller step sizes can reduce the variations of \(\mathbf{u_{t}}\).
### B. Outer loop: Bayesian inference of disturbances
In Section 2.1, the prior probability of disturbance \(\mathbf{d_{t}}\) is defined as \(p(\mathbf{d_{t}}|\mathbf{D_{t-1}})\) depending on its trajectory \(\mathbf{D_{t-1}}=[\mathbf{d_{1}},\mathbf{d_{2}},...,\mathbf{d_{t-1}}]\). After making control decisions and observing the system output \(\mathbf{y_{t}}\), we can update the posterior probability of disturbance \(\mathbf{d_{t}}\) using Bayesian inference as follows:
\[p(\mathbf{d_{t}}|\mathbf{y_{t}})=\tfrac{p(\mathbf{d_{t}}|\mathbf{D_{t-1}})p(\mathbf{y_{t}}|\mathbf{d_{ t}})}{p(\mathbf{y_{t}})}\propto p(\mathbf{d_{t}}|\mathbf{D_{t-1}})p(\mathbf{y_{t}}|\mathbf{d_{t}}), \tag{12}\]
where the conditional probability \(p(\mathbf{y}_{t}|\mathbf{d}_{t})\) can be obtained by Monte Carlo methods based on the system outputs after the convergence of control recipes in Algorithm 2. In the literature, the disturbance \(\mathbf{d}_{t}\) is generally supposed to be normally distributed given its historical trajectory. Specifically, if \(p(\mathbf{y}_{t}|\mathbf{d}_{t})\) can also be approximated by a normal distribution, we have Proposition 2 for the posterior distribution of the disturbance using Bayesian inference theory as follows.
**Proposition 2**: _If the prior distribution of the disturbance follows a multi-normal distribution, \(\mathbf{d}_{t}|\mathbf{D}_{t-1}\mbox{$\sim$}MN(\mathbf{\mu}_{t},\mathbf{\Sigma}_{t})\), the explicit expression of the posterior distribution of the disturbance after observing the system output \(\mathbf{y}_{t}\) is given by:_
\[p(\mathbf{d}_{t}|\mathbf{y}_{t})\propto\exp\left\{-\frac{1}{2}\left(\left(\mathbf{y}_{t}-\frac{1}{N}\sum_{i=1}^{N}\hat{\mathbf{y}}_{t}\big{(}\mathbf{\tilde{u}}_{t}(i)\big{)}\right)^{T}\frac{1}{N}\mathbf{\Sigma}_{\mathbf{y}}^{-1}\left(\mathbf{y}_{t}-\frac{1}{N}\sum_{i=1}^{N}\hat{\mathbf{y}}_{t}\big{(}\mathbf{\tilde{u}}_{t}(i)\big{)}\right)+(\mathbf{d}_{t}-\mathbf{\mu}_{t})^{T}\mathbf{\Sigma}_{t}^{-1}(\mathbf{d}_{t}-\mathbf{\mu}_{t})\right)\right\},\]
_where \(\mathbf{\Sigma}_{\mathbf{y}}\) is the sample variance matrix of system output after the convergence of control recipes._
Notably, other distributions of disturbances can also be updated by Bayesian inference methods using Monte Carlo methods. By analyzing the posterior probability of disturbances, we can obtain a more reliable prior distribution to reduce variations of disturbances in the next run. Algorithm 3 presents the Bayesian update procedure of disturbance as follows.
```
Initialize: \(t\), \(\mathbf{u}_{1}^{[0]}\), the prior distribution of disturbance \(p(\cdot)\), initial disturbance \(\mathbf{d}_{0}\)
For \(t=1\):\(T\)
    \(\mathbf{\mu}_{t}=\int_{-\infty}^{+\infty}\mathbf{d}_{t}\cdot p(\mathbf{d}_{t}|\mathbf{D}_{t-1})\,\mathrm{d}\mathbf{d}_{t}\)
    \(\overline{\mathbf{u}}_{t}\leftarrow\text{Control\_Search}(\mathbf{\mu}_{t})\)   /* Algorithm 2 */
    Take control \(\overline{\mathbf{u}}_{t}\), and record the system output \(\mathbf{y}_{t}\)
    Update the disturbance according to
        \(p(\mathbf{d}_{t}|\mathbf{y}_{t})=\frac{p(\mathbf{d}_{t}|\mathbf{D}_{t-1})p(\mathbf{y}_{t}|\mathbf{d}_{t})}{p(\mathbf{y}_{t})}\propto p(\mathbf{d}_{t}|\mathbf{D}_{t-1})p(\mathbf{y}_{t}|\mathbf{d}_{t})\)
    Update \(p(\mathbf{d}_{t+1}|\mathbf{D}_{t})\)
End for
```
**Algorithm 3** Bayesian update of the disturbance distribution in Phase I
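Under the Gaussian setting of Proposition 2, the disturbance update in Algorithm 3 reduces to a normal-normal conjugate update. The sketch below shows this special case; the observation model (output mean \(g(\overline{\mathbf{u}}_{t})+\mathbf{d}_{t}\) with covariance \(\mathbf{\Sigma}_{\mathbf{y}}\)) and all numerical values are illustrative assumptions.

```python
import numpy as np

def update_disturbance_gaussian(mu_prior, Sigma_prior, y_obs, g_u, Sigma_y):
    """Normal-normal Bayesian update of the disturbance d_t (sketch of the outer loop):
    prior d_t ~ N(mu_prior, Sigma_prior), likelihood y_t ~ N(g(u_t) + d_t, Sigma_y)."""
    P_prior = np.linalg.inv(Sigma_prior)          # prior precision
    P_like = np.linalg.inv(Sigma_y)               # likelihood precision
    Sigma_post = np.linalg.inv(P_prior + P_like)
    mu_post = Sigma_post @ (P_prior @ mu_prior + P_like @ (y_obs - g_u))
    return mu_post, Sigma_post

# Illustrative two-dimensional example (all values are assumptions).
mu_post, Sigma_post = update_disturbance_gaussian(
    mu_prior=np.zeros(2),
    Sigma_prior=np.diag([4.0, 4.0]),
    y_obs=np.array([2185.0, 410.0]),
    g_u=np.array([2190.0, 405.0]),   # predicted contribution g(u_t) of the control recipe
    Sigma_y=np.diag([25.0, 25.0]),
)
print(mu_post, Sigma_post)
```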
### Online control in Phase II
In real applications of semiconductor manufacturing processes, after control optimization by VM systems in Phase I, real-time control recipes need to be directly determined in practical manufacturing processes. Therefore, in this section, we propose a real-time control algorithm used for online control in Phase II.
Suppose that the manufacturing environment and the process model remain stable across Phases I and II; it is then reasonable to apply the control recipes searched in Phase I to Phase II. We denote the offline experimental dataset collected in Phase I as \(\{D\_off\}\). Each sample in \(\{D\_off\}\) consists of the control recipe, the system output, and the distribution of the disturbance, i.e., \([\mathbf{u_{t}},\mathbf{y_{t}},\mathbf{d_{t}}]\in\{D\_off\}\).
Due to the asymptotic optimality of the control recipes searched offline, the dataset \(\{D\_off\}\) can be used as a "memory buffer" for online control. Since the key hidden variables in the manufacturing process are the disturbances, online control decisions can be made by matching the online inferred disturbance with the closest offline disturbance \(\mathbf{d_{t^{*}}}\) in \(\{D\_off\}\) and choosing the corresponding control recipe as the online recipe. Specifically, \(\mathbf{d_{t^{*}}}\) is obtained by:
\[\mathbf{d_{t^{*}}}\coloneqq arg\min_{\mathbf{d}\in\{D\_off\}}\mathbb{D}_{KL}(p(\mathbf{d} )||q(\mathbf{d_{t}}^{on}|\mathbf{D_{t-1}^{on}})), \tag{13}\]
where \(\mathbf{d_{t}^{on}}\) is the online disturbance and \(\mathbb{D}_{KL}(\cdot||\cdot)\) is the Kullback-Leibler divergence. To distinguish the online disturbance, we use \(q(\cdot)\) to denote its prior distribution. Then, the control recipe \(\mathbf{u_{t^{*}}}\) corresponding to \(\mathbf{d_{t^{*}}}\) is chosen as the online control strategy. Notably, as the size of the dataset \(\{D\_off\}\) increases, the divergence between the online and offline disturbances becomes smaller and the control performs better. Algorithm 4 presents the online control scheme in detail.
```
Algorithm 4: Online control in Phase II
Input: historical offline data \(\{D\_off\}\), initial system output \(\mathbf{y_{0}}\), prior distribution of the online disturbance \(q(\cdot)\).
For \(t = 1 : T\)
    Infer the prior \(q(\mathbf{d}_{t}^{on}|\mathbf{D}_{t-1}^{on})\) of the online disturbance.
    Match the closest offline disturbance \(\mathbf{d_{t^{*}}}\) in \(\{D\_off\}\) according to Equation (13).
    Take the corresponding control recipe \(\mathbf{u_{t^{*}}}\) and record the system output \(\mathbf{y}_{t}\).
    Update the disturbance according to
        \(q(\mathbf{d}_{t}^{on}|\mathbf{y}_{t})=\frac{q(\mathbf{d}_{t}^{on}|\mathbf{D}_{t-1}^{on})p(\mathbf{y}_{t}|\mathbf{d}_{t}^{on})}{p(\mathbf{y}_{t})}\propto q(\mathbf{d}_{t}^{on}|\mathbf{D}_{t-1}^{on})p(\mathbf{y}_{t}|\mathbf{d}_{t}^{on})\).
    Calculate \(q(\mathbf{d}_{t+1}^{on}|\mathbf{D}_{t}^{on})\).
End for
```
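Assuming Gaussian disturbance distributions, the matching step in Equation (13) admits a closed-form KL divergence. The sketch below illustrates how the closest offline disturbance and its control recipe could be retrieved from \(\{D\_off\}\); the record structure and names are assumptions made for illustration.

```python
import numpy as np

def kl_gaussian(mu0, Sig0, mu1, Sig1):
    """KL( N(mu0, Sig0) || N(mu1, Sig1) ) for multivariate normal distributions."""
    k = len(mu0)
    Sig1_inv = np.linalg.inv(Sig1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(Sig0)
    _, logdet1 = np.linalg.slogdet(Sig1)
    return 0.5 * (np.trace(Sig1_inv @ Sig0) + diff @ Sig1_inv @ diff - k
                  + logdet1 - logdet0)

def match_offline_recipe(online_mu, online_Sig, offline_db):
    """Return the control recipe whose offline disturbance is closest in KL to the
    online one, mirroring Equation (13). `offline_db` is a list of records
    {'mu': ..., 'Sig': ..., 'u': ...} summarising {D_off}; field names are illustrative."""
    best = min(offline_db,
               key=lambda rec: kl_gaussian(rec['mu'], rec['Sig'], online_mu, online_Sig))
    return best['u']
```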
## 4 Numerical study and comparison
To show the performance of the proposed MFRL-BI control scheme, we conduct numerical studies based on a nonlinear chemical mechanical planarization (CMP) process in semiconductor manufacturing. In Section 4.1, the proposed MFRL-BI controller is compared with the basic MFRL controller to verify the improvement achieved by Bayesian inference. In Section 4.2, we compare the proposed MFRL-BI controller with the DOE-based automatic process controller (APC), which is also designed for an unknown process model.
### Comparison with basic MFRL controller
Because real CMP data are proprietary, Khuri (1996) proposed an experimental tool and designed a nonlinear process model to describe the CMP process, which is widely used for CMP data simulation (Del Castillo and Yeh, 1998). In this section, we also follow their simulation for data generation. The control recipe \(\mathbf{u}_{t}\) has three dimensions (i.e., \(\mathbf{u}_{t}=\left[u_{t}^{(1)},u_{t}^{(2)},u_{t}^{(3)}\right]^{T}\)), which represent the back-pressure downforce, platen speed, and slurry concentration, respectively. The two dimensions of the system output (\(\mathbf{y}_{t}=\left[y_{t}^{(1)},y_{t}^{(2)}\right]^{T}\)), which reflect the manufacturing quality, are the removal rate and the within-wafer standard deviation, with target levels \(\mathbf{y}^{*}=[2200,400]^{T}\). Without loss of generality, the initial system output is set to the target levels.
Specifically, following the nonlinear model proposed by Del Castillo and Yeh (1998), we use the following formulation to simulate data of the CMP process at each run \(t\):
\[\mathbf{y}_{t}=\mathbf{C}\mathbf{X}_{t}+\mathbf{d}_{t}, \tag{14}\]
where \(\mathbf{C}\) is the parameter matrix defined as
\(\mathbf{C}=\left[\begin{matrix}2756.5&547.6&616.3&-126.7&-1109.5&-286.1&989.1&-52.9&-156.9&-550.3&-10\\ 746.3&62.3&128.6&-152.1&-289.7&-32.1&237.7&-28.9&-122.1&-140.6&1.5\end{matrix}\right]\),
\(\mathbf{X}_{t}\) consists of the constant, linear, quadratic, and interaction terms of the control recipe at run \(t\), together with the run index:

\(\mathbf{X}_{t}=\begin{bmatrix}1,&u_{t}^{(1)},&u_{t}^{(2)},&u_{t}^{(3)},&\big[u_{t}^{(1)}\big]^{2},&\big[u_{t}^{(2)}\big]^{2},&\big[u_{t}^{(3)}\big]^{2},&u_{t}^{(1)}u_{t}^{(2)},&u_{t}^{(1)}u_{t}^{(3)},&u_{t}^{(2)}u_{t}^{(3)},&t\end{bmatrix}^{T}\).
\(\mathbf{d}_{t}=\begin{bmatrix}d_{t}^{(1)},d_{t}^{(2)}\end{bmatrix}^{T}\) contains the two dimensions of the disturbance, which follow two independent IMA(1,1) processes, and the total number of runs is \(T=50\). Based on this setting, we analyze the performance of the proposed MFRL-BI controller and compare it with the basic MFRL controller.
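For reference, the simulation model in Equation (14) can be coded in a few lines of Python. The quadratic-term ordering follows the reconstructed \(\mathbf{X}_{t}\) above, and the IMA(1,1) parameters (the MA coefficient and noise scales) are illustrative assumptions, since their exact values are not stated here.

```python
import numpy as np

# Parameter matrix C from the text (2 x 11).
C = np.array([
    [2756.5, 547.6, 616.3, -126.7, -1109.5, -286.1, 989.1, -52.9, -156.9, -550.3, -10.0],
    [ 746.3,  62.3, 128.6, -152.1,  -289.7,  -32.1, 237.7, -28.9, -122.1, -140.6,   1.5],
])

def features(u, t):
    """Constant, linear, quadratic and interaction terms of the recipe, plus the run index."""
    u1, u2, u3 = u
    return np.array([1.0, u1, u2, u3, u1**2, u2**2, u3**2,
                     u1*u2, u1*u3, u2*u3, float(t)])

def simulate_run(u, t, d):
    """System output y_t = C X_t + d_t for one run (Equation (14))."""
    return C @ features(u, t) + d

def ima11_disturbance(T, theta=0.5, sigma=(10.0, 5.0), rng=None):
    """Two independent IMA(1,1) paths: d_t = d_{t-1} + eps_t - theta * eps_{t-1}.
    theta and sigma are illustrative; the exact values are not reported in the text."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.zeros((T + 1, 2))
    eps_prev = np.zeros(2)
    for t in range(1, T + 1):
        eps = rng.normal(0.0, sigma)
        d[t] = d[t - 1] + eps - theta * eps_prev
        eps_prev = eps
    return d[1:]
```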
We first consider a special case with no extra cost associated with the control actions, i.e., \(\mathbf{R}=\mathbf{0}\), so the control cost is \(C_{t}(\mathbf{u}_{t})=(\mathbf{y}_{t}-\mathbf{y}^{*})^{T}\mathbf{Q}(\mathbf{y}_{t}-\mathbf{y}^{*})\). Under this setting, the basic MFRL and MFRL-BI controllers are applied for online control, and the corresponding system outputs are used to evaluate their performance. To make a fair comparison, we search control recipes for 2000 iterations at each run in Algorithm 1 (basic MFRL) and Algorithm 2 (MFRL-BI), respectively. After collecting data from 1000 production cycles in \(\{D\_off\}\), we perform online control by matching the disturbances in \(\{D\_off\}\) with the online one using Algorithm 4. Figure 4 illustrates boxplots of the system outputs in Phase II over 100 replications; within Figures 4(a) and 4(b), the two panels correspond to the two dimensions of \(\mathbf{y}_{t}\). As shown, the system outputs under the basic MFRL controller have relatively large variations and significant deviations when dealing with system drifts, while the proposed MFRL-BI controller keeps the system outputs close to their desired targets, even though the process model is unknown.
Figure 4(a). Online control results based on the basic MFRL controller
Figure 4(b). Online control results based on the MFRL-BI controller
In general, executing control incurs an extra cost during the manufacturing process, so the total cost is \(C_{t}(\mathbf{u}_{t})=(\mathbf{y}_{t}-\mathbf{y}^{\star})^{T}\mathbf{Q}(\mathbf{y}_{t}-\mathbf{y}^{\star})+\mathbf{u}_{t}^{T}\mathbf{R}\mathbf{u}_{t}\) with \(\mathbf{R}\neq\mathbf{0}\). For example, we set \(\mathbf{Q}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\) and \(\mathbf{R}=\begin{bmatrix}10&0&0\\ 0&10&0\\ 0&0&5\end{bmatrix}\). The mean control cost (MCC) per run, defined as \(\sum_{t=1}^{T}C_{t}(\mathbf{y}_{t},\mathbf{u}_{t})/T\), is used as the performance criterion. Table 1 summarizes the mean and standard deviation of the MCC for the basic MFRL controller, the MFRL-BI controller, and the no-control case under 100 replications.
As shown in Table 1, compared with the no-control case, the basic MFRL controller presented in Algorithm 1 substantially reduces the control cost. Nonetheless, its performance does not fulfill the accuracy specifications for semiconductor manufacturing. After updating the distribution of the disturbances by Algorithms 2 to 4, the mean control cost is reduced by 97% relative to the basic MFRL controller. Table 1 thus demonstrates the efficiency of the MFRL-BI controller in further reducing the control cost during the manufacturing process.
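The cost and MCC criteria used in Table 1 translate directly into code; a minimal sketch follows, with \(\mathbf{Q}\) and \(\mathbf{R}\) passed in as arguments (e.g., \(\mathbf{Q}=\mathbf{I}_{2}\) and \(\mathbf{R}=\mathrm{diag}(10,10,5)\) for the second setting above).

```python
import numpy as np

def control_cost(y, u, y_star, Q, R):
    """Quadratic run cost (y_t - y*)^T Q (y_t - y*) + u_t^T R u_t."""
    err = y - y_star
    return err @ Q @ err + u @ R @ u

def mean_control_cost(ys, us, y_star, Q, R):
    """Mean control cost over T runs: sum_t C_t / T."""
    costs = [control_cost(y, u, y_star, Q, R) for y, u in zip(ys, us)]
    return float(np.mean(costs))

# Example setting matching the R != 0 case in the text.
Q = np.eye(2)
R = np.diag([10.0, 10.0, 5.0])
y_star = np.array([2200.0, 400.0])
```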
### Comparison with the DOE-based APC
When process models are unknown, extensive DOE-based methods have been proposed in the literature for predictive process model design (Tseng et al., 2019; Shi, 2022). One of the most important is the DOE-based automatic process controller (APC) proposed by Zhong et al. (2009), which primarily emphasizes designing experiments to identify the effects of controls and disturbances. As the MFRL-BI control scheme also performs control optimization based on experimental data when the process model is unknown, we provide a performance comparison with the DOE-based APC. For a fair comparison, we follow the objective of the DOE-based APC, which minimizes the difference between the system outputs and their target levels.
| Case | MCC | Without control | Basic MFRL controller (Algo. 1) | MFRL-BI controller (Algos. 2-4) |
| --- | --- | --- | --- | --- |
| \(\mathbf{R}=\mathbf{0}\) | Mean | \(2.5989\times 10^{5}\) | \(3.7054\times 10^{3}\) | 116.4702 |
| \(\mathbf{R}=\mathbf{0}\) | _Std._ | \(6.9650\times 10^{3}\) | 382.4001 | 21.3797 |
| \(\mathbf{R}\neq\mathbf{0}\) | Mean | \(2.5989\times 10^{5}\) | \(5.1766\times 10^{3}\) | 135.8367 |
| \(\mathbf{R}\neq\mathbf{0}\) | _Std._ | \(6.9650\times 10^{3}\) | 386.9175 | 22.2550 |

Table 1: Comparisons of the basic MFRL and MFRL-BI controllers
#### Settings of DOE-based APC
In the methodology of Zhong et al. (2009), the DOE-based APC aims to identify the factors that significantly impact the system outputs from the control recipes, the noises in the manufacturing environment, and their interactions, using a linear DOE regression model. Then, the control recipes are optimized considering the randomness of the regression parameters. As the nonlinear CMP process is a dynamic manufacturing process with unstable, auto-correlated disturbances, the current DOE-based APC cannot be directly applied. We incorporate two extra factors in this part: (i) an auto-regression term to describe the autocorrelation of the disturbances, and (ii) the noise of the linear model to represent the inaccuracy of the linear-model assumption. Furthermore, we use the error of the system outputs, \(\mathbf{z}_{t}=\boldsymbol{y_{t}}-\boldsymbol{y^{*}}\), as the response variable to simplify the model. In summary, the independent variables to be identified by the DOE regression model are the output errors at the last run (\(\mathbf{z}_{t-1}\)), the control recipes \(\boldsymbol{u_{t}}\), the noises of the linear model at the end of the last run (\(\boldsymbol{e_{t-1}}\)), the number of runs (\(t\)), and their interactions.
Before designing the experiments, we first run the linear regression model to collect the noises (\(\boldsymbol{e_{t}}\)), which are used to estimate the model inaccuracy. Following Zhou et al. (2003), a dynamic linear model describing the manufacturing process is given by:
\[\mathbf{z}_{t}=\boldsymbol{\beta_{0}}+\boldsymbol{\beta_{1}}\boldsymbol{u_{t}} +\boldsymbol{\beta_{2}}\mathbf{z}_{t-1}+\boldsymbol{\beta_{3}}t+\boldsymbol{ e_{t}}. \tag{15}\]
The noises of the dynamic linear model are calculated by \(\boldsymbol{e_{t}}=\mathbf{\hat{z}}_{t}-\mathbf{z}_{t}\). Then, the effects of the current state (\(\mathbf{z}_{t-1}\)), control recipes (\(\boldsymbol{u_{t}}\)), current model noises (\(\boldsymbol{e_{t-1}}\)) and their interactions are considered in the DOE model as follows:
\[\mathbf{z}_{t}=\boldsymbol{\theta_{0}}+\boldsymbol{\theta}\boldsymbol{u_{t}}+\boldsymbol{\gamma}t+\boldsymbol{\vartheta}\boldsymbol{e_{t-1}}+\boldsymbol{\omega}\mathbf{z}_{t-1}+\boldsymbol{\rho}\boldsymbol{u_{t}}\boldsymbol{e_{t-1}}+\boldsymbol{\varphi}t\boldsymbol{e_{t-1}}+\boldsymbol{r}, \tag{16}\]
where \(\boldsymbol{\theta_{0}}\), \(\boldsymbol{\theta}\), \(\boldsymbol{\gamma}\), \(\boldsymbol{\vartheta}\), \(\boldsymbol{\omega}\), \(\boldsymbol{\rho}\), and \(\boldsymbol{\varphi}\) are the parameter vectors, and \(\boldsymbol{r}\) is the residual vector of the DOE model. After selecting the significant variables and their interaction terms by the DOE, we optimize the control recipes as follows:
\[\boldsymbol{u_{t}^{*}}=arg\min_{\boldsymbol{u_{t}}}C_{t}\big{(} \boldsymbol{u_{t}}\big{|}\boldsymbol{\bar{\theta}_{0}},\boldsymbol{\bar{ \theta}},\boldsymbol{\bar{\gamma}},\boldsymbol{\bar{\vartheta}},\boldsymbol{ \bar{\omega}},\boldsymbol{\bar{\rho}},\boldsymbol{e_{t-1}}\big{)}=arg\min_{ \boldsymbol{u_{t}}}E_{\boldsymbol{\bar{\theta}_{0}},\boldsymbol{\bar{\theta}}, \boldsymbol{\bar{\vartheta}},\boldsymbol{\bar{\vartheta}},\boldsymbol{\bar{ \rho}},\boldsymbol{\bar{\varphi}}}\big{(}\mathbf{z}_{t}^{T}\mathbf{z}_{t} \big{|}\boldsymbol{\bar{\theta}_{0}},\boldsymbol{\bar{\theta}},\boldsymbol{ \bar{\gamma}},\boldsymbol{\bar{\vartheta}},\boldsymbol{\bar{\omega}}, \boldsymbol{\bar{\rho}},\boldsymbol{e_{t-1}}\big{)} \tag{17}\]
Generally, the DOE-based APC approximates the manufacturing process by a linear regression model, which is unbiased when the true process model is linear. In this section, however, we focus on a complex nonlinear CMP process, for which an exhaustive comparison of the DOE-based APC and the proposed MFRL-BI controller is presented.
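As a rough illustration of the two-stage DOE procedure described above, the sketch below fits the dynamic linear model (15) by ordinary least squares to obtain the noises \(\mathbf{e}_{t}\), and builds the design rows of the DOE model (16). The feature ordering and helper names are assumptions for illustration only.

```python
import numpy as np

def fit_ols(X, Z):
    """Least-squares fit Z ~ X @ B; returns the coefficient matrix and residuals."""
    B, *_ = np.linalg.lstsq(X, Z, rcond=None)
    return B, Z - X @ B

def doe_design_row(u, t, z_prev, e_prev):
    """One row of the DOE design matrix for Equation (16): intercept, control recipe,
    run index, previous model noise, previous output error, and interaction terms."""
    u, z_prev, e_prev = map(np.asarray, (u, z_prev, e_prev))
    return np.concatenate(([1.0], u, [float(t)], e_prev, z_prev,
                           np.outer(u, e_prev).ravel(),   # u_t x e_{t-1} interactions
                           t * e_prev))                    # t x e_{t-1} interactions
```

In a first pass, Equation (15) is fitted on the offline data to collect the residuals \(\mathbf{e}_{t}\); those residuals then enter `doe_design_row`, and a second least-squares fit of the stacked rows against \(\mathbf{z}_{t}\) yields the DOE parameter estimates and their covariances.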
#### Numerical comparison
The numerical comparison results are discussed in this part. For the DOE-based APC, we first collect the model noises in Equation (15) using offline data generated by the nonlinear CMP simulation in Equation (14), repeated 1000 times from run 1 to \(T\). Then, based on Equation (16), the effects of the control variables (\(\mathbf{u_{t}}\)), the number of runs (\(t\)), and the model noises (\(\mathbf{e_{t-1}}\)) on the response variable (\(\mathbf{z_{t}}\)) are summarized in Table 2. Specifically, \(z_{t}^{(1)}\) (\(e_{t}^{(1)}\)) and \(z_{t}^{(2)}\) (\(e_{t}^{(2)}\)) denote the two dimensions of the response variable (noise) at run \(t\). The experiments in each cell are replicated 300 times to calculate the mean response values of \(\mathbf{z_{t}}\).
Table 2: Effects of the control recipes (\(\mathbf{u_{t}}\)), the number of runs (\(t\)), and the model noises (\(\mathbf{e_{t-1}}\)) on the response variables \(z_{t}^{(1)}\) and \(z_{t}^{(2)}\).
Based on the results in Table 2 and the half-normal probability plot of the main effects and interactions in Figure 5, we identify the significant terms and obtain the DOE-based approximate model as follows:
\[\begin{cases}z_{t}^{(1)}=\theta_{10}+\theta_{11}u_{t}^{(1)}+\theta_{12}u_{t}^{(2)}+\theta_{13}u_{t}^{(3)}+\gamma_{1}t+\vartheta_{1}e_{t-1}^{(1)}+\omega_{1}z_{t-1}^{(1)}+\varphi_{1}te_{t-1}^{(1)}+r_{1}\\ z_{t}^{(2)}=\theta_{20}+\theta_{21}u_{t}^{(1)}+\theta_{22}u_{t}^{(2)}+\theta_{23}u_{t}^{(3)}+\gamma_{2}t+\vartheta_{2}e_{t-1}^{(2)}+\omega_{2}z_{t-1}^{(2)}+\varphi_{2}te_{t-1}^{(2)}+r_{2}.\end{cases} \tag{18}\]
We define \(\mathbf{\theta_{0}}=\begin{bmatrix}\theta_{10}\\ \theta_{20}\end{bmatrix}\), \(\mathbf{\theta}=\begin{bmatrix}\theta_{11}&\theta_{12}&\theta_{13}\\ \theta_{21}&\theta_{22}&\theta_{23}\end{bmatrix}\), \(\mathbf{\gamma}=\begin{bmatrix}\gamma_{1}\\ \gamma_{2}\end{bmatrix}\), \(\mathbf{\vartheta}=\begin{bmatrix}\vartheta_{1}\\ \vartheta_{2}\end{bmatrix}\), \(\mathbf{\omega}=\begin{bmatrix}\omega_{1}\\ \omega_{2}\end{bmatrix}\), and \(\mathbf{\varphi}=\begin{bmatrix}\varphi_{1}\\ \varphi_{2}\end{bmatrix}\) as the parameters to be estimated. Due to the randomness of \(\mathbf{r}=[r_{1},r_{2}]^{T}\) in Equation (18), the parameter estimators \(\mathbf{\widehat{\theta}_{0}}\), \(\mathbf{\widehat{\theta}}\), \(\mathbf{\widehat{\gamma}}\), \(\mathbf{\widehat{\vartheta}}\), \(\mathbf{\widehat{\omega}}\), and \(\mathbf{\widehat{\varphi}}\) are also random variables. Moreover, \(\mathbf{e}_{t}\) describes the model noise in Equation (15) and is also a random vector. Figures 6(a) and 6(b) display the distributions of the model noises and parameter estimators, respectively. Subsequently, a robust control recipe considering the randomness of the variables in Figure 6 is designed with a closed-form solution following Zhong et al. (2009) (see Appendix F for detailed derivations).
\[\begin{split}\mathbf{u}_{t}^{*}&=\arg\min_{\mathbf{u}_{t}}E_{\widehat{\mathbf{\theta}}_{0},\widehat{\mathbf{\theta}},\widehat{\mathbf{\gamma}},\widehat{\mathbf{\vartheta}},\widehat{\mathbf{\omega}},\widehat{\mathbf{\varphi}}}\big(\mathbf{z}_{t}^{T}\mathbf{z}_{t}\big|\widehat{\mathbf{\theta}}_{0},\widehat{\mathbf{\theta}},\widehat{\mathbf{\gamma}},\widehat{\mathbf{\vartheta}},\widehat{\mathbf{\omega}},\widehat{\mathbf{\varphi}},\mathbf{e}_{t-1}\big)=-\left[\mathbf{\Sigma}_{\theta}^{1}+\widehat{\mathbf{\theta}}_{1}\widehat{\mathbf{\theta}}_{1}^{T}+\mathbf{\Sigma}_{\theta}^{2}+\widehat{\mathbf{\theta}}_{2}\widehat{\mathbf{\theta}}_{2}^{T}\right]^{-1}\\ &\quad\cdot\left[\left(\hat{\theta}_{10}+\hat{\gamma}_{1}t+\hat{\vartheta}_{1}e_{t-1}^{(1)}+\hat{\varphi}_{1}te_{t-1}^{(1)}+\hat{\omega}_{1}z_{t-1}^{(1)}\right)\cdot\widehat{\mathbf{\theta}}_{1}+\left(\hat{\theta}_{20}+\hat{\gamma}_{2}t+\hat{\vartheta}_{2}e_{t-1}^{(2)}+\hat{\varphi}_{2}te_{t-1}^{(2)}+\hat{\omega}_{2}z_{t-1}^{(2)}\right)\cdot\widehat{\mathbf{\theta}}_{2}\right],\end{split} \tag{19}\]
where \(\mathbf{\widehat{\theta}_{1}}=\begin{bmatrix}\mathbf{\widehat{\theta}}_{11},\mathbf{ \widehat{\theta}}_{12},\mathbf{\widehat{\theta}}_{13}\end{bmatrix}^{T}\) and \(\mathbf{\widehat{\theta}_{2}}=\begin{bmatrix}\mathbf{\widehat{\theta}}_{21},\mathbf{ \widehat{\theta}}_{22},\mathbf{\widehat{\theta}}_{23}\end{bmatrix}^{T}\). \(\mathbf{\Sigma_{\theta}^{1}}\) and \(\mathbf{\Sigma_{\theta}^{2}}\) are covariance matrices of \(\mathbf{\widehat{\theta}_{1}}\) and \(\mathbf{\widehat{\theta}_{2}}\) respectively.
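The closed-form recipe in Equation (19) is straightforward to evaluate once the estimates and their covariances are available; a hedged Python transcription follows, where the dictionary layout of the estimates is purely illustrative.

```python
import numpy as np

def doe_apc_control(est1, est2, Sig1, Sig2, t, e_prev, z_prev):
    """Closed-form robust control recipe of Equation (19).

    est1/est2 : dicts with keys 'theta0', 'theta' (length-3 vector), 'gamma',
                'vartheta', 'varphi', 'omega', holding the estimates for output
                dimensions 1 and 2; Sig1/Sig2 are the covariance matrices of 'theta'.
    """
    th1, th2 = np.asarray(est1['theta']), np.asarray(est2['theta'])
    A = Sig1 + np.outer(th1, th1) + Sig2 + np.outer(th2, th2)
    c1 = (est1['theta0'] + est1['gamma'] * t + est1['vartheta'] * e_prev[0]
          + est1['varphi'] * t * e_prev[0] + est1['omega'] * z_prev[0])
    c2 = (est2['theta0'] + est2['gamma'] * t + est2['vartheta'] * e_prev[1]
          + est2['varphi'] * t * e_prev[1] + est2['omega'] * z_prev[1])
    return -np.linalg.solve(A, c1 * th1 + c2 * th2)
```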
Figure 5: Half-normal probability plot of main effects and interactions
To make a fair comparison, we employ the same amount of historical data in the MFRL-BI controller and the DOE-based APC. However, it is difficult for a linear DOE-based regression model to approximate a nonlinear CMP process. As shown in Figure 7, even with the closed-form solution in Equation (19), the DOE-based APC cannot keep the system outputs close to the desired targets. Compared with the MFRL-BI controller in Figure 4(b), the linear DOE-based APC is invalid when the underlying process model is nonlinear. Table 3 presents the mean and standard deviation of the MCC for the MFRL-BI controller and the DOE-based APC under 100 replications. The results demonstrate that the MFRL-BI controller surpasses the DOE-based APC on nonlinear CMP processes, implying that the proposed MFRL-BI controller can overcome the limitations of the linear DOE-based APC when dealing with more complex nonlinear processes.
## 5 Conclusions
This work designs a new process control scheme based on model-free reinforcement learning to reduce system variations in semiconductor manufacturing when the process model is unknown and complex. Due to unstable and unobservable disturbances, basic MFRL controllers usually suffer from large variations. To overcome this challenge, we update the distribution of the disturbances during the manufacturing process using Bayesian inference. The corresponding algorithms for the offline optimization and online control phases are presented, and their theoretical properties are guaranteed. Through performance comparisons among the proposed MFRL-BI, the basic MFRL, and the DOE-based APC, we observe that the proposed MFRL-BI controller exhibits superior performance, particularly when the underlying process model is nonlinear and complex.
Several extensions along this research direction can be further investigated. First, RL-based process control models could be developed for settings in which the effects of control recipes and disturbances are correlated. Second, constraints on the control recipes could be incorporated into the process control optimization in future studies.
Figure 7: Control results based on linear DOE-based APC
## References
* Bastiaan (1997) Bastiaan, H. K. (1997). Process model and recipe structure, the conceptual design for a flexible batch plant. _ISA Transactions_, _36_(4), 249-255.
* Bibian & Jin (2000) Bibian, S., & Jin, H. (2000). Time delay compensation of digital control for DC switchmode power supplies using prediction techniques. _IEEE Transactions on Power Electronics_, _15_(5), 835-842.
* Box & Kramer (1992) Box, G., & Kramer, T. (1992). Statistical process monitoring and feedback adjustment--a discussion. _Technometrics_, _34_(3), 251-267.
* Chang et al. (2006) Chang, Y. J., Kang, Y., Hsu, C. L., Chang, C. T., & Chan, T. Y. (2006, July). Virtual metrology technique for semiconductor manufacturing. In _The 2006 IEEE International Joint Conference on Neural Network Proceedings_ (pp. 5289-5293). IEEE.
* Chen & Guo (2001) Chen, A., & Guo, R. S. (2001). Age-based double EWMA controller and its application to CMP processes. _IEEE Transactions on Semiconductor Manufacturing_, _14_(1), 11-19.
* Chen et al. (2012) Chen, J., Munoz, J., & Cheng, N. (2012). Deterministic and stochastic model based run-to-run control for batch processes with measurement delays of uncertain duration. _Journal of process control_, _22_(2), 508-517.
* Del Castillo & Hurwitz (1997) Del Castillo, E., & Hurwitz, A. M. (1997). Run-to-run process control: Literature review and extensions. _Journal of Quality Technology_, _29_(2), 184-196.
* Del Castillo & Yeh (1998) Del Castillo, E., & Yeh, J. Y. (1998). An adaptive run-to-run optimizing controller for linear and nonlinear semiconductor processes. _IEEE Transactions on Semiconductor Manufacturing_, _11_(2), 285-295.
* Hankinson et al. (1997) Hankinson, M., Vincent, T., Irani, K. B., & Khargonekar, P. P. (1997). Integrated real-time and run-to-run control of etch depth in reactive ion etching. _IEEE Transactions on Semiconductor Manufacturing_, _10_(1), 121-130.
* He et al. (2009) He, F., Wang, K., & Jiang, W. (2009). A general harmonic rule controller for run-to-run process control. _IEEE Transactions on Semiconductor Manufacturing_, _22_(2), 232-244.
* Ingolfsson & Sachs (1993) Ingolfsson, A., & Sachs, E. (1993). Stability and sensitivity of an EWMA controller. _Journal of Quality Technology_, _25_(4), 271-287.
* Kang et al. (2009) Kang, P., Lee, H. J., Cho, S., Kim, D., Park, J., Park, C. K., & Doh, S. (2009). A virtual metrology system for semiconductor manufacturing. _Expert Systems with Applications_, _36_(10), 12554-12561.
* Kazemzadeh et al. (2008) Kazemzadeh, R. B., Karbasian, M., & Moghadam, M. B. (2008). Design and investigation of EWMA and double EWMA with quadratic process model in R2R controllers. _Quality & Quantity_, _42_(6), 845-857.
* Khamaru et al. (2021) Khamaru, K., Xia, E., Wainwright, M. J., & Jordan, M. I. (2021). Instance-optimality in optimal value estimation: Adaptivity via variance-reduced Q-learning. _arXiv preprint arXiv:2106.14352_.
* Khuri (1996) Khuri, A. (1996, April). Response surface methods for multiresponse experiments. _In 13th SEMATECH_ Statistical Methods Symposium.
* Kiefer & Wolfowitz (1952) Kiefer, J., & Wolfowitz, J. (1952). Stochastic estimation of the maximum of a regression function. _The Annals of Mathematical Statistics_, 462-466.
* Li et al. (2023) Li, Y., Du, J., & Jiang, W. (2023). Reinforcement Learning for Process Control with Application in Semiconductor Manufacturing. _IISE Transactions,_ (just-accepted), 1-25.
* Liu et al. (2018) Liu, K., Chen, Y., Zhang, T., Tian, S., & Zhang, X. (2018). A survey of run-to-run control for batch processes. _ISA transactions, 83_, 107-125.
* Mandt et al. (2017) Mandt, S., Hoffman, M. D., & Blei, D. M. (2017). Stochastic gradient descent as approximate Bayesian inference. _arXiv preprint arXiv:1704.04289_.
* Nian et al. (2020) Nian, R., Liu, J., & Huang, B. (2020). A review on reinforcement learning: Introduction and applications in industrial process control. _Computers & Chemical Engineering_, _139_, 106886.
* Park et al. (2005) Park, S. J., Lee, M. S., Shin, S. Y., Cho, K. H., Lim, J. T., Cho, B. S., & Park, C. H. (2005). Run-to-run overlay control of stepers in semiconductor manufacturing systems based on history data analysis and neural network modeling. _IEEE Transactions on Semiconductor Manufacturing_, _18_(4), 605-613.
* Recht (2019) Recht, B. (2019). A tour of reinforcement learning: The view from continuous control. _Annual Review of Control, Robotics, and Autonomous Systems_, \(2\), 253-279.
* Tseng et al. (2003) Tseng, S. T., Yeh, A. B., Tsung, F., & Chan, Y. Y. (2003). A study of variable EWMA controller. _IEEE Transactions on Semiconductor Manufacturing_, _16_(4), 633-643.
* Tseng et al. (2007) Tseng, S. T., Tsung, F., & Liu, P. Y. (2007). Variable EWMA run-to-run controller for drifted processes. _IIE Transactions_, _39_(3), 291-301.
* Tseng et al. (2019) Tseng, S. T., Tsung, F., & Wu, J. H. (2019). Stability conditions and robustness analysis of a general MMSE run-to-run controller. _IISE Transactions_, _51_(11), 1279-1287.
* Tsung & Shi (1999) Tsung, F., & Shi, J. (1999). Integrated design of run-to-run PID controller and SPC monitoring for process disturbance rejection. _IIE Transactions_, _31_(6), 517-527.
* Tseng & Chen (2017) Tseng, S. T., & Chen, P. Y. (2017). A generalized quasi-mmse controller for run-to-run dynamic models. _Technometrics_, _59_(3), 381-390.
* Wang & Chou (2005) Wang, G. J., & Chou, M. H. (2005). A neural-Taguchi-based quasi time-optimization control strategy for chemical-mechanical polishing processes. _The International Journal of Advanced Manufacturing Technology_, _26_(7), 759-765.
* Wang & Han (2013) Wang, K., & Han, K. (2013). A batch-based run-to-run process control scheme for semiconductor manufacturing. _IIE Transactions_, _45_(6), 658-669.
* Wang et al. (2018) Wang, Y., Velswamy, K., & Huang, B. (2018). A novel approach to feedback control with deep reinforcement learning. _IFAC-PapersOnLine_, _51_(18), 31-36.
* Zhong et al. (2009) Zhong, J., Shi, J., & Wu, J. C. (2009). Design of DOE-based automatic process controller with consideration of model and observation uncertainties. _IEEE Transactions on Automation Science and Engineering_, _7_(2), 266-273.
* Zhong et al. (2010) Zhong, J., Liu, J., & Shi, J. (2010). Predictive control considering model uncertainty for variation reduction in multistage assembly processes. _IEEE Transactions on Automation Science and Engineering_, _7_(4), 724-735.
* Zhou et al. (2003) Zhou, S., Ding, Y., Chen, Y., & Shi, J. (2003). Diagnosability study of multistage manufacturing processes based on linear mixed-effects models. _Technometrics_, _45_(4), 312-325.
## Appendix
### Appendix A. Simplification of the optimization problem in Equation (7)
The optimization problem in Equation (7) can be simplified as follows:
\[\begin{array}{ll}\mathbf{E}_{\boldsymbol{\delta}_{t},\boldsymbol{v}_{t}}[C_{t}( \boldsymbol{y}_{t},\boldsymbol{u}_{t})]&=\mathbf{E}_{\boldsymbol{\delta}_{t}, \boldsymbol{v}_{t}}[(\boldsymbol{y}_{t}-\boldsymbol{y}^{*})^{T}\boldsymbol{Q }(\boldsymbol{y}_{t}-\boldsymbol{y}^{*})+\boldsymbol{u}_{t}^{T}\boldsymbol{Ru }_{t}]\\ &=\mathbf{E}_{\boldsymbol{\delta}_{t},\boldsymbol{v}_{t}}[(g(\boldsymbol{u}_{t })+\boldsymbol{d}_{t}-\boldsymbol{y}^{*})^{T}\boldsymbol{Q}(g(\boldsymbol{u}_{ t})+\boldsymbol{d}_{t}-\boldsymbol{y}^{*})]+\boldsymbol{u}_{t}^{T}\boldsymbol{Ru}_{t} \\ &=\mathbf{E}_{\boldsymbol{v}_{t}}[\mathbf{E}_{\boldsymbol{\delta}_{t}}[(g( \boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}+\boldsymbol{\delta}_{t}-\boldsymbol{ y}^{*})^{T}\boldsymbol{Q}(g(\boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}+ \boldsymbol{\delta}_{t}-\boldsymbol{y}^{*})]]+\boldsymbol{u}_{t}^{T} \boldsymbol{Ru}_{t}\\ &=\mathbf{E}_{\boldsymbol{v}_{t}}[tr(\boldsymbol{Q}\mathbf{\Sigma}_{t})+(g( \boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}-\boldsymbol{y}^{*})^{T}\boldsymbol{ Q}(g(\boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}-\boldsymbol{y}^{*})]+\boldsymbol{u}_{t}^{T} \boldsymbol{Ru}_{t}\\ &=tr(\boldsymbol{Q}\mathbf{\Sigma}_{t})+\mathbf{E}_{\boldsymbol{v}_{t}}[(g( \boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}-\boldsymbol{y}^{*})^{T}\boldsymbol{ Q}(g(\boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}-\boldsymbol{y}^{*})]+\boldsymbol{u}_{t}^{T} \boldsymbol{Ru}_{t}.\end{array}\]
This result is directly presented in Equation (8).
### Appendix B. Proof of Theorem 1
As we analyzed in Section 3.2, the iteration of control recipes is defined as \(\Delta\boldsymbol{u}_{t}=-\alpha\nabla_{\boldsymbol{u}_{t}}\mathcal{M}( \boldsymbol{u}_{t}|\boldsymbol{\mu}_{t})\). By combining Assumptions 3.1 and 3.2, we have
\[\mathrm{d}\boldsymbol{u}_{t}=-\boldsymbol{\alpha}\nabla_{\boldsymbol{u}_{t}}M (\boldsymbol{u}_{t}|\boldsymbol{\mu}_{t})\mathrm{d}t+\boldsymbol{\alpha} \boldsymbol{B}\mathrm{d}W_{t},\] (B.1)
where \(\boldsymbol{B}^{T}\boldsymbol{B}=\mathbf{\Sigma}_{\boldsymbol{e}}\), and \(W_{t}\) is a standard Wiener process.
Based on Equation (B.1), the control action search process can be approximated by a Fokker-Planck equation, which has the standard expression \(d\boldsymbol{u}_{t}=A(\boldsymbol{u}_{t},t)\mathrm{d}t+\big(B(\boldsymbol{u}_{t},t)\big)^{1/2}\mathrm{d}W_{t}\). In our work, we have \(A(\boldsymbol{u}_{t},t)=-\alpha\nabla_{\boldsymbol{u}_{t}}M(\boldsymbol{u}_{t},\boldsymbol{\mu}_{t})\) and \(B(\boldsymbol{u}_{t},t)=(\alpha\boldsymbol{B})^{T}\alpha\boldsymbol{B}\). We find that \(B(\boldsymbol{u}_{t},t)\) is a constant matrix that is independent of \(\boldsymbol{u}_{t}\). According to Gardiner (1985), \(\boldsymbol{u}_{t}\) has a stable distribution \(p_{s}(\boldsymbol{u}_{t})\) if
\[\nabla\boldsymbol{u}_{t}[A(\boldsymbol{u}_{t},t)p_{s}(\boldsymbol{u}_{t})]- \tfrac{1}{2}\nabla_{\boldsymbol{u}_{t}}^{2}[B(\boldsymbol{u}_{t},t)p_{s}( \boldsymbol{u}_{t})]=0.\] (B.2)
We can find the stable distribution as
\[p_{s}(\boldsymbol{u}_{t})\propto e^{-\frac{2\alpha M(\boldsymbol{u}_{t}| \boldsymbol{\mu}_{t})}{(\alpha\boldsymbol{B})^{T}\alpha\boldsymbol{B}}}=\exp \Big{\{}-\frac{2\alpha[(g(\boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}-\boldsymbol {y}^{*})^{T}\boldsymbol{Q}(g(\boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}- \boldsymbol{y}^{*})+\boldsymbol{u}_{t}^{T}\boldsymbol{Ru}_{t}]}{(\alpha \boldsymbol{B})^{T}\alpha\boldsymbol{B}}\Big{\}}.\] (B.3)
Therefore, we can conclude that the control recipe obtained by Algorithm 2 has a stable distribution with mean vector \(\mathbf{E}[\boldsymbol{u}_{t}]=\boldsymbol{u}_{t}^{*}\coloneqq\arg\min_{ \boldsymbol{u}_{t}}(g(\boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}-\boldsymbol{y} ^{*})^{T}\boldsymbol{Q}(g(\boldsymbol{u}_{t})+\boldsymbol{\mu}_{t}- \boldsymbol{y}^{*})+\boldsymbol{u}_{t}^{T}\boldsymbol{Ru}_{t}\).
### Appendix C. Proof of Proposition 1
If \(H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) can be approximated as its second-order Taylor expansion, we have the Taylor expansion of \(H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) at point \(\widetilde{\mathbf{u}}_{t}\) as:
\[\begin{array}{ll}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})&\approx H(\widetilde{\mathbf{u}}_{t}| \mathbf{\mu}_{t})+\nabla_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})|_{\mathbf{u}_{t}= \widetilde{\mathbf{u}}_{t}}(\mathbf{u}_{t}-\widetilde{\mathbf{u}}_{t})\\ &+\frac{1}{2}(\mathbf{u}_{t}-\widetilde{\mathbf{u}}_{t})^{T}\nabla^{2}_{\mathbf{u}_{t}}H( \mathbf{u}_{t},\mathbf{\mu}_{t})|_{\mathbf{u}_{t}=\widetilde{\mathbf{u}}_{t}}(\mathbf{u}_{t}- \widetilde{\mathbf{u}}_{t}),\end{array}\] (C.1)
where \(\nabla^{2}_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})|_{\mathbf{u}_{t}=\widetilde{\mathbf{u}}_{t}}\) is the Hessian matrix of the function \(H(\cdot)\) evaluated at \(\mathbf{u}_{t}=\widetilde{\mathbf{u}}_{t}\). Then the gradient \(\nabla_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) can be approximated as
\[\begin{array}{ll}\nabla_{\mathbf{u}_{t}}\left[H(\widetilde{\mathbf{u}}_{t}|\mathbf{\mu} _{t})+\nabla_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})|_{\mathbf{u}_{t}=\widetilde{ \mathbf{u}}_{t}}(\mathbf{u}_{t}-\widetilde{\mathbf{u}}_{t})+\frac{1}{2}(\mathbf{u}_{t}- \widetilde{\mathbf{u}}_{t})^{T}\nabla^{2}_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})|_ {\mathbf{u}_{t}=\widetilde{\mathbf{u}}_{t}}(\mathbf{u}_{t}-\widetilde{\mathbf{u}}_{t})\right] \end{array}.\] (C.2)
Since \(\widetilde{\mathbf{u}}_{t}\coloneqq\arg\min_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\), we have \(\nabla_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})|_{\mathbf{u}_{t}=\widetilde{\mathbf{u}}_{t}}=0\). Moreover, \(H(\widetilde{\mathbf{u}}_{t}|\mathbf{\mu}_{t})\) is a constant with respect to \(\mathbf{u}_{t}\), so we only need to analyze the last term in Equation (C.2). According to the definition of the function \(H(\cdot)\), we have \(\nabla^{2}_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})|_{\mathbf{u}_{t}=\widetilde{\mathbf{u}}_{t}}=2\mathbf{G}^{T}\mathbf{Q}\mathbf{G}\), where \(\mathbf{G}=\begin{bmatrix}\frac{\partial g_{1}}{\partial u_{1}}&\cdots&\frac{\partial g_{1}}{\partial u_{m}}\\ \vdots&\ddots&\vdots\\ \frac{\partial g_{n}}{\partial u_{1}}&\cdots&\frac{\partial g_{n}}{\partial u_{m}}\end{bmatrix}_{\mathbf{u}_{t}=\widetilde{\mathbf{u}}_{t}}\) is the gradient (Jacobian) of the multivariate function \(g(\cdot)\), and \(g_{1}\) to \(g_{n}\) correspond to the \(n\) dimensions of \(\mathbf{y}_{t}\). Then, based on Equation (C.2), we have \(\nabla_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})=2\mathbf{G}^{T}\mathbf{Q}\mathbf{G}(\mathbf{u}_{t}-\widetilde{\mathbf{u}}_{t})\). According to the definition of \(\mathbf{u}_{t}^{*}\), we have:
\[\nabla_{\mathbf{u}_{t}}M(\mathbf{u}_{t}^{*}|\mathbf{\mu}_{t})=\nabla_{\mathbf{u}_{t}}H(\mathbf{u}_ {t}^{*}|\mathbf{\mu}_{t})+2\mathbf{Ru}_{t}^{*}=2\mathbf{G}^{T}\mathbf{Q}\mathbf{G}(\mathbf{u}_{t}^{*}- \widetilde{\mathbf{u}}_{t})+2\mathbf{Ru}_{t}^{*}=0.\]
Thus we have \(\mathbf{u}_{t}^{*}=(\mathbf{G}^{T}\mathbf{Q}\mathbf{G}+\mathbf{R})^{-1}\mathbf{G}^{T}\mathbf{Q}\mathbf{G}\widetilde{\mathbf{u}}_{t}\).
### Appendix D. Proof of Theorem 2
If \(H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\) can be approximated as its second-order Taylor expansion, we have
\[\begin{array}{ll}H(\mathbf{u}_{t}|\mathbf{\mu}_{t})\approx H(\widetilde{\mathbf{u}}_{t}| \mathbf{\mu}_{t})+\nabla_{\mathbf{u}_{t}}H^{T}(\mathbf{u}_{t}|\mathbf{\mu}_{t})|_{\mathbf{u}_{t}= \widetilde{\mathbf{u}}_{t}}(\mathbf{u}_{t}-\widetilde{\mathbf{u}}_{t})+\frac{1}{2}(\mathbf{u }_{t}-\widetilde{\mathbf{u}}_{t})^{T}\nabla^{2}_{\mathbf{u}_{t}}H(\mathbf{u}_{t}|\mathbf{\mu}_ {t})|_{\mathbf{u}_{t}=\widetilde{\mathbf{u}}_{t}}(\mathbf{u}_{t}-\widetilde{\mathbf{u}}_{t}). \end{array}\] (D.1)
Therefore, we have the control action searching process for \(\mathbf{u}_{t}^{*}\) as
\[\begin{array}{ll}\mathrm{d}\mathbf{u}_{t}&=-\alpha\nabla_{\mathbf{u}_{t}}M(\mathbf{u}_{t}|\mathbf{\mu}_{t})\mathrm{d}t+\alpha\mathbf{B}\mathrm{d}W_{t}\\ &=-\alpha\nabla_{\mathbf{u}_{t}}\big(H(\mathbf{u}_{t}|\mathbf{\mu}_{t})+\mathbf{u}_{t}^{T}\mathbf{R}\mathbf{u}_{t}\big)\mathrm{d}t+\alpha\mathbf{B}\mathrm{d}W_{t}\\ &\approx-\alpha[2\mathbf{G}^{T}\mathbf{Q}\mathbf{G}(\mathbf{u}_{t}-\widetilde{\mathbf{u}}_{t})+2\mathbf{R}\mathbf{u}_{t}]\mathrm{d}t+\alpha\mathbf{B}\mathrm{d}W_{t}\\ &=-2\alpha[\mathbf{G}^{T}\mathbf{Q}\mathbf{G}\mathbf{u}_{t}-\mathbf{G}^{T}\mathbf{Q}\mathbf{G}\widetilde{\mathbf{u}}_{t}+\mathbf{R}\mathbf{u}_{t}]\mathrm{d}t+\alpha\mathbf{B}\mathrm{d}W_{t}\\ &=-2\alpha[\mathbf{G}^{T}\mathbf{Q}\mathbf{G}\mathbf{u}_{t}-(\mathbf{G}^{T}\mathbf{Q}\mathbf{G}+\mathbf{R})\mathbf{u}_{t}^{*}+\mathbf{R}\mathbf{u}_{t}]\mathrm{d}t+\alpha\mathbf{B}\mathrm{d}W_{t}\\ &=2\alpha\big(\mathbf{G}^{T}\mathbf{Q}\mathbf{G}+\mathbf{R}\big)(\mathbf{u}_{t}^{*}-\mathbf{u}_{t})\mathrm{d}t+\alpha\mathbf{B}\mathrm{d}W_{t}.\end{array}\] (D.2)
Let \(\mathbf{\Psi}=2\alpha[\mathbf{G}^{T}\mathbf{Q}\mathbf{G}+\mathbf{R}]\) and \(\mathbf{\sigma}=\alpha\mathbf{B}\). Because \(\alpha\) is a positive step size, we have \(\mathbf{\Psi}>0\). Therefore, the search process for \(\mathbf{u}_{t}^{*}\) is an Ornstein-Uhlenbeck process.
### Appendix E. Proof of Theorem 3
According to Theorem 2, the search process for \(\mathbf{u}_{t}^{*}\) follows an Ornstein-Uhlenbeck process, i.e., \(\mathrm{d}\mathbf{u}_{t}=\mathbf{\Psi}(\mathbf{u}_{t}^{*}-\mathbf{u}_{t})\mathrm{d}t+\mathbf{\sigma}\mathrm{d}W_{t}\). According to Gardiner (1985), the stationary distribution of \(\mathbf{u}_{t}\) satisfies:
\[\nabla_{\mathbf{u}_{t}}[\mathbf{\Psi}(\mathbf{u}_{t}^{*}-\mathbf{u}_{t})p_{s}(\mathbf{u}_{t})]=\tfrac{1}{2}\mathbf{\sigma}^{T}\mathbf{\sigma}\nabla_{\mathbf{u}_{t}}^{2}[p_{s}(\mathbf{u}_{t})]. \tag{E.1}\]
We can solve the stationary distribution of \(\mathbf{u}_{t}\) as \(p_{s}(\mathbf{u}_{t})\propto\exp\{-(\mathbf{u}_{t}-\mathbf{u}_{t}^{*})^{T}[\mathbf{\sigma}^{T}\mathbf{\Psi}^{-1}\mathbf{\sigma}]^{-1}(\mathbf{u}_{t}-\mathbf{u}_{t}^{*})\}\). That is to say, as \(t\to\infty\), the search process for \(\mathbf{u}_{t}^{*}\) has the stationary distribution:
\[\mathbf{u}_{t}\sim MN\left(\mathbf{u}_{t}^{*},\tfrac{1}{2}\mathbf{\sigma}^{T}\mathbf{\Psi}^{-1}\mathbf{\sigma}\right). \tag{E.2}\]
### Appendix F. Solution of the DOE-based APC
According to Equation (19), the control decision is \(\mathbf{u}_{t}^{*}=\arg\min_{\mathbf{u}_{t}}C_{t}\big(\mathbf{u}_{t}\big|\widehat{\mathbf{\theta}}_{0},\widehat{\mathbf{\theta}},\widehat{\mathbf{\gamma}},\widehat{\mathbf{\vartheta}},\widehat{\mathbf{\omega}},\widehat{\mathbf{\rho}},\mathbf{e}_{t-1}\big)=\arg\min_{\mathbf{u}_{t}}\mathbb{E}_{\widehat{\mathbf{\theta}}_{0},\widehat{\mathbf{\theta}},\widehat{\mathbf{\gamma}},\widehat{\mathbf{\vartheta}},\widehat{\mathbf{\omega}},\widehat{\mathbf{\rho}},\widehat{\mathbf{\varphi}}}\big(\mathbf{z}_{t}^{T}\mathbf{z}_{t}\big|\widehat{\mathbf{\theta}}_{0},\widehat{\mathbf{\theta}},\widehat{\mathbf{\gamma}},\widehat{\mathbf{\vartheta}},\widehat{\mathbf{\omega}},\widehat{\mathbf{\rho}},\widehat{\mathbf{\varphi}},\mathbf{e}_{t-1}\big)\). Since we have a two-dimensional system output in the CMP case, this objective can be rewritten as
\[\mathbf{u}_{t}^{*} =arg\min_{\mathbf{u}_{t}}\mathbb{E}_{\mathbf{\bar{\theta}}_{0},\mathbf{\bar{ \theta}},\mathbf{\bar{\gamma}},\mathbf{\bar{\partial}},\mathbf{\bar{\omega}},\mathbf{\bar{ \rho}},\mathbf{e}_{t-1}}\left(\left(y_{t}^{(1)}-y_{1}^{*}\right)^{2}+\left(y_{t}^ {(2)}-y_{2}^{*}\right)^{2}\bigg{|}\mathbf{\bar{\theta}}_{0},\mathbf{\bar{\theta}},\bm {\bar{\gamma}},\mathbf{\bar{\vartheta}},\mathbf{\bar{\omega}},\mathbf{\bar{\rho}},\mathbf{ \bar{\varphi}},\mathbf{e}_{t-1}\right)\] \[=arg\min_{\mathbf{u}_{t}}\mathbb{E}_{\mathbf{\bar{\theta}}_{0},\mathbf{\bar{ \theta}},\mathbf{\bar{\gamma}},\mathbf{\bar{\partial}},\mathbf{\bar{\omega}},\mathbf{\bar{ \rho}},\mathbf{e}_{t-1}}\left(\left(z_{t}^{(1)}\right)^{2}+\left(z_{t}^{(2)} \right)^{2}\bigg{|}\mathbf{\bar{\theta}}_{0},\mathbf{\bar{\theta}},\mathbf{\bar{\gamma}}, \mathbf{\bar{\vartheta}},\mathbf{\bar{\omega}},\mathbf{\bar{\rho}},\mathbf{\bar{\varphi}},\bm {e}_{t-1}\right).\]
The objective function can be separated into two parts. We take \(\mathcal{C}_{t}^{(1)}(\cdot)\), related to \(z_{t}^{(1)}\), as an example to analyze the optimal APC; the other dimension \(\mathcal{C}_{t}^{(2)}(\cdot)\) is treated in the same way. Considering the randomness of the parameters in decision making, we let:
\[\mathcal{C}_{t}^{(1)}(\mathbf{u}_{t})=\mathbb{E}_{\hat{\theta}_{10},\widehat{\mathbf{\theta}}_{1},\hat{\gamma}_{1},\hat{\vartheta}_{1},\hat{\omega}_{1},\hat{\varphi}_{1},e_{t-1}^{(1)}}\Big[\big(z_{t}^{(1)}(\mathbf{u}_{t})\big)^{2}\Big]=\Big(\mathbb{E}\big[z_{t}^{(1)}(\mathbf{u}_{t})\big]\Big)^{2}+\mathrm{var}\big[z_{t}^{(1)}(\mathbf{u}_{t})\big], \tag{F.1}\]

where the expectation and variance are taken over the randomness of the parameter estimators and of \(e_{t-1}^{(1)}\).
Then we analyze these two terms in Equation (F.1).
\[\begin{split}&\mathbb{E}\big[z_{t}^{(1)}(\mathbf{u}_{t})\,\big|\,\hat{\theta}_{10},\widehat{\mathbf{\theta}}_{1},\hat{\gamma}_{1},\hat{\vartheta}_{1},\hat{\omega}_{1},\hat{\varphi}_{1},e_{t-1}^{(1)}\big]\\ &=\mathbb{E}\big[\theta_{10}+\theta_{11}u_{t}^{(1)}+\theta_{12}u_{t}^{(2)}+\theta_{13}u_{t}^{(3)}+\gamma_{1}t+\vartheta_{1}e_{t-1}^{(1)}+\omega_{1}z_{t-1}^{(1)}+\varphi_{1}te_{t-1}^{(1)}+r_{1}\,\big|\,\hat{\theta}_{10},\widehat{\mathbf{\theta}}_{1},\hat{\gamma}_{1},\hat{\vartheta}_{1},\hat{\omega}_{1},\hat{\varphi}_{1},e_{t-1}^{(1)}\big]\\ &=\hat{\theta}_{10}+\hat{\theta}_{11}u_{t}^{(1)}+\hat{\theta}_{12}u_{t}^{(2)}+\hat{\theta}_{13}u_{t}^{(3)}+\hat{\gamma}_{1}t+\hat{\vartheta}_{1}e_{t-1}^{(1)}+\hat{\omega}_{1}z_{t-1}^{(1)}+\hat{\varphi}_{1}te_{t-1}^{(1)},\end{split}\] (F.2)
where \(e_{t-1}^{(1)}\) denotes the regression noise of the first dimension of the system output at run \(t-1\). We denote the three dimensions of the control recipe as \(\mathbf{u}_{t}=\begin{bmatrix}u_{t}^{(1)},u_{t}^{(2)},u_{t}^{(3)}\end{bmatrix}^{T}\), and the corresponding parameters as \(\mathbf{\theta_{1}}=\begin{bmatrix}\theta_{11},\theta_{12},\theta_{13}\end{bmatrix}\). Applying the law of total variance with respect to \(e_{t-1}^{(1)}\), we have:
\[\begin{split}&\mathrm{var}\big[z_{t}^{(1)}(\mathbf{u}_{t})\big]\\ &=\mathbb{E}_{e_{t-1}^{(1)}}\Big[\mathrm{var}_{\hat{\theta}_{10},\widehat{\mathbf{\theta}}_{1},\hat{\gamma}_{1},\hat{\vartheta}_{1},\hat{\omega}_{1},\hat{\varphi}_{1}}\big[z_{t}^{(1)}(\mathbf{u}_{t})\,\big|\,e_{t-1}^{(1)}\big]\Big]+\mathrm{var}_{e_{t-1}^{(1)}}\Big[\mathbb{E}_{\hat{\theta}_{10},\widehat{\mathbf{\theta}}_{1},\hat{\gamma}_{1},\hat{\vartheta}_{1},\hat{\omega}_{1},\hat{\varphi}_{1}}\big[z_{t}^{(1)}(\mathbf{u}_{t})\,\big|\,e_{t-1}^{(1)}\big]\Big].\end{split}\] (F.3)
For the first term in Equation (F.3), we have:
\[\begin{split}&\mathbb{E}_{e_{t-1}^{(1)}}\Big[\mathrm{var}_{\hat{\theta}_{10},\widehat{\mathbf{\theta}}_{1},\hat{\gamma}_{1},\hat{\vartheta}_{1},\hat{\omega}_{1},\hat{\varphi}_{1}}\big[z_{t}^{(1)}(\mathbf{u}_{t})\,\big|\,e_{t-1}^{(1)}\big]\Big]\\ &=\mathbb{E}_{e_{t-1}^{(1)}}\Big[\mathrm{var}\big(\hat{\theta}_{10}\big)+\mathbf{u}_{t}^{T}\mathbf{\Sigma}_{\theta}^{1}\mathbf{u}_{t}+t^{2}\mathrm{var}(\hat{\gamma}_{1})+\big(e_{t-1}^{(1)}\big)^{2}\mathrm{var}\big(\hat{\vartheta}_{1}\big)+\big(z_{t-1}^{(1)}\big)^{2}\mathrm{var}(\hat{\omega}_{1})+\mathrm{var}_{\hat{\varphi}_{1}}\big[\varphi_{1}te_{t-1}^{(1)}\big]+\mathrm{var}(r_{1})\Big]\\ &=\mathrm{var}\big(\hat{\theta}_{10}\big)+\mathbf{u}_{t}^{T}\mathbf{\Sigma}_{\theta}^{1}\mathbf{u}_{t}+t^{2}\mathrm{var}(\hat{\gamma}_{1})+\big(e_{t-1}^{(1)}\big)^{2}\mathrm{var}\big(\hat{\vartheta}_{1}\big)+\big(z_{t-1}^{(1)}\big)^{2}\mathrm{var}(\hat{\omega}_{1})+\mathrm{var}_{\hat{\varphi}_{1}}\big[\varphi_{1}te_{t-1}^{(1)}\big]+\mathrm{var}(r_{1}).\end{split}\] (F.4)
And the second term is:
\[\begin{split}&\mathrm{var}_{e_{t-1}^{(1)}}\Big[\mathbb{E}_{\hat{\theta}_{10},\widehat{\mathbf{\theta}}_{1},\hat{\gamma}_{1},\hat{\vartheta}_{1},\hat{\omega}_{1},\hat{\varphi}_{1}}\big[z_{t}^{(1)}(\mathbf{u}_{t})\,\big|\,e_{t-1}^{(1)}\big]\Big]\\ &=\mathrm{var}_{e_{t-1}^{(1)}}\big[\hat{\theta}_{10}+\hat{\theta}_{11}u_{t}^{(1)}+\hat{\theta}_{12}u_{t}^{(2)}+\hat{\theta}_{13}u_{t}^{(3)}+\hat{\gamma}_{1}t+\hat{\vartheta}_{1}e_{t-1}^{(1)}+\hat{\omega}_{1}z_{t-1}^{(1)}+\hat{\varphi}_{1}te_{t-1}^{(1)}\big]\\ &=\big(\hat{\vartheta}_{1}+\hat{\varphi}_{1}t\big)^{2}\mathrm{var}\big(e_{t-1}^{(1)}\big).\end{split}\] (F.5)
Therefore, we can summarize Equation (F.1) as
\[\begin{split}C_{t}^{(1)}(\mathbf{u}_{t})&=\big[\hat{\theta}_{10}+\hat{\theta}_{11}u_{t}^{(1)}+\hat{\theta}_{12}u_{t}^{(2)}+\hat{\theta}_{13}u_{t}^{(3)}+\hat{\gamma}_{1}t+\hat{\vartheta}_{1}e_{t-1}^{(1)}+\hat{\omega}_{1}z_{t-1}^{(1)}+\hat{\varphi}_{1}te_{t-1}^{(1)}\big]^{2}+\mathrm{var}\big(\hat{\theta}_{10}\big)\\ &\quad+\mathbf{u}_{t}^{T}\mathbf{\Sigma}_{\theta}^{1}\mathbf{u}_{t}+t^{2}\mathrm{var}(\hat{\gamma}_{1})+\big(e_{t-1}^{(1)}\big)^{2}\mathrm{var}\big(\hat{\vartheta}_{1}\big)+\big(z_{t-1}^{(1)}\big)^{2}\mathrm{var}(\hat{\omega}_{1})+\mathrm{var}_{\hat{\varphi}_{1}}\big[\varphi_{1}te_{t-1}^{(1)}\big]\\ &\quad+\mathrm{var}(r_{1})+\big(\hat{\vartheta}_{1}+\hat{\varphi}_{1}t\big)^{2}\mathrm{var}\big(e_{t-1}^{(1)}\big),\end{split}\]

and similarly

\[\begin{split}C_{t}^{(2)}(\mathbf{u}_{t})&=\big[\hat{\theta}_{20}+\hat{\theta}_{21}u_{t}^{(1)}+\hat{\theta}_{22}u_{t}^{(2)}+\hat{\theta}_{23}u_{t}^{(3)}+\hat{\gamma}_{2}t+\hat{\vartheta}_{2}e_{t-1}^{(2)}+\hat{\omega}_{2}z_{t-1}^{(2)}+\hat{\varphi}_{2}te_{t-1}^{(2)}\big]^{2}+\mathrm{var}\big(\hat{\theta}_{20}\big)\\ &\quad+\mathbf{u}_{t}^{T}\mathbf{\Sigma}_{\theta}^{2}\mathbf{u}_{t}+t^{2}\mathrm{var}(\hat{\gamma}_{2})+\big(e_{t-1}^{(2)}\big)^{2}\mathrm{var}\big(\hat{\vartheta}_{2}\big)+\big(z_{t-1}^{(2)}\big)^{2}\mathrm{var}(\hat{\omega}_{2})+\mathrm{var}_{\hat{\varphi}_{2}}\big[\varphi_{2}te_{t-1}^{(2)}\big]\\ &\quad+\mathrm{var}(r_{2})+\big(\hat{\vartheta}_{2}+\hat{\varphi}_{2}t\big)^{2}\mathrm{var}\big(e_{t-1}^{(2)}\big).\end{split}\]
Then taking the first-order derivative of \(C_{t}^{(1)}(\boldsymbol{u}_{t})+C_{t}^{(2)}(\boldsymbol{u}_{t})\), we have
\[\begin{split}\frac{\mathrm{d}\left(C_{t}^{(1)}(\mathbf{u}_{t})+C_{t}^{(2)}(\mathbf{u}_{t})\right)}{\mathrm{d}\mathbf{u}_{t}}&=2\left(\hat{\theta}_{10}+\widehat{\mathbf{\theta}}_{1}^{T}\mathbf{u}_{t}+\hat{\gamma}_{1}t+\hat{\vartheta}_{1}e_{t-1}^{(1)}+\hat{\omega}_{1}z_{t-1}^{(1)}+\hat{\varphi}_{1}te_{t-1}^{(1)}\right)\widehat{\mathbf{\theta}}_{1}+2\mathbf{\Sigma}_{\theta}^{1}\mathbf{u}_{t}\\ &\quad+2\left(\hat{\theta}_{20}+\widehat{\mathbf{\theta}}_{2}^{T}\mathbf{u}_{t}+\hat{\gamma}_{2}t+\hat{\vartheta}_{2}e_{t-1}^{(2)}+\hat{\omega}_{2}z_{t-1}^{(2)}+\hat{\varphi}_{2}te_{t-1}^{(2)}\right)\widehat{\mathbf{\theta}}_{2}+2\mathbf{\Sigma}_{\theta}^{2}\mathbf{u}_{t}=0.\end{split}\]
If \(\left[\mathbf{\Sigma}_{\theta}^{1}+\widehat{\mathbf{\theta}}_{1}\widehat{\mathbf{\theta}}_{1}^{T}+\mathbf{\Sigma}_{\theta}^{2}+\widehat{\mathbf{\theta}}_{2}\widehat{\mathbf{\theta}}_{2}^{T}\right]\) is invertible, we have the closed-form solution:

\[\mathbf{u}_{t}^{*}=-\left[\mathbf{\Sigma}_{\theta}^{1}+\widehat{\mathbf{\theta}}_{1}\widehat{\mathbf{\theta}}_{1}^{T}+\mathbf{\Sigma}_{\theta}^{2}+\widehat{\mathbf{\theta}}_{2}\widehat{\mathbf{\theta}}_{2}^{T}\right]^{-1}\cdot\left[\left(\hat{\theta}_{10}+\hat{\gamma}_{1}t+\hat{\vartheta}_{1}e_{t-1}^{(1)}+\hat{\varphi}_{1}te_{t-1}^{(1)}+\hat{\omega}_{1}z_{t-1}^{(1)}\right)\cdot\widehat{\mathbf{\theta}}_{1}+\left(\hat{\theta}_{20}+\hat{\gamma}_{2}t+\hat{\vartheta}_{2}e_{t-1}^{(2)}+\hat{\varphi}_{2}te_{t-1}^{(2)}+\hat{\omega}_{2}z_{t-1}^{(2)}\right)\cdot\widehat{\mathbf{\theta}}_{2}\right].\]
|
2307.16689 | No that's not what I meant: Handling Third Position Repair in
Conversational Question Answering | The ability to handle miscommunication is crucial to robust and faithful
conversational AI. People usually deal with miscommunication immediately as
they detect it, using highly systematic interactional mechanisms called repair.
One important type of repair is Third Position Repair (TPR) whereby a speaker
is initially misunderstood but then corrects the misunderstanding as it becomes
apparent after the addressee's erroneous response. Here, we collect and
publicly release Repair-QA, the first large dataset of TPRs in a conversational
question answering (QA) setting. The data is comprised of the TPR turns,
corresponding dialogue contexts, and candidate repairs of the original turn for
execution of TPRs. We demonstrate the usefulness of the data by training and
evaluating strong baseline models for executing TPRs. For stand-alone TPR
execution, we perform both automatic and human evaluations on a fine-tuned T5
model, as well as OpenAI's GPT-3 LLMs. Additionally, we extrinsically evaluate
the LLMs' TPR processing capabilities in the downstream conversational QA task.
The results indicate poor out-of-the-box performance on TPR's by the GPT-3
models, which then significantly improves when exposed to Repair-QA. | Vevake Balaraman, Arash Eshghi, Ioannis Konstas, Ioannis Papaioannou | 2023-07-31T14:02:45Z | http://arxiv.org/abs/2307.16689v1 | # _No that's not what I meant_: Handling Third Position Repair in Conversational Question Answering
###### Abstract
The ability to handle _miscommunication_ is crucial to robust and faithful conversational AI. People usually deal with miscommunication immediately as they detect it, using highly systematic interactional mechanisms called _repair_. One important type of repair is _Third Position Repair_ (TPR), whereby a speaker is initially misunderstood but then corrects the misunderstanding as it becomes apparent after the addressee's erroneous response (see Fig. 1). Here, we collect and publicly release repair-qa1, the first large dataset of TPRs in a conversational question answering (QA) setting. The data is comprised of the TPR turns, corresponding dialogue contexts, and candidate repairs of the original turn for execution of TPRs. We demonstrate the usefulness of the data by training and evaluating strong baseline models for executing TPRs. For stand-alone TPR execution, we perform both automatic and human evaluations on a fine-tuned T5 model, as well as OpenAI's GPT-3 LLMs. Additionally, we _extrinsically_ evaluate the LLMs' TPR processing capabilities in the downstream conversational QA task. The results indicate poor out-of-the-box performance on TPRs by the GPT-3 models, which then significantly improves when exposed to repair-qa.
Footnote 1: The dataset, models and code for all experiments are available at [https://github.com/alanaai/Repair-QA](https://github.com/alanaai/Repair-QA)
## 1 Introduction
Participants in conversation need to work together on a moment-by-moment basis to achieve shared understanding and coordination (Clark, 1996; Clark and Brennan, 1991; Goodwin, 1981; Healey et al., 2018; Mills, 2007). One of the key interactional mechanisms that enables this is called _repair_ (Schegloff et al., 1977; Schegloff, 1992) - see Fig. 1: a set of universal, highly systematised (Dingemanse et al., 2015), local methods for dealing with _miscommunication_ as it is detected.
Miscommunication likewise arises in human-machine conversation. Therefore, the ability to interpret and generate effective repair sequences is crucial to _robust_ Conversational AI technology, and to ensuring that Natural Language Understanding (NLU) output and/or subsequent system responses remain _faithful_ to what the user intended.
Considerable attention has been paid to computational models for the interpretation and generation of _self-repair_ (see Hough and Schlangen (2015); Hough (2015); Shalyminov et al. (2017); Skantze and Hjalmarsson (2010); Buss and Schlangen (2011); Hough and Purver (2012) among others): a class of repairs whereby the speaker corrects themselves on the fly within the same conversational turn (e.g. "User: I want to go to London uhm sorry Paris"). Similarly, the crucial role of generating and responding to _Clarification Requests_ (e.g. "Pardon/what/who?") in conversational models has long been recognised (see San-Segundo et al. (2001); Purver (2004); Purver and Ginzburg (2004); Rieser and Moore (2005); Rodriguez and Schlangen (2004); Rieser and Lemon (2006) among others), but existing systems either remain limited (e.g. Curry et al. (2018)) or do not support this at all - see Purver et al. (2018) for an overview of existing models of repair.
In this paper, we focus on an important class of repairs that has, to our knowledge, been neglected in the NLP community, likely due to the unavailability of data:
Figure 1: **T5** example from repair-qa.
_Third Position Repair_ (TPR; Schegloff, 1992; aka repair after next turn). These occur when the addressee initially misunderstands the speaker (Fig. 1 at T1, the _trouble source_ turn), responds based on this misunderstanding (at T2), which in turn reveals the misunderstanding to the original speaker, who then goes on to correct it (at T3). Our **contributions** are: (1) We collect, analyse and release repair-qa, the first large dataset of Third Position Repairs (TPRs) in a conversational QA setting, together with candidate repair outcomes (rewrites) for training _repair execution_ models; and (2) We then use repair-qa to: (a) train and intrinsically evaluate strong baseline models for the execution of TPRs; and (b) systematically probe the TPR processing capabilities of GPT-3-Curie and GPT-3-Davinci with and without exposing them to examples from repair-qa.
## 2 The repair-qa dataset
In this section, we describe our method for eliciting Third Position Repairs (TPR) from AMT crowd workers (henceforth annotators). Overall, we set this up as a dialogue completion task whereby the annotators are given a dialogue snippet in which a miscommunication has occurred: they are given T1 (Fig. 1; the Trouble Source) and T2 (the erroneous system response). They are then asked to provide a (Third Position) correction at T3 to resolve the miscommunication.
**Method: Eliciting TPRs.** We built our dialogue completion tasks on Amazon Mechanical Turk (AMT). Annotators were paid \(\$0.29\) per annotation for their work (estimated at \(\$11\) per hour). To generate the dialogue completion tasks for eliciting TPRs, we start from the AmbigQA dataset (Min et al., 2020), since it contains ambiguous questions (i.e. questions that have multiple interpretations and answers) and their corresponding unambiguous questions along with their answers. For each ambiguous question, \(Q\), and the corresponding pair of unambiguous questions with their answers, \((Q_{1},A_{1})\) and \((Q_{2},A_{2})\), we build a dialogue snippet to be completed by the annotator with a TPR as follows: (1) We build an informative _context_, \(C\), that differentiates between questions \(Q_{1}\) and \(Q_{2}\); (2) The answers in AmbigQA are mostly short, Noun Phrase answers, which do not reveal how the ambiguous question was interpreted or reveal the apparent miscommunication to the annotator. To remedy this, we transform these short answers into full sentential form using the rule-based approach of Demszky et al. (2018). This allows us to derive the sentential form of \(A_{1}\), call it \(A_{1}^{\prime}\); (3) We build the dialogue snippet with two turns, T1 and T2 - see Fig. 1 - where \(T1=Q\) and \(T2=A_{1}^{\prime}\). Annotators are told that their goal is to get a response to \(Q_{2}\) (indicated by context \(C\)); then, given the dialogue snippet which erroneously provides an answer to \(Q_{1}\), they are asked to provide _two_ alternative TPRs at \(T3\) to get a response to \(Q_{2}\) instead. For example, in Fig. 1: \(Q\) is \(T1\); \(Q_{1}\) is "What is the name of the princess in Frozen who eventually becomes queen?"; \(A_{1}\) is "Elsa"; \(A_{1}^{\prime}\) is \(T2\); and \(C\) is "who eventually becomes queen vs. the younger sister". The context \(C\) is built by identifying the difference between \(Q_{1}\) and \(Q_{2}\). We employ this approach as the AmbigQA unambiguous questions have the same syntactic form as the ambiguous question. Another big advantage of using the AmbigQA dataset is that \(Q_{2}\) can be seen as the contextually resolved meaning of the TPR, which we call the gold 'rewrite', following (Anantha et al., 2021). This gold rewrite is used below in our _repair execution_ models. See Appendix B for more details.
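The construction described above can be summarised in a short sketch. The sentential-answer transformation and the context string are simplified placeholders for the rule-based method of Demszky et al. (2018) and the question-difference extraction, and all field names are illustrative.

```python
def build_elicitation_item(ambig_q, q1, a1, q2):
    """Assemble one dialogue-completion task (sketch; field names are illustrative).

    ambig_q : the ambiguous question Q, shown as turn T1.
    q1, a1  : the unambiguous question the system 'wrongly' answers, and its answer.
    q2      : the intended unambiguous question; also the gold rewrite for the TPR.
    """
    # Simplified stand-ins for the rule-based steps described above:
    a1_sentential = f"{q1.rstrip('?')} is {a1}."    # short answer -> full sentence
    context = f"{q1}  vs.  {q2}"                    # the paper keeps only the differing spans

    return {
        "T1": ambig_q,         # trouble-source turn
        "T2": a1_sentential,   # erroneous system response
        "context": context,    # tells the annotator which reading was intended
        "gold_rewrite": q2,    # contextually resolved meaning of the TPR
    }
```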
**Statistics and Quality Control:** The repair-qa dataset consists of **3305** examples (training: 2657, test: 648) which are chosen and annotated from the 4749 examples from the AmbigQA dataset. Each conversation in repair-qa consists of two different TPRs yielding a total 6610 TPR annotations. Table 6 in Appendix shows some examples of the collected data. For quality control, we randomly select 100 TPR annotations from the testset to perform a qualitative inspection of the collected data. We annotate them for (i) Quality: Does the TPR convey the information needed to convey the necessary correction?; (ii) Context-Dependence: Does the TPR contain any context-dependent phenomena (e.g. fragments, ellipsis, pronominals); and (iii) Corrective: Is the TPR formulated explicitly as a correction? (e.g. The TPR in Fig. 1 could have been: "what about the name of the younger sister?" which does not explicitly signal a correction). We find that only 16% of the data contains some noise; that 93% of TPRs contain some form of context-dependency; and that 80% of the TPRs formulate the TPR explicitly as a correction. To further measure the degree to which the interpretation of the TPRs relies on the dialogue context, we measure the unigram overlap between the TPR and the reference rewrite (viz. \(Q2\) above). We find 28% overlap between them, suggesting that the TPRs are highly context-dependent.
**Limitations:** As such, repair-qa has two important limitations: (1) TPRs can in general sometimes - but rarely - occur at a distance of more than two turns from the _trouble-source_ turn (Schegloff, 1992). But the TPRs we collected are always in the third turn following the trouble source: this is an artefact not just of our data collection design as a unilateral dialogue completion task, but also of the architecture of most Conversational QA models that repair-qa is designed to be useful for; and (2) overall we would have preferred a more ecologically valid setup where TPRs are elicited within a more dynamic, interactive setting rather than as a dialogue completion task. Nevertheless, we believe that this trade-off between the difficulty of collecting human-human dialogues and the breadth of the types of TPR sequences collected is justified.
## 3 TPR execution
We cast the TPR execution task as a sequence to sequence problem, where input to the model is the dialogue history up to and including the TPR turn, and the model is trained to generate a rewrite of the ambiguous, trouble-source question, reflecting the correction in the TPR. We use a pre-trained T5 model (Raffel et al., 2022) for our experiments and compare against OpenAI's GPT-3 (Brown et al., 2020) when prompted with TPR examples.
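A minimal sketch of this setup with a Hugging Face T5 checkpoint is shown below; the turn markers used to serialize the dialogue history are an assumption, not the exact format behind T5-repair-qa.

```python
# Minimal sketch of repair execution as a sequence-to-sequence task with a T5
# checkpoint from the transformers library. In practice the model would be
# fine-tuned on repair-qa before generating rewrites.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

dialogue = (
    "T1: What's the name of the princess in Frozen? "
    "T2: The name of the princess in Frozen who eventually becomes queen is Elsa. "
    "T3: No, I meant the younger sister."
)
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
rewrite = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(rewrite)  # target: the corrected, unambiguous question (the gold rewrite)
```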
### Repair Execution Results
The models are evaluated against metrics of BERTScore (Zhang et al., 2020), BLEU and Exact Match (EM) between the reference rewrite and the generated output 2.
Footnote 2: We also tried an NLI-based text-classifier (Yin et al., 2019) for evaluation but the metric was not suited for this task, hence not reported here.
Table 1 shows the performance of all models on the repair-qa testset. The T5 model is fine-tuned using repair-qa and its performance is reported as T5-repair-qa. The fine-tuned T5-repair-qa model achieves the best performance against the gold rewrites on all three metrics considered. The GPT-3 models (Davinci and Curie) are few-shot prompted with 10 random examples, per test instance, pooled from repair-qa, followed by the test data (see Appendix C for details); unlike the T5-repair-qa model which is fine-tuned using the repair-qa training data. We see a slightly lower performance for Davinci compared to the T5-repair-qa on the automatic evaluation; the Curie model shows significantly inferior performance, especially when looking at EM 3.
Footnote 3: We also did a zero-shot evaluation of a T5 model trained only on QReCC (Anantha et al., 2021) – a contextual resolution dataset – against the repair-qa testset: it performed very poorly (BLEU = 37.44) indicating that the patterns of context-dependency in the TPRs are very different from the general patterns of context-dependency found in the QReCC dataset. This further demonstrates the usefulness of repair-qa.
Generally, the correction that a TPR provides to the _trouble source_ question (T1 in Fig. 1) is very specific and small (often just 1 or 2 words, e.g. "the younger sister" in Fig. 1). Thus a higher BLEU score is more likely even when the model prediction is similar to the trouble source. To evaluate the ability of the models to produce specifically the corrective tokens, we evaluate the models' predictions against both the gold rewrite and the trouble source itself, and compare these across all metrics. We compute the metrics for the models' prediction against the gold rewrite on the one hand, and, the trouble source separately on the other hand, and compute the difference between them (simple subtraction). This difference in performance against them is therefore attributable to whether the model was able to produce the few corrective tokens. Table 2 shows this differential evaluation: a similar trend is seen on the models for the BLEU metric but GPT-3-Davinci outperforms other models on BERTScore. This result is discussed further below.
| | BERTScore | BLEU | EM |
|---|---|---|---|
| T5-repair-qa | **97.48** | **72.06** | **30.40** |
| GPT-3-Davinci | 97.22 | 64.18 | 25.68 |
| GPT-3-Curie | 93.19 | 52.43 | 7.60 |

Table 1: Model performance on the testset of the repair-qa dataset.
| | BERTScore | BLEU |
|---|---|---|
| T5-repair-qa | 1.48 | **20.12** |
| GPT-3-Davinci | **1.76** | 19.94 |
| GPT-3-Curie | (0.11) | 1.85 |

Table 2: Model ability to generate corrective tokens computed based on the difference in performance of the prediction against the rewrite and the trouble source.
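The differential evaluation described above can be sketched as follows; the metric libraries used here (sacrebleu, bert-score) are assumptions and not necessarily the implementations behind the reported numbers.

```python
# Sketch of the differential evaluation: score each prediction against the gold
# rewrite and against the trouble source, then subtract the two scores.
import sacrebleu
from bert_score import score as bert_score

def corpus_metrics(preds, refs):
    bleu = sacrebleu.corpus_bleu(preds, [refs]).score
    _, _, f1 = bert_score(preds, refs, lang="en")
    return bleu, float(f1.mean()) * 100

preds = ["what is the name of the younger sister princess in frozen?"]
gold_rewrites = ["What is the name of the younger sister princess in Frozen?"]
trouble_sources = ["What's the name of the princess in Frozen?"]

bleu_gold, bert_gold = corpus_metrics(preds, gold_rewrites)
bleu_src, bert_src = corpus_metrics(preds, trouble_sources)
print("BLEU difference:", bleu_gold - bleu_src)        # credit for producing the corrective tokens
print("BERTScore difference:", bert_gold - bert_src)
```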
### Human Evaluation
We asked two expert annotators (two co-authors of the paper) to rate the quality of the T5-repair-qa and GPT-3-Davinci models' output rewrites for executing the TPRs. We separately asked them the following questions: **Q1**: "On a scale of 1 to 5, how well does the model prediction avoid the misunderstanding caused by the ambiguity in the original question?"; and **Q2**: "On a scale of 1 to 5, to what degree is the model prediction asking for the same information as the gold?". While the answer to Q2 depends on the gold rewrites from repair-qa, the answer to Q1 does not. This is because in executing a TPR what we care about is not necessarily the surface form of the output but instead the overall correction on a _semantic level_. The annotators showed very high inter-annotator agreement on both questions (average Krippendorff's \(\alpha=0.8\)).
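A small sketch of the agreement computation on such 1-to-5 ratings, assuming the krippendorff Python package; the ratings below are invented for illustration.

```python
# Sketch of the inter-annotator agreement computation on ordinal 1-5 ratings.
import krippendorff

ratings = [            # one row per annotator, one column per rated model output
    [4, 5, 3, 5, 4, 2, 5],
    [4, 5, 3, 4, 4, 2, 5],
]
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```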
As Table 3 shows, the Davinci model's performance in the human evaluation is superior to the T5-repair-qa model for both Q1 and Q2. At first glance, this would seem to be inconsistent with the word overlap metrics in Table 1 since the fine-tuned T5-repair-qa model outputs show more overall overlap with the gold rewrites. However, a qualitative inspection of the respective outputs of each model shows that the Davinci model manages to produce rewrites which sufficiently capture the meaning of the TPR even as it doesn't always reproduce exactly the same words. This explanation is further supported by the BERTScore, semantic similarity results in Table 2 which shows slightly superior performance of the Davinci model (see Table 5 in Appendix for an example comparison). We believe that this is due to the fact the Davinci model is only exposed to ten examples in the prompt each time, whereas the T5-repair-qa model is fine-tuned on all the training data from repair-qa.
## 4 Extrinsic evaluation of GPT-3's TPR capabilities in conversational QA
In this section, we use repair-qa to evaluate the TPR processing capabilities of OpenAI's GPT-3 Davinci model extrinsically in an end-to-end, conversational QA setting. We do this by comparing:
1. the model's response to the reference rewrite (the corrected, unambiguous form of each question); with
2. the response returned after the dialogue snippet with the TPR as its last turn.
If (a) and (b) are identical or highly similar, we can infer that the model was able to interpret the TPR correctly; independently of whether the responses are faithful. We compute the automatic evaluation on the model's response in (b) while treating the model's response in (a) as the ground truth. This would evaluate if the model was consistent in generating responses for both the rewrite and the TPR dialogue snippet. This evaluation is performed under two _prompting conditions_: **With TPR examples**: where the model is exposed to 10 TPR examples in the prompt; and; **Without TPR examples**: where the model is prompted without any TPR examples. In both conditions, the preamble instructs Davinci to generate _unknown_ as the answer if the question is either nonsense, trickery, or Davinci has no clear answer. In addition, in both cases, the model is instructed to provide short form, Noun Phrase answers (for details of all of the preambles used, see Appendix, Sec. C).
There could in general be two reasons for _unknown_ predictions after a TPR: (i) the Davinci's closed-book knowledge is insufficient to answer the (disambiguated, corrected) question; or; (ii) It was unable to interpret the TPR sequence. Since we are interested only in (ii), we _exclude_ all cases where the model was not able to answer the unambiguous question (i.e case (a) above), viz. the reference rewrite (the meaning of the TPR). This way we ensure that the model can actually answer the target, rewritten / corrected question. After these are excluded, the 'Unknown' column in Table 4 contains the number of _unknown_ responses to the TPRs; showing how the model improves when exposed to TPR examples in conversational QA.
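The filtering and consistency scoring just described can be sketched as follows; ask_model is a placeholder for the actual Davinci call with the preambles described above, and the bookkeeping details are assumptions rather than the authors' exact procedure.

```python
# Sketch of the end-to-end consistency check: compare the answer to the gold
# rewrite (case a) with the answer after the TPR dialogue (case b), skipping
# items the model cannot answer even in rewritten form.
import sacrebleu

def ask_model(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a GPT-3 completion call

def consistency_eval(examples):
    pairs, unknown = [], 0
    for ex in examples:
        ans_rewrite = ask_model(ex["rewrite"])        # (a) answer to the corrected question
        if ans_rewrite.strip().lower() == "unknown":
            continue                                  # model cannot answer the target question at all
        ans_tpr = ask_model(ex["tpr_dialogue"])       # (b) answer after the TPR turn
        if ans_tpr.strip().lower() == "unknown":
            unknown += 1
            continue
        pairs.append((ans_tpr, ans_rewrite))
    hyps = [h for h, _ in pairs]
    refs = [r for _, r in pairs]
    em = sum(h.strip().lower() == r.strip().lower() for h, r in pairs) / max(len(pairs), 1)
    bleu = sacrebleu.corpus_bleu(hyps, [refs]).score if pairs else 0.0
    return {"EM": em, "BLEU": bleu, "Unknown": unknown}
```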
For cases where both (a) and (b) above receive answers from GPT3, we perform automatic evaluation to measure the similarity between them: this is
| Prompting | BLEU | EM | Unknown |
|---|---|---|---|
| w/o TPR examples | 11.40 | 11.71% | 230 |
| with TPR examples | 16.98 | 31.90% | 57 |

Table 4: End-to-end TPR processing capability of GPT-3 Davinci, with and without being exposed to TPR examples from REPAIR-QA
| | **Q1** | **Q2** |
|---|---|---|
| T5-repair-qa | 3.53 | 4.01 |
| GPT-3-Davinci | 4.56 | 4.27 |

Table 3: Human evaluation of TPR execution models
also shown in Table 4. As a surface overlap metric, BLEU is suitable for this evaluation since we compare short answer tokens with many of these being bare Noun Phrases, e.g. names of movies, persons, dates, etc: there are no or few semantically similar paraphrases of these answers.
As is evident in Table 4, the TPR processing capability of Davinci in conversational QA when not exposed to any TPR example is very poor, but this improves significantly with a handful of TPR examples in the prompt. This shows that state-of-the-art LLMs do not handle TPRs well at all out-of-the-box, validating the requirement for datasets addressing specific dialogue phenomena like TPRs.
Even when the model is exposed to TPR sequences in the prompt (the "with TPR examples" condition) the model's performance still leaves a lot to be desired: the model's responses to the TPRs matches the expected response only in \(31.9\%\) of cases.
To verify the meaningfulness of the \(31.9\%\) exact match and the corresponding low BLEU score of \(16.98\) between model responses in (a) and (b), we went on to do a manual inspection of the data. Fig. 2 shows two examples of these responses:
We can see different answers when prompted with the dialogue including the TPR ((b) above) and when prompted with the rewrite (unambiguous form of the input; (a) above). Such inconsistent answers are frequent from the model even when repair-qa examples are provided in the prompt.
For more certainty, we further computed more focused BLEU scores only in cases where there was no exact match between the model's responses in (a) and (b). The BLEU scores on these not exactly matching responses, **with** and **without** exposure to TPR examples were 8.81 and 8.08 respectively. This shows that the model provides different, inconsistent answers for a large part of the repair-qa dataset even when exposed to TPR examples in the prompt; which in turn shows that the model is not able to interpret or integrate the TPR for too large a part of repair-qa. On a very small proportion of cases, Davinci provides responses which are similar (usually a partial match as in the second example above: "British Ministry of Information" vs. "Ministry of Information"), which is captured by the BLEU score metric.
## 5 Conclusion
The ability to interpret and generate repairs is essential to robust and faithful Conversational AI. In this paper, we focused on Third Position Repair (TPR), which has been largely neglected in the NLP community. We collect, analyse and release the first large dataset of TPRs and use it to evaluate strong baseline repair execution models, as well as the conversational QA performance of OpenAI's Davinci model when it encounters TPRs. The results show very poor out-of-the-box performance on TPRs, which then improves when the model is exposed to the repair-qa dataset. But even then, Davinci does not exhibit acceptable performance on TPRs when evaluated end to end in a Conversational QA setting. This is a symptom of the sparsity of TPRs in the original dialogic data used to pretrain Davinci and LLMs in general; and suggests that LLM researchers should be more selective in how they compile the datasets used for pretraining.
For this paper, we did not have a chance to evaluate later releases of LLMs (e.g. GPT3.5; GPT4) - it would be telling to see how much performance improvement the later models might exhibit on TPRs. Our evaluation methods above in conjunction with the repair-qa dataset can be used easily to perform these evaluations. Finally, we hope that this paper inspires further computational research into miscommunication phenomena in dialogue in the context of recent astonishing successes with LLMs.
|
2309.06888 | OWL Reasoners still useable in 2023 | In a systematic literature and software review over 100 OWL reasoners/systems
were analyzed to see if they would still be usable in 2023. This has never been
done in this capacity. OWL reasoners still play an important role in knowledge
organisation and management, but the last comprehensive surveys/studies are
more than 8 years old. The result of this work is a comprehensive list of 95
standalone OWL reasoners and systems using an OWL reasoner. For each item,
information on project pages, source code repositories and related
documentation was gathered. The raw research data is provided in a Github
repository for anyone to use. | Konrad Abicht | 2023-09-13T11:22:42Z | http://arxiv.org/abs/2309.06888v1 | # OWL Reasoners still useable in 2023
###### Abstract
In a systematic literature and software review over 100 OWL reasoners/systems were analyzed to see if they would still be usable in 2023. This has never been done in this capacity. OWL reasoners still play an important role in knowledge organisation and management, but the last comprehensive surveys/studies are more than 8 years old. The result of this work is a comprehensive list of 95 standalone OWL reasoners and systems using an OWL reasoner. For each item, information on project pages, source code repositories and related documentation was gathered. The raw research data is provided in a Github repository for anyone to use.
## 1 Introduction
There are many surveys and studies concerning OWL reasoners. Some examine the underlying methods and functionality, others compare performance metrics. One might think that the field of OWL reasoners is well established and that there is software for each relevant application. But this is not the case. Instead I have noticed that well known reasoners have hardly been updated in the last 10 years (e.g. HermiT). Some are still usable, mostly as Protege plugins, but it raises the question whether new (research or commercial) projects should rely on them. How are they maintained? Are bugs detected and dealt with? Do projects maintain their software dependencies? People interested in OWL reasoners today face many obstacles. To get a neutral view on the software landscape, I conducted a survey between May and July 2023. You hold the results of this work in your hands.
This paper is structured as follows: Section 2 contains a short summary of the required background knowledge. Section 3 then summarises related work. Section 4 describes my methodology and section 5 presents the results of my research. Finally, in section 6, I draw my conclusions and in section 7, I provide further starting points for future work.
### Publicly available research data
All research data is publicly available via a Github repository. It contains a CSV file with a list of analyzed OWL reasoners as well as a CSV file with systems using a foreign OWL reasoner. For each entry there is metadata about installation, usability and references such as the source code repository. All this data is available at the following URL:
[https://github.com/k00ni/owl-reasoner-list](https://github.com/k00ni/owl-reasoner-list)
I invite everyone to contribute. The repository is designed in a way to support further research and additions, so that others can continue the work in the years to come without having to start from scratch each time.
## 2 Reader background
You should have an extended knowledge of Semantic Web technologies and concepts such as RDF, RDFS, OWL 1/2 and OWL reasoning. There are many programming/software environments used to develop OWL reasoners, so basic knowledge in compiling and executing programs is recommended. Basic knowledge of software development using distributed version control systems, such as Git, is helpful. Below is a brief summary of the most widely used systems.
### Protege
Protege[73] is an ontology editor well known to ontologists and Semantic Web developers. It has been developed by Stanford University1. It provides tools for developing and maintaining OWL ontologies. There are many plugins available, for instance to use an OWL reasoner. Protege is written in Java and runs on Windows 10/11 as well as Ubuntu Linux.
Footnote 1: [https://protege.stanford.edu/](https://protege.stanford.edu/)
### OWL API
OWL-API [24] is written in Java and provides an Application Programming Interface for managing OWL ontologies. In addition to parsing and manipulating OWL ontologies, it also allows the use of reasoners. It also includes validators for different OWL profiles, for instance OWL 2 QL2, OWL 2 EL3 or OWL 2 RL4. Further information and source code can be found on the project page5.
Footnote 2: [https://www.w3.org/TR/owl2-profiles/#OWL_2_QL](https://www.w3.org/TR/owl2-profiles/#OWL_2_QL)
Footnote 3: [https://www.w3.org/TR/owl2-profiles/#OWL_2_EL](https://www.w3.org/TR/owl2-profiles/#OWL_2_EL)
Footnote 4: [https://www.w3.org/TR/owl2-profiles/#OWL_2_RL](https://www.w3.org/TR/owl2-profiles/#OWL_2_RL)
Footnote 5: [https://owlcs.github.io/owlapi/](https://owlcs.github.io/owlapi/)
## 3 Related work
Since the publication of OWL in 2001, there have been many benchmarks and surveys comparing and evaluating OWL reasoners. In the following only the most recent and relevant ones are presented.
The most recent and relevant publication [30] is from 2023. The authors evaluated the performance of six prominent OWL 2 DL compliant reasoners (such as Pellet, FaCT++ and Hermit) on various reasoning tasks. One of their findings was that many projects are no longer actively maintained. This supports my results and observations, even though their metrics differ from the ones used in this paper (they used a wider range for activity: last 10 years).
The website [http://owl.cs.manchester.ac.uk/tools/list-of-reasoners/](http://owl.cs.manchester.ac.uk/tools/list-of-reasoners/) is often cited in publications. It contains a list of 39 reasoners, each entry having a summary and some metadata such as supported interfaces. Last update was in June 19, 2018. Authors behind it are Uli Sattler and Nico Matentzoglu. They link to [https://www.w3.org/2001/sw/wiki/OWL/Implementations](https://www.w3.org/2001/sw/wiki/OWL/Implementations), an overview of OWL implementations such as reasoners, editors and APIs. It contains fewer entries, but still has some interesting details per entry. It was last updated on June 9, 2020.
In [28] (2017) the authors report on a survey of many Semantic Web tools and services, including a comparison of 23 OWL reasoners in terms of their features. All reasoners were reported as usable, but unfortunately there was no information about maintenance status of OWL reasoners.
The authors of [57] performed a reasoner evaluation in 2015 using 13 reasoners (such as FaCT++, HermiT and jcel). They reported that all the reasoners were usable, although they performed very differently in the competition (e.g. performing DL consistency checking or DL classification). Also in 2015 a survey was conducted using 35 OWL reasoners [37] by the same authors who also created [http://owl.cs.manchester.ac.uk/tools/list-of-reasoners/](http://owl.cs.manchester.ac.uk/tools/list-of-reasoners/). In the survey they asked developers about functionality, language used and for feedback for recommended usage. They report that eight reasoners are barely maintained and CEL and HermiT are not maintained at all.
It is important to note that no survey or benchmark was found that compares more than 35 reasoners. They either focus on the most prominent reasoners or only use reasoners for a certain OWL profile. Some surveys also include other systems, such as editors or IDEs, which are out of the scope of this work.
## 4 Methodology
The motivation behind this survey was to get an objective view of available OWL reasoners. For a comprehensive overview information such as project status and usability is important. This information needs to be researched, ordered and assessed for each OWL reasoner.
The following list contains major research questions:
1. Which OWL reasoners are available?
2. Which are still usable?
3. Which are still actively maintained?
I have chosen these research questions in order to get as neutral an overview as possible; they provide a basis for further investigation. The second question refers to usable tools, which is crucial because if a piece of software can no longer be started, it is useless. In the following I specify the terms OWL reasoner, usable and actively maintained.
### Important terms
#### 4.1.1 OWL Reasoner
An OWL reasoner (or semantic reasoner) is basically an inference machine that infers logical consequences from a given set of axioms (RDFS/OWL data). However, my analysis has shown that OWL reasoners can have a wide range of features, for example checking the consistency of an ontology or checking whether a given set of rules applies. In this survey, the feature scope doesn't matter as long as the software provides OWL reasoning in some way.
An OWL reasoner was included in the analysis if it is an open source project or is provided free of charge. Furthermore, only software released after the public announcement of OWL 1 in 20046 was considered.
Footnote 6: According to [https://www.w3.org/TR/owl-features/](https://www.w3.org/TR/owl-features/)
#### 4.1.2 Usable
An OWL reasoner (or system using a foreign OWL reasoner) is considered usable, if it meets the following criteria:
1. The OWL reasoner can be started successfully on Ubuntu 20.04 (64-bit, X11), or using PlayOnLinux7 + Wine8 if the software requires a Windows environment. My local machine hardware details are: IBM ThinkCentre with 8 GB RAM, Intel Core i5-7400T and a solid state disk.
2. I did not test software on my local machine if I was not familiar with its development environment. Systems such as Haskell or Prolog require a certain software stack that can only be set up properly with prior knowledge. This is also the case if a project was only available as a Java API or Protege plugin. No custom code was written and executed to test the reasoner. In all these cases I have instead examined the project documentation and projects/publications that use a particular reasoner to determine whether it is usable or not.
In some cases it was not possible to clarify beyond doubt whether an OWL reasoner was usable. Relevant information has been added and it is up to the reader to conduct further analysis. This approach was taken to avoid misunderstandings and false claims.
#### 4.1.3 Actively maintained
Active maintenance is an important criterion for people looking for software for a specific use case. However, it is difficult to objectively assess whether a piece of software is still being actively maintained or not. The following aspects have been chosen because they are objective and rooted in software engineering.
1. An open source OWL reasoner is considered to be actively maintained, if the last release/code commit is not older than 3 years or there are other activities by the developers/maintainers, such as responses to bug reports in the same time period.
2. A closed source OWL reasoner is considered to be actively maintained, if the developer has provided software at the time of the survey.
Why 3 years? In my experience open source projects can be inactive for 2-3 years, but still receive important updates from time to time. 3 years seemed to be reasonable enough for this type of survey.
### Literature- and Internet research
OWL reasoners have their roots in the scientific community. Therefore, literature research was the starting point. I used Google Scholar9 to find relevant publications. Initially, I searched for "OWL Reasoner", but refined my search by using specific project names with the term reasoner to avoid ambiguous results. Only publicly available publications were considered, which means that if a publication was behind a paywall, it was ignored. A paywalled publication may contain valuable information, but I found it unreasonable to pay 30 EUR or more just to gain access.
Footnote 9: [https://scholar.google.com/](https://scholar.google.com/)
An OWL reasoner was selected for further research, if there was a scientific publication or technical report about it, or it had a dedicated project repository. Small demos were ignored, because their usability in the long term was questionable. Systems that provide reasoning services using a third-party OWL reasoner were managed separately. They were included because they provide a benefit to users for certain applications, such as selecting an appropriate reasoner for a given ontology (meta reasoner).
All OWL reasoners were collected in a CSV file, supplemented by meta data such as usability, maintenance status or project website. This list provided a stable overview of available OWL reasoners and also helped to find relations between some projects. At [https://github.com/k00ni/owl-reasoner-list](https://github.com/k00ni/owl-reasoner-list) you can get all raw data, such as the mentioned CSV file, which was created during this analysis.
In addition to the literature search, a separate internet search was carried out for each OWL reasoner using Google Search. Project websites, source code repositories and other relevant information were of interest. One note: Some projects were mentioned in an article and only have a project page, but there was no dedicated article about them.
Github10 is one of the main platforms for finding open source projects11. Therefore a search using reasoner name (and sometimes the term "reasoner" too) was conducted. The same applies for projects on other platforms such as Bitbucket or Sourceforge.
Footnote 10: [https://www.github.com](https://www.github.com)
Footnote 11: [https://www.linuxfoundation.org/blog/hosting-open-source-projects-on-github-nine-things-you-need-to-know](https://www.linuxfoundation.org/blog/hosting-open-source-projects-on-github-nine-things-you-need-to-know)
### Software review and analysis
Each OWL reasoner was examined. First, executable binaries, if available, were downloaded and tested on my local machine running Ubuntu 20.04. After downloading a binary, I tried to find information about installation/setup. Based on the available information and my personal knowledge the binary was installed/setup and executed.
If the software started and showed a CLI or user interface the test was complete. If a library (e.g. as Jar-file12) was provided by the project, I tried to determine usability by searching the project files and the web. For example, if a library was provided and the project used a continuous integration pipeline that did not report any errors during the last run, it can be assumed that the library still works.
Footnote 12: [https://en.wikipedia.org/wiki/Jar_](https://en.wikipedia.org/wiki/Jar_)(file_format)
If no usable files could be found, no further investigation was conducted. No custom code was developed to test a reasoner.
## 5 Results
The structure of this section is as follows. First, a short overview about the most important statistical figures related to OWL reasoners (Table 1) and systems using a foreign OWL reasoner (Table 2) is given.
This is followed by a list of available OWL reasoners. Each block consists of a summary, download and repository part as well as an installation and usage part. This is followed by a list of systems which use a foreign OWL reasoner. Finally, a list of (probably) unusable OWL reasoners is given. Table 3 in the appendix shows a list of all usable and maintained reasoners. Table 4 contains a list of all analyzed OWL reasoners. A more detailed overview can be found in the mentioned Github repository (look for the files: "reasoner.csv" and "system-using-foreign-reasoner.csv").
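As a rough sketch of how the raw research data could be consumed, the snippet below loads the reasoner list with pandas; the column names used here ("usable", "maintained") are assumptions about the CSV header, so inspect the actual schema in the repository first.

```python
# Rough sketch for working with the raw research data from the Github repository.
import pandas as pd

reasoners = pd.read_csv("reasoner.csv")
print(reasoners.columns.tolist())  # inspect the real schema before relying on column names

# Example filter along the lines of the survey's criteria (assumed column names):
maintained = reasoners[reasoners["maintained"].astype(str).str.lower() == "yes"]
usable = reasoners[reasoners["usable"].astype(str).str.lower() == "yes"]
print(f"{len(usable)} usable and {len(maintained)} maintained of {len(reasoners)} reasoners")
```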
### Usable OWL Reasoners
Below is a list of all usable OWL reasoners.
#### 5.1.1 AllegroGraph
**Summary:** AllegroGraph13 is an RDF graph database which provides data storage and other services. One of these is an RDFS++ reasoner, which supports all RDFS predicates and also some OWL predicates14. Source code is not available. The most relevant publication is from Diogo Fernandes and Jorge Bernardino [18]. The company behind AllegroGraph has published many white papers, but no dedicated publication about their RDFS++ Reasoner15 was found.
Footnote 13: [https://alleograph.com/products/allegrograph/](https://alleograph.com/products/allegrograph/)
Footnote 14: [https://franz.com/agraph/support/documentation/current/agraph-introduction.html#reasoning-intro](https://franz.com/agraph/support/documentation/current/agraph-introduction.html#reasoning-intro)
**Download and Repository:** AllegroGraph version 7.3.1 can be downloaded for different Linux systems (Ubuntu, RHEL, ...)16. You can also use a Docker container to run it. There was no binary available for Windows. The AllegroGraph client source code is available on Github in several languages17.
Footnote 17: [https://github.com/franzinc?tab=repositories](https://github.com/franzinc?tab=repositories)
**Installation and Usage:** Installation introductions are available on the download page. Usage information can be found at the following link18.
Footnote 18: [https://franz.com/agraph/support/documentation/current/reasoner-tutorial.html](https://franz.com/agraph/support/documentation/current/reasoner-tutorial.html)
#### 5.1.2 Arachne
**Summary:** Arachne [4] is an RDF rule engine written in Scala with support for efficient reasoning over large OWL RL terminologies. It implements the Rete/UL algorithm of Robert B. Doorenbos [15].
**Download and Repository:** The source code is available on Github19. Latest release 1.2.1 and is from January 14, 2020. The release file contains a couple of Jar files as well as a bat-file for Windows and a binary for Linux systems.
Footnote 19: [https://franz.com/agraph/downloads/](https://franz.com/agraph/downloads/)
**Installation and Usage:** Initially there was little information available about installation and usage. I created an issue to ask for help and get more information20. The author responded very quickly and added sufficient information in the usage section of the README file. There is also a Protege plugin available21, but it seems to be still unstable22.
Footnote 19: [https://github.com/balhoff/arachne/issues/146](https://github.com/balhoff/arachne/issues/146)
Footnote 21: [https://github.com/balhoff/arachne-protege](https://github.com/balhoff/arachne-protege)
Footnote 22: [https://github.com/balhoff/arachne/issues/5](https://github.com/balhoff/arachne/issues/5)
#### 5.1.3 BaseVISor
**Summary:** BaseVISor [38] is a forward-chaining inference machine written in Java and optimized for ontological and rule-based reasoning.
**Download and Repository:** BaseVISor can be downloaded from the following project page23. The current license allows use free-of-charge for academic purposes only. For all other purposes a commercial license is to be acquired. To start the download you need to enter your name and email. You will receive a license key later on, which is required to use the software.
Footnote 23: [https://vistology.com/products/basevisor/](https://vistology.com/products/basevisor/)
**Installation and Usage:** The tested version was 2.0.2 and was downloaded as ZIP archive. The archive contains some script and jar files as well as API documentation and usage information. To see if it is usable, execute test_drive.bat (Windows) or test_drive.sh (Linux) in the terminal (requires Java 1.5). A local test showed that BaseVISor is usable on Ubuntu 20.04, using OpenJDK 11.0.8.
#### 5.1.4 Born
**Summary:** BORN [11] is a Bayesian ontology reasoner and written in Java. Bayesian ontology language is a family of probabilistic ontology languages that allow modelling of probabilistic information about axioms in an ontology. BORN is able to work with the Bayesian ontology language BEL.
**Download and Repository:** The source code of BORN is available on Github24. It is provided as stand-alone software and Protege plugin. Version 0.3.0 has been reviewed and was released in April 201725. There are development activities from time to time. Latest commit was January 202226.
Footnote 24: [https://github.com/julianmendez/born](https://github.com/julianmendez/born)
**Installation and Usage:** The Protege plugin must be installed manually27. To use the standalone version, execute born.jar in the terminal. The file is located in born-standalone/target directory (see zip archive). Further usage information can be found in the Github repository as well as on a separate project page28.
Footnote 28: [https://sourceforge.net/projects/latitude/files/cel/](https://sourceforge.net/projects/latitude/files/cel/)
Footnote 29: [https://github.com/julianmendez/cel/commit/36b3e62bd26278d79874d0fe7ee49a6f0ff2bcc](https://github.com/julianmendez/cel/commit/36b3e62bd26278d79874d0fe7ee49a6f0ff2bcc)
Footnote 30: [https://github.com/julianmendez/cel/downloadloading-cel](https://github.com/julianmendez/cel/downloadloading-cel)
#### 5.1.5 Cel
**Summary:** CEL [42] is a lightweight Description Logic reasoner for large-scale biomedical ontologies in OWL 2 EL. It is written in Java and its source code is released as open source.
**Download and Repository:** The latest version 0.6.0 can be downloaded from Sourceforge29 and Github30. There is also a Protege plugin available, which can be downloaded from Sourceforge31. There are sporadic development activities in the Github repository, latest commit was in January 20223.
Footnote 31: [https://github.com/julianmendez/cel/commit/36b3e62bd26278d79874d0fe7ee49a6f0ff2bcc](https://github.com/julianmendez/cel/commit/36b3e62bd26278d79874d0fe7ee49a6f0ff2bcc)
**Installation and Usage:** CEL has been developed to run only on 32-bit Linux systems according to the documentation33, but the source code can be compiled for other systems if needed. There is a short usage section in the README file34. A manual with further information is available on Sourceforge35. There is also a Protege plugin available.
Footnote 33: [https://github.com/julianmendez/cel/commit/36b3e62bd26278d79874d0fe7ee49a6f0ff2bcc](https://github.com/julianmendez/cel/commit/36b3e62bd26278d79874d0fe7ee49a6f0ff2bcc)
#### 5.1.6 Clipper
**Summary:** Clipper is a conjunctive query rewriting/answering engine for Horn-SHIQ ontologies. There was no primary publication available, but it is mentioned in a few publications, such as [9] and [67].
**Download and Repository:** It is written in Java and the source code can be found on Github36. Further information can be found in the repository, but most of it is outdated37.
Footnote 36: [https://github.com/julianmendez/cel/commit/36b3e62bd26278d79874d0fe7ee49a6f0ff2bcc](https://github.com/julianmendez/cel/commit/36b3e62bd26278d79874d0fe7ee49a6f0ff2bcc)
**Installation and Usage:** The Maven build was tested successfully and the Clipper CLI was shown on the terminal. No further test has been conducted. Some usage information can be found in the README file38. Be aware that a program called DLV seems to be required, but its project page is no longer accessible39.
Footnote 38: [https://github.com/gknica/clipper/download.html](https://github.com/gknica/clipper/download.html)
Footnote 39: [http://www.dlysystem.com/dlvsystem/index.php/DLV](http://www.dlysystem.com/dlvsystem/index.php/DLV)
#### 5.1.7 ElepHant
**Summary:** ElepHant [65] is a consequence-based reasoner prototype for EL+ fragment of Description Logics. It is the successor of the previously developed reasoner called CHEETAH [66]. The motivation behind the development of ElepHant was to improve performance and to allow usage on embedded systems with limited memory and CPUs.
**Download and Repository:** It is written in C and the source code can be found on Github40. A binary for 64-bit Linux systems is available for download there and needs to be executed on the terminal to show a CLI. The latest commit was in 2021, but since the latest release (0.5.9) not much has changed41. It seems the project receives a few updates from time to time, but is not being developed any further.
Footnote 40: [https://github.com/serktaya/elephant-reasoner/compare/v.0.5.9...master](https://github.com/serktaya/elephant-reasoner/compare/v.0.5.9...master)
Footnote 41: [https://github.com/serktaya/elephant-reasoner/compare/v.0.5.9...master](https://github.com/serktaya/elephant-reasoner/compare/v.0.5.9...master)
**Installation and Usage:** The binary42 has been tested successfully on Ubuntu Linux 20.04. Some usage information is available in the README file43.
Footnote 42: [https://github.com/serktaya/elephant-reasoner/releases/tag/v.0.5.9](https://github.com/serktaya/elephant-reasoner/releases/tag/v.0.5.9)
Footnote 43: [https://github.com/serktaya/elephant-reasoner/blob/master/README](https://github.com/serktaya/elephant-reasoner/blob/master/README)
Footnote 44: [https://github.com/leonstoolges/elk-reasoner/wiki](https://github.com/leonstoolges/elk-reasoner/wiki)
Footnote 45: [https://github.com/inventionologies/elk-reasoner/releases](https://github.com/inventionologies/elk-reasoner/releases)
#### 5.1.8 Elk
**Summary:** ELK [27] is an OWL 2 EL reasoner written in Java. OWL 2 EL is a subset of OWL 2 that is recommended when defining a large number of classes/properties. According to the developers, ELK provides better performance compared to other reasoners because it utilizes parallelization.
**Download and Repository:** The documentation of ELK is very extensive compared to other reasoner projects. See the project wiki on Github for installation and usage information44. ELK version 0.4.345 can be downloaded as binary or pure source code. Version 0.5.0 is available as a Protege plugin46.
Footnote 44: [https://oss.sonatype.org/service/local/artifact/maven/content?r=snapshots&g=org.semanticweb.elk&a=elk-distribution-protege&=zip&v=LATEST](https://oss.sonatype.org/service/local/artifact/maven/content?r=snapshots&g=org.semanticweb.elk&a=elk-distribution-protege&=zip&v=LATEST)
**Installation and Usage:** The Protege plugin is one of the reasoners provided with Protege by default. In Protege 5.6.1 you can test ELK in versions 0.4.3 and 0.5.0. ELK can also be used via the OWL API and through a command line client47. Each package appears to provide different functionality.
Footnote 47: [https://github.com/viewcasesoner/eye/default](https://github.com/viewcasesoner/eye/default)
#### 5.1.11 FaCT++
**Summary:** FaCT++[85] is the successor to the FaCT (Fast Classification of Terminologies) reasoner. It implements a tableaux decision procedure for the well known SHOIQ Description Logic, with additional support for data types, including strings and integers according to the developers [85].
**Download and Repository:** The latest version 1.6.5 is available for download from the Bitbucket repository57. The last commit was in December 201758.
Footnote 57: [https://bitbucket.org/dtsarkov/factplusplus/downloads/](https://bitbucket.org/dtsarkov/factplusplus/downloads/)
**Installation and Usage:** FaCT++ is part of the standard Protege 5.6.1 installation, but is also available for download59. In some cases the plugin installation fails on Windows, but there is a workaround available60. It is considered usable because of its availability in the latest Protege 5.6.1 and because it is used successfully in [30].
Footnote 58: [https://bitbucket.org/dtsarkov/factplusplus/commits/650a50ce78a2fb57fd609a2ee82b72b6e25f4ee](https://bitbucket.org/dtsarkov/factplusplus/commits/650a50ce78a2fb57fd609a2ee82b72b6e25f4ee)
Footnote 59: [https://bitbucket.org/dtsarkov/factplusplus/commits/650a50ce78a2fb57fd609a2ee82b72b6e25f4ee](https://bitbucket.org/dtsarkov/factplusplus/commits/650a50ce78a2fb57fd609a2ee82b72b6e25f4ee)
Footnote 60: [https://bitbucket.org/dtsarkov/factplusplus/](https://bitbucket.org/dtsarkov/factplusplus/)
Footnote 61: [https://github.com/emo-protege/BMM0/blob/74ab82e5a10c5362f4c407b5c38192eb3013a37f/doc/installing_factplusplus.md](https://github.com/emo-protege/BMM0/blob/74ab82e5a10c5362f4c407b5c38192eb3013a37f/doc/installing_factplusplus.md)
Footnote 62: [https://www.umbertostraccia.it/cs/software/fuzzyDL/fuzzyDL.html](https://www.umbertostraccia.it/cs/software/fuzzyDL/fuzzyDL.html)
#### 5.1.12 fuzzyDL
**Summary:** fuzzyDL [8] is a Description Logic reasoner with support for fuzzy logic and fuzzy rough set reasoning. A quote from the project page: "_It provides a reasoner for fuzzy SHIF with concrete fuzzy concepts (ALC extended with transitive roles, a role hierarchy, inverse, reflexive, symmetric roles, functional roles, and explicit definition of fuzzy sets)_"61. Although, it uses a reasoner, it seems that the system itself is a reasoner.
**Download and Repository:** There was no source code repository available, but there are two websites with further information. The main page contains information about the software itself, but also links to downloads, latest changes etc62. The other website provides information about the Protege plugin63. Currently version 2.3 is available for download (January 9, 2019).
Footnote 62: [https://www.umbertostraccia.it/cs/software/fuzzyDL/fuzzyDL.html](https://www.umbertostraccia.it/cs/software/fuzzyDL/fuzzyDL.html)
**Installation and Usage:** The plugin is only available for the outdated Protege 4.364. Use the standard procedure to install the plugin. No further installation or usage information could be found. I tried to test the plugin using Protege 4.3, but Protege didn't start properly65.
Footnote 63: [https://www.umbertostraccia.it/cs/software/fuzzyDML/index.html](https://www.umbertostraccia.it/cs/software/fuzzyDML/index.html)
#### 5.1.13 HermiT
**Summary:** HermiT [20] is an OWL 2 reasoner, written in Java. It uses hypertableau calculus and provides support for entailment checking as well as reasoning services such as class and property classification or answering SPARQL queries.
**Download and Repository:** The former project website is no longer available66. Several repositories were found on Github, but it seems that [https://github.com/phillord/hermit-reasoner](https://github.com/phillord/hermit-reasoner) contains the source code of the latest version (1.3.8). Latest commit is over 6 years old67. Repository has been abandoned68. There are other forks, but they seem to contain few, if any adaptions. There was no binary available on Github, but [https://Jar-download.com/?search_box=HermiT](https://Jar-download.com/?search_box=HermiT) provides an unofficial one. HermiT is also available as a Protege plugin69.
Footnote 63: [https://www.umbertostraccia.it/cs/software/fuzzyDML/index.html](https://www.umbertostraccia.it/cs/software/fuzzyDML/index.html)
Footnote 64: [https://www.umbertostraccia.it/cs/software/fuzzyDL/download.html](https://www.umbertostraccia.it/cs/software/fuzzyDL/download.html)
Footnote 65: Running ”/run.sh” in the terminal showed the following errors: org.protege.common.jar (org.osg.framework.BundleException: Unresolved constraint in bundle org.protege.common [1]: Unable to resolve 1.0: missing requirement [1.0] oogi.wiring.package; (&(osgi.wiring.package=org.w3c.dom)(version?=0.0.0))) r.o.osgi.framework.BundleException: Unresolved constraint in bundle org.protege.common [1]: Unable to resolve 1.0: missing requirement [1.0] oogi.wiring.package; (&(osgi.wiring.package=org.w3c.dom)(version?=0.0.0)) [..]
Footnote 66: [http://www.hermit-reasoner.com/](http://www.hermit-reasoner.com/)
Footnote 67: [https://github.com/phillord/hermit-reasoner/commit/37ec30aced32ac81ebecec5e33fdad255ddefcb4c3](https://github.com/phillord/hermit-reasoner/commit/37ec30aced32ac81ebecec5e33fdad255ddefcb4c3)
Footnote 68: [https://github.com/phillord/hermit-reasoner/pull/3#issuscomment-1639673663](https://github.com/phillord/hermit-reasoner/pull/3#issuscomment-1639673663)
Footnote 69: [https://protegewiki.stanford.edu/wiki/HermiT](https://protegewiki.stanford.edu/wiki/HermiT)
**Installation and Usage:** HermiT 1.4.3.456 is part of the standard Protege 5.6.1 package. An interesting observation: the Protege plugin website only contains information for version 1.3.8 and earlier70. No installation or usage documentation was available. HermiT is considered usable because it is available in the latest Protege 5.6.1 and it is used successfully in [30].
Footnote 70: [https://protegewiki.stanford.edu/wiki/HermiT](https://protegewiki.stanford.edu/wiki/HermiT)
#### 5.1.14 jcel
**Summary:** jcel [41] is a Description Logic EL+ reasoner that implements a subset of OWL 2 EL. It is written in Java and uses a rule-based completion algorithm internally. It can be used as a Java library or as a Protege plugin. Julian Mendez developed jcel and was also part of the BORN reasoner development team.
**Download and Repository:** The source code is available on Github71. A zip archive containing the relevant files is available for download on Sourceforge72. The Protege plugin can be downloaded from the Github repository73. The latest version 0.24.1 was in 2016, but latest commit was in 202274.
Footnote 71: [https://github.com/julianmendez/jcel/](https://github.com/julianmendez/jcel/)
Footnote 72: [https://github.com/julianmendez/jcel/0.24.1/zip/jcel-0.24.1.zip/download](https://github.com/julianmendez/jcel/0.24.1/zip/jcel-0.24.1.zip/download)
**Installation and Usage:** The jcel plugin is part of the standard Protege 5.6.1 package. There is no dedicated page for the plugin. It is also available as a Java library, but it has to be built with Maven before it can be used. Good documentation is available75. The library was built locally and running "jcel-standalone/target/jcel.Jar" in the terminal produced a CLI.
Footnote 73: [https://github.com/julianmendez/jcel/](https://github.com/julianmendez/jcel/)
#### 5.1.15 JFact
**Summary:** JFact76 is a Java port of FaCT++, an OWL DL reasoner for OWL API 3.x and 4.x. There is no dedicated publication for JFact available, but it is at least mentioned in [47].
Footnote 76: [https://github.com/julianmendez/jcel/](https://github.com/julianmendez/jcel/)
**Download and Repository:** The project website contains some information and also points to a Github repository77. On Sourceforge several versions (Jar file and Protege plugin) until version 4.0.0 are available for download78. On Github the source code until version 4.0.2 is available to download79.
Footnote 78: [https://github.com/owles/jfact/](https://github.com/owles/jfact/)
Footnote 79: [https://sourceforge.net/projects/jcelt/files/jcel/0.24.1/zip/jcel-0.24.1.zip/download](https://sourceforge.net/projects/jcelt/files/jcel/0.24.1/zip/jcel-0.24.1.zip/download)
**Installation and Usage:** Protege plugin 4.0.4 has been reported to work at least partially with Protege 5.280. It is not clear whether these problems have been resolved. JFact seems to work with (obsolete) Protege 4.x just fine. JFact can also be used in a Java project.
Footnote 8: [https://github.com/julianmendez/jcel/releases](https://github.com/julianmendez/jcel/releases)
#### 5.1.16 Kaon2
**Summary:** KAON2[46] provides reasoning (e.g. satisfiability decision, subsumption hierarchy computation) for SHIQ knowledge bases. It is free of charge for academic purposes only, otherwise a commercial license is required. Ontoprise GmbH, the company behind the software, filed for bankruptcy in 201281.
Footnote 8: [https://github.com/wolcs/jfact/issues/18](https://github.com/wolcs/jfact/issues/18)
**Download and Repository:** The project website provides extensive information about KAON282. The latest downloadable version is from 200883. According to the feedback from Boris Motik (via email), the project is no longer being developed.
Footnote 8: [https://github.com/julianmendez/jcel/commit/cdfb5f77312f84a6b81531d7be9974783756ff12](https://github.com/julianmendez/jcel/commit/cdfb5f77312f84a6b81531d7be9974783756ff12)
**Installation and Usage:** KAON2 was developed for Java 1.5, which was released in 2004, but it can still be run on Ubuntu 20.04 using OpenJDK 11. Executing the jar file in the terminal produces a user interface. No further testing has been done.
#### 5.1.17 Konclude
**Summary:** Konclude [77] is a SROIQV(D) Description Logic reasoner. According to the authors it is optimized for high throughput. It is written in C++ and uses the Qt framework84.
Footnote 84: [https://contribute.qt-project.org/](https://contribute.qt-project.org/)
**Download and Repository:** The source code is publicly available and can be downloaded from the project website85 and on Github86. The latest version 0.7.0 is from 2021, so the project is considered actively maintained.
Footnote 85: [https://www.derivo.de/en/products/konclude/download/](https://www.derivo.de/en/products/konclude/download/)
**Installation and Usage:** There are binaries for Windows, Linux and Mac OS systems that can be used without installation. A Docker container setup is also available. A local test has been conducted using the Linux binary87. To run it, the archive was extracted and the file "Konklude.sh" executed on the terminal. It spawns a CLI with further usage information. Further usage information can be found in the README file88.
Footnote 86: [https://github.com/konclude/Konclude/releases](https://github.com/konclude/Konclude/releases)
Footnote 87: Kconclude-v0.7.0-1138-Linux-x64-GCC-Static-Qt5.12.10.zip
Footnote 88: [https://github.com/konclude/Konclude/tree/master#usage](https://github.com/konclude/Konclude/tree/master#usage)
Footnote 89: [https://github.com/lift-reasoner/lift#description](https://github.com/lift-reasoner/lift#description)
#### 5.1.18 LiFR
**Summary:** LiFR [88] stands for "Lightweight Fuzzy semantic Reasoner" and is a fuzzy Description Logic reasoner written in Java. It is an extension of the Pocket KRHyper reasoner. They have extended the Description Logic interface to support additional semantics and the transformation of fuzzy operators into the native first-order clause implementation89. The software provides inference services such as consistency checking and fuzzy entailment.
Footnote 89: [https://github.com/lift-reasoner/lift#description](https://github.com/lift-reasoner/lift#description)
**Download and Repository:** The source code is available on Github90. There are currently no binaries available. Based on the feedback from Dorothea Tsatsou (project owner) the repository is monitored for new issues and is still maintained. Latest commit was in 2023.
Footnote 91: [https://github.com/lift-reasoner/lift/issues/2](https://github.com/lift-reasoner/lift/issues/2)
**Installation and Usage:** An issue was created to ask for help, because the documentation provided was insufficient91. Dorothea Tsatsou replied very quickly and provided helpful answers. LiFR can only be used in a Java project, there is no binary or Jar file available. Later on she added further information, such as in the usage section92. LiFR is considered usable based on the provided information in the Github repository.
Footnote 92: [https://github.com/lift-reasoner/lift#usage](https://github.com/lift-reasoner/lift#usage)
#### 5.1.19 LiRoT
**Summary:** LiRoT [6] is a reasoner that provides reasoning for a subset of OWL 2 RL and RDF-S entailment list. It is written in C and uses the RETE algorithm. It has been optimized to use limited memory and CPUs more efficiently, making it suitable for embedded systems, such as Arduino boards. The project is just over one year old (started in 2022), which is very young in comparison to other reasoners.
**Download and Repository:** Source code is available under the terms of the CeCILL-C license93 on a Github repository94. No binaries were found.
Footnote 92: [https://github.com/lift-reasoner/lift#usage](https://github.com/lift-reasoner/lift#usage)
**Installation and Usage:** To use, first check out the Git repository and then start compiling. Afterwards, LiRot can be used inside a C project. A demo is also provided, which was successfully run on Ubuntu 20.04. See the README file for further usage information95. LiRot runs on Linux and Arduino systems, according to the documentation.
Footnote 92: [https://github.com/lift-reasoner/lift#usage](https://github.com/lift-reasoner/lift#usage)
#### 5.1.20 Nora
NORA is a scalable OWL Reasoner, which is based on NoSQL databases and Apache Spark, according to the developer96. No publications could be found; however, the following reasons speak for a mention: First of all, it is very young, the latest commit being from January 9, 202397. A cursory look at the source code leads me to believe that it provides a relevant OWL reasoner. Also, the developer works at University Malaga98 and researches Knowledge Graphs, large-scale data processing and Big Data management. No local testing of my own was conducted, as a working Apache Spark cluster is required, which in turn depends on a specific use case and data.
Footnote 97: [https://github.com/benhid/nora/commit/8d591656cb38068422fd0889d6fb8d7ca4835f9f](https://github.com/benhid/nora/commit/8d591656cb38068422fd0889d6fb8d7ca4835f9f)
Footnote 98: [https://itis.uma.es/en/personal/antonio-benitez-hidalgo-2/](https://itis.uma.es/en/personal/antonio-benitez-hidalgo-2/)
Footnote 99: [https://ontop-vkg.org/](https://ontop-vkg.org/)
Footnote 100: [https://github.com/ontop/ontop](https://github.com/ontop/ontop)
Footnote 101: [https://ontop-vkg.org/guide/getting-started.html](https://ontop-vkg.org/guide/getting-started.html)
Footnote 102: [https://ontop-vkg.org/guide/cli.html](https://ontop-vkg.org/guide/cli.html)
Footnote 103: [https://ontop-vkg.org/tutorial/enpoint/](https://ontop-vkg.org/tutorial/enpoint/)
Footnote 104: [https://github.com/Galigator/openlet+s-an-owl-2-dl-reasoner](https://github.com/Galigator/openlet+s-an-owl-2-dl-reasoner)
Footnote 105: [https://github.com/Galigator/openllet](https://github.com/Galigator/openllet)
Footnote 106: [https://github.com/Galigator/openllet/releases](https://github.com/Galigator/openllet/releases)
Footnote 107: [https://github.com/Galigator/openllet/commit/3abcbcf0eece54233590cd4149055b78351e37dd](https://github.com/Galigator/openllet/commit/3abcbcf0eece54233590cd4149055b78351e37dd)
Footnote 108: [https://github.com/Galigator/openllet/tree/integration/examples/src/main/java/openllet/examples](https://github.com/Galigator/openllet/tree/integration/examples/src/main/java/openllet/examples)
Footnote 109: [https://mvnrepository.com/artifact/com.github.galigator.openllet/openllet-protege](https://mvnrepository.com/artifact/com.github.galigator.openllet/openllet-protege) and download.com/artifacts/com.github.galigator.openllet/openllet-protege/2.6.5
#### 5.1.22 Openllet
**Summary:** Openllet is an OWL 2 reasoner built on top of Pellet. It is written in Java and provides functionality to check the consistency of ontologies, compute the classification hierarchy, explain inferences, and answer SPARQL queries104. No specific publications about Openllet could be found, but it is mentioned in [69].
**Download and Repository:** Openllet is open source and its source code, as well as pre-built Jar files, can be downloaded from Github105. The latest version 2.6.5 is from September 27, 2019106. The project is considered maintained as it receives updates from time to time (the latest commit was in May 2023107).
**Installation and Usage:** Openllet can be used via a Jar file in your own Java project. Examples are available in the repository108. According to various sources109 there seems to be a Protege plugin, but a usable plugin could not be found.
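To make the Jar-based usage more concrete, the following is a minimal sketch of how Openllet is typically invoked through the OWL API, modelled on the examples in the repository. The file name `example.owl` is a placeholder, and the exact factory class name should be verified against the Openllet version in use.

```java
import java.io.File;

import openllet.owlapi.OpenlletReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.InferenceType;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class OpenlletDemo {
    public static void main(String[] args) throws Exception {
        // Load an ontology from a local file (placeholder name).
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("example.owl"));

        // Create an Openllet reasoner for the ontology via the OWL API factory.
        OWLReasoner reasoner = OpenlletReasonerFactory.getInstance().createReasoner(ontology);

        // Check consistency and precompute the class hierarchy.
        System.out.println("Consistent: " + reasoner.isConsistent());
        reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);
    }
}
```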
#### 5.1.23 OWL-RL
**Summary:** OWL-RL is a reasoner, written in Python, for the OWL 2 RL profile110. It is based on RDFLib111, an RDF library that provides tools and services for working with RDF data (Parser, SPARQL and store implementations).
Footnote 110: [https://www.w3.org/TR/owl2-profiles/#Reasoning_in_OWL_2_RL_and_RDF_Graphs_using_Rules](https://www.w3.org/TR/owl2-profiles/#Reasoning_in_OWL_2_RL_and_RDF_Graphs_using_Rules)
Footnote 111: [https://github.com/RDFLib/rdflib](https://github.com/RDFLib/rdflib)
**Download and Repository:** The project repository is on Github112. The latest version 5.2.3 (from September 13, 2021) can be downloaded there.
Footnote 112: [https://github.com/RDFLib/OWL-RL](https://github.com/RDFLib/OWL-RL)
**Installation and Usage:** No local testing has been done, as you need to include the library in your own Python project. The README file contains some usage information113, but overall documentation is very scarce114. However, the project is considered usable and maintained as the latest commit is only 2 years old and its developers are very active in other RDF projects. Therefore future refinements and developments are very likely.
Footnote 113: [https://github.com/RDFLib/OWL-RL/blob/master/README.rst](https://github.com/RDFLib/OWL-RL/blob/master/README.rst)
Footnote 114: [https://owl-rl.readthedocs.io/en/latest/installation.html](https://owl-rl.readthedocs.io/en/latest/installation.html)
#### 5.1.24 Pellet
**Summary:** Pellet [56] is an OWL 2 DL reasoner and, according to the authors, was the first sound and complete OWL 2 DL reasoner with extensive support for reasoning with individuals, user-defined data types, and debugging support for ontologies [72]. It is written in Java and its source code is publicly available.
**Download and Repository:** Its official website was difficult to find. In publications [56] and [72] the website [http://www.mindswap.org/2003/pellet/](http://www.mindswap.org/2003/pellet/) is mentioned, but it is not available anymore. There are a few repositories on Github which claim to provide the source code of Pellet. The repository [https://github.com/severin-lemaignan/pellet](https://github.com/severin-lemaignan/pellet) seems to contain the original source code. Another repository is maintained by Stardog Union115 and contains a more recent version (v2). In the project's README file there is a note about a version 3 of Pellet116, which is no longer open source. Both repositories are considered no longer maintained: in "severin-lemaignan/pellet" the latest commit was in 2011117, while in "stardog-union/pellet" the latest commit was in 2017118. There are 5 open pull requests119 with no comments120.
Footnote 115: [https://github.com/stardog-union/pellet/blob/master/README.md](https://github.com/stardog-union/pellet/blob/master/README.md)
Footnote 116: [https://github.com/stardog-union/pellet/commit/7710fb0f258c0815df8af0829e1a73d2250](https://github.com/stardog-union/pellet/commit/7710fb0f258c0815df8af0829e1a73d2250)
Footnote 117: [https://github.com/severin-lemaignan/pellet/commit/4c7d16bd181lec04117facd96ed592c6cfa956b](https://github.com/severin-lemaignan/pellet/commit/4c7d16bd181lec04117facd96ed592c6cfa956b)
**Installation and Usage:** The Pellet plugin is part of the standard Protege 5.6.1 package and is still usable. Pellet can also be used via a Jar file within a custom Java project. A local test of the CLI failed because running "pellet.sh" on Ubuntu 20.04 aborted with compilation errors121.
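For reference, a minimal sketch of embedding Pellet in a custom Java project via the OWL API is shown below. It assumes the Pellet 2.x OWL API bindings (package `com.clarkparsia.pellet.owlapiv3`); the package and class names may differ depending on which of the repositories mentioned above is used, and the ontology file name is a placeholder.

```java
import java.io.File;

import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class PelletDemo {
    public static void main(String[] args) throws Exception {
        // Load the ontology to be checked (placeholder file name).
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("example.owl"));

        // PelletReasonerFactory is the OWL API entry point shipped with Pellet 2.x.
        OWLReasoner reasoner = PelletReasonerFactory.getInstance().createReasoner(ontology);
        System.out.println("Consistent: " + reasoner.isConsistent());
    }
}
```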
Footnote 122: [https://docs.oxfordsemantic.tech/reasoning.html](https://docs.oxfordsemantic.tech/reasoning.html)
#### 5.1.25 RDFox
**Summary:** RDFox [51] is an RDF store with support for materialisation-based parallel Datalog reasoning (OWL 2 RL), SWRL reasoning and SPARQL query answering. According to the developers, it can efficiently process billions of triples. They provide a detailed overview of the reasoning capabilities122. A license key is required to use the system. Information about conditions or prices could not be found.
**Download and Repository:** RDFox can be downloaded for Windows, Linux and Mac OS from [https://www.oxfordsemantic.tech/downloads](https://www.oxfordsemantic.tech/downloads). To receive the required license key, submit the form at [https://www.oxfordsemantic.tech/tryrdfoxforfree](https://www.oxfordsemantic.tech/tryrdfoxforfree). I received a 30-day evaluation key by email. A repository containing the source code could not be found.
**Installation and Usage:** RDFox 6.2 was downloaded and extracted as a ZIP archive. It contains a file that spawns a CLI when executed in a terminal. To test its usability I followed the steps in the "Getting Started" section of the documentation123. After starting the store, sample data was uploaded and a test query was successfully submitted. In comparison to other reasoners, RDFox is one of the easiest to set up and use.
#### 5.1.26 RDFSharp.Semantics
**Summary:** RDFSharp.Semantics is an API that provides a SWRL reasoner with forward-chaining inference capabilities. It is an extension of the RDFSharp API124, which provides functions to work with RDF data in general.
Footnote 124: [https://github.com/mdesalvo/RDFSharp](https://github.com/mdesalvo/RDFSharp)
**Download and Repository:** It is written in C#125 and available as open source on Github126. The latest version 3.5.0 can be downloaded from Github and NuGet127.
Footnote 125: [https://en.wikipedia.org/wiki/C_Sharp_](https://en.wikipedia.org/wiki/C_Sharp_)(programming_language)
**Installation and Usage:** RDFSharp.Semantics can only be used inside a custom .NET project. There is a guide with more information about installation and usage128. I have not conducted any further testing due to my lack of knowledge of .NET. However, based on the development activity on Github and recent commits (from 2023), the project is considered usable and maintained.
Footnote 126: [https://github.com/mdesalvo/RDFSharp.Semantics/releases/tag/v3.5.0](https://github.com/mdesalvo/RDFSharp.Semantics/releases/tag/v3.5.0)
Footnote 129: [https://github.com/gtfierro/reasonable](https://github.com/gtfierro/reasonable)
#### 5.1.27 reasonable
**Summary:** reasonable is an OWL 2 RL reasoner, written in Rust and available as open source on Github129. According to the developers, it is much faster than the OWLRL and Allegro reasoners130. There is a section about supported OWL 2 rules in the README file131.
**Download and Repository:** The reasoner can be installed using Cargo (Rust) or pip (Python). Docker containers are also available132. The developer is very active and responds promptly to issues. A static build is also available133.
**Installation and Usage:** Use Cargo or pip to install the reasoner134 on a local machine. See the README file for more information on the Python bindings135. At first, running the binary gave errors, so I created an issue to tell the developer about it136. He was very helpful and provided a fixed static build binary that ran successfully in the terminal (showing a CLI).
Footnote 134: [https://docs.rs/reasonable/latest/reasonable/](https://docs.rs/reasonable/latest/reasonable/)
#### 5.1.28 Sequoia
**Summary:** Sequoia [5] is an OWL 2 DL reasoner, written in Scala and released under the terms of the GPL 3. A Scala environment is required to run Sequoia.
**Download and Repository:** The project repository is available on Github137. The documentation is very sparse and it seems that the whole repository has to be downloaded before any of the Sequoia parts can be used.
**Installation and Usage:** I have not conducted any tests due to my lack of knowledge of Scala. It seems to be maintained and usable, as the latest commit was in 2020 and there are no issues in the bug tracker that indicate installation or usage problems. An issue has been opened to ask the developers to provide help with installation and a small usage example, but there has been no response yet138.
Footnote 138: [https://github.com/andrewdbate/Sequoia/issues/3](https://github.com/andrewdbate/Sequoia/issues/3)
Footnote 132: [https://github.com/gtfierro?tab=packages&repo_name=reasonable](https://github.com/gtfierro?tab=packages&repo_name=reasonable)
Footnote 133: Use file ”reasonable-static” from [https://github.com/gtfierro/reasonable/releases/tag/nightly](https://github.com/gtfierro/reasonable/releases/tag/nightly)
Footnote 135: [https://github.com/gtfierro/reasonable#python](https://github.com/gtfierro/reasonable#python)
Footnote 136: [https://github.com/gtfierro/reasonable/issues/24](https://github.com/gtfierro/reasonable/issues/24)
Footnote 137: [https://github.com/andrewdbate/Sequoia](https://github.com/andrewdbate/Sequoia)
#### 5.1.29 TRILL, \(TRILL^{P}\), TORNADO
**Summary:** TRILL [98] is a Description Logic reasoner, written in Prolog. It supports query answering for SHOIN(D) knowledge bases. \(TRILL^{P}\) and TORNADO are based on the source code of TRILL, but are not considered further here.
**Download and Repository:** The source code can be found on Github139. The latest version 6.0.5 is from November 2022 and can be downloaded there. It is also possible to use a web-based SWI-Prolog environment, which is provided by the linked Docker image140.
Footnote 139: [https://github.com/rzese/trill](https://github.com/rzese/trill)
Footnote 140: [https://github.com/friguzzi/trill-on-swish](https://github.com/friguzzi/trill-on-swish)
**Installation and Usage:** TRILL is a SWI-Prolog pack and therefore requires SWI-Prolog141 to run. There is also a way to try out TRILL in a browser by using SWISH142. The team provides a PDF manual for TRILL on Github143.
Footnote 141: [https://www.swi-prolog.org/](https://www.swi-prolog.org/)
Footnote 142: [https://trill-sw.eu/p/trlt.pl](https://trill-sw.eu/p/trlt.pl)
Footnote 143: [https://github.com/rzese/trill/tree/master/doc](https://github.com/rzese/trill/tree/master/doc)
Footnote 144: [https://github.com/vprover/vampire/releases](https://github.com/vprover/vampire/releases)
#### 5.1.30 Vampire
**Summary:** Vampire [87, 29] is a theorem prover written in C++, but it also supports OWL DL reasoning.
**Download and Repository:** The source code is available on Github144. The latest version 4.7 is dated August 2022 and is available as a Linux binary145. To use it on non-Linux systems, the source code has to be compiled manually146.
Footnote 147: [https://vprover.github.io/usage](https://vprover.github.io/usage).
**Installation and Usage:** The downloaded Linux binary was ready to use. The following page summarizes the first steps of use147.
Footnote 148: [https://github.com/karmaresearch/vlog/wiki](https://github.com/karmaresearch/vlog/wiki)
#### 5.1.31 VLog
**Summary:** VLog [10] is a Datalog engine and rule-based reasoner (e.g., with existential rules). According to the developers148, it is resource-efficient and can handle multiple data sources.
Footnote 149: [https://github.com/karmaresearch/vlog/wiki](https://github.com/karmaresearch/vlog/wiki)
**Download and Repository:** The source code and extensive documentation are available on Github149. The latest version 1.3.7 (from December 2022) can be downloaded there150.
Footnote 150: [https://github.com/karmaresearch/vlog/releases](https://github.com/karmaresearch/vlog/releases)
**Installation and Usage:** One way to run VLog is to compile it yourself; alternatively, a Docker container can be used151. I had to install some packages (cmake, g++, zlib1g-dev) before the compilation worked. After compiling, the file "build/vlog" was available. Executing it in a terminal showed a basic CLI. The wiki contains more information, such as usage examples152.
Footnote 151: [https://github.com/karmaresearch/vlog/wiki](https://github.com/karmaresearch/vlog/wiki)
#### 5.1.32 Whelk
**Summary:** Whelk is an ELK-based reasoner, but written in Scala. It provides a Scala API and an interface for the OWL API. There is an integration with the OBO153 tool Robot available154, as well as a Protege plugin155.
**Download and Repository:** The project repository is located on Github156. The latest version 1.1.2 is from November 2022 and can be downloaded there157. The developer notes that API changes are likely because the project is still under heavy development.
**Installation and Usage:** An issue was created due to problems with installation and first steps158. The developer responded quickly and provided a detailed answer. The reasoner can also be tried in a browser on the website [https://balhoff.github.io/whelk-web/](https://balhoff.github.io/whelk-web/). The provided example was tested successfully.
Footnote 158: [https://github.com/balhoff/whelk/issues/217#issuescomment-1609670170](https://github.com/balhoff/whelk/issues/217#issuescomment-1609670170)
### Systems using a third-party reasoner
The following is a list of all systems that use a third-party standalone reasoner. During the research it was not always clear whether a system is itself a standalone reasoner or not. This distinction matters to some extent because, for instance, one might want to find out the capabilities and functionality of the underlying reasoner. These systems are listed separately to support future research. Only a few of them have been tested in detail.
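Many of the systems listed below embed their third-party reasoner through the OWL API's `OWLReasonerFactory` abstraction, which makes the underlying reasoner swappable. The following sketch illustrates that general pattern using HermiT's factory class; the ontology IRI is a placeholder and the snippet is not taken from any of the systems described here.

```java
import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

public class EmbeddedReasonerDemo {
    public static void main(String[] args) throws Exception {
        // Load an ontology; the IRI below is a placeholder.
        OWLOntology ontology = OWLManager.createOWLOntologyManager()
                .loadOntology(IRI.create("http://example.org/ontology.owl"));

        // Any OWLReasonerFactory implementation can be plugged in here,
        // e.g. HermiT's factory.
        OWLReasonerFactory factory = new ReasonerFactory();
        OWLReasoner reasoner = factory.createReasoner(ontology);

        System.out.println("Consistent: " + reasoner.isConsistent());
    }
}
```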
#### 5.2.1 AberOWL
AberOWL [74] is a framework for ontology-based access to biological data. It consists of an ontology repository and some web services for reasoning. It provides OWL 2 EL reasoning using the ELK reasoner. The project site is [http://aber-owl.net/](http://aber-owl.net/).
#### 5.2.2 Bundle
BUNDLE [60] is written in Java and provides reasoning for probabilistic ontologies using the FaCT++ reasoner (and others). It uses the DISPONTE159 approach, i.e. each axiom is accompanied by a probability number (e.g. 0.6). This number represents the probability that a certain axiom is considered true (epistemic probability). The project page [https://ml.unife.it/bundle/](https://ml.unife.it/bundle/) contains numerous downloads as well as documentation. The latest commits were made in 2023. BUNDLE can be tried out in the browser. The following website contains pre-built examples [https://bundle.ml.unife.it/examples](https://bundle.ml.unife.it/examples). The Bitbucket repository provides information on getting started and usage160.
Footnote 159: [https://ml.unife.it/disponte/](https://ml.unife.it/disponte/)
Footnote 160: [https://bitbucket.org/machinelearninginfe/bundle/src/master/README.md](https://bitbucket.org/machinelearninginfe/bundle/src/master/README.md) and [https://bitbucket.org/machinelearninginfe/bundle/wiki/BUNDLE%203.0.x](https://bitbucket.org/machinelearninginfe/bundle/wiki/BUNDLE%203.0.x)
#### 5.2.3 Chainsaw
Chainsaw [86] is a meta reasoner that uses third-party reasoners such as ELK for a given reasoning task. It was designed to handle large ontologies. Internally, it uses the following approach: for each reasoning query it generates a module of the ontology and selects the most appropriate reasoner to process it. They have a Bitbucket repository161, which contains the source code. The latest commit is from 2022. Downloads can be found on Sourceforge162.
Footnote 161: [https://bitbucket.org/ignazio1977/chainsaw/src/master/](https://bitbucket.org/ignazio1977/chainsaw/src/master/)
#### 5.2.4 ComR
ComR [92] is a reasoner prototype to demonstrate an approach for faster ontology classification. According to the authors, the time savings are up to 90% compared to similar reasoners such as HermiT, FaCT++ or Pellet. To achieve this, they combine an OWL 2 EL reasoner with an OWL 2 reasoner for ontology classification in SROIQ; the non-EL part is handled by the slower OWL 2 reasoner. A project repository or website could not be found, so it is considered no longer maintained.
#### 5.2.5 Coror
COROR [53] stands for "COmposable Rule-entailment Owl Reasoner" and is a reasoner optimized for embedded systems with limited memory and CPUs. It was developed to research composition algorithms for rule-entailment OWL reasoners. A project repository or website could not be found, so it is considered to be no longer maintained.
#### 5.2.6 DLEJena
DLEJena [40] combines Apache Jena's forward-chaining rule engine and the Pellet reasoner to provide OWL 2 RL reasoning. A project repository or website could not be found, so it is considered to be no longer maintained.
#### 5.2.7 DRAOn
DRAOn is written in Java and provides reasoning for networks of ontologies. It supports both the standard Description Logics semantics for non-distributed reasoning and the Integrated Distributed Description Logics (IDDL) semantics for distributed reasoning [31]. The OWL API and the Alignment API 4.0 [12] are used internally. The HermiT reasoner is used to find inconsistencies in ontologies and alignments. The official project page is no longer accessible163, and a source code repository, or something similar, could not be found.
Footnote 163: [http://iddl.gforge.inria.fr/](http://iddl.gforge.inria.fr/)
Footnote 164: [https://code.google.com/archive/p/elog-reasoner/](https://code.google.com/archive/p/elog-reasoner/)
Footnote 165: [http://executor.informatik.uni-mannheim.de/systems/elog/](http://executor.informatik.uni-mannheim.de/systems/elog/)
Footnote 166: [https://code.google.com/archive/p/elog-reasoner/source/default/commits](https://code.google.com/archive/p/elog-reasoner/source/default/commits)
#### 5.2.8 ELOG
ELOG [52] is a prototype of an EL++ log-linear Description Logic reasoner, written in Java. Its source code is publicly available and can be found on Google Code164. ELOG uses foreign reasoners, such as HermiT or Pellet. There are some files available for download; the file "elog.zip" contains the files needed to run ELOG. After extracting the "elog.zip" archive, run "elog.jar" in a terminal to spawn a CLI. You can also use a dedicated web service165 to check out the reasoner. The last commit was in 2014166. There is no indication that the project is still maintained.
Footnote 167: [http://owl.man.ac.uk/hoolet/](http://owl.man.ac.uk/hoolet/)
#### 5.2.9 Hoolet
Hoolet167 provides OWL 2 DL reasoning using the theorem prover Vampire. A dedicated publication could not be found, but a short presentation is available168. It is also mentioned in other publications, for instance [25]. The downloadable file has to be extracted and provides a file called "hooletGUI", which opens a user interface. Based on my findings, I consider the project to be no longer maintained.
Footnote 168: [http://www.daml.org/meetings/2004/05/pi/pdf/Hoolet.pdf](http://www.daml.org/meetings/2004/05/pi/pdf/Hoolet.pdf)
Footnote 169: [https://code.google.com/archive/p/owlreasoner/](https://code.google.com/archive/p/owlreasoner/)
#### 5.2.10 Hydrowl
Hydrowl [79] is a query answering system for OWL 2 DL ontologies, using foreign reasoners, such as HermiT or OWLIM. The project page [http://www.image.ece.ntua.gr/gstoil/hydrowl/](http://www.image.ece.ntua.gr/gstoil/hydrowl/) is no longer available. There is no evidence that the project is still maintained.
#### 5.2.11 HyLAR
HyLAR [81] stands for "Hybrid Location-Agnostic Reasoner" and partially supports OWL 2 RL reasoning. It is written in JavaScript and uses the reasoner of an abandoned project called JSW Toolkit169 [82]. The project page is hosted on Github170 and the latest commit was in 2021. HyLAR is also available as an NPM package171.
Footnote 170: [https://github.com/ucbl/HyLAR-Reasoner](https://github.com/ucbl/HyLAR-Reasoner)
#### 5.2.12 Minerva
Minerva [99, 68] is an ontology storage and inference system, using the Racer and Pellet reasoner. The project is mentioned in some publications, but a project page or source code repository could not be found.
#### 5.2.13 MORe
MORe [2] stands for "Modular OWL Reasoner" and is a system for classifying OWL 2 ontologies. It is written in Java and uses the OWL 2 reasoner HermiT and the OWL 2 EL reasoner ELK. According to one of the issues on Github172, the reasoner seems to be no longer usable. The latest commit was 7 years ago, so the project is considered no longer maintained.
Footnote 172: [https://github.com/anaphylactic/MORe/issues/1](https://github.com/anaphylactic/MORe/issues/1)
#### 5.2.14 NoHR
NoHR [35] stands for "Nova Hybrid Reasoner" and is a system that uses various reasoners (such as HermiT or ELK) for reasoning tasks. It is written in Java and can be used as a Protege plugin or via the Java API. NoHR was included because it provides reasoning for different OWL profiles and supports a rule engine (XSB Prolog173). The project is hosted on Github174, and I was able to successfully load the plugin into Protege, so I assume the project is still usable. The latest commit was in 2019, though.
Footnote 173: [https://xsb.sourceforge.net/](https://xsb.sourceforge.net/)
Footnote 174: [https://github.com/NoHRReasoner/NoHR](https://github.com/NoHRReasoner/NoHR)
#### 5.2.15 PAGOdA
PAGOdA [100] provides conjunctive query answering for OWL 2 ontologies. Internally it uses the Datalog reasoner RDFox and the OWL 2 reasoner HermiT. It can be used either via a Jar file or via the Java API. The source code is hosted on Github175. The latest version 2.1.2 was released in 2015, but the latest commit was in February 2023. I assume the project receives small updates sporadically.
Footnote 175: [https://github.com/KRR-Oxford/PAGOdA](https://github.com/KRR-Oxford/PAGOdA)
#### 5.2.16 PROSE
PROSE [94] is a plugin-based paraconsistent OWL reasoner that uses a third-party reasoner, such as Pellet or HermiT. Paraconsistent OWL ontologies are characterized by the fact that they can contain conflicting axioms. No project page or source code repository could be found.
#### 5.2.17 \(R_{2}O_{2}\)*
\(R_{2}O_{2}\)* [33] is a meta reasoner that selects the most appropriate reasoner, such as ELK or FaCT++, for a given reasoning task. Robustness and efficiency are used as selection criteria, for instance. It uses the OWL API 3.4. The source code and other material are hosted on Github176. I tried out the reasoner, but compilation failed due to unfulfilled dependencies177. The latest commit was in 2019.
Footnote 177: [ERHOR] Failed to execute goal on project r2o2-star: Could not resolve dependencies for project edd.monsa.introfech:r2o2-star:jar.1.0: Failed to collect dependencies at io.wasiluk:weka-xgboost:Jar:0.2.0 -? biz.k11::xgboost-predictor:Jar:0.3.0: Failed to read artifact descriptor for biz.k11::xgboost-predictor:Jar:0.3.0: Could not transfer artifact biz.k11::xgboost-predictor:pom:0.3.0 from/to bintray-komiya-atsushi-maven ([http://dl.bintray.com/komiya-atsushi/maven](http://dl.bintray.com/komiya-atsushi/maven)): Transfer failed for [http://dl.bintray.com/komiya-atsushi/maven/biz/k11/xgboost-predictor/0.3.0/xgboost-predictor-0.3.0.pm](http://dl.bintray.com/komiya-atsushi/maven/biz/k11/xgboost-predictor/0.3.0/xgboost-predictor-0.3.0.pm): Unknown host dlibintray.com:
Footnote 178: [https://github.com/g/hxiao/require](https://github.com/g/hxiao/require)
#### 5.2.18 Requiem
REQUIEM is an OWL 2 QL reasoner developed to demonstrate a specific query rewriting algorithm. It uses the HermiT reasoner for reasoning tasks. The project page [https://www.cs.ox.ac.uk/isg/tools/Requiem/](https://www.cs.ox.ac.uk/isg/tools/Requiem/) mentions [59] as the primary publication, but it does not contain any reference to REQUIEM. REQUIEM was mentioned in passing in [95]. On Github there is one repository containing the source code179 and another repository that provides access to REQUIEM via a Docker container179. The latest commit to the source code repository was in 2013.
Footnote 179: [https://github.com/justin2004/pomify-REQUIEM](https://github.com/justin2004/pomify-REQUIEM)
#### 5.2.19 RuQAR
RuQAR [3] is a query answering and reasoning framework for OWL 2 RL ontologies. It uses the HermiT reasoner for TBox reasoning. RuQAR has been mentioned in passing a few times in some publications. No project page could be found.
#### 5.2.20 Screech
Screech [23] seems to be a prototype for OWL ABox reasoning with approximation using KAON2. This approach is useful in applications where efficient query answering is more important than correctness. The project page [http://logic.aifb.uni-karlsruhe.de/screech](http://logic.aifb.uni-karlsruhe.de/screech) is no longer available.
#### 5.2.21 TrOWL: Quill and REL
TrOWL [54, 83] is a reasoning infrastructure for OWL 2. It uses the Quill reasoner and REL, an OWL 2 EL reasoner that is based on the ELK reasoner. The official links to both reasoners are broken180. The source code of TrOWL is hosted on Github, but without documentation181. Fun fact: the repository links to the page [http://trowl.eu/](http://trowl.eu/), which contains pornographic content!
Footnote 180: [https://www.w3.org/2001/sw/wiki/OWL/Implementations](https://www.w3.org/2001/sw/wiki/OWL/Implementations)
Footnote 181: [https://github.com/TrOWL/core](https://github.com/TrOWL/core)
Footnote 182: [http://owl.cs.manchester.ac.uk/tools/list-of-reasoners/](http://owl.cs.manchester.ac.uk/tools/list-of-reasoners/)
#### 5.2.22 WSClassifier and WSReasoner
WSClassifier [75] is a reasoner for ALCHI(D) ontologies. In the reasoner list182, another reasoner with an almost identical name (WSReasoner [76]) is mentioned. It uses the ConDOR and HermiT reasoners internally. There is a Google Code repository for WSClassifier, but it only provides some information and a handful of download links183. Each download link leads to a Google Drive folder.
Footnote 183: [https://code.google.com/archive/p/wsclassifier/](https://code.google.com/archive/p/wsclassifier/)
### (Probably) unusable OWL reasoners
In this section you will find a list of all OWL reasoners that are considered (probably) not usable. Some of the reasoners have been manually tested.
#### 5.3.1 Bossam
BOSSAM [25] is a RETE-based inference machine that supports reasoning for OWL and SWRL ontologies, and RuleML rules. In comparison to similar reasoners, it supports negation-as-failure and classical negation. According to the authors, it handles dynamic and conflicting knowledge bases better than the competition. It is written in Java, but is closed source. The project page does not provide a working download link184. Latest activity was in 2007185.
Footnote 184: [https://bossam.wordpress.com/2007/08/18/new-api/](https://bossam.wordpress.com/2007/08/18/new-api/)
Footnote 185: [https://bossam.wordpress.com/2007/08/18/new-api/](https://bossam.wordpress.com/2007/08/18/new-api/)
#### 5.3.2 CB OWL 2 Horn Reasoner
CB [26] is a reasoner for Horn SHIQ ontologies. It is written in OCaml and has been developed as part of the ConDOR186 project. They host the source code on Github187 and the latest commit is 13 years old. There are no binaries to download, so compilation is required before the reasoner can be used. The reasoner should run on Windows, Linux and MacOS X, but there are indications that this may no longer be the case (e.g. on Ubuntu 14.04188). The compilation189 failed on my local machine190.
Footnote 186: [https://www.cs.ox.ac.uk/isg/projects/ConDOR/](https://www.cs.ox.ac.uk/isg/projects/ConDOR/)
Footnote 187: [https://github.com/ykazakov/cb-reasoner](https://github.com/ykazakov/cb-reasoner)
Footnote 188: [https://github.com/ykazakov/cb-reasoner/issues/2](https://github.com/ykazakov/cb-reasoner/issues/2)
Footnote 189: [https://github.com/ykazakov/cb-reasoner/blob/master/INSTALL](https://github.com/ykazakov/cb-reasoner/blob/master/INSTALL)
Footnote 190: I had to install ocambuild, which fixed one error, but I ran into the following which I couldn’t solve: ”/bin/sh: 1: ocamplot: not found”
Footnote 191: [https://github.com/ykazakov/cb-reasoner](https://github.com/ykazakov/cb-reasoner)
#### 5.3.3 Cerebra Engine
It was difficult to find enough information about the Cerebra Engine. In [45, 28] it is described as software that provides logical reasoning through ABox and TBox for DAML+OIL191 and OWL ontologies. In [28] there is a reference to a website with further information192. A dedicated project page or source code repository could not be found, so the project is considered abandoned.
Footnote 191: [https://www.w3.org/TR/daml+oil-reference/](https://www.w3.org/TR/daml+oil-reference/)
Footnote 192: [https://www2003.org/cdrom/papers/poster/p087/Poster87.html](https://www2003.org/cdrom/papers/poster/p087/Poster87.html)
#### 5.3.4 Ciclop
CICLOP is a Description Logic reasoner that supports role hierarchies and inverse roles, for instance. There is no primary publication, but it is mentioned in [84] and [17]. Binaries or source code for local testing could not be found.
#### 5.3.5 DBOWL
DBOWL [36] is a scalable reasoner (ABox focus), optimized for ontologies with billions of triples. It provides extensive support for OWL 1 Description Logic. DBOWL uses RDBMS for ontology storage and instance classification. The official project page is [https://khaos.uma.es/dbowl](https://khaos.uma.es/dbowl), but it does not contain any information. Other web sites could not be found.
#### 5.3.6 Deslog
Deslog [93] is a shared-memory parallel reasoner for Description Logic ALC. It is written in Java and uses the OWL API. It uses parallel computing to speed up reasoning, which distinguishes it from the more classic reasoners. Web sites such as the project page or a Github repository could not be found.
#### 5.3.7 DistEL
DistEL [49] is a Peer-to-Peer based, distributed reasoner that can be used to classify EL+ ontologies. It is written in Java and uses Redis193 for data storage. The source code is hosted on Github194. The latest commit was in 2016. The code needs to be compiled before use as there are no binaries or scripts available for download.
Footnote 193: [https://redis.io/](https://redis.io/)
Footnote 194: [https://github.com/raghavan/DistEL](https://github.com/raghavan/DistEL)
Footnote 195: [https://github.com/gshiao/drew](https://github.com/gshiao/drew)
#### 5.3.8 DLP
DLP [58] stands for "Description Logic Programs" and is a Description Logic system that contains sound and complete reasoners for expressive Description Logics. It can be used for satisfiability checks for propositional modal logics. DLP was developed to study various optimization techniques. The official site [http://www.bell-labs.com/user/pfps/dlp](http://www.bell-labs.com/user/pfps/dlp) still exists, but only shows a 404 page not found error.
#### 5.3.9 DReW
DReW [97] is a query answering system for LDL+ ontologies and provides reasoning for Description Logic programs over LDL+ ontologies. According to the authors, it is optimized to handle large ontologies very well. It is written in Java and the source code is hosted on Github195. The latest version 0.3.0 beta 3 was released on March 3, 2013 and the latest activity in the repository was in 2015. I downloaded and extracted the 0.3.0 beta 3 archive and started the CLI on Ubuntu 20.04. But first I had to complete the following steps: (1) set the environment variable _DREW_HOME_ to the path of the extracted ZIP folder; (2) rename the file _drew-0.3-beta-3.Jar_ to _drew-0.3-beta-2.Jar_ in the lib folder. Without the second step, an error will occur196. DReW requires DLV197 to run (at least partly), but DLV is no longer available.
Footnote 196: Error: Unable to access jarfile...
Footnote 197: [http://www.dlvsystem.com/dlv/](http://www.dlvsystem.com/dlv/)
#### 5.3.10 Elly
ELLY [71] is an ELP reasoner that is based on the IRIS Datalog reasoner198. ELP is a decidable fragment of the Semantic Web Rule Language (SWRL) that allows polynomial-time reasoning. By using the OWL API, ELLY supports reasoning for OWL 2 EL and OWL 2 RL. Further information about the IRIS Datalog reasoner could not be found; its website is incomplete and contains unrelated content. ELLY's project page is hosted on Sourceforge199.
Footnote 198: [https://www.iris-reasoner.org/](https://www.iris-reasoner.org/)
Footnote 199: [https://elly.sourceforge.net/](https://elly.sourceforge.net/)
#### 5.3.11 \(FL_{0}\)wer
\(FL_{0}\)wer [44] is a reasoner for the Description Logic \(FL_{0}\) that provides efficient TBox reasoning with value restrictions. It is a prototype, developed as part of a bachelor thesis200, and is written in Java. The source code is available on Github201, but there are no binaries to download. For your own testing, you need to build the code using Maven202. The latest commit was in 2019, so an issue was opened to find out if there are plans for future development203. There has been no response yet.
Footnote 200: In German: [https://lat.inf.tu-dresden.de/research/theses/2017/Mic-Bac-17.pdf](https://lat.inf.tu-dresden.de/research/theses/2017/Mic-Bac-17.pdf)
Footnote 201: [https://github.com/attalos/f0wer](https://github.com/attalos/f0wer)
Footnote 202: [https://maven.apache.org/](https://maven.apache.org/)
Footnote 203: [https://github.com/attalos/flower/issues/1](https://github.com/attalos/flower/issues/1)
#### 5.3.12 F-OWL
F-OWL [101] is an F-Logic based inference machine for RDF and OWL. It uses the XSB logic programming system with the Flora-2 extension, which provides an F-logic frame-based representation layer. Neither a project page nor source code repository could be found, so the project is considered abandoned.
#### 5.3.13 HS-Reasoner
HS-Reasoner is an OWL 2 DL reasoner written in Haskell. The project is hosted on Github204. According to the developer, HS-Reasoner was developed to study the possibilities of functional programming for reasoners. An issue has been created asking for a short introduction on how to install and use it205. I don't have any knowledge of Haskell, and without any installation and usage information there was no way for me to determine whether it is usable or not. The latest commit was in 2019, so it is possible that the project is still maintained.
Footnote 204: [https://github.com/agmanits/hs-reasoner/issues/1](https://github.com/agmanits/hs-reasoner/issues/1)
#### 5.3.14 leancor
leancor is a Description Logic reasoner and a fork of leanCoR. It is written in Prolog and the source code is hosted on Github206. Information about installation and usage was scarce and scattered in various files. The installation failed on my local machine (Ubuntu 20.04). It seems that leancor uses the leanCoP theorem prover207 internally208. leancor is considered abandoned because the latest commit was in 2015 and the project page [http://www.leancor.org/](http://www.leancor.org/) is no longer accessible.
Footnote 206: [https://github.com/adrianomelo/leancor](https://github.com/adrianomelo/leancor)
#### 5.3.15 LillyTab
LillyTab [96] is an OWL 2 reasoner for SHOF(D) Description Logic. It is written in Java and its source code is hosted on Github209. The latest version 1.12 was released in 2015. According to the developer, LillyTab was developed as part of his thesis ([96], page 79). For this reason, and the fact that the latest development activity was in 2018, I assume that the project is no longer maintained.
Footnote 207: [http://www.leancop.de/index.html](http://www.leancop.de/index.html)
Footnote 208: [https://github.com/adrianomelo/leancor/blob/master/leancop.sh](https://github.com/adrianomelo/leancor/blob/master/leancop.sh)
Footnote 209: [https://github.com/pwu/lllytab](https://github.com/pwu/lllytab)
#### 5.3.16 Mini-ME and Mini-ME Swift
Mini-ME [64] (short for Mini Matchmaking Engine) is a reasoner for ALN (attributive language with unqualified number restrictions) Description Logic (DL). It is written in Java and optimized for use on mobile devices (Semantic Web of Things). There are two variants available: **Tiny-ME210** comes with a C, Java and Objective-C API, and **Mini-ME Swift211** is an iOS port, written in Swift212. Mini-ME is distributed for the purpose of academic evaluation and review only; no other use is allowed. On the download page [http://swot.sisinflab.poliba.it/minime/#download](http://swot.sisinflab.poliba.it/minime/#download) many links are broken. Mini-ME Swift is only available for iOS and MacOS213. No source code repository could be found. Mini-ME was tried locally: after downloading and extracting the "Mini-ME OWLLink package", the file "start-minime.sh" was executed in the terminal. It started a server which was accessible at localhost:8080, but a 404 error was displayed, and exception messages were shown on the terminal each time I visited the index page. The library versions weren't tried, so they may still be usable. The Protege plugin is reported to work with Protege 4.3 and 5.1214, but no local tests have been conducted due to missing plugin files.
Footnote 210: [http://swot.sisinflab.poliba.it/tinyme/](http://swot.sisinflab.poliba.it/tinyme/)
Footnote 211: [http://swot.sisinflab.poliba.it/minime-swift/](http://swot.sisinflab.poliba.it/minime-swift/)
Footnote 212: Swift project page: [https://developer.apple.com/swift/](https://developer.apple.com/swift/)
Footnote 213: [http://swot.sisinflab.poliba.it/tinime-swift/](http://swot.sisinflab.poliba.it/tinime-swift/)
Footnote 214: [http://swot.sisinflab.poliba.it/tinime/plugin.html](http://swot.sisinflab.poliba.it/tinime/plugin.html)
#### 5.3.17 O-DEVICE
O-DEVICE [39] is a rule-based object oriented OWL reasoner using the CLIPS rule engine. The latest activity in the project repository was in 2013 and the latest version is from 2010. For this reason the project is considered abandoned. No local testing has been done, as most of the files are from 2009.
#### 5.3.18 OWLgres
OWLgres [78] is a Description Logic like reasoner using PostgreSQL. A project page or source code repository could not be found. At [https://www.semanticweb.org/wiki/Owlgres.html](https://www.semanticweb.org/wiki/Owlgres.html) there is a link to the project page [http://pellet.owdl.com/owlgres](http://pellet.owdl.com/owlgres), but it is no longer accessible.
#### 5.3.19 OWLIM
OWLIM [7] is a family of Semantic Web components and contains a reasoner. A project page or source code repository could not be found.
#### 5.3.20 Pocket KRHyper and KRHyper
Pocket KRHyper [70] is a Description Logic reasoner, highly optimized to run on embedded systems. Its predecessor is KRHyper. Own tests weren't possible because the official download link [http://www.uni-koblenz.de/](http://www.uni-koblenz.de/)\(\sim\)%7B%7Diason/downloads is no longer working.
#### 5.3.21 Pronto
Pronto [55] is a reasoner for managing temporal information in OWL ontologies. The project is hosted on Github and provides the source code, documentation and helper scripts215. The installation instructions weren't sufficient, and my own tests on Ubuntu 20.04 failed because of a missing file216.
Footnote 215: [https://github.com/klinovp/pronto](https://github.com/klinovp/pronto)
#### 5.3.22 QueryPIE
QueryPIE [90] is a reasoner that uses backward reasoning and is optimized to handle ontologies with billions of triples. The project is hosted on Github217. It is worth mentioning because it is one of the few reasoners that can handle such large ontologies. The latest activity on the repository was in 2017. Downloading and building with Ant was successful, but using it requires a custom Java program and involves additional software such as Hadoop218, so I stopped further testing.
Footnote 218: [https://hadoop.apache.org/](https://hadoop.apache.org/)
Footnote 219: [https://github.com/ArArgyridis/SPOR](https://github.com/ArArgyridis/SPOR)
Footnote 220: [https://github.com/gonas/gonrosi](https://github.com/gonas/gonrosi)
#### 5.3.23 SPOR
SPOR [1] stands for "SPatial Ontology Reasoner" and supports efficient reasoning on Geographic Object-Based Image Analysis (GEOBIA) ontologies using fuzzy, spatial, and multi-scale representations. The project is hosted on Github, but no binaries are provided219. No installation or usage instructions are provided, so no tests have been performed. The reasoner can also be used via Gnorasi220 according to [1].
Footnote 221: [https://www.ifs.uni-luebeck.de/moeller/racer/index.html](https://www.ifs.uni-luebeck.de/moeller/racer/index.html)
Footnote 222: [https://github.com/ha-mo-we/Racer](https://github.com/ha-mo-we/Racer)
Footnote 223: User Guide: [https://github.com/ha-mo-we/Racer/blob/master/doc/users-guide-2-0.pdf](https://github.com/ha-mo-we/Racer/blob/master/doc/users-guide-2-0.pdf), on page 6 it links to [http://www.racer-systems.com/products/download/index.phtml](http://www.racer-systems.com/products/download/index.phtml) which is not available anymore
Footnote 224: [https://protegewiki.stanford.edu/wiki/P4_3_Release_Announcement](https://protegewiki.stanford.edu/wiki/P4_3_Release_Announcement)
#### 5.3.24 QuOnto
QuOnto [32] is an OWL reasoner for the OWL 2 QL profile, optimized to achieve superior performance in classifying OWL 2 QL ontologies compared to similar reasoners. No project page or source code repository could be found.
#### 5.3.25 Racer and RacerPro
Racer is a Description Logic SRIQ(D) reasoner. It was written in Java and Common Lisp. According to the project page221 it is the successor of RacerPro [22]. This is interesting because publications about RacerPro are 11 years older than the release date of Racer. The source code of Racer is hosted on Github222 and licensed under the terms of the BSD-3-clause license. The latest release was in 2014 and the latest commit was in 2018. Because advanced Common Lisp skills would be required, no testing has been done. Racer and RacerPro may still run, but they are considered unusable, e.g. because downloads are no longer available223. The project appears to have been abandoned.
#### 5.3.26 Rat-OWL
Rat-OWL [19] is a reasoner for the non-monotonic extension of Description Logic. According to the publication it is only available for Protege 4.3 (released in 2013224), using the OWL API 3.4. A project page or source code repository could not be found. The Protege plugin was also no longer available225.
#### 5.3.27 Sher
SHER [14] stands for "scalable highly expressive reasoner" and has good performance, according to the authors. However, the project website is no longer accessible226.
#### 5.3.28 Snorocket
Snorocket [43] is a highly optimized reasoner for a specific subset of the OWL 2 EL profile. The project is hosted on Github227 and can be used via the OWL API or via the Protege plugin228. The version 3.0.0 is listed as the latest version, but no downloads could be found229. The same goes for version 2.0.0230. Although the latest commit was in 2019, the project is considered abandoned due to the lack of download links.
Footnote 225: Checked [https://protegewiki.stanford.edu/wiki/Protege_Plugin_Library](https://protegewiki.stanford.edu/wiki/Protege_Plugin_Library) and did a basic Internet search
Footnote 226: [http://www.alphaworks.ibm.com/tech/sher](http://www.alphaworks.ibm.com/tech/sher)
Footnote 227: [https://github.com/aehrc/snorocket](https://github.com/aehrc/snorocket)
Footnote 228: [https://protegewiki.stanford.edu/wiki/Snorocket](https://protegewiki.stanford.edu/wiki/Snorocket)
Footnote 229: [https://protegewiki.stanford.edu/wiki/Snorocket_3.0.0](https://protegewiki.stanford.edu/wiki/Snorocket_3.0.0)
Footnote 230: [https://protegewiki.stanford.edu/wiki/Snorocket_2.0.0](https://protegewiki.stanford.edu/wiki/Snorocket_2.0.0)
#### 5.3.29 SoftFacts
SoftFacts [80] is an information retrieval system for relational databases that provides reasoning, e.g. instance query answering. There is a Github repository with the same name, but it is empty231. No usable source code or binaries could be found.
Footnote 231: [https://github.com/straccia/SoftFacts](https://github.com/straccia/SoftFacts)
Footnote 232: [http://spark.apache.org/](http://spark.apache.org/)
Footnote 233: [https://github.com/raghavam/sparkel](https://github.com/raghavam/sparkel)
Footnote 234: [https://github.com/raghavam/sparkel/blob/master/script/install-all.sh](https://github.com/raghavam/sparkel/blob/master/script/install-all.sh)
#### 5.3.30 SparkEL
SparkEL [48] is a distributed OWL 2 EL reasoner using Apache Spark232. It is written in Scala and the source code is hosted on Github233. The latest commit was in 2016 and the binaries provided are outdated. I tried to install it using the install script234, but it failed because dependencies, such as SBT 0.13.9, weren't accessible anymore. SparkEL may still be usable, but it is doubtful due to outdated dependencies.
Footnote 235: [https://protegewiki.stanford.edu/wiki/SWRL-IQ](https://protegewiki.stanford.edu/wiki/SWRL-IQ)
#### 5.3.31 SPOWL
SPOWL [34] uses Apache Spark to provide reasoning for large OWL ontologies. It acts as a compiler that maps TBox axioms to Spark programs. No source code or binaries were available, so the project is considered abandoned.
#### 5.3.32 SWRL-IQ
SWRL-IQ [16] is a plugin for the outdated Protege version 3. According to the authors, it combined features that no other reasoning/query system had at the time, such as constraint solving based on CLP(R) (Constraint Logic Programming with Reals) as well as SWRL extensions for non-monotonic aggregation and limited higher-order logic. The plugin page235 on the Protege wiki was last updated in 2012. The manual may still be useful236. The project page237 is no longer accessible and the project is considered abandoned.
Footnote 236: [https://protegewiki.stanford.edu/images/5/57/SWRL-IQ_manual.pdf](https://protegewiki.stanford.edu/images/5/57/SWRL-IQ_manual.pdf)
Footnote 237: [https://www.onistr.org/display/SWRLIQ/SWRL-IQ](https://www.onistr.org/display/SWRLIQ/SWRL-IQ)
#### 5.3.33 TReasoner
TReasoner [21] is a reasoner for SHOIQ(D) that implements a tableau algorithm. The project is hosted on Google Code238. The latest commit was in 2014. There is no documentation available, so I assume it is only usable within a custom Java project.
Footnote 238: [https://code.google.com/archive/p/treasoner/](https://code.google.com/archive/p/treasoner/)
#### 5.3.34 WebPIE
WebPIE [89] is an OWL/RDFS inference machine using Hadoop239. The reasoning is mapped onto Map and Reduce operations, allowing distributed and more performant processing of large quantities of data. According to the project website [https://www.few.vu.nl/](https://www.few.vu.nl/) jui200/webpie.html#sourcecode the project is no longer maintained. Some links in the documentation are broken and the only available file was published in 2012. Due to the outdated files and the need to set up a cluster on your own240, no further testing has been conducted. WebPIE might still be usable, but this is doubtful.
Footnote 239: [https://hadoop.apache.org/](https://hadoop.apache.org/)
Footnote 240: [https://www.few.vu.nl/](https://www.few.vu.nl/) jui200/webpie.html#tutorial
#### 5.3.35 Wolpertinger
Wolpertinger [63] is a fixed-domain OWL reasoner. OWL is based on the open-world assumption, which means an axiom can be true regardless of whether it is known to be true or not. There are applications where this can lead to undesirable (because contradictory) situations. The reasoner doesn't use the standard model-theoretic semantics; instead, the domain is reduced to an explicitly given set of axioms. Wolpertinger is written in Java and the source code is hosted on Github241 (the latest commit was in 2019). There are no binaries available. I tried to build one myself but ran into errors, which have been reported in an issue242.
Footnote 241: [https://github.com/wolpertinger-reasoner/Wolpertinger](https://github.com/wolpertinger-reasoner/Wolpertinger)
Footnote 242: [https://github.com/wolpertinger-reasoner/Wolpertinger/issues/2](https://github.com/wolpertinger-reasoner/Wolpertinger/issues/2)
## 6 Conclusion
In this work, 73 OWL reasoners and 22 systems (using a third-party reasoner) were analyzed and metadata such as usability and maintenance status was collected. All of the OWL reasoners found to be usable and maintained need to be analyzed in more detail and with regard to their applications. For instance, not all OWL 2 DL reasoners fully support the OWL 2 DL profile. Also, some systems were developed using outdated development environments such as Java 1.5, which was released in 2004. All systems are described in general terms as they currently exist.
The following is a list of positive observations:
1. **Rooted in science:** Most of the analyzed systems are mentioned in at least one publication, either a system description paper or an evaluation/comparison of two or more reasoners. Many papers present the underlying methods and describe them in detail, which helps in understanding the inner workings of a system.
2. **Diverse software:** OWL reasoners are developed in a variety of languages and environments. Java is the most prominent one. The functional scope of each individual system is also diverse. Each language/environment has its own advantages that make it ideal for specific use cases, such as high performance computing or embedded systems.
3. **Constant interest:** New projects have been started in recent years, which may be an indicator that there is still sufficient research interest in this area. Many projects host their code on Github, which also encourages new people to contribute.

The negative observations are summarized below:
1. **Inadequate documentation:** Most of the projects provide little or no documentation to their users. It seems that the authors assume that their users have the necessary knowledge of whatever software environment is used, be it a simple Java binary, a Hadoop setup or a Haskell/Scala toolchain. Also, some projects have split their documentation across different files, or even whole web sites, making it much harder to learn about a particular system. Scientific publications may be considered part of the documentation, but they usually do not contain any end-user information (such as an installation manual or usage instructions).
2. **Short-term usage:** A large percentage of the OWL reasoners had prototype status to demonstrate a novel approach. This may partly explain why their documentation was scarce in comparison to other projects. However, prototypes are not of interest to people who want a stable foundation for new software, whether they are scientists or not. Poor documentation can also hinder new users who want to learn about the software.
3. **Bad maintenance situation:** Only 28 out of 73 OWL reasoners are actively maintained, meaning they have had some development activity during the last 3 years. Few had project repositories set up in a way that would support long-term development. Continuous integration pipelines or even (unit, system, ...) tests were scarce. Some projects on Github had more than 10 open pull requests left unanswered, meaning that there were people who wanted to contribute code changes, but the author ignored them. Similar observations have been made in [30].
## 7 Future work
As mentioned in the previous section, all usable and maintained OWL reasoners found need to be analyzed in more detail and with regard to their applications. Verification of my findings, especially for those systems where my knowledge is very limited, would be important. In some cases I have created an issue in the project repository, which could be a good starting point for further investigation. Performance comparisons are also of interest due to the hardware developments of recent years (e.g. parallel computing, GPU computing). Only a few OWL reasoners have used parallel computing at all.
The part of the Semantic Web/Logic community working with OWL reasoners should consolidate the current state. It is important to find out which systems are still sufficient to handle current use cases. It is also important to provide the necessary documentation for end users to try them out. I am afraid we can only salvage what is left of systems like HermiT, but this could be the first step towards new developments.
For all the reasons mentioned above, I claim that a large number of the analyzed OWL reasoners are not of interest to companies, especially in the areas of software development and knowledge organization. Many systems are almost black boxes to their users, with poor documentation, outdated dependencies and no support from their developers. OWL reasoners such as HermiT are still widely used, although they have been abandoned, because not many alternatives are available. Therefore, surveys/studies that further investigate this issue are very important.
## 8 Acknowledgement
This work was supported by a grant from the German Federal Ministry of Education and Research (BMBF) for the KI-Werk Projekt ([https://www.cbasynergy.net/cba/ki-werk.html](https://www.cbasynergy.net/cba/ki-werk.html)).
## 9 Appendix
\begin{table}
\begin{tabular}{|l|} \hline
**OWL Reasoner** \\ \hline AllegroGraph \\ \hline Arachne \\ \hline BaseVISor \\ \hline BORN \\ \hline CEL \\ \hline ElepHant \\ \hline ELK \\ \hline Expressive Reasoning Graph Store (IBM) \\ \hline EYE \\ \hline EYE.js \\ \hline jcel \\ \hline JFact \\ \hline Konclude \\ \hline LiFR \\ \hline LiRoT \\ \hline Ontop (with Query reasoner) \\ \hline Openllet \\ \hline OWLRL \\ \hline RDFox \\ \hline RDFSharp.Semantics \\ \hline reasonable \\ \hline TRILL (and TRILLP and TORNADO) \\ \hline Vampire \\ \hline VLog \\ \hline Whelk \\ \hline \end{tabular}
\end{table}
Table 3: List of usable and maintained OWL reasoners
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Reasoner** & **Maintenance** & **Is usable** \\ \hline AllegroGraph & maintained & usable \\ \hline Arachne & maintained & usable \\ \hline BaseVISor & maintained & usable \\ \hline BORN & maintained & usable \\ \hline Bossam & abandoned & no files to try available \\ \hline CB (Consequence-based reasoner) & abandoned & no, compilation failed \\ \hline CEL & maintained & usable \\ \hline Cerebra Engine & abandoned & no files to try available \\ \hline CICLOP & abandoned & no files to try available \\ \hline Clopper & abandoned & usable \\ \hline DBOWL & abandoned & no files to try available \\ \hline Deslog & abandoned & no files to try available \\ \hline DistEL & abandoned & no, missing info about how to compile it \\ \hline DLP & abandoned & no files to try available \\ \hline DReW & abandoned & no, DLV required, but not available \\ \hline ElepHant & maintained & usable \\ \hline ELK & maintained & usable \\ \hline ELLY & abandoned & no, jar files seem broken \\ \hline Expressive Reasoning Graph Store (IBM) & maintained & usable \\ \hline EYE & maintained & usable \\ \hline EYE.js & maintained & usable \\ \hline F-OWL & abandoned & no files to try available \\ \hline FaCT++ & abandoned & usable \\ \hline Flower & abandoned & no, Maven build failed \\ \hline fuzzyDL & abandoned & not tried, because Protege 4.3 didn’t run \\ \hline HAM-ALC & abandoned & no files to try available \\ \hline HermiT & abandoned & usable \\ \hline HS-Reasoner & abandoned & not tried, no knowledge about Haskell \\ \hline jcel & maintained & usable \\ \hline JFact & maintained & usable \\ \hline KAON2 & abandoned & usable \\ \hline Konclude & maintained & usable \\ \hline leanCoR & abandoned & no, SWI setup in Ubuntu 20.04. failed \\ \hline LiFR & maintained & usable \\ \hline LillyTab & abandoned & no, jar files seem broken \\ \hline LiRoT & maintained & usable \\ \hline Mini-ME & maintained & no, download links for plugin broken [... ] \\ \hline Mini-ME Swift & no information available & usable \\ \hline NORA & maintained & not tried, because it requires additional software [...] \\ \hline Objective & abandoned & not tried, because files are outdated \\ \hline Ontop (with Query reasoner) & maintained & usable \\ \hline Openllet & maintained & usable \\ \hline Owilers & abandoned & no files to try available \\ \hline OWLIM & abandoned & no files to try available \\ \hline OWLRL & maintained & usable \\ \hline Pellet & abandoned & usable \\ \hline Pocket KRHyper & abandoned & no files to try available \\ \hline Pronto & abandoned & no, start script seems faulty \\ \hline QueryPIE & abandoned & not tried, because it requires additional software \\ \hline Quill (part of TrOWL) & abandoned & no files to try available \\ \hline QuOnOn & abandoned & no files to try available \\ \hline RACER / RacerPro & abandoned & no files to try available \\ \hline RAT-OWL & abandoned & no files to try available \\ \hline RDFox & maintained & usable \\ \hline RDFSharp.Semantics & maintained & usable \\ \hline reasonable & maintained & usable \\ \hline REL (part of TrOWL) & abandoned & no files to try available \\ \hline Sequoia & maintained & not tried, because it requires a Scala environment \\ \hline SHER & abandoned & no files to try available \\ \hline Snorocket & abandoned & no files to try available \\ \hline SoftFacts & abandoned & no files to try available \\ \hline SparREL & abandoned & no, setup script seems faulty \\ \hline SPOR & abandoned & not tried, no knowledge in compiling C++ programs \\ \hline SPOWL & abandoned & no files to try available \\ \hline SWRL-IQ & abandoned & no, outdated Protege required \\ \hline Tiny-ME & no information available & usable \\ \hline TREasoner & abandoned & not tried, it requires a custom Java project \\ \hline TRILL (and TRILLP and TORNADO) & maintained & usable \\ \hline Vampire & maintained & usable \\ \hline VLog & maintained & usable \\ \hline WebPIE & abandoned & not tried, because it requires additional software \\ \hline Whelk & maintained & usable \\ \hline Wolpertinger & abandoned & no, Maven build failed \\ \hline \end{tabular}
\end{table}
Table 4: List of all analyzed OWL reasoners |
2310.00342 | RBF Weighted Hyper-Involution for RGB-D Object Detection | A vast majority of conventional augmented reality devices are equipped with
depth sensors. Depth images produced by such sensors contain complementary
information for object detection when used with color images. Despite the
benefits, it remains a complex task to simultaneously extract photometric and
depth features in real time due to the immanent difference between depth and
color images. Moreover, standard convolution operations are not sufficient to
properly extract information directly from raw depth images leading to
intermediate representations of depth which is inefficient. To address these
issues, we propose a real-time and two stream RGBD object detection model. The
proposed model consists of two new components: a depth guided hyper-involution
that adapts dynamically based on the spatial interaction pattern in the raw
depth map and an up-sampling based trainable fusion layer that combines the
extracted depth and color image features without blocking the information
transfer between them. We show that the proposed model outperforms other RGB-D
based object detection models on NYU Depth v2 dataset and achieves comparable
(second best) results on SUN RGB-D. Additionally, we introduce a new outdoor
RGB-D object detection dataset where our proposed model outperforms other
models. The performance evaluation on diverse synthetic data generated from CAD
models and images shows the potential of the proposed model to be adapted to
augmented reality based applications. | Mehfuz A Rahman, Jiju Peethambaran, Neil London | 2023-09-30T11:25:34Z | http://arxiv.org/abs/2310.00342v1 | # RBF Weighted Hyper-Involution for RGB-D Object Detection
###### Abstract
A vast majority of conventional augmented reality devices are equipped with depth sensors. Depth images produced by such sensors contain complementary information for object detection when used with color images. Despite the benefits, it remains a complex task to simultaneously extract photometric and depth features in real time due to the immanent difference between depth and color images. Moreover, standard convolution operations are not sufficient to properly extract information directly from raw depth images leading to intermediate representations of depth which is inefficient. To address these issues, we propose a real-time and two stream RGBD object detection model. The proposed model consists of two new components: a depth guided hyper-involution that adapts dynamically based on the spatial interaction pattern in the raw depth map and an up-sampling based trainable fusion layer that combines the extracted depth and color image features without blocking the information transfer between them. We show that the proposed model outperforms other RGB-D based object detection models on NYU Depth v2 dataset and achieves comparable (second best) results on SUN RGB-D. Additionally, we introduce a new outdoor RGB-D object detection dataset where our proposed model outperforms other models. The performance evaluation on diverse synthetic data generated from CAD models and images shows the potential of the proposed model to be adapted to augmented reality based applications.
_Keywords =_
RGB-D, Detection, Depth, Convolution, Involution, Fusion
## 1 Introduction
Object detection aims to classify and localize objects of interest in two/three-dimensional scenes. Recognition of objects is an integral part of autonomous robotics and augmented reality (AR) applications, and hence has attracted a lot of interest in the computer vision community. The research on this topic has made significant progress over the recent past with the help of deep learning models. However, most of the existing state-of-the-art object detection models are built for two-dimensional (2D) RGB (Red, Green, Blue) images with little or no three-dimensional (3D) perspective of the objects, which is crucial for applications such as autonomous driving and scene understanding besides augmented/mixed reality applications. A few deep learning models specifically address 3D object detection from point clouds such as Light Detection and Ranging (LiDAR) scans Pan et al. (2021); Qi et al. (2021); Tian et al. (2021). However, LiDAR sensors for point cloud generation are expensive and produce sparse output that requires a lot of pre-processing.
There has been a rapid improvement and increased availability of affordable commercial depth sensors over the last decade. Depth sensors have also become a conventional part of many modern AR headsets (e.g. Microsoft HoloLens 2). These depth sensors can capture depth images (also known as depth maps) where each pixel encodes the distance of a
discrete point in the scene from the sensor. When depth images are used with their corresponding color images, we get four-channel RGB-D (red, green, blue, depth) images. Prior state-of-the-art research Gupta et al. (2014); Xiao et al. (2021) has already proven the significance and the performance improvement of RGB-D based object detection over RGB based detection. Depth images complement RGB based object detection in multiple ways. Firstly, depth images better visualize object boundaries, making it easier to locate objects and properly cover them with bounding boxes. This is particularly important in cases where the object boundaries are not clear in color images due to poor illumination or heavy shadows, as shown in Figure 1(a). Secondly, depth images can resolve scale distortions that often appear in color images due to perspective projections. Depth images provide useful information to object detectors, making it easier to learn the relative sizes of objects in a scene. One such phenomenon is illustrated in Figure 1(b). Thirdly, depth images can reveal camouflaged objects that might not be easily visible in color images due to their similarity to the background, which is demonstrated with the RGB image and corresponding depth map of a penguin in Figure 1(c). Finally, depth images can handle delusive color and texture in images (Figure 1(d)) that can mislead object classification when it relies solely on color and texture information.
Despite having conclusive evidence about the benefits of using extra depth information, it is challenging to process depth map and color image inputs simultaneously in object detection models due to the fundamental differences between depth and color images. Consequently, over the past few years, RGB-D based object detection has been tackled using two stream networks that extract features from color and depth images separately and then combine these features at selected stages of the model Gupta et al. (2016, 2014); Ophoff et al. (2018, 2019). However, most fusion stages of the extracted depth and color features are naively selected. Further, such fusion schemes employ simple concatenation of features that lacks proper learnable parameters to train with the backpropagation of neural networks. More importantly, the depth and color feature fusion stage sometimes blocks proper information exchange between depth and color image features Xiao et al. (2021). Moreover, some researchers encode the depth map into a different representation Gupta et al. (2014); Li et al. (2018); Xu et al. (2017), which is time consuming and designed based on intuition. The standard convolution operation is designed for feature extraction from color images, not from raw depth images. Therefore, there is a need to find an alternative to standard convolution to directly process raw depth images. Further, most of the state-of-the-art RGB-D object detection models rely on two-stage detectors from the outmoded RCNN series of models Girshick (2015); Girshick et al. (2014), which makes them considerably slower when compared to more recent real-time object detection models Bochkovskiy et al. (2020); Tan et al. (2020); Wang et al. (2021).
We attempt to tackle some of the above mentioned issues using a depth aware involution based fusion network for RGB-D object detection. The proposed single stage architecture, shown in Figure 2, works in real time and incorporates two new components with notable performance. The specific contributions of this work are listed below.
* We propose a dynamic depth aware hyper-involution module as an alternative to standard convolution for proper utilization of raw depth information and spatial specific features.
* We propose an improved encoding-decoding type fusion stage in the middle layers of the model that can combine the features extracted from depth stream and RGB stream to extract the most significant semantic information.
* We develop a pipeline to automatically generate realistic RGB-D images from 3D CAD models and background images for training and testing the performance and applicability of the detection model in diverse environment.
* We build a completely new outdoor RGB-D dataset with annotations for RGB-D based object detection.
Figure 1: A few instances where the usefulness of depth for object detection is visible. Image courtesy: Nathan Silberman and Fergus (2012); Polseno (2020); Ranftl et al. (2021); Rankuzz.com (2020); Starecat.com (2022)
## 2 Related Works
This section explores the background on three different topics according to our specific research objectives. In recent years, the research community has introduced a plethora of state-of-the-art models for conventional RGB-based object detection. The object detection architectures can be categorized into two groups, namely single stage and two-stage detectors Zaidi et al. (2022). Single stage detectors predict the position and class label of the object within an image in a single pass through the neural network, without the need for additional region proposals or refinement components. At the moment, the leading single stage models Wang et al. (2022, 2021a, 2021a, 2021b) are the successors of the YOLO Redmon et al. (2016), Redmon and Farhadi (2017, 2018) and FCOS Tian et al. (2019, 2020) series. Conversely, two-stage detectors use a combination of two neural networks to detect objects in the image. First, the region proposal network (RPN) generates a fixed number of proposals for potential object locations in the image. These proposals are then passed to the detection network, which refines the location and identification of the objects in the proposals. Some of the latest additions to state-of-the-art two-stage models include Hong et al. (2022), Sun et al. (2021). Overall, these RGB based detection models mainly introduce various components in their extended architectures to compete on speed and accuracy, ignoring the importance of cross-modal perception. In this paper, we investigate the research challenges of RGB-D based object detection and develop an improved model for it. Therefore, this section first describes various existing RGB-D object detection architectures, including their limitations, followed by brief studies on alternatives to standard convolution and on hyper-networks, which are core components of our RGB-D based detection.
### RGB-D Object Detection
#### 2.1.1 HHAs
Gupta et al. (2014) introduced a fusion-based model for the task of RGB-D based object detection. This is the first work that verified important arguments in favor of depth-aware methods to improve object detection performance. Moreover, they also introduced a geocentric embedding technique to convert raw depth images to the three-channel HHA format (Horizontal disparity, Height above ground, and Angle with respect to gravity direction) before giving input to their model for extracting depth features. However, the depth image to HHA conversion process is hand designed and unnecessarily time consuming Hazirbas et al. (2016). In a sequel work, Gupta et al. (2016) addressed the scarcity of depth data for the training of RGB-D models. The authors utilized supervision transfer, which trains the depth feature extraction backbone by teaching the network to regenerate the intermediate-level semantic representations learned from an RGB-based backbone pre-trained on a massive RGB image dataset. Although this strategy improved the accuracy when compared to their previous work Gupta et al. (2014), it relies on the two-stage Fast RCNN detector Girshick (2015), which is not suitable for real time applications. Xu et al. (2017) also utilized the concept of supervision transfer Gupta et al. (2016) and proposed a three-stream model that slightly improved the performance of RGB-D detection. Nevertheless, this model also relies on the time consuming HHA conversion Girshick et al. (2014) of the raw depth map, and the three parallel backbones of the model, inspired by AlexNet Krizhevsky et al. (2012), each have their own Region Proposal Network (RPN) and separate Faster RCNN head, which further adds to the training time and computational cost. The Cross-Modal Attentional Context (CMAC) algorithm proposed by Li et al. (2018) utilized Long Short Term Memory (LSTM) Hochreiter and Schmidhuber (1997) to extract global context features from each region proposal and Spatial Transformer Networks (STN) Jaderberg et al. (2015) to accurately identify different parts of an object. However, this model also relies on HHA conversion of depth, and there are some disadvantages to using LSTM as it consumes more memory, is prone to over-fitting, and is sensitive to random weight initialization.
#### 2.1.2 Raw Depth Maps
Some of the recent works on RGB-D object detection use the raw depth map instead of converting it to HHA, each with their specific limitations. For instance, Zhang et al. (2020) introduced a model that consists of three major streams, including a backbone for feature extraction from the raw depth map and a Channel Weights Fusion (CWF) that processes the concatenated RGB-D features. However, in this model the depth feature extraction prior is designed based on several intuitions and considers only human depth image patterns in indoor environments, which potentially limits its capacity to extract depth information in diverse environments. In another work, Ophoff et al. (2018) explored three different stages of feature fusion for RGB-D based pedestrian detection. For each fusion stage of the model, a single stage object detector is utilized, making it suitable for real time applications. Despite the real-time advantage, this work has drawbacks, including naive concatenation of the depth and RGB image features without any special trainable operation and the issue of increased dimensions after feature concatenation. In Ophoff et al. (2019), the authors extended the model for multi-class object detection in real time and proposed a simple fusion layer, i.e., the use of a convolution after concatenation to reduce the combined feature dimension. However, this model requires separate pre-training of the depth and RGB networks before training the main model, which makes the training pipeline redundant. A recent work Xiao et al. (2021) introduced two components to improve the information flow between depth and RGB features in
RGB-D object detection. These two components help to bring significant performance improvements compared to the state-of-the-art. Overall, despite several strategies to improve the feature fusion of depth and RGB images, none of these works explored the effectiveness of, or alternatives to, the standard convolution operation for properly extracting depth data.
### Alternatives to Standard Convolution
In the recent past, different flexible and effective alternatives to the standard convolution operation LeCun et al. (1998) have been proposed. A few of them dynamically adapt using pixel information while others adapt using depth. For instance, deformable convolution Dai et al. (2017) learns geometric transformations of images such as scale, pose and deformation of parts. Later, a faster and lightweight deformable convolution called deformable ConvNets v2 (DCNv2) Zhu et al. (2019) was introduced, which remains unaffected by features from irrelevant regions of the image, an issue in Dai et al. (2017). Pixel Adaptive Convolution (PAC) Su et al. (2019) is adapted according to the contents of images while maintaining several favorable properties of standard convolution. A conditionally parameterized convolution, named CondConv Yang et al. (2019), can be learned based on specific input samples. Similarly, dynamic convolution Chen et al. (2020) adapts based on input samples and can be described as a superposition of multiple convolution kernels. Before applying the superposition, the kernels are aggregated by a value found by applying an attention function on the input. Several studies have attempted to utilize depth maps to manipulate convolution kernels.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Convolution Name** & **Input** & **Speciality** & **Output** & **Type** & **Computation and Parameters** \\ \hline \hline Standard convolution LeCun et al. [1998] & RGB & Learns various important image features & Feature tensor & Static & Varies \\ \hline Deformable convolution Dai et al. [2017] & RGB & Learns geometric transformation along with image features & Feature tensor & Dynamic & Higher than standard convolution \\ \hline DCNv2 Zhu et al. [2019] & RGB & Improved version of Deformable convolution and avoids irrelevant regions in image & Feature tensor & Dynamic & Higher than standard convolution \\ \hline PAC Su et al. [2019] & RGB & Adapts according to the content of images & Feature tensor & Dynamic & Higher than standard convolution \\ \hline CondConv Yang et al. [2019] & RGB & Input samples specific learning & Feature tensor & Dynamic & Higher than standard convolution \\ \hline Dynamic convolution Chen et al. [2020] & RGB & Superposition of several convolution filters & Feature tensor & Dynamic & Higher than standard convolution \\ \hline Depth aware conv Wang and Neumann [2018] & RGB, Depth map & Apply a weight based on depth similarity only for semantic segmentation & Feature tensor & Dynamic & Similar to standard convolution \\ \hline S-convChen et al. [2021] & RGB, Depth map & Learns spatial information from depth for better semantic segmentation & Feature tensor & Dynamic & Higher than standard convolution \\ \hline Depth guided filtering Ding et al. [2020] & RGB, Depth map & Filters and dilations are varied according to specific pixels for monocular object detection task & Feature tensor & Dynamic & Higher than standard convolution \\ \hline Depth-wise convolution Ma et al. [2018], Sandler et al. [2018], Tan and Le [2019], Chollet [2017] & RGB & Spatial specific and channel agnostic & Feature tensor & Dynamic & Essentially less than standard convolution \\ \hline \end{tabular}
\end{table}
Table 1: A brief summary of existing alternatives to standard convolution.
For example, Wang and Neumann (2018) introduced two modules, referred to as depth-aware convolution and depth-aware average pooling, where the output is more strongly influenced by pixels with similar depth values. Chen et al. (2021) introduced a different convolution module, called s-conv, that improves segmentation performance by applying dimensional information to its filter weights and generating location adaptive filters. ShapeConv Cao et al. (2021) is another recent work that uses the depth map to extract information about the content of a patch besides its whereabouts to improve the accuracy of semantic segmentation. For 3D object detection from images and depth maps, the authors of Ding et al. (2020) introduced a depth guided filtering scheme where the convolution filters and dilations are varied according to specific pixels and channels of different images. Another line of research deals with depth-wise convolution Ma et al. (2018); Sandler et al. (2018); Tan and Le (2019); Chollet (2017), which aims to improve the efficiency of neural networks, but this area of research should not be confused with convolutions that are manipulated by depth input.
Nevertheless, each of these alternatives to the standard convolution has its own set of limitations. For example, DCNv2 Zhu et al. (2019) is much slower and has more parameters compared to a standard convolution kernel, while CondConv Yang et al. (2019) and dynamic convolution Chen et al. (2020) are less effective at lower layers of a model compared to higher layers. Moreover, the depth based convolution operations were only designed for tasks like semantic segmentation Wang and Neumann (2018); Chen et al. (2021); Cao et al. (2021) or monocular 3D object detection Ding et al. (2020). On a separate note, a recently introduced concept called involution Li et al. (2021) reversed the fundamental concept of standard convolution to overcome problems like inter-channel redundancy and the inability to learn long distance visual interactions. This approach shows great promise, as it is dynamic and requires significantly fewer parameters than other variants of standard convolution. Therefore, in this research, we chose to modify involution to dynamically deal with raw depth input. Table 1 summarizes standard convolution alternatives.
### Hyper-networks
Increasing the filter size of convolutional layers is proven to be useful for better capturing the long range information of neural networks Krizhevsky et al. (2017); Ronneberger et al. (2015). In other words, larger kernels help to increase the expressiveness of convolution. However, the problem is that the trainable parameters of convolution layers increase significantly with filter size, and hence increase the computational cost. To this end, Ha et al. (2016) introduced a useful concept called hyper-networks that can improve the expressiveness of a neural network model without increasing the parameter count. The key idea here is to use a secondary neural network to generate the weights for the main network. Using this concept, Ha et al. achieved decent classification performance while reducing the number of parameters. Two other studies, from Wang et al. (2021) and Hoopes et al. (2021), have shown the efficacy of hyper-networks for training deep neural networks that can adapt to the extent of regularization. Most recently, Ma et al. (2022) introduced a hyper-convolution that uses a hyper-network to generate filter weights, which helps them to increase the filter size without affecting the parameters of the convolution. This hyper-convolution helped them to create a parameter efficient model for the task of biomedical image segmentation. However, the parameters of their hyper-network still depend on the number of input channels, output channels, and the number of nodes in the final layer of the hyper-network. Similarly, Nirkin et al. (2021) developed a patch-wise hyper-network, called
Figure 2: The proposed two-stream, single-stage detection architecture for real-time applications.
HyperSeg, which generates the weights of each block in the decoder immediately before they are consumed to solve a segmentation task.
## 3 The Model
In this section, we first introduce the main RGB-D detection architecture. Then we discuss the two main modules namely the depth aware hyper-involution and fusion designed specifically for the RGB-D detection. Finally, the synthetic RGB-D data generation pipeline is described in detail.
### The Two Streams Architecture
As discussed in Section 2, most of the existing state-of-the-art RGB-D object detection models rely on two-stage detection architectures, which negatively impacts their real-time speed. Therefore, we design a single stage detector architecture which, unlike a two-stage detector, does not require a separate sparse prediction stage and predicts bounding boxes in a single pass through the neural network. As demonstrated in Figure 2, the model first takes a color image and its corresponding depth map as inputs to two different streams of the network. One stream of the network, containing the depth aware hyper-involution (described in Section 3.2) followed by a pooling layer, is responsible for extracting the color image features with parallel attention to the objects' depth. The second stream of the network processes complementary semantic features from the corresponding depth map using a hyper-involution (which has the same filter generator as the depth aware hyper-involution, described in Section 3.2.5, but excludes the depth aware filter) followed by a pooling layer to make the shapes compatible prior to the information fusion. The information extracted from the two streams of the network is then combined using the fusion stage described in Section 3.3. After the fusion, the model consists of a backbone network with 13 convolutional layers, shown in Figure 2, which is inspired by the success of Simonyan and Zisserman (2014). An interesting feature of this backbone structure is that, instead of having a large number of hyper-parameters, it uses convolution layers with 3x3 filters of stride 1 and the same padding throughout, and max-pooling layers with 2x2 filters of stride 2. This backbone structure plays a crucial role in significantly reducing the overall computational complexity of the detection model. The final stage of the detection model comprises a detection head that provides the final classification and localization predictions via a non-max suppression layer. We use the loss function suggested by Redmon and Farhadi (2017) because of its compatibility with this model output and its success in state-of-the-art single stage detectors Huang et al. (2018); Jo et al. (2017).
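For concreteness, the sketch below lays out this two-stream, single-stage structure in Keras. It is only an illustration of the data flow: the depth aware hyper-involution, the hyper-involution, and the fusion stage are stood in by placeholder layers (their actual designs are given in Sections 3.2 and 3.3), and the input resolution, channel widths, number of anchors, and class count are assumptions rather than the exact configuration used in our experiments.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def vgg_style_backbone(x, widths=(64, 128, 256, 512, 512)):
    # 13 convolution layers of 3x3 filters, stride 1, 'same' padding,
    # interleaved with 2x2 max-pooling of stride 2 (VGG-16 style).
    for i, w in enumerate(widths):
        reps = 2 if i < 2 else 3
        for _ in range(reps):
            x = layers.Conv2D(w, 3, strides=1, padding="same", activation="relu")(x)
        x = layers.MaxPool2D(pool_size=2, strides=2)(x)
    return x

def build_two_stream_detector(input_size=448, num_classes=19, num_anchors=5):
    rgb_in = layers.Input((input_size, input_size, 3), name="rgb")
    depth_in = layers.Input((input_size, input_size, 1), name="depth")

    # Stream 1: colour features (placeholder for the depth aware hyper-involution).
    rgb_feat = layers.Conv2D(32, 7, padding="same", activation="relu")(rgb_in)
    rgb_feat = layers.MaxPool2D(2)(rgb_feat)

    # Stream 2: complementary semantic features from the raw depth map
    # (placeholder for the hyper-involution without the depth aware filter).
    d_feat = layers.Conv2D(32, 7, padding="same", activation="relu")(depth_in)
    d_feat = layers.MaxPool2D(2)(d_feat)

    # Fusion stage (placeholder: element-wise addition; see Section 3.3).
    fused = layers.Add()([rgb_feat, d_feat])

    # Backbone and single-stage detection head.
    feat = vgg_style_backbone(fused)
    out = layers.Conv2D(num_anchors * (5 + num_classes), 1, padding="same")(feat)
    return Model([rgb_in, depth_in], out, name="rgbd_detector")

model = build_two_stream_detector()
model.summary()
```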
### Depth Aware Hyper-involution
Depth aware hyper-involution, as shown in Figure 4, is a module that we design as an alternative to the standard convolution to ensure that spatial and depth information is accounted for while processing the color image features. To understand this module, we first need to understand the basic operation of, and the differences between, convolution and involution.
#### 3.2.1 Standard Convolution
A standard convolution LeCun et al. (1998) is the weighted sum of local regions as a fixed-size filter moves in a sliding window fashion over an image. To elaborate further, imagine an image tensor \(I\) of height \(H\), width \(W\) and channels \(C_{i}\). Each pixel inside the tensor can be denoted as \(I_{i,j}\in\mathbb{R}^{C_{i}}\), representing different image features. Let us denote a set of convolution kernels of size \(F\times F\) as \(\mathcal{K}\in\mathbb{R}^{C_{n}\times C_{i}\times F\times F}\), where \(C_{n}\) represents the number of kernels. When the set of convolution kernels undergoes element wise multiplication and addition while sliding over the image tensor, the final output feature tensor can be defined using Equation 1.
\[O_{i,j,k}=\sum_{c=1}^{C_{i}}\sum_{m=\lfloor\frac{F}{2}\rfloor}^{\lfloor\frac{ F}{2}\rfloor}\sum_{n=\lfloor\frac{F}{2}\rfloor}^{\lfloor\frac{F}{2}\rfloor} \mathcal{K}_{k,c,m+\lfloor\frac{F}{2}\rfloor,n+\lfloor\frac{F}{2}\rfloor}I_{ i+m,j+n,c} \tag{1}\]
In Equation 1, \(k\in[1,C_{n}]\), and \(m\) and \(n\) index the offset positions in the kernel. One can notice that the problem with the convolution operation is that it applies a fixed convolution filter at every spatial position in the image, also referred to as its spatial agnostic property, which means that it does not account for the differences between spatial positions in the image. Moreover, it applies separate filters to separate channels of the input image, referred to as its channel specific property, which is considered a redundant operation that adds to the computational cost.
#### 3.2.2 Involution
To address the above issues of standard convolution, involution operation Li et al. (2021) has been put forward. The main difference between the involution and the convolution is the spatial specific and channel agnostic features. Involution
Figure 4: The working mechanism of Depth Aware Hyper-involution. The depth similarity is calculated from the depth map to produce a depth aware filter. Meanwhile, the filter generating hyper-network generate learned filter weights efficiently for each spatial region of the color image. These filters then undergo multiply and add operation with the input to generate the value of the output pixel.
Figure 3: The working mechanism of Involution. The involution kernel \(\mathcal{H}^{i,j}\) (where G=1 for simplicity) is obtained by applying the function \(\phi\) on a single pixel located at \((i,j)\) and then rearranging the channels to form a spatial neighborhood. The element wise multiplication and addition operation in involution is split into two steps as shown by the \(\otimes\) and \(\oplus\), respectively, Image courtesy: Li et al. (2021).
basically incorporates a generalized version of self-attention mechanisms that enable them to focus on specific regions of the input image and capture long-range dependencies. This enhances the module's ability to model complex spatial relationships in the data, making it a potentially more effective approach for image processing tasks. Additionally, the channel agnostic aspect helps to efficiently reduce parameters while still maintaining its ability to capture complex visual pattern in the data. Precisely, an involution kernel of size \(F\times F\) can be denoted as \(\mathcal{H}\in\mathbb{R}^{H\times W\times F\times F\times G}\) where \(G\) indicates the group of channels (\(C\)) in the input tensor that shares the same involution kernel. When such involution kernels undergo element wise multiplication and addition on the image tensor, the final output feature tensor can be defined as in Equation 2,
\[O_{i,j,k}=\sum_{m=\lfloor\frac{F}{2}\rfloor}^{\lfloor\frac{F}{2}\rfloor}\sum_{n =\lfloor\frac{F}{2}\rfloor}^{\lfloor\frac{F}{2}\rfloor}\mathcal{H}_{m+\lfloor \frac{F}{2}\rfloor,n+\lfloor\frac{F}{2}\rfloor,\lceil\frac{kQ}{Q}\rceil}^{i,j} I_{i+m,j+n,k} \tag{2}\]
In Equation 2, \(\mathcal{H}^{i,j}\) represents the involution kernel, which is dynamically generated from pixel position \(I_{i,j}\) in the input tensor. Therefore, unlike the fixed filter of the convolution operation, the involution filter is dynamically generated for each spatial position of the input image, as shown in Figure 3. This characteristic helps the involution operation to give distinct focus to each spatial position in the image. Moreover, involution applies the same filter to a group of channels in the input image, thereby using far fewer parameters compared to a standard convolution and hence reducing the memory consumption.
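The following minimal TensorFlow layer sketches the involution mechanics of Equation 2 for the simplest setting (a single kernel group, \(G=1\), and stride 1); the kernel-generating function is realised here as a small 1\(\times\)1 convolution bottleneck, and the reduction ratio is an illustrative assumption rather than a prescribed value.

```python
import tensorflow as tf
from tensorflow.keras import layers

class SimpleInvolution(layers.Layer):
    """Minimal involution (Equation 2) with a single kernel group (G=1) and stride 1."""
    def __init__(self, channels, kernel_size=3, reduction=4):
        super().__init__()
        self.k, self.c = kernel_size, channels
        # Bottleneck that maps every pixel to its own K*K kernel values.
        self.reduce = layers.Conv2D(max(channels // reduction, 1), 1, activation="relu")
        self.span = layers.Conv2D(kernel_size * kernel_size, 1)

    def call(self, x):
        shp = tf.shape(x)
        # 1. Generate one K*K kernel per spatial position from that position's pixel.
        kernels = self.span(self.reduce(x))            # (B, H, W, K*K)
        kernels = tf.expand_dims(kernels, axis=-1)     # (B, H, W, K*K, 1)
        # 2. Unfold the K*K neighbourhood around every pixel.
        patches = tf.image.extract_patches(
            x, sizes=[1, self.k, self.k, 1], strides=[1, 1, 1, 1],
            rates=[1, 1, 1, 1], padding="SAME")         # (B, H, W, K*K*C)
        patches = tf.reshape(
            patches, [shp[0], shp[1], shp[2], self.k * self.k, self.c])
        # 3. Multiply-add: the same kernel is shared across all channels.
        return tf.reduce_sum(kernels * patches, axis=3)  # (B, H, W, C)

out = SimpleInvolution(channels=16)(tf.random.normal((2, 32, 32, 16)))
print(out.shape)  # (2, 32, 32, 16)
```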
#### 3.2.3 Depth Aware Involution
Nevertheless, involution was designed specifically for feature extraction from color images. It remains unaware of the depth of each pixel, or of spatial geometry, while extracting features from the color image. For example, the RGB image in Figure 5 highlights three pixels, L, M and N, which have the same pixel color because the chair and the table have the same dark color. However, upon examining the depth map shown in Figure 5, it becomes clear that the depth of pixel L differs from that of pixels M and N. This is because the depth of pixel L is influenced by the chair, which is closer to the sensor than the part of the desk that pixels M and N correspond to.
To alleviate the effects of such unaccounted depth disparities in the detection accuracy, we redesign the involution operation to consider the spatial and geometric patterns from the depth map. Given an input image tensor \(I\) and depth map \(D\), the output of our depth aware hyper-involution operation is formulated as follows (Equation 3).
\[O_{i,j,k}=\sum_{m=\lfloor\frac{F}{2}\rfloor}^{\lfloor\frac{F}{2} \rfloor}\sum_{n=\lfloor\frac{F}{2}\rfloor}^{\lfloor\frac{F}{2}\rfloor} \mathcal{P}_{m+\lfloor\frac{F}{2}\rfloor,n+\lfloor\frac{F}{2}\rfloor,\lceil \frac{kQ}{Q}\rceil}^{i,j}\\ \mathbf{W}_{m+\lfloor\frac{F}{2}\rfloor,n+\lfloor\frac{F}{2} \rfloor}^{i,j}I_{i+m,j+n,k} \tag{3}\]
Figure 5: Difference between pixels of RGB image and its corresponding depth map.
where \(\mathcal{P}^{i,j}\) represents the kernel that is dynamically generated via a new parameter-efficient filter generation hyper-network (described in Section 3.2.5), which is conditioned on the pixel \(I_{i,j}\). \(\mathbf{W}_{m+\lfloor\frac{F}{2}\rfloor,n+\lfloor\frac{F}{2}\rfloor}^{i,j}\) is a weighting function that captures the depth similarity between two pixels \(D_{i,j}\) and \(D_{i+m,j+n}\), as in Equation 4.
\[\mathbf{W}_{p,q}^{i,j}=\frac{1}{\sqrt{1+(\gamma\cdot(d(D_{i,j})-d(D_{p,q})))^{2 }}} \tag{4}\]
In Equation 4, \(d(D_{i,j})\) and \(d(D_{p,q})\) denote the corresponding depth values at positions \(D_{i,j}\) and \(D_{i+m,j+n}\), respectively. The choice of Equation 4 is based on the idea that the depth differences between various spatial locations and objects in the real scene should be addressed by using depth pixels from the depth map instead of solely relying on color, which can often mislead, as in Figure 5. Additionally, the decay rate of the function is controlled by the parameter \(\gamma\). Section 3.2.4 discusses a performance comparison that investigates the impact of different depth weighting function options for the depth aware hyper-involution. The value of \(\gamma\) is a constant which can be tuned until the detection model reaches the desired accuracy. In our case, the optimal value of \(\gamma\) was 9.5 after testing the range 0.5 to 10 with an interval of 0.5. More importantly, the calculation in Equation 4 does not add any extra parameters to Equation 3. Furthermore, it is important to note that almost all the RGB-D datasets used in this research rely on existing algorithms to deal with missing depth pixel values. For example, NYU Depth v2 uses an in-painting algorithm Levin et al. (2004) while SUN RGB-D uses a different depth map improvement algorithm to estimate missing depth values Song et al. (2015). Therefore, this equation is not affected by missing depth pixels. Note that the hyper-involution shown in Figure 2 in our main object detection architecture uses the same filter generation technique as the depth aware hyper-involution but does not have the depth aware part \(\mathbf{W}_{m+\lfloor\frac{F}{2}\rfloor,n+\lfloor\frac{F}{2}\rfloor}^{i,j}\), since it is used to extract complementary semantic features from the depth map.
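As an illustration, the helper below computes the depth similarity weights of Equation 4 for every \(F\times F\) neighbourhood of a raw depth map; these are the \(\mathbf{W}^{i,j}\) terms that scale the generated kernel values in Equation 3. The zero padding at the image border is a simplification of this sketch, not a statement about the actual implementation.

```python
import tensorflow as tf

def depth_similarity_weights(depth, kernel_size=3, gamma=9.5):
    """Equation 4: w = 1 / sqrt(1 + (gamma * (d_center - d_neighbour))^2).

    depth: raw depth map of shape (B, H, W, 1).
    Returns a tensor of shape (B, H, W, K*K) with one weight per neighbour."""
    k = kernel_size
    neighbours = tf.image.extract_patches(
        depth, sizes=[1, k, k, 1], strides=[1, 1, 1, 1],
        rates=[1, 1, 1, 1], padding="SAME")     # (B, H, W, K*K)
    diff = gamma * (depth - neighbours)          # centre depth broadcasts over neighbours
    return 1.0 / tf.sqrt(1.0 + tf.square(diff))

depth_map = tf.random.uniform((1, 5, 5, 1), minval=0.5, maxval=4.0)
weights = depth_similarity_weights(depth_map)
print(weights.shape)  # (1, 5, 5, 9); the centre weight of each window is always 1
```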
#### 3.2.4 Depth Weighting Functions
We considered a radial basis function (RBF) as our depth weighting function. An RBF is a function that calculates a real number output solely based on the distance between the input and a constant reference point. This reference point can be either the origin or a specific center point. To quantitatively verify the usefulness of the proposed RBF depth weighting function in Equation 4, we compare its performance with three other RBF kernels. First, we evaluate a Gaussian function, shown in Equation 5, where the value decreases as the difference between two depth values increases and vice versa.
\[\mathbf{W}_{p,q}^{i,j}=e^{-(\gamma|d(D_{i,j})-d(D_{p,q})|)^{2}} \tag{5}\]
Figure 6: Detection accuracy comparison using different depth similarity weighting functions on three different datasets.
In Equation 5, \(d(D_{i,j})\) and \(d(D_{p,q})\) denote the corresponding depth values at \(D_{i,j}\) and \(D_{i+m,j+n}\), respectively. The exponential is used in Equation 5 because it allows the function to decay rapidly as the difference between two depth values increases. To put it simply, when there is a greater difference in depth, the function returns a smaller value. Additionally, the decay rate of the exponential function is controlled by the parameter \(\gamma\). Next, we tried a triangular function (Equation 6).
\[\mathbf{W}_{p,q}^{i,j}=\max(1-|d(D_{i,j})-d(D_{p,q})|,0) \tag{6}\]
For Equation 6, the depth similarity value will always remain in the range \([0,1]\). Then we test our model with Equation 7, first introduced in 2, which is also referred to as the Wendland \(c^{2}\) function.
\[\mathbf{W}_{p,q}^{i,j}=\left(1-\left(d(D_{i,j})-d(D_{p,q})\right)\right)^{4}\left(4\cdot\left(d(D_{i,j})-d(D_{p,q})\right)+1\right) \tag{7}\]
The graph plots in Figure 7 demonstrate how the weighting produced by these kernels varies with depth similarity. Equation 4 yields the best detection performance when the options are compared on various datasets, as illustrated in Figure 6.
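For completeness, the four candidate weighting functions compared above can be written as simple NumPy helpers. The Wendland \(c^{2}\) variant below follows the standard form of that kernel, and, together with the triangular function, it assumes the depth difference has been normalised so that its magnitude stays within \([0,1]\); these assumptions are ours.

```python
import numpy as np

def inverse_multiquadric(d1, d2, gamma=9.5):  # Equation 4 (used in the model)
    return 1.0 / np.sqrt(1.0 + (gamma * (d1 - d2)) ** 2)

def gaussian(d1, d2, gamma=9.5):              # Equation 5
    return np.exp(-((gamma * np.abs(d1 - d2)) ** 2))

def triangular(d1, d2):                       # Equation 6
    return np.maximum(1.0 - np.abs(d1 - d2), 0.0)

def wendland_c2(d1, d2):                      # Equation 7 (standard Wendland c2 form)
    r = d1 - d2
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

# Compare the weights assigned to a 0.2 depth difference by each kernel.
for fn in (inverse_multiquadric, gaussian, triangular, wendland_c2):
    print(fn.__name__, fn(np.float64(1.2), np.float64(1.0)))
```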
#### 3.2.5 Filter Generation Hyper-network
We utilize a new function to map each 2D input kernel coordinate to its kernel value, as demonstrated in Figure 8. The function is essentially a parameter efficient hyper-network. The depth aware hyper-involution kernel weights are thus generated by a neural network (the hyper-network) instead of being learned independently. The trained kernel weights at a specific spatial location, \(\theta_{ij}\), can be represented using the following function (Equation 8).
\[\theta_{i,j}=N_{2}\cdot\lambda(N_{1}\cdot X_{i,j}) \tag{8}\]
Figure 8: The filter generation hyper-network. This network samples each pixels in the RGB to learn the filter weights individually for each spatial region of the image.
Figure 7: Graph plots of various RBF functions discussed.
In Equation 8, \(N_{1}\) and \(N_{2}\) represent two linear transformations that collectively constitute a hyper-network. \(N_{1}\) is implemented via 3 layers of 1\(\times\)1 convolution, where the first two layers contain 8 filters with non-linear activation functions and the last layer consists of 6 filters. Meanwhile, \(N_{2}\) is implemented using a single filter of 1\(\times\)1 convolution followed by a broadcasting of the output based on the size of the kernel. \(\lambda\) denotes the batch normalization and non-linear activation functions that interleave the two linear projections. The main advantage of using this hyper-network in our depth aware hyper-involution is that the number of trainable parameters remains independent of the choice of kernel size, which is not possible in involution Li et al. (2021) and standard convolution LeCun et al. (1998). Thus, the expressiveness of our depth aware hyper-involution can be increased with a larger kernel size while keeping the number of trainable parameters constant. Note that the hyper-network used in Ma et al. (2022) is also independent of kernel size, but it still depends on the number of input channels, output channels, and the number of nodes in the final layer of their hyper-network. In contrast, our hyper-network does not rely on the number of channels or the number of nodes, as these values remain constant. It is also worth mentioning that the name hyper-involution is motivated by the use of such an efficient hyper-network Ha et al. (2016).
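A Keras sketch of this filter-generating hyper-network is given below. The layer widths follow the description above (two 1\(\times\)1 convolutions with 8 filters, one with 6 filters, and a final single-filter 1\(\times\)1 convolution), while the choice of ReLU activations and the assumption that the broadcast simply tiles the per-position output of \(N_{2}\) over the \(K\times K\) kernel entries are ours; the actual implementation may realise the broadcasting differently.

```python
import tensorflow as tf
from tensorflow.keras import layers

class FilterGeneratorHyperNet(layers.Layer):
    """Sketch of the filter generation hyper-network in Equation 8.

    N1: three 1x1 convolutions (8, 8 and 6 filters); lambda: batch normalisation
    plus a non-linearity; N2: a single 1x1 convolution whose output is broadcast
    over the K*K kernel positions. The parameter count does not depend on K."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        self.n1 = tf.keras.Sequential([
            layers.Conv2D(8, 1, activation="relu"),
            layers.Conv2D(8, 1, activation="relu"),
            layers.Conv2D(6, 1),
        ])
        self.norm_act = tf.keras.Sequential([layers.BatchNormalization(), layers.ReLU()])
        self.n2 = layers.Conv2D(1, 1)

    def call(self, x, training=False):
        theta = self.n2(self.norm_act(self.n1(x), training=training))  # (B, H, W, 1)
        return tf.tile(theta, [1, 1, 1, self.k * self.k])              # (B, H, W, K*K)

gen = FilterGeneratorHyperNet(kernel_size=7)
kernels = gen(tf.random.normal((1, 64, 64, 3)))
print(kernels.shape)       # (1, 64, 64, 49): one 7x7 kernel per spatial position
print(gen.count_params())  # identical for any choice of kernel_size
```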
#### 3.2.6 Visual Analysis
To visually analyze the trained depth aware hyper-involution kernel, we pick the sum of the F \(\times\) F values from each kernel (here F represents the height and width of the kernel) as its representative value and compare it with similarly trained convolution and involution kernels. The representatives at the various geometric positions form the corresponding heat map. A number of these heatmaps are shown in Figure 9, where the columns following the input images represent the mappings of the learned kernels of convolution, involution and our depth aware hyper-involution, respectively. From Figure 9, it is visible that depth aware hyper-involution is better at capturing various important semantic features of the input images by using the extra information from the depth map. To be more specific, if one looks at the heatmap in the first row, last column of Figure 9, the bookshelf at the back is properly mapped by the depth aware hyper-involution, capturing all its sharp edges, while the right corner of the bookshelf is obscured in the respective mappings of convolution and involution due to darkness. This clearly bolsters the idea that our depth aware hyper-involution can highlight sharp
Figure 9: The heat maps in each row interpret the generated filters for an image instance from the NYU Depth v2 dataset. The columns after the input images illustrate the kernels of convolution, involution and depth aware hyper-involution respectively.
edges regardless of darkness by utilizing depth information. Another observation concerns the second row, last column of Figure 9, where the depth aware hyper-involution clearly maps and differentiates the darker regions of the input by highlighting them with yellow, when compared with involution and convolution. In this image, one can also notice that the depth aware hyper-involution gives similar color coding in the heat map to the pairs of chairs that are at the same depth from the camera viewpoint, which is made possible by the extra information from depth. Moreover, depth aware hyper-involution appears superior at capturing the outer surface detail of objects compared to involution and convolution, as shown in the image in the last row and fourth column of Figure 9, where the flower texture on the bed is better mapped by the depth aware hyper-involution than by the other two filters. Depth aware hyper-involution is also superior at preserving the texture information in different spatial regions of the original image, which can be deduced from the image in the third row, last column of Figure 9, where the texture details of the floor are mapped in greater detail by the depth aware hyper-involution kernel, an advantage of its spatially specific nature.
### Fusion Stage
The fusion stage combines the features extracted from the color image with the extracted depth features. This stage is important considering that we use two separate streams of neural network structures to process the inputs, where one stream extracts complementary semantic information from the depth map and the other extracts features from the color image. Hence, this module must ensure that the two different streams of information combine without losing any information. As discussed in Section 2.1.2, previous state-of-the-art research has limitations in its fusion, as the flow of information between RGB and depth features is blocked. This is because the information is only combined at a specific stage in the model, which hinders the backbone network from learning modality-specific representations. Moreover, some of the works use a simple concatenation operation to combine the RGB and depth feature maps, with no trainable parameters. Therefore, these networks cannot learn to adapt while combining modality specific information. To this end, we propose a unique fusion strategy that can train in parallel with the network and minimizes information loss while combining the two streams of information. In our fusion module, demonstrated in Figure 10, we first try to address the modality specific difference between depth and RGB information by using a residual mapping. The residual mapping allows the network to learn a transformation of the depth feature map into a compatible version that can undergo element-wise addition with the RGB feature map. The module then performs element-wise addition to combine the residual mapping of the depth features and the RGB feature tensors. However, simply combining the two tensors with element-wise addition will not make this dynamically trainable with the model because of the lack of trainable weights. Moreover, simple element-wise addition of tensors may also produce a coarse representation of the combined feature map. Therefore, we follow an encoder-decoder structure after the element-wise addition stage, inspired by the success of this kind of network for semantic segmentation tasks Siddique et al. (2021); Zhou et al. (2018). The encoder part takes the added feature tensor and encodes rich feature information via an up-sampling layer followed by a down-sampling convolution. Meanwhile, the decoder is responsible for generating a more representative feature map for the later part of the detector. The decoder could use fully connected layers for this purpose, but that becomes computationally expensive. So we utilize a transposed convolution operation, which increases the spatial dimensions of the input tensor. The final element-wise addition copies the rich encoded information from the encoder and uses it as a part of the decoder. This enables the model to preserve information from a richer matrix and produce a fine grained feature map. Furthermore, as there are several trainable weights in the convolution and transposed convolution of the encoder and decoder blocks, the fusion stage is trained jointly while training the detector.
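The sketch below wires up the fusion stage of Figure 10 in Keras: a residual mapping first aligns the depth features with the RGB features, the two tensors are added element-wise, and an encoder (up-sampling followed by a strided convolution) and decoder (transposed convolution) with a final additive skip connection refine the combined map. The channel width and kernel sizes are illustrative assumptions, not the exact configuration of our model.

```python
import tensorflow as tf
from tensorflow.keras import layers

def fusion_stage(rgb_feat, depth_feat, channels=32):
    """Fusion of RGB and depth feature maps, both of shape (B, H, W, channels)."""
    # Residual mapping: learn a transformation of the depth features that is
    # compatible with element-wise addition to the RGB features.
    d = layers.Conv2D(channels, 3, padding="same", activation="relu")(depth_feat)
    d = layers.Add()([d, depth_feat])
    combined = layers.Add()([rgb_feat, d])

    # Encoder: up-sample, then encode with a down-sampling (strided) convolution.
    encoded = layers.UpSampling2D(size=2)(combined)
    encoded = layers.Conv2D(channels, 3, strides=2, padding="same",
                            activation="relu")(encoded)

    # Decoder: transposed convolution, then add back the encoded features (skip).
    decoded = layers.Conv2DTranspose(channels, 3, strides=1, padding="same",
                                     activation="relu")(encoded)
    return layers.Add()([decoded, encoded])

rgb_feat = tf.random.normal((1, 56, 56, 32))
depth_feat = tf.random.normal((1, 56, 56, 32))
print(fusion_stage(rgb_feat, depth_feat).shape)  # (1, 56, 56, 32)
```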
To understand the effect of the fusion stage against normal concatenation, we visualize their respective output feature maps. Some of these results are demonstrated in Figure 11, where the rows following the original images represent the output feature maps for an image instance after using normal concatenation and the fusion stage, respectively. The output feature maps of the fusion stage qualitatively verify that the fusion mechanism is much superior in combining the different modalities of information and learns to preserve greater detail from the original image and its depth. To be precise, if one compares the feature maps in the second and third rows of the first column in Figure 11, the wall with the white board is clearly visible in the fusion feature map whereas it is completely obscured in the concatenation output. Similarly, a comparison of the second and third row images of the second column in Figure 11 shows how the fusion feature map captures the checkerboard texture of the wall behind the red curtains in the original image, while this detail is missed in the concatenation output. Therefore, this visually supports the idea of using the encoder to encode rich semantic features while the decoder up-samples the combined feature map. Another important distinction can be observed if one compares the second and third row images of the third column in Figure 11, where the outer boundaries of the chair and desk are clearly visible in the fusion output, unlike in the concatenation output. Likewise, the images in rows two and three of the last column in Figure 11 show how the fusion output preserves the outer boundaries of the two monitors in the input image, while the monitors look like a single monitor in the concatenation feature map. This comparison indicates that the fusion stage is better at learning while combining different modalities of feature map input.
Figure 10: The working mechanism of the fusion stage module.
### The Loss Function
Considering our single stage detector we select the loss function used in Redmon and Farhadi (2017) for training. This loss function mainly accounts for three different losses, namely the localization loss, classification loss, and confidence loss. The classification loss is computed using Equation 9
\[Loss_{class}=\sum_{i=0}^{S^{2}}I_{i}^{obj}\sum_{cl\in classes}(cl_{p}-cl_{g})^{2} \tag{9}\]
Equation 9 utilizes a binary value \(I_{i}\) to indicate if an object is present in grid cell \(i\). The total number of grid cells present in the output tensor is denoted by \(S^{2}\). Here, \(cl_{p}\) and \(cl_{g}\) represent the predicted class and ground truth class, respectively. Equation 10 is used to calculate the localization loss by using the center coordinates (x and y) and dimensions (w and h) of both the predicted and ground truth bounding boxes, with a parameter \(\lambda_{coord}\) set to 5 to apply a higher penalty to localization errors.
\[Loss_{local}=\lambda_{coord}\sum_{i=0}^{S^{2}}\sum_{j=0}^{A}I_{ i,j}^{obj}[(x_{i,p}-x_{i,g})^{2}+(y_{i,p}-y_{i,g})^{2}]\\ +\lambda{coord}\sum_{i=0}^{S^{2}}\sum_{j=0}^{A}I_{i,j}^{obj}[( \sqrt{w_{i,p}}-\sqrt{w_{i,g}})^{2}+(\sqrt{h_{i,p}}-\sqrt{h_{i,g}})^{2}] \tag{10}\]
In Equation 10, the square roots of the bounding box height and width are taken considering the fact that minor differences in the dimensions of larger boxes are less significant than in smaller boxes. Moreover, \(A\) stands for the total number of anchor boxes used, which are selected using K-means clustering. The class confidence loss is determined by Equation 11, where the confidence values of the prediction \(C_{i,p}\) and ground truth \(C_{i,g}\) are compared, including a parameter \(\lambda_{No-obj}\) set to 0.5 to minimize the impact of the confidence loss for cells with no objects present.
\[Loss_{conf}=\sum_{i=0}^{S^{2}}\sum_{j=0}^{A}I_{ij}^{obj}(C_{i,p}-C_{i,g})^{2}+ \lambda_{No-obj}\sum_{i=0}^{S^{2}}\sum_{j=0}^{A}I_{ij}^{noobj}(C_{i,p}-C_{i,g })^{2} \tag{11}\]
Overall, the loss function is expressed as Equation 12, which incorporates the three losses with the respective weightings.
\[Loss=Loss_{class}+Loss_{local}+Loss_{conf} \tag{12}\]
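A compact TensorFlow rendering of Equations 9-12 is sketched below. It assumes that predictions and targets are already decoded into a (batch, S, S, A, 5 + classes) layout of [x, y, w, h, confidence, class scores], with target confidences set to 1 for the anchors responsible for an object; anchor assignment and box decoding are omitted from this sketch.

```python
import tensorflow as tf

def detection_loss(pred, target, lambda_coord=5.0, lambda_noobj=0.5):
    """Sketch of Equations 9-12 for tensors of shape (B, S, S, A, 5 + classes)."""
    obj_mask = target[..., 4]          # 1 where an object is assigned to a cell/anchor
    noobj_mask = 1.0 - obj_mask

    # Localization loss (Eq. 10): squared error on centres and sqrt of sizes.
    xy_loss = tf.reduce_sum(obj_mask[..., None] *
                            tf.square(pred[..., 0:2] - target[..., 0:2]))
    wh_loss = tf.reduce_sum(obj_mask[..., None] *
                            tf.square(tf.sqrt(tf.maximum(pred[..., 2:4], 1e-6)) -
                                      tf.sqrt(tf.maximum(target[..., 2:4], 1e-6))))
    local = lambda_coord * (xy_loss + wh_loss)

    # Confidence loss (Eq. 11): down-weight cells that contain no object.
    conf_err = tf.square(pred[..., 4] - target[..., 4])
    conf = (tf.reduce_sum(obj_mask * conf_err) +
            lambda_noobj * tf.reduce_sum(noobj_mask * conf_err))

    # Classification loss (Eq. 9): squared error on class scores for object cells.
    cls = tf.reduce_sum(obj_mask[..., None] *
                        tf.square(pred[..., 5:] - target[..., 5:]))

    return local + conf + cls          # Eq. 12

pred = tf.random.uniform((2, 13, 13, 5, 24))    # S=13, A=5, 19 classes
target = tf.random.uniform((2, 13, 13, 5, 24))
print(detection_loss(pred, target).numpy())
```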
Figure 11: The figures in each row following the row of input images interpret the generated feature map output after the concatenation and fusion stage respectively. The image samples are taken from SUN RGB-D and NYU Depth v2.
### Automatic RGB-D Data Generation
Prior works on RGB-D object detection mostly relied on benchmark datasets like SUN RGB-D and NYU Depth V2 to evaluate their model performance. Despite the fact that these two benchmark datasets have real data from different types of depth sensors and challenging scenes to evaluate model detection capabilities, they are limited to indoor scenes with objects captured mostly in homes, universities, office spaces, and furniture stores. Therefore, the generalization capability of RGB-D detectors and their performance on other complex real-world scenarios with custom objects of interest are often unclear. Furthermore, detection of objects for which clients have no or few images of the objects of interest is common in industry settings. To this end, we designed a synthetic RGB-D data generation pipeline to further explore the ability of our model to detect custom objects in diverse environments. As demonstrated in Figure 12, our RGB-D data generation framework consists of three main components: firstly, a 3D-2D foreground projector for generating the perspective projections of 3D CAD (Computer Aided Design) models; then, a generative composition model to create realistic composite images of the projected foreground image with selected background images; and finally, a depth map generator that produces the depth maps corresponding to the composite images. To be precise, the 3D-2D foreground projector module takes a 3D CAD model as input and generates 2D foreground perspective viewpoint images of the model. It generates 2D images using three important viewpoint parameters, namely azimuth, elevation, and distance. Besides these viewpoint parameters, additional orientations of the 3D models over 6 degrees of freedom are also exploited while generating the silhouettes of the CAD models. Next, we apply the Spatial Transformer Generative Adversarial Network (ST-GAN) Lin et al. (2018) to combine our generated foreground image with the background image while maintaining geometric correctness and the scene semantics. Then we utilize a hybrid version of the dense vision transformer (DPT-hybrid) Ranftl et al. (2021) as our final component, i.e., the depth map generator. DPT-hybrid initially takes the composite RGB images and transforms them into tokens using the ResNet-50 He et al. (2016) feature extractor. This helps to produce aligned depth maps for each of the generated images. Interestingly, this pipeline was able to produce 16000 RGB-D samples within 3 minutes on an Nvidia Quadro RTX 6000 GPU, which suggests its high utility for industry use cases.
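The outline below summarises how the three components are chained for a single sample. The three callables passed in (for CAD projection, ST-GAN composition, and DPT-hybrid depth estimation) are hypothetical placeholders for the corresponding tools and are assumed to be supplied by the user; the viewpoint sampling ranges are likewise illustrative.

```python
import random

def generate_rgbd_sample(cad_model, background, project, compose, estimate_depth):
    """One pass through the pipeline of Figure 12.

    project(cad_model, azimuth, elevation, distance) -> (foreground image, bbox)
    compose(foreground, background) -> composite RGB image (e.g. via ST-GAN)
    estimate_depth(rgb) -> depth map aligned with the composite (e.g. DPT-hybrid)
    All three callables are placeholders to be provided by the user."""
    # 1. 3D-2D foreground projection with sampled viewpoint parameters.
    azimuth = random.uniform(0.0, 360.0)
    elevation = random.uniform(-30.0, 60.0)
    distance = random.uniform(1.0, 4.0)
    foreground, bbox = project(cad_model, azimuth, elevation, distance)
    # 2. Geometry-aware composition of the foreground onto the background.
    composite = compose(foreground, background)
    # 3. Monocular depth estimation on the composite image.
    depth = estimate_depth(composite)
    return composite, depth, bbox
```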
## 4 Experiments
In this section, we first provide an overview of the proposed datasets. Afterwards, we present the performance of the model on these datasets, followed by an analysis of the different components through an ablation study. We evaluate our RGB-D object detection model using the benchmark NYU Depth v2 Nathan Silberman and Fergus (2012) and SUN RGB-D Song et al. (2015) datasets. The official training/testing split guidelines are followed for both datasets. To further explore the capacity of our model, we also use the synthetic RGB-D data generated by the automated pipeline, containing around 16,000 RGB-D samples. As is customary in object detection research, mean average precision (mAP) and average precision (AP) are used as evaluation metrics, following the protocol proposed by PASCAL VOC Everingham et al. (2015).
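The AP metric itself follows the usual PASCAL VOC recipe. A minimal sketch of the all-point interpolated AP is shown below, assuming per-class detections have already been matched to ground truth at a fixed IoU threshold (0.5 in our evaluation); it is not the exact evaluation script.

```python
import numpy as np

def average_precision(recall, precision):
    """All-point interpolated AP, as in the PASCAL VOC development kit.
    recall and precision are cumulative arrays over detections sorted by score."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically decreasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the areas of the rectangles where recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# mAP is then the mean of the per-class APs:
# mAP = np.mean([average_precision(r_c, p_c) for r_c, p_c in per_class_pr])
```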
Figure 12: Automated RGB-D data generation pipeline.
One significant limitation of the benchmark RGB-D object detection datasets used in the literature is that they contain RGB-D data only from indoor environments. This leaves several questions unanswered, such as how RGB-D object detection performs under challenging outdoor lighting conditions and whether RGB-D data is useful only indoors. To address this concern, we propose a new, fully annotated RGB-D dataset, which we call the Outdoor RGB-D detection dataset. All RGB-D image pairs in this dataset focus exclusively on a variety of outdoor environments. The RGB images are sourced from three benchmark datasets: the Places dataset Zhou et al. (2017), Open Images Kuznetsova et al. (2020), Krasin et al. (2017), and the multi-class wildlife dataset Zhang et al. (2020). The corresponding depth maps were predicted using the dense vision transformer (DPT-hybrid) Ranftl et al. (2021). We select three object class labels for detection, Human, Animal, and Vehicle, which are the classes most commonly seen in outdoor environments. Despite having only three classes, the dataset is very challenging for a detection model because each class covers a wide variety of sub-types; for example, the Vehicle class has instances of buses, trucks, SUVs, and bikes, while the Animal class has images of kangaroos, ostriches, dogs, and so on. Moreover, the outdoor environments in the images also vary widely, ranging from dense forests to busy downtown areas, with different weather and lighting conditions. The dataset has a total of 1,819 RGB-D samples, split into 997 samples for training and the remaining 822 samples for testing. Another important feature of this dataset is that, unlike the frequently used benchmarks in the literature, it does not suffer from class imbalance.
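The depth-map generation step can be reproduced with the publicly released DPT-hybrid weights; the sketch below uses the Hugging Face port, where the checkpoint name and post-processing details are assumptions and the original DPT repository could be used instead.

```python
import torch
from PIL import Image
from transformers import DPTImageProcessor, DPTForDepthEstimation

# DPT-hybrid (MiDaS variant) released by Intel on the Hugging Face hub.
processor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas")

def predict_depth(rgb_path: str):
    """Returns a relative depth map aligned with the input RGB image."""
    image = Image.open(rgb_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        depth = model(**inputs).predicted_depth          # [1, H', W'] relative depth
    depth = torch.nn.functional.interpolate(
        depth.unsqueeze(1), size=image.size[::-1],       # resize back to (H, W)
        mode="bicubic", align_corners=False)
    return depth.squeeze().numpy()
```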
### Implementation Details
We implemented our RGB-D detection model using TensorFlow version 2.5. We trained the model on a remote server of ACENET Canada equipped with an NVIDIA Quadro RTX 6000 GPU with 24 gigabytes of memory. We also used MATLAB to decode the compressed NYU Depth v2 and SUN RGB-D datasets, and a Python script was written to organize the dataset folders according to our model's input requirements. As suggested by Xu et al. (2017), Li et al. (2018), we select 19 furniture classes for object detection in these two datasets: bathtub, bed, bookshelf, box, chair, counter, desk, door, dresser, garbage bin, lamp, monitor, nightstand, pillow, sink, sofa, table, television, and toilet. For the synthetic data, we used our RGB-D data generation pipeline, described in Section 3.4, to synthesize around 16,000 RGB-D samples, which were then used to train the detection model. We chose 7 small working-object classes to evaluate the model: clamp, pipe, brace, nut, screwdriver, door-stopper, and paintbrush.
We train the RGB-D object detection model with the Adam optimizer. Note that we do not use any pre-trained ImageNet weights and instead train the model from scratch. The input images are resized to 415 \(\times\) 415. We train on SUN RGB-D and NYU Depth v2 with a learning rate of 0.0005 for 150 and 130 epochs, respectively. Similarly, for the outdoor RGB-D dataset we use a learning rate of 0.0005 for 160 epochs, and for the synthetic data a learning rate of 0.00009 for 120 epochs. For non-maximum suppression we select an IoU threshold of 0.5 because it strikes a good balance between retaining important detections and removing duplicates.
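A condensed sketch of this configuration (optimizer and the non-maximum-suppression post-processing step) is given below; variable names and the maximum number of kept detections are illustrative.

```python
import tensorflow as tf

# Adam with the learning rate used for the benchmark datasets; the model is
# trained from scratch (no ImageNet weights).
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)

def filter_detections(boxes, scores, max_detections=100):
    """boxes: [N, 4] in (y1, x1, y2, x2); scores: [N]. Keeps the highest-scoring
    boxes and suppresses overlapping boxes above the 0.5 IoU threshold."""
    keep = tf.image.non_max_suppression(
        boxes, scores, max_output_size=max_detections, iou_threshold=0.5)
    return tf.gather(boxes, keep), tf.gather(scores, keep)
```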
### Results on SUN RGB-D and NYU Depth v2
We compare our detection model with recent state-of-the-art RGB-D object detection methods. For these methods, we adopt the results reported in their respective papers to ensure a fair comparison.
Our detection model achieves the best performance with an mAP of 55.4% on NYU Depth v2, surpassing all state-of-the-art RGB-D detectors by at least 1 percentage point, as shown in Table 2. Moreover, the proposed model significantly improves performance on several classes such as bed, monitor, desk, and toilet. The lower detection accuracy on a few objects was most likely caused by object occlusion and noisy depth maps, as our model relies heavily on depth information.
Table 3 reports the object detection accuracies of various models on SUN RGB-D. From Table 3, it is apparent that our model achieves the second best result on the SUN RGB-D dataset, reaching an mAP of 52.7%. Nevertheless, our model performs strongly on individual furniture classes such as bed, sofa, toilet, and monitor. The heterogeneity of objects within the box class, which includes items of varying sizes from small cereal boxes to large packages found in a mail room, presents a challenge for accurate detection and results in lower accuracy for this class. Furthermore, the desk class in the object detection benchmark Gupta et al. (2014) suffers from ambiguous data: some desks resemble tables and vice versa, creating difficulty in distinguishing between the two. In addition, because our model was designed to better utilize the boundary information of objects, the similar semantic patterns of desks and tables likely cause difficulties in properly detecting desks. Despite these difficulties, it is noteworthy that our model's accuracy for the desk class surpasses that of several other models in the literature. The instances of the lamp class in the dataset present a challenge for accurate classification due to the high intensity of
| Class | RGB-D RCNN (Gupta et al. 2014) | SuperTransfer (Gupta et al. 2016) | AC-CNN (Li et al. 2016) | CMAC (Li et al. 2018) | FETNet (Xiao et al. 2021) | Ours |
|---|---|---|---|---|---|---|
| bathtub | 22.90 | 50.60 | 52.20 | 55.60 | 56.40 | 53.30 |
| lamp | 29.30 | 42.50 | 42.90 | 45.00 | 50.80 | 49.50 |
| bed | 66.40 | 81.00 | 82.40 | 83.90 | 78.30 | 94.09 |
| monitor | 43.60 | 62.90 | 63.60 | 65.80 | 69.50 | 73.37 |
| bookshelf | 21.80 | 52.60 | 52.50 | 54.00 | 57.30 | 52.40 |
| nightstand | 39.50 | 54.70 | 55.20 | 57.60 | 59.00 | 59.60 |
| box | 3.00 | 5.40 | 8.60 | 9.80 | 8.00 | 17.50 |
| pillow | 37.40 | 49.10 | 49.70 | 52.70 | 60.80 | 56.45 |
| chair | 40.80 | 53.00 | 54.80 | 55.40 | 68.20 | 69.46 |
| sink | 24.20 | 50.00 | 51.40 | 53.80 | 60.30 | 52.40 |
| counter | 37.60 | 56.10 | 57.30 | 59.20 | 37.60 | 54.34 |
| sofa | 42.80 | 65.90 | 66.80 | 69.10 | 69.00 | 69.50 |
| desk | 10.20 | 21.00 | 22.70 | 24.10 | 32.50 | 38.73 |
| table | 24.30 | 31.90 | 33.50 | 35.00 | 36.00 | 36.90 |
| door | 20.50 | 34.60 | 34.10 | 36.30 | 44.20 | 41.20 |
| tv | 37.20 | 50.10 | 51.80 | 56.90 | 55.40 | 55.46 |
| dresser | 26.20 | 57.90 | 58.10 | 58.50 | 59.10 | 53.70 |
| toilet | 53.00 | 68.00 | 70.40 | 74.70 | 71.20 | 72.50 |
| garbage bin | 37.60 | 46.20 | 46.50 | 47.20 | 51.90 | 52.20 |
| **mAP** | 32.50 | 49.10 | 50.20 | 52.30 | 54.00 | **55.40** |

Table 2: Experimental results on NYU Depth v2. All per-class AP and mAP values are percentages.
Figure 13: A few detection results; the top five images show detections on SUN RGB-D and the bottom images show detections on NYU Depth v2.
light emission from the lamp, which obscures the visible shape in the RGB images. Although the shapes of the lamps are comparatively discernible in the depth maps, those maps are obtained from four distinct sensors in the SUN RGB-D dataset. This variety in depth information, along with the differences between the depth maps and the RGB images, can negatively impact the accuracy of lamp detection in our model because the depth aware hyper-involution relies on both RGB and depth data to learn its filter weights. Figure 13 visualizes some of the detections from these two datasets for qualitative evaluation. The performance on the benchmark NYU Depth v2 dataset indicates the efficacy of our detection architecture and its customized modules. Furthermore, the precision-recall curves displayed in Figure 14 demonstrate an appropriate equilibrium between precision and recall for several classes.
### Results on Synthetic Dataset
We select 7 small working-object classes from the synthesized data to evaluate our model: clamp, pipe, brace, nut, screwdriver, door-stopper, and paintbrush. Figure 15 shows some qualitative detection results (red boxes) on the synthetic data and also gives an overall idea of the quality of our synthesized data. The model achieved an overall mAP of 58.7 percent on this dataset, as shown in Table 4. It obtains a low mAP for very small objects like nuts, mostly because of noise in the predicted depth data. More importantly, the model achieves significantly higher mAP on several individual small-object classes such as doorstopper, brace, and clamp in the complex synthetic factory environment. These results suggest the strength of this model for object detection in complex environments such as the inside of a factory.
### Results on Outdoor RGB-D Dataset
Figure 16 shows some qualitative detection results on our outdoor RGB-D detection dataset. The figure shows that despite having only three classes, the dataset poses a great challenge for object detection due to the variety of objects within each class. For example, the first image of row one and the first two images of row two of Figure 16
| Class | RGB-D RCNN (Gupta et al. 2014) | SuperTransfer (Gupta et al. 2016) | AC-CNN (Li et al. 2016) | CMAC (Li et al. 2018) | FETNet (Xiao et al. 2021) | Ours |
|---|---|---|---|---|---|---|
| bathtub | 49.60 | 65.30 | 65.80 | 69.00 | 62.50 | 63.98 |
| lamp | 22.00 | 32.10 | 33.80 | 35.60 | 65.00 | 61.29 |
| bed | 76.00 | 83.00 | 83.30 | 86.10 | 80.90 | 81.42 |
| monitor | 10.80 | 36.80 | 39.50 | 40.50 | 43.10 | 50.46 |
| bookshelf | 35.00 | 54.40 | 56.20 | 57.90 | 47.90 | 53.45 |
| nightstand | 37.20 | 46.60 | 47.10 | 49.80 | 62.00 | 60.93 |
| box | 5.80 | 14.40 | 16.40 | 18.20 | 13.30 | 18.17 |
| pillow | 16.50 | 23.40 | 25.20 | 26.70 | 63.90 | 52.09 |
| chair | 41.20 | 46.90 | 47.50 | 50.30 | 69.30 | 63.10 |
| sink | 41.90 | 43.90 | 45.30 | 46.60 | 65.40 | 66.98 |
| counter | 8.10 | 14.60 | 16.00 | 17.40 | 49.20 | 17.80 |
| sofa | 42.20 | 61.30 | 61.90 | 67.20 | 56.30 | 57.90 |
| desk | 16.60 | 23.90 | 24.90 | 26.80 | 30.40 | 35.40 |
| table | 43.00 | 48.70 | 49.00 | 52.90 | 49.50 | 49.71 |
| door | 4.20 | 15.30 | 16.60 | 17.30 | 52.60 | 51.80 |
| tv | 32.90 | 50.50 | 54.10 | 56.70 | 40.30 | 39.18 |
| dresser | 31.40 | 41.30 | 42.70 | 44.40 | 41.90 | 40.23 |
| toilet | 69.80 | 79.40 | 84.20 | 84.90 | 85.50 | 83.42 |
| garbage bin | 46.80 | 51.00 | 53.40 | 54.40 | 56.90 | 54.00 |
| **mAP** | 35.20 | 43.80 | 45.40 | 47.50 | **54.50** | 52.70 |

Table 3: Experimental results on SUN RGB-D. All per-class AP and mAP values are percentages.
Figure 14: Precision recall curves for different classes on SUN RGB-D and NYU Depth v2. The top row shows classes in NYU Depth v2 while the bottom row shows classes in SUN RGB-D.
Figure 15: A few detection results on the synthesized data.
| Method | mAP | doorstopper | pipe | clamp | screwdriver | brace | paintbrush | nut |
|---|---|---|---|---|---|---|---|---|
| FETNet (Xiao et al. 2021) | 56.8 | 80.6 | 71.3 | 59.6 | 68.1 | 49.6 | 54.7 | 14.3 |
| **Ours** | 58.9 | 84.1 | 74.9 | 67.2 | 62.7 | 53.0 | 52.5 | 17.9 |

Table 4: Experimental results on the automatically synthesized dataset.
| Method | mAP | Vehicle | Human | Animal |
|---|---|---|---|---|
| FETNet (Xiao et al. 2021) | 78.4 | 79.5 | 77.7 | 78.1 |
| **Ours** | 80.2 | 81.1 | 80.7 | 78.8 |

Table 5: Experimental results on the outdoor RGB-D dataset.
show that our detector was able to recognize the van, bus, and truck as the vehicle class despite their differences in visual features, demonstrating that the detector learned the variety of features within a class. The detector was also able to detect objects that appear blurred in the image due to their speed, as in the image in row one, column two of Figure 16. The detection result in row one, column three of Figure 16 shows that the human was detected despite wearing a helmet that covers his head, which indicates the detector's generalization capacity. The detection of an animal from far away in a dense jungle/forest environment in row two, column three also attests to the model's accuracy. In the quantitative experiments shown in Table 5, our model achieved an mAP of 80.1, significantly higher than FETNet Xiao et al. (2021). Therefore, both qualitative and quantitative results indicate the strong capability of our detection model in real-world outdoor environments under a variety of lighting conditions.
### Inference
Inference GFLOPs refers to the number of floating point operations required to perform a single prediction or inference step with a trained model. This measurement is often used to evaluate the computational complexity of a model and is usually expressed in GigaFLOPs (\(10^{9}\) FLOPs). Computing inference GFLOPs involves counting the number of additions and multiplications required to compute the activations of each layer of the network for a given input, and converting that count to FLOPs. We compare inference GFLOPs with several state-of-the-art RGB and RGB-D based object detectors to evaluate the real-time computational performance of our detection model. For the comparison among RGB-D based detectors, we select FETNet Xiao et al. (2021) and other implementations of RGB-D based detection using their proposed modules, as reported in their paper. As shown in Table 6, our detection model achieves the lowest inference GFLOPs, which suggests that it has the least computational complexity. Moreover, our model also significantly outperforms the real-time single-stage detector YOLOv2 Redmon and Farhadi (2017) in terms of GFLOPs. This result also indicates the real-time capability of our RGB-D detection model. A potential reason for the significantly lower inference GFLOPs is that our backbone uses 3x3 convolution layers with stride 1 and 'same' padding together with 2x2 max-pooling layers with stride 2. The
| Model | Input | GFLOPs |
|---|---|---|
| YOLOv2 (Redmon and Farhadi 2017) | RGB | 63.03 |
| Cascade-RCNN (Cai and Vasconcelos 2018) | RGB | 168.3 |
| Faster-RCNN (Ren et al. 2015) | RGB | 140.5 |
| Cascade-RCNN+FEM+MVIT (Xiao et al. 2021) | RGB-D | 158.5 |
| Faster-RCNN+FEM+MVIT (Xiao et al. 2021) | RGB-D | 130.7 |
| FETNet (Xiao et al. 2021) | RGB-D | 279.3 |
| Our model | RGB-D | **26.72** |

Table 6: Inference GFLOPs comparison with state-of-the-art RGB and RGB-D based detection algorithms.
Figure 16: A few detection results on the Outdoor RGB-D dataset.
fusion layer uses mostly 3 filters for its convolution operations. Moreover, we apply just 8 filters for the depth aware hyper-involution, which also contributes to the low inference cost.
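As a concrete illustration of the counting procedure described above, the snippet below estimates the FLOPs of a single standard convolution layer by counting multiply-adds per output position; it is a bookkeeping sketch only, not the profiler used to obtain Table 6.

```python
def conv2d_flops(h_out, w_out, k, c_in, c_out):
    """FLOPs for one k x k convolution layer producing an (h_out x w_out x c_out)
    output; each multiply-accumulate is counted as two floating point operations."""
    macs_per_position = k * k * c_in * c_out
    return 2 * macs_per_position * h_out * w_out

# Example: a 3x3 conv with stride 1 and 'same' padding on a 415x415x3 input
# producing 32 channels costs roughly
print(conv2d_flops(415, 415, k=3, c_in=3, c_out=32) / 1e9)   # ~0.3 GFLOPs
```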
### Ablation Study
#### 4.6.1 Module Test
To establish a baseline for the ablation study, we modified the detection architecture by replacing the depth aware hyper-involution operation with standard convolution and replacing the proposed fusion stage with simple concatenation of feature maps. We then modified this baseline by replacing the concatenation with the proposed fusion to isolate the contribution of the fusion stage. As shown in Figure 17, the original model achieved the highest accuracy, which verifies the usefulness of the depth aware hyper-involution. As shown in Figure 18, our main detection model also has the minimum inference GFLOPs when compared with the baseline and with the baseline augmented with only the fusion or only the depth aware hyper-involution. This implies that the fusion stage and depth aware hyper-involution do not increase computational complexity and help maintain the real-time performance of the detection model. Moreover, the model has fewer parameters than the model with only fusion and standard convolution, which suggests that the depth aware hyper-involution operation consumes less memory than standard convolution, as shown in Figure 19. As also demonstrated in Figure 19, when plain concatenation is replaced with the proposed fusion in the baseline model, the number of parameters increases significantly. This indicates that the fusion module has more trainable parameters, which can enhance the model's learning ability.
#### 4.6.2 Number of Parameters Vs Kernel Sizes
Furthermore, we conducted another ablation study to see the effect of different kernel sizes on the number of parameters of the depth aware hyper-involution. We also compare it with the parameter counts of standard convolution LeCun et al. (1998) and involution Li et al. (2021) for the same kernel sizes. As shown in Table 7 and Figure 20, the number of parameters of the depth aware hyper-involution remains the same for all kernel sizes, which is not the case for involution and standard convolution. Moreover, the number of parameters of the depth aware hyper-involution is smaller than that of standard convolution for all filter sizes. This clearly indicates the usefulness of the hyper-network in generating filters for the depth aware hyper-involution. Note that we applied 8 filters for all these modules during the comparison.
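The standard-convolution column of Table 7 can be verified directly with the usual k × k × C_in × C_out parameter formula (no bias terms), assuming 3 input channels and the 8 filters used in the comparison; the involution and depth aware hyper-involution counts depend on their internal kernel-generation networks and are not reproduced here.

```python
def conv_params(k, c_in=3, c_out=8):
    """Weight count of a k x k standard convolution with c_in inputs, c_out filters, no bias."""
    return k * k * c_in * c_out

print([conv_params(k) for k in (3, 5, 7)])   # -> [216, 600, 1176], matching Table 7
```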
Figure 17: Comparison of mAP for different versions of the model in the ablation study.
Figure 19: Comparison of the number of parameters for different versions of the model in the ablation study.
Figure 18: Comparison of inference GFLOPs for different versions of the model in the ablation study.
## 5 Conclusions
In this paper, we highlighted the importance of depth maps for the object detection task and investigated alternatives to convolution for better feature extraction from RGB-D images. Aiming to maximize the utilization of depth information, we designed a depth aware hyper-involution and a new fusion mechanism that enable dynamic learning during model training and prevent information loss. Building on these modules, we developed a single-stage RGB-D object detection model that uses a minimal number of network parameters. The proposed object detection framework exhibits higher accuracy while maintaining low computational complexity. Qualitative and quantitative experiments on benchmark datasets suggest the effectiveness of the proposed architecture. Moreover, a fully automated RGB-D data synthesis pipeline was developed to tackle the scarcity of large datasets for RGB-D-based object detection research. We also introduced two new RGB-D datasets, providing the research community with more options to evaluate and compare RGB-D object detection performance in diverse environments. Although the depth aware hyper-involution module was designed for RGB-D object detection, it has proven to capture important semantic features and can potentially be a good fit for other tasks such as object part segmentation or salient object detection. A more focused investigation of the depth aware hyper-involution module in the context of specific applications, such as robotic surgery or augmented reality, remains necessary.
| Layer | 3×3 filter | 5×5 filter | 7×7 filter |
|---|---|---|---|
| Standard convolution | 216 | 600 | 1176 |
| Involution | 145 | 289 | 505 |
| Depth aware hyper-involution | 273 | 273 | 273 |

Table 7: Comparison of the number of parameters for different kernel sizes.
Figure 20: Parameter comparison for different kernel sizes of convolution, involution and our depth aware hyper-involution.
## Acknowledgements
This work was supported by Mitacs through the Mitacs Accelerate program.
|
2309.12625 | DRG-LLaMA : Tuning LLaMA Model to Predict Diagnosis-related Group for
Hospitalized Patients | In the U.S. inpatient payment system, the Diagnosis-Related Group (DRG) is
pivotal, but its assignment process is inefficient. The study introduces
DRG-LLaMA, an advanced large language model (LLM) fine-tuned on clinical notes
to enhance DRGs assignment. Utilizing LLaMA as the foundational model and
optimizing it through Low-Rank Adaptation (LoRA) on 236,192 MIMIC-IV discharge
summaries, our DRG-LLaMA-7B model exhibited a noteworthy macro-averaged F1
score of 0.327, a top-1 prediction accuracy of 52.0%, and a macro-averaged Area
Under the Curve (AUC) of 0.986, with a maximum input token length of 512. This
model surpassed the performance of prior leading models in DRG prediction,
showing a relative improvement of 40.3% and 35.7% in macro-averaged F1 score
compared to ClinicalBERT and CAML, respectively. Applied to base DRG and
complication or comorbidity (CC)/major complication or comorbidity (MCC)
prediction, DRG-LLaMA achieved a top-1 prediction accuracy of 67.8% and 67.5%,
respectively. Additionally, our findings indicate that DRG-LLaMA's performance
correlates with increased model parameters and input context lengths. | Hanyin Wang, Chufan Gao, Christopher Dantona, Bryan Hull, Jimeng Sun | 2023-09-22T05:18:54Z | http://arxiv.org/abs/2309.12625v2 | # DRG-LLaMA : Tuning LLaMA Model to Predict Diagnosis-related Group for Hospitalized Patients
###### Abstract
In the U.S. inpatient payment system, the Diagnosis-Related Group (DRG) is pivotal, but its assignment process is inefficient. The study introduces DRG-LLaMA, an advanced large language model (LLM) fine-tuned on clinical notes to enhance DRGs assignment. Utilizing LLaMA as the foundational model and optimizing it through Low-Rank Adaptation (LoRA) on 236,192 MIMIC-IV discharge summaries, our DRG-LLaMA -7B model exhibited a noteworthy macro-averaged F1 score of 0.327, a top-1 prediction accuracy of 52.0%, and a macro-averaged Area Under the Curve (AUC) of 0.986, with a maximum input token length of 512. This model surpassed the performance of prior leading models in DRG prediction, showing a relative improvement of 40.3% and 35.7% in macro-averaged F1 score compared to ClinicalBERT and CAML, respectively. Applied to base DRG and complication or comorbidity (CC)/major complication or comorbidity (MCC) prediction, DRG-LLaMA achieved a top-1 prediction accuracy of 67.8% and 67.5%, respectively. Additionally, our findings indicate that DRG-LLaMA's performance correlates with increased model parameters and input context lengths.
1 Division of Hospital Internal Medicine, Mayo Clinic Health System, Mankato, Minnesota
2 Department of Computer Science, University of Illinois Urbana-Champaign, Champaign, Illinois
3 Enterprise Inpatient Clinical Documentation Integrity, Mayo Clinic, Rochester, Minnesota
4 Division of Hospital Internal Medicine, Mayo Clinic, Phoenix, Arizona
5 Carle Illinois College of Medicine, University of Illinois Urbana-Champaign
[email protected]
## Introduction
The emergence of LLMs, such as GPT-3 Brown et al. (2020) and InstructGPT Ouyang et al. (2022), has brought about a transformative shift in the landscape of Natural Language Processing (NLP). These LLMs have demonstrated exceptional capabilities across many NLP tasks in the general domain. However, the integration of LLMs into the medical field remains at a nascent stage within the academic community. Recent instances of progress highlight their significant potential, including OpenAI's GPT-4 Nori et al. (2023), Google's Med-PaLM2 Singhal et al. (2023), and Google Deepmind's Med-PaLM M Tu et al. (2023). GPT-4 and Med-PaLM 2 have achieved impressive performance on the United States Medical Licensing Examination (USMLE), and Med-PaLM M can even classify radiology images. Nonetheless, the medical domain introduces elevated concerns regarding safety and privacy, necessitating detailed analysis regarding the performance and limitations of LLMs to address the inherent risks such as hallucination, bias, and reasoning deficiencies Au Yeung et al. (2023).
Since its inception by Medicare in 1983, DRG has served as the foundation for the inpatient prospective payment system within the United States Quinn (2014). Each distinct DRG code is delineated by a particular set of patient attributes, including principal diagnosis, specific secondary diagnoses, procedures, sex and discharge status CMS (2016). Traditionally, the assignment of DRGs constitutes a labor-intensive manual endeavor undertaken by coding specialists, typically subsequent to a patient's discharge. Given the pivotal role of DRGs and their bundled metrics (e.g., case-mix index, geometric length of stay) in the operational and financial performance of hospitals, a pressing interest exists in the accurate early prediction of DRGs during a patient's hospitalization. This prediction is vital for efficacious resource planning and allocation. The task of DRG prediction presents distinct challenges compared to automated International Classification of Diseases (ICD) coding. This distinction stems from differences in the nature of the task: DRGs involve multi-class classification, where one DRG code is assigned to each visit, in contrast to the multi-label classification of ICDs, where multiple codes may apply to a single visit Kaur et al. (2022). Additionally, the hierarchical structure of the codes, such as the presence of a principal diagnosis in DRGs, and the context of utilization in hospital operations further differentiate the two tasks CMS (2016). Previous studies have showcased advancements in DRGs classification accuracy through various machine-learning algorithms Gartner et al. (2015) and deep neural networks Islam et al. (2021). More recently, a deep learning-based NLP model leveraging adjusted Convolutional Attention for Multi-Label Classification (CAML) has been applied to predict DRGs based on clinical notes and yielded promising outcomes Mullenbach et al. (2018); Liu et al. (2021).
With LLM's remarkable natural language synthesis and generating capabilities, we hypothesize LLM could be applied to effectively predict DRGs directly from clinical notes. In this work, we present DRG-LLaMA, a fine-tuned LLM derived from LLaMA Touvron et al. (2023). DRG-LLaMA is trained on discharge summaries from the MIMIC-IV dataset for the task of DRG prediction. In our investigation, we approached DRG prediction from two perspectives: 1) as a single-label classification task, where the
model makes an end-to-end prediction of the DRG label, and 2) as a two-label classification task, where the model predicts base DRG and CC/MCC status as two separate labels, followed by the inference of the final DRG label from these two components (i.e., base DRG and CC/MCC status). Our work revealed superior performance of DRG-LLAMA in DRG prediction compared to the previously reported leading models of CAML [10] and ClinicalBERT [1].
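As a rough illustration of this setup (LLaMA with a classification head over the 738 DRG labels, adapted with Low-Rank Adaptation as noted in the abstract), the sketch below uses the Hugging Face transformers and peft libraries. The LoRA hyperparameters, target modules, and weight path are illustrative assumptions, not the configuration reported in this work.

```python
import torch
from transformers import LlamaForSequenceClassification, LlamaTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "path/to/llama-7b"                            # assumed local path to LLaMA weights
tokenizer = LlamaTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token            # LLaMA has no pad token by default

model = LlamaForSequenceClassification.from_pretrained(base, num_labels=738)
model.config.pad_token_id = tokenizer.pad_token_id

# Illustrative LoRA settings; only the adapter weights are trained.
lora_cfg = LoraConfig(task_type=TaskType.SEQ_CLS, r=16, lora_alpha=32,
                      lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_cfg)

inputs = tokenizer("brief hospital course: ...", truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                  # [1, 738] scores over DRG labels
```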
## Results
### Study cohort
A summary of the study cohort and data preprocessing steps is shown in Figure 1. We focused on hospital stays with Medicare severity-DRGs (MS-DRGs) within the MIMIC-IV dataset. The "brief hospital course" section of the discharge summary was extracted to serve as input text. We also filtered out low-quality discharge summaries and rare DRGs with fewer than 2 occurrences in the cohort. 90% of the data was allocated to the training set and the remaining 10% to the testing set, with the partitioning stratified on DRGs. The training and testing sets contain 738 and 723 unique DRG labels, respectively. There is no significant difference in the average word counts in the training vs. testing set (398 vs. 399; p = 0.51 from two-sided t-test). The distribution of cases per DRG is imbalanced, with a median number of 124.5 in the training set (Supplementary Figure 1).
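A minimal sketch of this partitioning step is shown below, assuming the preprocessed cohort is available as a table with one row per hospital stay; the file and column names are illustrative.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical preprocessed cohort: one row per stay, with the extracted
# "brief hospital course" text and its MS-DRG label.
df = pd.read_csv("mimiciv_drg_cohort.csv")

# Remove DRGs with fewer than 2 occurrences so that stratification is possible.
counts = df["drg_code"].value_counts()
df = df[df["drg_code"].isin(counts[counts >= 2].index)]

# 90/10 split stratified on the DRG label.
train_df, test_df = train_test_split(
    df, test_size=0.10, stratify=df["drg_code"], random_state=42)
```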
### DRG prediction as a single-label classification task
We presented the results with a maximum input token size of 512 in Table 1. DRG-LLAMA consistently outperformed ClinicalBERT and CAML across all evaluation metrics, with the most notable contrast seen in macro-F1 score (showing a relative improvement of 40.3% and 35.7% compared to ClinicalBERT and CAML, respectively). The accuracy of top-1 and top-5 predictions achieved by our fine-tuned DRG-LLAMA -7B model was 52.0% and 84.8%, respectively. When only considering the most frequent 300 DRGs, the top-1 accuracy improved to 55.7%, and this further increased to 69.4% in the most frequent 30 DRGs. As expected, DRG-LLAMA's performance declined in less frequent DRGs (Figure 1(a)). When compared to CAML, ClinicalBERT achieved higher AUC and top-1 prediction accuracy but lower macro-averaged F1 score. High AUC scores were obtained for all models due to the many infrequent DRG classes, resulting in high true negative predictions for all negative class predictions. [10].
We investigated DRG-LLAMA's performance across varying model sizes and input context lengths (Table 2), observing a consistent improvement in all evaluation metrics with larger models and longer input contexts, measured in maximum token numbers. The optimal configuration, utilizing a 13B LLAMA model and a maximum input token size of 1024, achieved a top-1 prediction accuracy of 54.6%, a top-5 prediction accuracy of 86.5%, and a macro-F1 score of 0.361.
### DRG prediction as a two-label classification task
In the two-label approach, we first dissect each DRG into two distinct components: a base DRG label and a CC/MCC label (denoting complication or comorbidity / major complication or comorbidity). This dissection process was based on the composition delineated within the MS-DRG v34.0 definitions manual [2]. The five distinct labels attributed to CC/MCC are as follows: "without CC/MCC", "with CC", "with MCC", "without MCC", and "not applicable". As an example, in DRG code 53 of "spinal disorders and injuries without CC/MCC," "spinal disorders and injuries" represents the base DRG label, while "without CC/MCC" serves as the CC/MCC label. Following this mapping process, the 738 DRG codes were converted into a combination of 340 base DRG labels each paired with one of the five CC/MCC labels. Results of two-label approaching DRG-LLAMA -7B with a maximum input token size of 512 was shown in Table 3. The top-1 prediction accuracy for base DRG and CC/MCC reached 67.8% and 67.5% respectively. This result suggests that predicting the principal diagnosis or procedure without considering CC/MCC is a significantly easier task on its own.
Upon integrating a mapping rule designed to infer DRGs through the combination of base DRG and CC/MCC labels, the accuracy reached 51.5% across all DRGs. Notably, this performance was comparable with the accuracy attained in the single-label approach of 52.0% using the same base model, showing that the LLM was able to achieve state-of-the-art performance via either classification setting.
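A simplified sketch of this dissection and recombination is shown below. It only covers the straightforward suffix cases; the actual rules also handle DRGs without a CC/MCC split and the various two-way split scenarios discussed in the Discussion section.

```python
# Suffix patterns checked in order; the combined "without cc/mcc" form must be
# tested before the shorter "with cc" / "without mcc" forms.
SUFFIXES = {
    " without cc/mcc": "without CC/MCC",
    " with mcc": "with MCC",
    " with cc": "with CC",
    " without mcc": "without MCC",
}

def dissect(drg_description: str):
    """Split a full MS-DRG description into (base DRG, CC/MCC label)."""
    text = drg_description.lower()
    for suffix, label in SUFFIXES.items():
        if text.endswith(suffix):
            return text[: -len(suffix)].strip(), label
    return text.strip(), "not applicable"

def recombine(base_drg: str, cc_mcc: str) -> str:
    """Inverse mapping: infer the final DRG from the two predicted components."""
    return base_drg if cc_mcc == "not applicable" else f"{base_drg} {cc_mcc.lower()}"

print(dissect("Spinal disorders and injuries without CC/MCC"))
# -> ('spinal disorders and injuries', 'without CC/MCC')
```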
### Error analysis
As noted above, a correlation exists between the number of training cases and prediction performance. The accuracy of DRG prediction depends on various factors. DRGs with a top-5 prediction accuracy exceeding 80% are typically associated with a median of 309 training cases per label. In contrast, those DRGs with a top-5 accuracy below 20% are associated with only a median of 17 training cases per label (as shown in Figure 1(b)). However, other factors, such as the type of DRG, also affect prediction performance. For instance,
Figure 1: Flow diagram of the cohort processing steps.
out of the DRGs with a top-1 prediction accuracy of 100%, 8 out of 9 are surgical DRGs, which have distinct hospital courses that make them easier for the model to comprehend (as listed in Supplementary Table 2). We randomly selected 10 samples from the subset where the model presented erroneous predictions within its top ten outcomes for manual error analysis (as listed in Table 4). Broadly, the identified errors were categorized as follows: erroneous CC/MCC (1/10), correct information needed for DRG prediction unavailable (1/10), difficulty in selecting correct base DRG (3/10), inadequate clinical concept extraction (4/10) and an isolated case of a plausible incorrect DRG label (1/10). Certain errors, like inadequate clinical concept extraction, indicate the model's weaknesses. Other errors, such as the difficulty in selecting the base DRG, likely stem from the intricacies of the DRG assignment rules. Furthermore, errors such as the unavailability of correct information required for DRG prediction underscore the limitations of solely relying on discharge summaries for DRG predictions.
## Discussion
**Large language model context:** Language models based on the transformer architecture, either pretrained or fine-tuned using biomedical corpora, have demonstrated efficacy across a spectrum of NLP benchmarks within the biomedical realm [14, 15, 16].
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline Model & DRG set & MACRO- & ACC@1 & ACC@5 & ACC@10 & MACRO- & MICRO- & Number \\ & & F1 & & & AUC & AUC & (\%) of \\ \hline DRG-LLaMA -7B & All DRGs & **0.327** & **0.520** & **0.848** & **0.912** & **0.986** & **0.994** & 26,244 \\ & & **(0.004)** & **(0.003)** & **(0.002)** & **(0.002)** & **(0.001)** & **(0.000)** & (100.00) \\ & Top 300 DRGs & 0.497 & 0.557 & 0.876 & 0.932 & 0.988 & 0.995 & 22,940 \\ & & (0.005) & (0.004) & (0.002) & (0.001) & (0.000) & (0.000) & (87.4) \\ & Top 50 DRGs & 0.700 & 0.666 & 0.931 & 0.965 & 0.989 & 0.998 & 10,270 \\ & & (0.004) & (0.004) & (0.002) & (0.001) & (0.000) & (0.000) & (39.1) \\ & Top 30 DRGs & 0.737 & 0.694 & 0.941 & 0.971 & 0.988 & 0.998 & 7,666 \\ & & (0.005) & (0.005) & (0.003) & (0.002) & (0.001) & (0.000) & (29.2) \\ ClinicalBERT & All DRGs & 0.233 & 0.502 & 0.815 & 0.881 & 0.979 & 0.991 & 26,244 \\ & & (0.003) & (0.003) & (0.002) & (0.002) & (0.001) & (0.000) & (100.0) \\ CAML & All DRGs & 0.241 & 0.447 & 0.785 & 0.865 & 0.976 & 0.991 & 26,244 \\ & & (0.003) & (0.002) & (0.002) & (0.002) & (0.001) & (0.000) & (100.0) \\ \hline \hline \end{tabular} F1 and AUC scores were calculated using macro-averaged or micro-averaged method as shown in the header. Notably, in a multi-class classification problem, micro-averaged F1 score is equal to top-1 prediction accuracy when labels of all classes are considered [16]. Accuracy @1, @5 and @10 measure whether the top-1, top-5 and top-10 predictions by the model contain correct DRG code, respectively. Standard deviations are shown in parentheses and calculated using a bootstrapping procedure. Top DRGs are selected based on the number of cases per DRG in the dataset. Number (%) of cases represents hospital stays covered by the given DRG group in the testing set. **Bolded scores** denote the best performance with respect to the task. DRG-LLaMA outperformed ClinicalBERT and CAML across all evaluation metrics, with better performance in more frequent DRGs. _DRG_ denotes diagnosis-related group, _AUC_ denotes area under the receiver operating characteristic curve, and _ACC_ denotes accuracy.
\end{table}
Table 1: Main Results on DRG prediction with a max input token size of 512
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Model size & Max input token & MACRO- & ACC@1 & ACC@5 & ACC@10 & MACRO- & MICRO- \\ & size & F1 & & & & AUC & AUC \\ \hline
13B & 1024 & **0.361** & **0.546** & **0.865** & **0.925** & **0.986** & **0.994** \\ & & **(0.004)** & **(0.003)** & **(0.002)** & **(0.001)** & **(0.001)** & **(0.000)** \\ & 512 & 0.334 & 0.524 & 0.853 & 0.914 & 0.984 & 0.993 \\ & & (0.005) & (0.002) & (0.002) & (0.002) & (0.001) & (0.000) \\ & 340 & 0.312 & 0.499 & 0.834 & 0.902 & 0.983 & 0.992 \\ & & (0.006) & (0.003) & (0.002) & (0.002) & (0.001) & (0.000) \\
7B & 1024 & 0.346 & 0.539 & 0.861 & 0.923 & 0.986 & 0.994 \\ & & (0.004) & (0.003) & (0.002) & (0.001) & (0.001) & (0.000) \\ & 512 & 0.327 & 0.520 & 0.848 & 0.912 & 0.986 & 0.994 \\ & & (0.004) & (0.003) & (0.002) & (0.002) & (0.001) & (0.000) \\ & 340 & 0.303 & 0.493 & 0.828 & 0.896 & 0.981 & 0.992 \\ & & (0.005) & (0.003) & (0.002) & (0.002) & (0.001) & (0.001) \\ \hline \hline \end{tabular} Experiments were performed on LLaMA with a size of 7 billion and 13 billion parameters. **Bolded scores** denote the best performance. We observed that DRG-LLaMA ’s performance consistently improved with larger models and longer input contexts.
\end{table}
Table 2: DRG-LLaMA performance on different model and max input token sizes
et al. 2021). When contrasted with their predecessors rooted in the BERT architecture (Devlin et al. 2018), LLMs stand out due to their substantial size and their pretraining on expansive, cross-disciplinary text corpora. LLMs exhibit a notable capacity for comprehending and reasoning with clinical knowledge. Without domain-specific fine-tuning or specialized prompt crafting, GPT-4 exceeded the passing score on USMILE by over 20 points and set a new state-of-the-art (Nori et al. 2023). On this premise, it is plausible to speculate that once attuned to the medical domain, an LLM could deliver robust performance across diverse NLP tasks, including the prediction of DRGs.
Toward deploying a local LLM, we used LLaMA, a robust and openly accessible foundational LLM with parameters ranging from 7 billion to 65 billion (Touvron et al. 2023a). Instruction-following models fine-tuned from LLaMA such as Alpaca (Taori et al. 2023) and Vicuna (Chiang et al. 2023), exhibit performance on par with GPT-3.5. Within the medical context, several groups have directed their efforts towards fine-tuning LLaMA. Notable examples among these are ChatDoctor (trained on authentic patient-physician dialogues), HuaTuo (fine-tuned with a Chinese medical knowledge graph), and PMC-LLaMA (fine-tuned on biomedical academic papers) (Wang et al. 2023; Li et al. 2023; Wu et al. 2023). These LLaMA-based models focused on medical question answering, yielding encouraging outcomes.
**Impact of DRG prediction:** In this study, we demonstrated superior performance of the fine-tuned LLaMA in the text classification task of DRG prediction. Previous studies have underscored the effectiveness of employing diverse machine learning algorithms and deep neural networks for DRG prediction within healthcare systems outside the United States (Gartner et al. 2015; Islam et al. 2021). These studies focused on using structured data as input variables instead of clinical text. More recently, CAML model exhibited superior ability to predict DRGs (Liu et al. 2021). CAML model, exclusively utilizing clinical notes, surpassed the performance of a Long Short-Term Memory (LSTM) model using structured clinical variables (Liu et al. 2021). When compared with ClinicalBERT, CAML provided improved F1 scores but lower AUC (Liu et al. 2021; Alsentzer et al. 2019). We observed that DRG-LLaMA outperformed prior leading models of ClinicalBERT and CAML.
**Remarks on DRG prediction results:** ClinicalBERT and CAML already stand as robust baselines, with the added benefit of much faster training times (supplement Table 1). While BERT-based models have a maximum input length of 512 tokens, CAML has the flexibility to handle longer context (Devlin et al. 2018; Liu et al. 2021). We also observed that the performance of DRG-LLaMA enhanced with the utilization of larger models and longer input context length. Interestingly, a recent study revealed that the optimal performance of LLMs is attained when pertinent information is positioned at either the beginning or the end of the input context, with a decline as the input context expands (Liu et al. 2023). In our constrained experiments conducted with a maximum input token limit up to 1024, we have yet to encounter this limitation. In our study, the performance of both the baseline models and DRG-LLaMA surpassed the outcomes reported in prior research (Liu et al. 2021). Beyond the substantially larger training dataset employed in MIMIC-IV compared to MIMIC-III (236,192 vs. 17,815), it is plausible that this enhanced performance is predominantly linked to our strategic input data selection.
The study by (Liu et al. 2021) included only clinical notes charted up to 48 hours post-admission or 48 hours after ICU admission. In the MIMIC-III database, a large portion of records during this time window comprises nursing and radiology notes, potentially lacking the pivotal admission History of Present Illness (HPI) notes. In contrast, our methodology entailed the utilization of discharge summaries as the input data source. Discharge summary is a comprehensive clinical narrative encapsulating pivotal events, diagnostics, and treatments during hospitalization. To accommodate the input token limitations of LLaMA, we exclusively focused on the "brief hospital course" section of the summary, intentionally excluding other segments such as physical examinations, radiology, laboratory, and medication list. Additionally, to enhance data consistency, we formulated an algorithm aimed at addressing discrepancies in DRG nomenclature and assignments across different years.
**Nuance of DRG prediction task:** In the context of the DRG system, a DRG code comprises a base DRG and a CC/MCC status. The base DRG represents the principal diagnosis (for medical cases) or procedures (for surgical cases) leading to the patient's admission. Meanwhile, CC/MCC categoriza
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Component & MACRO- & ACC@1 & ACC@5 & ACC@10 & MACRO- & MICRO- & Number of \\ & F1 & & & & AUC & AUC & AUC & labels \\ \hline Base DRG & 0.520 & 0.678 & 0.912 & 0.953 & 0.990 & 0.995 & 340 \\ & (0.005) & (0.002) & (0.001) & (0.001) & (0.001) & (0.000) & \\ CC/MCC & 0.680 & 0.675 & - & - & 0.909 & 0.918 & 5 \\ & (0.003) & (0.003) & & & (0.001) & (0.001) & \\ DRG & - & & 0.515 & - & - & - & 738 \\ & & (0.003) & & & & & \\ \hline \hline \end{tabular} Experiments were performed with DRG-LLaMA -7B and a maximum input token size of 512. The top-1 prediction accuracy for base DRG and CC/MCC reached 67.8% and 67.5% respectively. A top-1 prediction accuracy of 51.5% was achieved by employing the mapping rule on base DRG and CC/MCC labels, as elaborated in the methodology section.
\end{table}
Table 3: Main Results on DRG prediction as a two-label task with a max input token size of 512
tions gauge the severity of the patient's condition. In the 34.0 version of the MS-DRG system, there are 154 three-way split DRGs, 44 two-way split DRGs with MCC/CC and no CC, 65 two-way split DRGs with MCC and CC/no CC, and 75 two-way split DRGs with MCC and CC/no CC, respectively. The 154-way split DRGs with MCC/CC and CC/no CC are shown in Table 3.
| Case ID | Pertinent narratives in discharge summary | True DRG | Predicted DRG | Comment |
|---|---|---|---|---|
| Case 1 | altered mental status\_respiratory failure\_acute blood loss anemia and anemia of chronic disease\_clostridum difficile infection\_hypotension\_was initially on levophed and dopamine\_. | Heart failure and shock with mcc | Respiratory system diagnosis with ventilator support 96 hours | Difficulty in selecting base DRG |
| Case 2 | gastrointestinal bleeding\_most likely ischemic colitis\_viral gastroenteritis\_acute renal failure\_anemia\_. | Renal failure with cc | Other digestive system diagnoses with cc | Difficulty in selecting base DRG |
| Case 3 | worsening diabetic foot ulcer\_diabetic foot infection\_svt\_cardiology was consulted\_. | Cellulitis without mcc | Diabetes with cc | Inadequate clinical concept extraction |
| Case 4 | neutropenic fevers\_infectious workup was negative except for a urine culture growing enterococcus\_pt is neutropenic, thrombocytopenic, and anemic\_hiv-stable\_. | Kidney and urinary tract infections without mcc | Major hematological and immunological diagnosis except sickle cell crisis and coagulation disorders with mcc | Difficulty in selecting base DRG |
| Case 5 | reported chest pain\_solitary episode of nsvt\_ua without pyuria\_safe for d/c home\_. | Esophagitis gastroenteritis and miscellaneous digestive disorders without mcc | Cardiac arrhythmia and conduction disorders with cc | Correct information needed for DRG prediction not available |
| Case 6 | septic arthritis, likely seeded by her recurrence of her e. coli bacteremia\_rheum and id recommend wash out\_wash out was deferred by orthopedics\_. | Septicemia or severe sepsis without mv 96 hours with mcc | Revision of hip or knee replacement with mcc | Inadequate clinical concept extraction |
| Case 7 | acute to subacute hyponatremia\_admitted with low na 120\_uti with evidence of pyuria\_. | Kidney and urinary tract infections without mcc | Renal failure with cc | Inadequate clinical concept extraction |
| Case 8 | presents with diffuse acute-on-chronic abdominal pain\_gi bleed\_treated with octreotide drip and pantoprazole iv\_capsule endoscopy was performed\_encephalopathy\_visual hallucinations\_. | Septicemia or severe sepsis without mcc | Septicemia or severe sepsis without mv 96 hours with mcc | Possible incorrect DRG label |
| Case 9 | admitted for altered mental status\_delirium\_silent aspiration for which received a peg tube\_hypertension treated with amlodipine\_osa | Esophagitis gastroenteritis and miscellaneous digestive disorders without mcc | Esophagitis gastroenteritis and miscellaneous digestive disorders with mcc | Erroneous cc/mcc |
| Case 10 | presents with word finding difficulties and lethargy\_eeg showed moderate encephalopathy\_ams was likely due to overmedication\_followed by psychiatry - seroquel and abilify were held\_. | Psychoses | Other disorders of nervous system with cc | Inadequate clinical concept extraction |

Table 4: Manual error analysis of ten randomly selected hospital stays with erroneous top-ten DRG predictions.
and 77 base DRGs with no splits (examples in Supplementary Note 1) (CMS 2016). We designed a two-label DRG prediction strategy to mirror this structure. Surprisingly, the top-1 accuracy for CC/MCC stands at 67.5%, similar to the 67.8% of the base DRG, despite the considerably smaller label count (5 labels for CC/MCC vs. 340 labels for the base DRG). These unexpected results likely stem from the noisy nature of CC/MCC assignment. For instance, the DRG code "pulmonary edema and respiratory failure" does not have a CC/MCC split. Therefore, a hospital stay with this DRG code may truly contain an MCC, but the MCC would not be labeled as positive in the training set. To address this challenge, we formulated rules in both the DRG dissection phase (extracting base DRGs and CC/MCC from DRGs) and the inference phase (deriving DRGs based on base DRGs and CC/MCC). These rules cater to various split scenarios, thus improving accuracy. Implementing such rules culminated in a final DRG prediction accuracy close to that of single-label prediction (51.5% vs. 52.0%).
**Remarks on error analysis:** Our error analysis also revealed intriguing observations. While certain vulnerabilities (e.g., erroneous CC/MCC classification and inadequate clinical concept extraction) present opportunities that could theoretically be addressed by employing larger LLMs and more data, other challenges likely stem from inherent limitations within our training data setup. For instance, in Case 2 in Table 4, despite the discharge summary providing a more comprehensive discussion of gastrointestinal bleeding compared to acute renal failure, the latter was deemed the correct base DRG. This selection is guided by the DRG assignment rule, a factor extending beyond what is directly evident within the discharge summary.
**Limitations of our work:** Our study has several limitations. 1) We were limited by the constraints of the MIMIC-IV dataset and could only use discharge summaries as input data, which are only available after the patient is discharged from the hospital. However, an effective alternative for predicting early DRGs would be to utilize HPI notes and/or Emergency Department (ED) notes. This approach has the potential to significantly impact hospital operations. The "assessment and plan" in HPI notes are similar in structure to the "brief hospital course" in discharge summaries. Thus, LLMs might find it easier to extract information related to the principal diagnosis from these notes, given their earlier time stamp in the hospitalization process.
2) We were also restricted by computational resource limitations, so we could only experiment with LLaMA models up to a parameter size of 13 billion, whereas the largest LLaMA models have over 65 billion parameters. For the same reason, we could not perform an extensive hyperparameter search.
**Conclusion and future work:** The results presented in this study highlight the potential of adapting LLMs for medical purposes, particularly in predicting DRGs. Future research should involve collaborating with healthcare systems and utilizing admission notes to enable early DRG prediction. Additionally, our findings suggest that experiments utilizing the latest LLMs, including the recently launched 70-billion-parameter LLaMA-2 model with a maximum context length of 4096 tokens (Touvron et al. 2023b), should be considered.
Figure 2: **Relationship between training cases per DRG and prediction accuracy by DRG-LLaMA. Results from DRG-LLaMA -7B with a maximum input token size of 512. (a) Scatter plot of top-5 prediction accuracy versus DRG ranks by number of training cases. Y-axis is top-5 prediction accuracy of each DRG label. X-axis is the rank of the 723 DRGs by their number of training cases, where DRG ranked 1st has the most training cases, and DRG ranked 723rd has the least training cases. Black dots indicate individual DRGs. The solid line represents smoothing spline estimated relationship (equivalent degrees of freedom: 6.35; R2: 0.434). The gray shaded area denotes a 95% Bayesian confidence interval for the smoothing spline estimated function. As expected, DRG-LLaMA ’s performance declined in less frequent DRGs. (b) Boxplot of training cases per DRG with groups of different prediction accuracy. DRGs are grouped by range of top-5 prediction accuracy as shown in X-axis. Y-axis is the number of training cases per DRG. The green line represents the median value; the box limits show the interquartile range (IQR) from the first (Q1) to third (Q3) quartiles; the whiskers extend to the furthest data point within Q1-1.5*IQR (bottom) and Q3+1.5*IQR (top). DRG groups with better prediction performance generally have a greater number of training cases, although there is a large variance in the number of training cases within the best performing group.**
Finally, a crucial area for exploration concerns the practical implications of such DRG prediction, particularly when integrated into existing hospital coding workflows.
## Methodology
### Dataset and Preprocessing
We conducted a study using the publicly available MIMIC-IV dataset, which comprises 431,231 unique hospital admissions from 299,712 patients admitted to an ICU or the ED of the Beth Israel Deaconess Medical Center in Boston, Massachusetts [13]. The dataset covers the period from 2008 to 2019. We used regular expressions to extract the "brief hospital course" section from the discharge summary as input text. We then filtered out low-quality discharge summaries, identified by either duplicated content or fewer than 40 words.
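To make this preprocessing step concrete, the following is a minimal Python sketch of extracting the "brief hospital course" section and filtering low-quality notes. The regular expression and the duplicate/word-count handling are illustrative assumptions, not the exact script used for this work.

```python
# Hedged sketch of the preprocessing described above: pull the "brief hospital course"
# section out of each discharge summary with a regular expression and drop low-quality
# notes (duplicates or fewer than 40 words). The regex is an illustrative heuristic.
import re

SECTION_RE = re.compile(r"brief hospital course:?(.*?)(?=\n[A-Z][^\n]*:|\Z)",
                        re.IGNORECASE | re.DOTALL)

def extract_course(summary: str):
    match = SECTION_RE.search(summary)
    if not match:
        return None
    text = match.group(1).strip()
    return text if len(text.split()) >= 40 else None   # drop notes with fewer than 40 words

def preprocess(summaries):
    seen, courses = set(), []
    for s in summaries:
        course = extract_course(s)
        if course and course not in seen:               # drop exact duplicates
            seen.add(course)
            courses.append(course)
    return courses
```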
Our focus was on hospitalizations with MS-DRGs. However, the Centers for Medicare & Medicaid Services adjusts MS-DRG regulations annually, resulting in varying DRG assignments for identical conditions over time within the MIMIC-IV dataset [13]. To address this discrepancy, we designed an algorithm based on clinical knowledge to harmonize MS-DRG codes across different time points to a unified version (Supplementary Method 1). We selected MS-DRG version 34.0, published in 2016, which included a total of 757 DRG codes, 738 of which were present in our dataset [12]. We allocated 90% of the data to the training set and the remaining 10% to the testing set, stratified by DRG codes.
### Model Development
We performed fine-tuning of the LLaMA model using discharge summaries and DRG codes within the context of a classification task. Our approach includes two distinct strategies (also shown in Figure 3).
**Single label approach:** In this approach, the model generates a single-label multi-class prediction of the DRG code from a training set of natural-text discharge summaries \(T_{SUM}\) and labels \((T_{SUM,i},y_{i})\in\mathcal{D}\)1. First, we tokenize \(T_{SUM}\) with the LLaMA tokenizer into \(\mathbf{K}=tokenize(T_{SUM})\), where \(\mathbf{K}\) is a list of indices into the learnable embedding weights. Let \(LLM()\) be a function that outputs the embedding of each token after running the transformer model. Finally, the raw logits are calculated as
Footnote 1: We omit the index notation \(i\) for the rest of the descriptions without loss of generality
\[\mathbf{\hat{y}}=LLM(\mathbf{K})_{-1}\]
where we use the last-token embedding of \(LLM(\mathbf{K})\) as the predicted raw logit score of each DRG code, \(\mathbf{\hat{y}}\in\mathbb{R}^{738}\). Note that these logits are the raw, unnormalized outputs of the last layer of the LLM; they are converted to probabilities only after applying an activation function such as the softmax.
We use the conventional categorical cross-entropy loss for multi-class classification. The target DRG \(y\) is an integer between \(0\) and \(737\) (for simplicity, each integer represents a specific DRG code), and the loss is
\[\ell(\mathbf{\hat{y}},y)=-\log\frac{\exp(\mathbf{\hat{y}}_{y})}{\sum_{c=1}^{C}\exp( \mathbf{\hat{y}}_{c})}\]
where \(y\in\{0,1,\dots,737\}\) is the target DRG, and \(\mathbf{\hat{y}}_{c}\) is the \(c^{th}\) element of \(\mathbf{\hat{y}}\).
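For concreteness, the following is a minimal PyTorch sketch of the single-label head described above: the last-token embedding of a causal LM is projected onto 738 DRG logits and trained with categorical cross-entropy. The `backbone` object, class name, and tensor shapes are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the single-label approach: use the last non-padded token embedding
# as the sequence representation and map it to raw DRG logits.
import torch
import torch.nn as nn

class SingleLabelDRGHead(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int, num_drgs: int = 738):
        super().__init__()
        self.backbone = backbone                       # decoder-only transformer returning (B, T, H) hidden states
        self.classifier = nn.Linear(hidden_size, num_drgs)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(input_ids, attention_mask=attention_mask)   # (B, T, H)
        last_idx = attention_mask.sum(dim=1) - 1                           # index of last non-padded token
        last_hidden = hidden[torch.arange(hidden.size(0)), last_idx]       # (B, H)
        return self.classifier(last_hidden)                                # raw logits, (B, num_drgs)

loss_fn = nn.CrossEntropyLoss()   # categorical cross-entropy over the 738 DRG labels
# logits = model(input_ids, attention_mask); loss = loss_fn(logits, target_drg)
```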
Figure 3: An illustration of both approaches we tested: Single Label Prediction–which directly predicts the DRG code from the text–as well as Two Label Prediction–which breaks down the classification task into 2 tasks. The two predictions are then combined using filtering rules (discovered from data for each DRG) at inference time for the final DRG prediction. LoRA training is used to train the LLM due to computational constraints.
**Two-label approach:** In contrast, the two-label approach entails the model initially predicting the base DRG and the CC/MCC status as two separate classification tasks. Subsequently, a mapping rule is applied to derive the DRG code. Details on the dissection and inference process from DRGs to base DRGs and CC/MCC status and vice versa can be found in Supplementary Method 2. This approach uses a loss function configured as the cross-entropy loss of the base DRG plus half of the cross-entropy loss of the CC/MCC status.
More formally,
\[\ell(\hat{\mathbf{y}},y)=\ell_{DRG\_base}(\hat{\mathbf{y}}_{DRG\_base},y_{DRG\_base})+\lambda\,\ell_{CC}(\hat{\mathbf{y}}_{CC},y_{CC})\]
where \(\ell_{DRG\_base}(\hat{\mathbf{y}}_{DRG\_base},y_{DRG\_base})\) and \(\ell_{CC}(\hat{\mathbf{y}}_{CC},y_{CC})\) are also categorical cross-entropy losses. We chose \(\lambda=\frac{1}{2}\) for our work. As shown in Table 3, \(y_{DRG\_base}\in\{0,1,\dots,339\}\) and \(y_{CC}\in\{0,\dots,4\}\), representing the categories ["without CC/MCC", "with CC", "with MCC", "without MCC", and "not applicable"], respectively.
To enable ease of implementation, we used an output logit dimension of \(\hat{\mathbf{y}}\in\mathbb{R}^{340+5}\) and indexed the first 340 dimensions for \(\hat{\mathbf{y}}_{DRG\_base}=\hat{\mathbf{y}}_{0,\dots,339}\) and indexed the last 5 dimensions for \(\hat{\mathbf{y}}_{CC}=\hat{\mathbf{y}}_{340,\dots,344}\). At inference time, we take the base DRG and CC/MCC predictions as the argmax of their respective logits.
\[\hat{y}_{DRG\_base}=argmax_{\hat{y}_{DRG\_base}}(\hat{\mathbf{y}}_{DRG\_base})\] \[\hat{y}_{CC}=argmax_{\hat{y}_{CC}}(\hat{\mathbf{y}}_{CC}|\hat{y}_{CC} \in V_{\hat{y}_{DRG\_base}})\]
Subsequently, we apply the mapping rule, as detailed in Supplementary Method 2, to derive the final DRG prediction from base DRG and CC/MCC labels.
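A minimal sketch of this inference step is shown below, assuming hypothetical lookup tables (`valid_cc_for_base`, `pair_to_drg`) that stand in for the rules of Supplementary Method 2.

```python
# Illustrative sketch of the two-label inference step: argmax base DRG, then argmax
# CC/MCC status restricted to the splits valid for that base DRG, then map the pair to
# a final DRG. The dictionaries are hypothetical placeholders for the mapping rules.
import numpy as np

def predict_drg(logits, valid_cc_for_base, pair_to_drg):
    """logits: array of shape (345,) = 340 base-DRG logits followed by 5 CC/MCC logits."""
    base_logits, cc_logits = logits[:340], logits[340:]
    base = int(np.argmax(base_logits))
    valid = valid_cc_for_base[base]                   # e.g. {0, 1, 2} for "without CC/MCC", "with CC", "with MCC"
    cc = max(valid, key=lambda c: cc_logits[c])       # argmax over the valid CC/MCC statuses only
    return pair_to_drg[(base, cc)]                    # mapping rule: (base DRG, CC/MCC) -> final DRG
```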
**Addressing Computational Constraints via LoRA Training:** Given the constraints of available computational resources, an extensive hyperparameter search was not viable. Instead, we focused on exploring performance across diverse model sizes and token lengths. We used LoRA during training, which involves freezing the pre-trained model weights and incorporating trainable rank decomposition matrices into each layer of the transformer architecture [14]. LoRA training of the attention mechanism is shown in Figure 3.
As a quick summary, assume an original weight matrix \(\mathbf{W}_{0}\in\mathbb{R}^{d\times k}\). LoRA adds a low-rank update to the original weights, \(\mathbf{W}_{0}+\Delta\mathbf{W}\) with \(\Delta\mathbf{W}=\mathbf{B}\mathbf{A}\), where \(\mathbf{B}\in\mathbb{R}^{d\times r}\), \(\mathbf{A}\in\mathbb{R}^{r\times k}\), and \(r\ll\min(d,k)\), so the number of trainable parameters stays small and the original model's behavior is preserved. Training is performed only on \(\Delta\mathbf{W}\); the original model weights are kept frozen. For further cost savings while preserving performance, we adapt only the weights of the attention mechanism.
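The following is a minimal PyTorch sketch of a LoRA-augmented linear layer in this spirit; the class and argument names are illustrative and do not reproduce the exact implementation used in this work.

```python
# Minimal sketch of a LoRA-augmented linear layer: the frozen base weight W0 is
# combined with a trainable low-rank update BA scaled by alpha / r.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 8, dropout: float = 0.05):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)            # freeze W0
        if base.bias is not None:
            base.bias.requires_grad_(False)
        d, k = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)   # A in R^{r x k}
        self.B = nn.Parameter(torch.zeros(d, r))          # B in R^{d x r}, zero-init so dW starts at 0
        self.scaling = alpha / r
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W0^T (+ bias) plus the low-rank correction x (BA)^T * scaling
        return self.base(x) + self.dropout(x) @ (self.B @ self.A).T * self.scaling
```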
**Training Details:** Model training adopted the standard Hugging Face training framework and its sequence classification module [17]. Since LLaMA is a decoder-only (causal) model, we follow the traditional approach of using the embedding of the last token for classification, as other causal models (e.g., GPT-2 [17]) do. The logit score of each DRG label was calculated from this linear output layer, and probabilities of DRGs could be derived using a softmax function.
We referenced the training protocol of Alpaca-LoRA [16]. Our model was trained using cross-entropy loss with the Adam optimizer (learning rate = \(2\times 10^{-5}\), weight decay = 0.01) for 3 epochs on all training data with a batch size of 4. LoRA parameters were configured with \(r\) set to 8, an alpha value of 8, and a dropout rate of 0.05. All attention blocks were included in the LoRA target modules. The training regimen for all DRG-LLaMA models was executed on a single Nvidia RTX A6000 GPU with 48 GB of graphics memory.
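A hedged sketch of this configuration using the Hugging Face transformers and peft libraries is shown below; the model path, target module names, and dataset objects are placeholders, and the actual training script may differ in detail.

```python
# Sketch of the fine-tuning setup described above: a LLaMA sequence-classification head
# with LoRA adapters on the attention projections, trained with the stated hyperparameters.
from transformers import LlamaForSequenceClassification, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model

model = LlamaForSequenceClassification.from_pretrained("path/to/llama-7b",  # placeholder checkpoint
                                                       num_labels=738)

lora_config = LoraConfig(
    r=8, lora_alpha=8, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="drg-llama",
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=3,
    per_device_train_batch_size=4,
)
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=test_ds)
# trainer.train()
```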
### Baseline Models
As baseline models for benchmarking, we selected CAML [14, 15] and ClinicalBERT [10]. CAML is an adapted convolutional neural network (CNN). In CAML, clinical notes are tokenized and embedded with pre-trained word embeddings to form input representations. The inputs are then passed to a neural network with one-dimensional convolutions that pools CNN features using an attention mechanism. In line with the approach detailed in Liu et al. (2021), our training of CAML used early stopping when there was no improvement in the micro-averaged F1 score for 10 consecutive epochs, with a maximum of 50 epochs. All default hyper-parameters were kept, except for max_seq_length, which was set to 512.
ClinicalBERT was built upon BioBERT, a domain-specific BERT model pre-trained on PubMed abstracts and full-text articles from PubMed Central [11]. ClinicalBERT performed further pre-training of BioBERT using 2 million clinical notes from MIMIC-III [12]. In our fine-tuning process of ClinicalBERT, we conducted three training epochs, same as DRG-LLaMA. We set a learning rate of \(2\times 10^{-5}\) and a batch size of 16, consistent with previous recommended practice for classification-oriented fine-tuning of BERT [14, 15].
### Statistical analysis
We used the implementation from [14] to calculate the AUC and F1-score, in both macro- and micro-averaged form, for the predictive models. We also report the accuracy of DRG prediction for the top one, five, and ten results. Standard deviations were calculated using a bootstrapping procedure with 30 iterations. For each bootstrap iteration, we randomly resampled the whole sample size from the testing set with replacement. The smoothing spline fit in Figure 2(a) was performed using the npreg package in R with the generalized cross-validation method and default parameters [13].
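The bootstrap step can be sketched in Python as follows; `metric_fn` is a placeholder for, e.g., top-k accuracy or micro-averaged F1, and is an assumption of this sketch.

```python
# Sketch of the bootstrap procedure: resample the test set with replacement 30 times
# and take the standard deviation of the metric across the resamples.
import numpy as np

def bootstrap_sd(y_true, y_pred_scores, metric_fn, n_iter=30, seed=0):
    y_true, y_pred_scores = np.asarray(y_true), np.asarray(y_pred_scores)
    rng = np.random.default_rng(seed)
    n = len(y_true)
    values = []
    for _ in range(n_iter):
        idx = rng.integers(0, n, size=n)                  # resample with replacement
        values.append(metric_fn(y_true[idx], y_pred_scores[idx]))
    return float(np.std(values))
```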
### Data availability
Access to MIMIC-IV can be requested at [https://physionet.org/content/mimiciv/](https://physionet.org/content/mimiciv/), which requires a signed safe usage agreement.
### Code availability
Scripts for this work were written in Python. They are available with accompanied documentation at [https://github.com/hanyin88/drg-llama](https://github.com/hanyin88/drg-llama).
### Ethical Concerns
MIMIC-IV is a free EHR dataset that is deidentified according to the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor provision (Johnson et al., 2023).

Since we primarily used open-source models such as LLaMA and ClinicalBERT from Huggingface, an open-source repository of machine learning models (Wolf et al., 2020), as well as CAML from GitHub, and trained them on MIMIC, privacy risks are quite low. However, this risk should not be counted out when working with LLMs, and it is possible that LLaMA and ClinicalBERT may have been trained on sensitive data in their respective pretraining stages.
## Acknowledgement
This research was supported by NSF award SCH-2205289, IIS-2034479, SCH-2014438.
## Author Contributions
H.W. designed, conducted, and analyzed the results of experiments. H.W. and C.G wrote the original draft. J.S. obtained funding and computing resource for the project. All authors contributed to the conceptualization of the research questions. All authors reviewed, revised, and approved the final manuscript. |
2309.05359 | A Note on Location Parameter Estimation using the Weighted
Hodges-Lehmann Estimator | Robust design is one of the main tools employed by engineers for the
facilitation of the design of high-quality processes. However, most real-world
processes invariably contend with external uncontrollable factors, often
denoted as outliers or contaminated data, which exert a substantial distorting
effect upon the computed sample mean. In pursuit of mitigating the inherent
bias entailed by outliers within the dataset, the concept of weight adjustment
emerges as a prudent recourse, to make the sample more representative of the
statistical population. In this sense, the intricate challenge lies in the
judicious application of these diverse weights toward the estimation of an
alternative to the robust location estimator. Different from the previous
studies, this study proposes two categories of new weighted Hodges-Lehmann
(WHL) estimators that incorporate weight factors in the location parameter
estimation. To evaluate their robust performances in estimating the location
parameter, this study constructs a set of comprehensive simulations to compare
various location estimators including mean, weighted mean, weighted median,
Hodges-Lehmann estimator, and the proposed WHL estimators. The findings
unequivocally manifest that the proposed WHL estimators clearly outperform the
traditional methods in terms of their breakdown points, biases, and relative
efficiencies. | Xuehong Gao, Zhijin Chen, Bosung Kim, Chanseok Park | 2023-09-11T10:10:47Z | http://arxiv.org/abs/2309.05359v1 | # A Note on Location Parameter Estimation using the Weighted Hodges-Lehmann Estimator
###### Abstract
**Abstract**: Robust design is one of the main tools employed by engineers for the facilitation of the design of high-quality processes. However, most real-world processes invariably contend with external uncontrollable factors, often denoted as outliers or contaminated data, which exert a substantial distorting effect upon the computed sample mean. In pursuit of mitigating the inherent bias entailed by outliers within the dataset, the concept of weight adjustment emerges as a prudent recourse, to make the sample more representative of the statistical population. In this sense, the intricate challenge lies in the judicious application of these diverse weights toward the estimation of an alternative to the robust location estimator. Different from the previous studies, this study proposes two categories of new weighted Hodges-Lehmann (WHL) estimators that incorporate weight factors in the location parameter estimation. To evaluate their robust performances in estimating the location parameter, this study constructs a set of comprehensive simulations to compare various location estimators including mean, weighted mean, weighted median, Hodges-Lehmann estimator, and the proposed WHL estimators. The findings unequivocally manifest that the proposed WHL estimators clearly outperform the traditional methods in terms of their breakdown points, biases, and relative efficiencies.
**Keywords:** Contaminated data; Weighted Hodges-Lehmann estimator; Robustness; Location estimator.
## 1 Introduction
In statistics, the location estimation of a distribution is usually based on relatively complete and real sample data. However, in many practical cases, most processes are affected by contaminated data, which are caused by external uncontrollable factors, such as measurement errors, volatile operation conditions, etc (Park, Kim, and Wang 2022). These contaminated data within the sample have a significant influence on biasedly estimating the performance of the whole system. To enhance the reliability and performance of products, processes, and systems, robust design is
usually employed in various engineering and quality assurance disciplines. In this sense, robust estimation has emerged as a critical area of research in robust design to provide more reliable and stable location parameter estimates in the face of these challenges (Gao et al. 2022; Park, Gao, and Wang 2023). In robust design, the basic assumption is that the sample comes from a normal or some other distribution. After that, the sample median, weighted median (Gao et al. 2022), and Hodges-Lehmann (HL) (Hodges and Lehmann 1963) estimators are usually considered as the alternative to the location estimator (i.e., sample mean) because they have a large breakdown point and perform well in either the presence or absence of outliers.
In the past few decades, various approaches have been proposed to deal with the robust estimation of the location parameter with the HL estimator when the sample contains contaminated data or outliers. For instance, Alloway Jr and Raghavachari (1991) constructed a control chart based on the HL estimator. Rousseeuw and Verboven (2002) studied several well-known robust estimators (i.e., the HL estimator and M-estimators) in very small samples. Schoonhoven et al. (2011) studied several robust location estimators for constructing the location control chart. They also analyzed the HL estimator based on the pseudo-median, which was proved to be unbiased. Park (2016) proposed a dual quadratic response surface model to compare the joint use of various estimators, where the sample median and HL estimators were used at each design point.
As illustrated above, most studies developed their alternative robust location estimators based on the sample median and HL estimator. However, to reduce the bias of experimental data, weighting adjustment is commonly considered as a sensible remedy to make the sample more representative of the statistical population. The different weights make the alternative of the robust location estimator challenging because each of the observations cannot be treated equally. In such a case, Gao et al. (2022) combined the weights and developed a robust estimator based on the weighted median to substitute the weighted mean so that the optimal solution is not sensitive to the contaminated data or outliers within the demand for medical staff. However, when considering observations with associated weights, the conventional HL estimator proves inadequate in incorporating these weights, potentially resulting in significant deviations in the evaluation outcomes. Therefore, there is a pressing need to create an alternative version based on the conventional HL estimator, which can seamlessly integrate weights and is an aspect overlooked in prior research. As a consequence, this study aims to develop some new robust estimators given different weights in the experimental data, called weighted Hodges-Lehmann (WHL) estimators in this study, and evaluate their robust performance.
The remainder of this paper is organized as follows. In Section 2, this study presents the proposed WHL estimators with definitions. The breakdown points of the newly proposed WHL estimators are investigated in Section
3. In Section 4, a comprehensive simulation study is carried out, where the proposed location estimators are compared with previous conventional location estimators to illustrate their performances in terms of bias and relative efficiency. Finally, Section 5 concludes this study, outlining its contributions and possible directions for future work.
## 2 Methodology
It is well known that the HL estimator is a robust, nonparametric location estimator (Hodges and Lehmann 1963), defined as the median of the pairwise averages of the sample observations. Given a set of observations \(x_{1},x_{2},...,x_{n}\), the basic Hodges-Lehmann estimator, denoted by \(HL\), is given by
\[HL=\text{Median}\left(\frac{H_{ij}}{2}\right)=\text{Median}\left(\frac{x_{i}+x _{j}}{2}\right) \tag{1}\]
where both \(i\) and \(j\) are the index of the observations and the set \(H_{ij}\) for all \(i\) and \(j\) is given by
\[H_{ij}=\begin{bmatrix}h_{11}&h_{12}&\cdots&h_{1n}\\ h_{21}&h_{22}&\cdots&h_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ h_{n1}&h_{n2}&\cdots&h_{nn}\end{bmatrix}=\begin{bmatrix}x_{1}+x_{1}&x_{1}+x_{2}&\cdots&x_{1}+x_{n}\\ x_{2}+x_{1}&x_{2}+x_{2}&\cdots&x_{2}+x_{n}\\ \vdots&\vdots&\ddots&\vdots\\ x_{n}+x_{1}&x_{n}+x_{2}&\cdots&x_{n}+x_{n}\end{bmatrix} \tag{2}\]
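As a concrete illustration, a minimal Python sketch of the HL estimator in Equation (1) is given below. The paper's own computations are run in R; the `pairs` argument, which anticipates the three index-pair variants discussed later, is an assumption of this sketch.

```python
# Sketch of the Hodges-Lehmann estimator: the median of the pairwise averages
# (x_i + x_j) / 2 over a chosen set of index pairs.
import numpy as np

def hodges_lehmann(x, pairs="i<j"):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if pairs == "i<j":
        idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    elif pairs == "i<=j":
        idx = [(i, j) for i in range(n) for j in range(i, n)]
    else:  # all ordered pairs (i, j)
        idx = [(i, j) for i in range(n) for j in range(n)]
    return float(np.median([(x[i] + x[j]) / 2.0 for i, j in idx]))
```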
As presented above, the weights are not considered in the traditional HL estimator. Therefore, new robust estimators that incorporate the weights of the observations must be developed. Accordingly, this study proposes two new categories of WHL estimators to substitute the HL estimator when dealing with weighted observations. Particularly, the first category of WHL estimator is the median of all pairwise weighted averages of the sample observations, and the second category of WHL estimator is the weighted median of all pairwise weighted averages of the sample observations.
### The first category of WHL estimators
As presented in Formula (1), the \(\text{Median}\left(\frac{x_{i}+x_{j}}{2}\right)\) is considered a "pseudo-median" and closely related to the population median. However, after the weighting adjustment is applied to the sample observations, the weighted average needs to be used. Then we have the following definition:
**Definition 1**: Given a set of observations \(\mathbf{x_{1},x_{2},...,x_{n}}\) with corresponding positive weights \(w_{1},w_{2},...,w_{n}\), such that \(\sum_{i=1}^{n}w_{i}=1\). The first category of WHL estimator is defined as the median of all pairwise weighted averages of the sample observations, denoted by \(WHL1\), which is given by:
\[WHL1=Median\left(\frac{L_{ij}}{w_{i}+w_{j}}\Big{|}w_{i}+w_{j}\right) \tag{3}\]
where
\[L_{ij}=\begin{bmatrix}w_{1}x_{1}+w_{1}x_{1}&w_{1}x_{1}+w_{2}x_{2}&\cdots&w_{1}x_{1}+w_{n-1}x_{n-1}&w_{1}x_{1}+w_{n}x_{n}\\ w_{2}x_{2}+w_{1}x_{1}&w_{2}x_{2}+w_{2}x_{2}&\cdots&w_{2}x_{2}+w_{n-1}x_{n-1}&w_{2}x_{2}+w_{n}x_{n}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ w_{n-1}x_{n-1}+w_{1}x_{1}&w_{n-1}x_{n-1}+w_{2}x_{2}&\cdots&w_{n-1}x_{n-1}+w_{n-1}x_{n-1}&w_{n-1}x_{n-1}+w_{n}x_{n}\\ w_{n}x_{n}+w_{1}x_{1}&w_{n}x_{n}+w_{2}x_{2}&\cdots&w_{n}x_{n}+w_{n-1}x_{n-1}&w_{n}x_{n}+w_{n}x_{n}\end{bmatrix} \tag{4}\]
Similar to the \(HL\) estimator presented in (Park, Kim, and Wang 2022), the \(WHL1\) can also be calculated for three cases: namely (i) \(i<j\), (ii) \(i\leq j\), and (iii) \(\forall(i,j)\). These three versions are denoted as follows:
\[WHL1(i<j)=Median\left(\frac{w_{i}x_{i}+w_{j}x_{j}}{w_{i}+w_{j}}\Big{|}w_{i}+w_{ j}\right),\quad\text{for $i<j$} \tag{5}\]
\[WHL1(i\leq j)=Median\left(\frac{w_{i}x_{i}+w_{j}x_{j}}{w_{i}+w_{j}}\Big{|}w_{i} +w_{j}\right),\quad\text{for $i\leq j$} \tag{6}\]
\[WHL1(\forall(i,j))=Median\left(\frac{w_{i}x_{i}+w_{j}x_{j}}{w_{i}+w_{j}}\Big{|} w_{i}+w_{j}\right),\quad\text{for $\forall(i,j)$} \tag{7}\]
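A minimal Python sketch of the three \(WHL1\) variants in Equations (5)-(7) is given below; variable names are illustrative, and the paper's own computations are performed in R.

```python
# Sketch of the first category of WHL estimators: the plain median of the pairwise
# weighted averages (w_i x_i + w_j x_j) / (w_i + w_j).
import numpy as np

def whl1(x, w, pairs="i<j"):
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    n = len(x)
    if pairs == "i<j":
        idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    elif pairs == "i<=j":
        idx = [(i, j) for i in range(n) for j in range(i, n)]
    else:  # all ordered pairs (i, j)
        idx = [(i, j) for i in range(n) for j in range(n)]
    vals = [(w[i] * x[i] + w[j] * x[j]) / (w[i] + w[j]) for i, j in idx]
    return float(np.median(vals))
```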
### The second category of WHL estimators
Then, we derive the second category of WHL estimators that are measured by the weighted median of all pairwise weighted averages of the sample observations, and its definition is given by:
**Definition 2**: Given a set of observations \(x_{1},x_{2},...,x_{n}\) with corresponding positive weights \(w_{1},w_{2},...,w_{n}\), such that \(\sum_{i=1}^{n}w_{i}=1\). The second category of WHL estimator is defined as the weighted median of all pairwise weighted averages of the sample observations, denoted by \(WHL2\) and given by:
\[WHL2=\text{Weighted median}\left(\frac{L_{ij}}{w_{i}+w_{j}}\Big{|}w_{i}+w_{j}\right) \tag{8}\]
Similar to the \(HL\) estimator presented in (Park, Kim, and Wang 2022), \(WHL2\) can also be calculated for three cases: (i) \(i<j\), (ii) \(i\leq j\), and (iii) \(\forall(i,j)\). These three versions are presented as follows:
\[WHL2(i<j)=\text{Weighted median}\left(\frac{w_{i}x_{i}+w_{j}x_{j}}{w_{i}+w_{j}}\left|w_{i}+w_{j}\right.\right),\quad\text{for }i<j \tag{9}\]

\[WHL2(i\leq j)=\text{Weighted median}\left(\frac{w_{i}x_{i}+w_{j}x_{j}}{w_{i}+w_{j}}\left|w_{i}+w_{j}\right.\right),\quad\text{for }i\leq j \tag{10}\]
\[WHL2[\forall(i,j)]=\text{Weighted median}\left(\frac{w_{i}x_{i}+w_{j}x_{j}}{w_{i}+w_{j}} \left|w_{i}+w_{j}\right.\right),\quad\text{for }\forall(i,j) \tag{11}\]
where \(\text{Weighted median}(\cdot)\) is the function that computes the 50% weighted percentile.
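The following Python sketch illustrates the weighted median as defined above and the resulting \(WHL2\) estimators of Equations (9)-(11), with each pairwise average carrying weight \(w_{i}+w_{j}\); it is an illustrative sketch rather than the paper's R implementation.

```python
# Sketch of the weighted median (50% weighted percentile) and the second category of
# WHL estimators: the weighted median of the pairwise weighted averages.
import numpy as np

def weighted_median(values, weights):
    values, weights = np.asarray(values, dtype=float), np.asarray(weights, dtype=float)
    order = np.argsort(values)
    v, w = values[order], weights[order] / weights.sum()      # renormalize weights to sum to 1
    cum = np.cumsum(w)
    return float(v[np.searchsorted(cum, 0.5, side="right")])  # first value whose cumulative weight exceeds 1/2

def whl2(x, w, pairs="i<j"):
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    n = len(x)
    if pairs == "i<j":
        idx = [(i, j) for i in range(n) for j in range(i + 1, n)]
    elif pairs == "i<=j":
        idx = [(i, j) for i in range(n) for j in range(i, n)]
    else:  # all ordered pairs (i, j)
        idx = [(i, j) for i in range(n) for j in range(n)]
    vals = [(w[i] * x[i] + w[j] * x[j]) / (w[i] + w[j]) for i, j in idx]
    pw = [w[i] + w[j] for i, j in idx]                         # pairwise weights
    return weighted_median(vals, pw)
```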
## 3 Breakdown point
In the context of robust design, the robustness of an estimator is usually evaluated using the well-known breakdown point criterion (Donoho and Huber 1983; Park, Kim, and Wang 2022), which quantifies the maximum proportion of outliers that the estimator can endure before it breaks down. For example, the breakdown points of the sample mean and sample median for the location parameter are 0 and 0.5, respectively. Since the breakdown point can generally be written as a function of the sample size (e.g., usually denoted by \(n\)), this study also develops the finite-sample breakdown-point function for the various location estimators mentioned above. In addition to the breakdown point criterion, this study also utilizes bias and relative efficiency to evaluate the performance of the newly proposed WHL estimators.
### Breakdown point of \(WHL1\)
After comparing the \(WHL1\) estimators with the HL estimator, it was found that the only difference between them is that the \(WHL1\) estimators apply a weighted average in each pair. In this sense, the \(WHL1\) estimators have the same breakdown point as the HL estimator for the three corresponding cases. Based on our previous work (Ouyang Linhan 2019; Park, Kim, and Wang 2022), we summarize the breakdown points for the three cases presented in Formulas (5)-(7) given a sample size \(n\). Thus, we have the corresponding breakdown points for the above three \(WHL1\) cases, which are provided as follows.
**Theorem 1:** The breakdown points for the above three \(WHL1\) cases are:
\[BP_{WHL1(i<j)}=\frac{n-\frac{1}{2}-\sqrt{\left(n-\frac{1}{2}\right)^{2}-2 \left[\frac{n^{2}-n-2}{4}\right]}}{n} \tag{12}\]
\[BP_{WHL1(i\leq j)}=\frac{\left|n+\frac{1}{2}-\sqrt{\left(n+\frac{1}{2}\right)^{2}-2 \left[\frac{n^{2}+n-2}{4}\right]}\right|}{n} \tag{13}\]
\[BP_{WHL1\nu(i,j)}=\frac{\left|n-\sqrt{n^{2}-\left[\frac{n^{2}-1}{2}\right]} \right|}{n} \tag{14}\]
**Proof**: To validate the breakdown points for the above three \(WHL1\) cases, it is required to account for the proportion of incorrect or arbitrarily large pairwise caused by the outliers. With **Definition 1**, given a contaminated observation \(x_{i}^{\prime}\) or \(x_{j}^{\prime}\), we have
\[WHL1^{\prime}=\text{Median}\left(\frac{w_{i}{x_{i}}^{\prime}+w_{j}x_{j}}{w_{i} +w_{j}}\left|w_{i}+w_{j}\right.\right)\text{ or }WHL1^{\prime}=\text{Median}\left(\frac{w_{i}x_{i}+w_{j}x_{j}^{ \prime}}{w_{i}+w_{j}}\left|w_{i}+w_{j}\right.\right) \tag{15}\]
According to the above equation, it is easily seen that the pairwise is directly affected by its observation values rather than the weights. In addition, the traditional \(HL\) estimator is a special case of the \(WHL1\) estimator when the weights are equal. Since the pairwise in the traditional \(HL\) estimator is only influenced by its observation values, the \(WHL1\) estimator has the same breakdown points as the traditional \(HL\) estimator provided by Park, Kim, and Wang (2022). Then we complete the proof for validating the breakdown points for the \(WHL1\) estimator.
To present the robustness property of the \(WHL1\) estimator, several robust estimators are compared given different sample sizes (i.e., from 1 to 50), and a plot of these values is provided in Fig. 1. Particularly, the breakdown point for the \(WHL1\) is provided first, which is the same as that of the \(HL\) estimator estimated by Park, Kim, and Wang (2022). The breakdown point of the median is also shown in Fig. 1. In terms of the weighted median (see the blue and green lines in Fig. 1), the lower and upper bounds of the breakdown point are presented as they are strongly correlated with the observation weights. The \(WHL1\)'s breakdown point is related to the number of pairwise rather than observations, making the breakdown point of the \(WHL1\) estimator different from that of the traditional weighted median.
### Breakdown point of \(WHL2\)

Furthermore, this study examines the breakdown point of the \(WHL2\) estimator. Unlike \(WHL1\), the \(WHL2\) estimator applies the weighted median to all pairwise weighted averages, so its breakdown point is highly correlated with the outlier weights. It is well known that the weighted median is the 50% weighted percentile (Gao et al. 2022). Because the summation of all pairwise sample weights is greater than 1, it is necessary to renormalize the weights for the above three versions. To represent the pairwise weight of sample observations \(i\) and \(j\), three cases are considered and given as follows:
\[W_{ij}(i<j)=(w_{i}+w_{j})/\sum_{i}^{n}\sum_{j}^{i-1}(w_{i}+w_{j})\,,\quad for \;i<j \tag{16}\]
\[W_{ij}(i\leq j)=(w_{i}+w_{j})/\sum_{i}^{n}\sum_{j}^{i}(w_{i}+w_{j})\,,\quad for \;i\leq j \tag{17}\]
\[W_{ij}[\forall(i,j)]=(w_{i}+w_{j})/\sum_{i}^{n}\sum_{j}^{n}(w_{i}+w_{j})\,, \quad for\;\forall(i,j) \tag{18}\]
According to the equations (16)-(18), the total numbers of elements/pairwise for the aforementioned three cases are (i) \(m=(n^{2}-n)/2\), (ii) \(m=(n^{2}+n)/2\), and (iii) \(m=n^{2}\). In the following, we derive the breakdown point for each case.
Figure 1: Illustration of breakdown points of robust estimators
#### (i) Case 1
After applying the weighting adjustment to the sample, the 50% weighted percentile is calculated. For convenience, let \(z_{1},z_{2},...,z_{m}\) represent the ascending order statistics with corresponding weights \(\omega_{(1)},\omega_{(2)},...,\omega_{(m)}\) such that \(\sum_{i=1}^{m}\omega_{(i)}=1\), where \(m=(n^{2}-n)/2\) is the total number of elements. Thus, the weighted median in this case, denoted by \(WM1\), can be obtained as follows:
\[WM1=\inf\biggl{\{}z\colon\sum_{i=1}^{k}\omega_{(i)}>\frac{1}{2}\biggr{\}}= \sup\biggl{\{}z\colon\sum_{i=1}^{k}\omega_{(i)}\leq\frac{1}{2}\biggr{\}}=z_{( k+1)} \tag{19}\]
For more details on how to obtain the weighted median, readers are referred to previous studies (Gao 2020; Gao et al. 2022).
**Theorem 2**: Given a set of observations \(x_{1},x_{2},...,x_{n}\) with corresponding positive weights \(w_{1},w_{2},...,w_{n}\), \(WHL2(i<j)\) can be obtained using equation (9). After a combination of reordering and reweighting operations, ascending order statistics \(z_{1},z_{2},...,z_{m}\) with corresponding weights \(\omega_{(1)},\omega_{(2)},...,\omega_{(m)}\) such that \(\sum_{i=1}^{m}\omega_{(i)}=1\) can be obtained. Thus, the breakdown point for \(WHL2(i<j)\) is given as follows:
\[BP_{WHL2(i<j)}=\frac{\max\Bigl{\{}k\leq\frac{n^{2}-n}{2}-1\colon\sum_{i=1}^{k} \omega_{(i)}<\frac{1}{2}\Bigr{\}}}{\frac{n^{2}-n}{2}} \tag{20}\]
**Proof**: From **Definition 2**, it was found that \(WHL2(i<j)\) is the weighted median of all pairwise averages rather than the traditional median. In other words, the robustness of \(WHL2(i<j)\) is related to the pairwise weights. To derive the breakdown point for \(WHL2(i<j)\), it is required to know the number of pairwise averages while \(i<j\), which is given by
\[m=\frac{n^{2}-n}{2} \tag{21}\]
Combined with Definition in (27) proposed by Gao et al. (2022), the breakdown point for \(WHL2(i<j)\) is given by
\[BP_{WHL2(i<j)}=\frac{\max\Bigl{\{}k\leq m-1\colon\sum_{i=1}^{k}\omega_{(i)}< \frac{1}{2}\Bigr{\}}}{m}=\frac{\max\Bigl{\{}k\leq\frac{n^{2}-n}{2}-1\colon \sum_{i=1}^{k}\omega_{(i)}<\frac{1}{2}\Bigr{\}}}{\frac{n^{2}-n}{2}} \tag{22}\]
Thus, we complete the proof for Theorem 2.
#### (ii) Case 2

Similarly, let \(z_{1},z_{2},...,z_{m}\) represent the ascending order statistics with corresponding weights \(\omega_{(1)},\omega_{(2)},...,\omega_{(m)}\) such that \(\sum_{i=1}^{m}\omega_{(i)}=1\), where \(m=\frac{n^{2}+n}{2}\) is the total number of elements/pairwise in Case 2. Then, the weighted median in this case, denoted by \(WM2\), can be obtained as follows:
\[WM2=\inf\left\{z:\sum_{i=1}^{k}\omega_{(i)}>\frac{1}{2}\right\}=\sup\left\{z: \sum_{i=1}^{k}\omega_{(i)}\leq\frac{1}{2}\right\}=z_{(k+1)},\qquad k\leq\frac{ n^{2}+n}{2} \tag{23}\]
**Theorem 3**. Given a set of observations \(x_{1},x_{2},...,x_{n}\) with corresponding positive weights \(w_{1},w_{2},...,w_{n}\), \(WHL2(i\leq j)\) can be obtained using equation (10). After a combination of reordering and reweighting operations, ascending order statistics \(z_{1},z_{2},...,z_{m}\) with corresponding weights \(\omega_{(1)},\omega_{(2)},...,\omega_{(m)}\) such that \(\sum_{i=1}^{m}\omega_{(i)}=1\) can be obtained. Thus, the breakdown point for \(WHL2(i\leq j)\) is given as follows:
\[BP_{WHL2(i\leq j)}=\frac{\max\left\{k\leq\frac{n^{2}+n}{2}-1:\sum_{i=1}^{k} \omega_{(i)}<\frac{1}{2}\right\}}{\frac{n^{2}+n}{2}} \tag{24}\]
**Proof:** From Definition 2 presented in (10), \(WHL2(i\leq j)\) is also the weighted median of all pairwise rather than the traditional median. To derive the breakdown point for \(WHL2(i\leq j)\), it is required to know the number of pairwise while \(i\leq j\), which is given by
\[m=\frac{n^{2}+n}{2} \tag{25}\]
Combined with Definition in (27) proposed by Gao et al. (2022), the breakdown point for \(WHL2(i\leq j)\) is given by
\[BP_{WHL2(i\leq j)}=\frac{\max\left\{k\leq m-1:\sum_{i=1}^{k}\omega_{(i)}<\frac {1}{2}\right\}}{m}=\frac{\max\left\{k\leq\frac{n^{2}+n}{2}-1:\sum_{i=1}^{k} \omega_{(i)}<\frac{1}{2}\right\}}{\frac{n^{2}+n}{2}} \tag{26}\]
Thus, we complete the proof for Theorem 3.
#### (iii) Case 3
Similarly, let \(z_{1},z_{2},...,z_{m}\) represent the ascending order statistics with corresponding weights \(\omega_{(1)},\omega_{(2)},...,\omega_{(m)}\) such that \(\sum_{i=1}^{m}\omega_{(i)}=1\), where \(m=n^{2}\) is the total number of elements/pairwise in Case 3. Then, the weighted median, in this case, denoted by \(WM3\), can be obtained as follows:
\[WM3=\inf\left\{z:\sum_{i=1}^{k}\omega_{(i)}>\frac{1}{2}\right\}=\sup\left\{z:\sum_{i=1}^{k}\omega_{(i)}\leq\frac{1}{2}\right\}=z_{(k+1)},\qquad k\leq n^{2}-1 \tag{27}\]
**Theorem 4**. Given a set of observations \(x_{1},x_{2},...,x_{n}\) with corresponding positive weights \(w_{1},w_{2},...,w_{n}\), \(WHL2[\forall(i,j)]\) can be obtained using Equation (11). After a combination of reordering and reweighting operations, ascending order statistics \(z_{1},z_{2},...,z_{m}\) with corresponding weights \(\omega_{(1)},\omega_{(2)},...,\omega_{(m)}\) such that \(\sum_{i=1}^{m}\omega_{(i)}=1\) can be obtained. Thus, the breakdown point for \(WHL2[\forall(i,j)]\) is given as follows:
\[BP_{WHL2[\forall(i,j)]}=\frac{\max\left\{k\leq n^{2}-1\!:\!\sum_{i=1}^{k} \omega_{(i)}<\frac{1}{2}\!\right\}}{n^{2}} \tag{28}\]
**Proof:** From Definition 2 presented in (11), \(WHL2[\forall(i,j)]\) is also the weighted median of all pairwise averages rather than the traditional median. To derive the breakdown point for \(WHL2[\forall(i,j)]\), it is required to know the number of pairwise averages while \(\forall(i,j)\), which is given by
\[m=n^{2} \tag{29}\]
Combined with the Definition in (27) proposed by Gao et al. (2022), the breakdown point for \(WHL2[\forall(i,j)]\) is given by
\[BP_{WHL2[\forall(i,j)]}=\frac{\max\left\{k\leq m-1\!:\!\sum_{i=1}^{k}\omega_ {(i)}<\frac{1}{2}\!\right\}}{m}=\frac{\max\left\{k\leq n^{2}-1\!:\!\sum_{i=1}^{ k}\omega_{(i)}<\frac{1}{2}\!\right\}}{n^{2}} \tag{30}\]
Thus, we complete the proof for Theorem 4.
### Breakdown point comparison
Because the breakdown point of the \(WhL2\) estimators is strongly correlated with the weights, it is difficult to derive the specific formulation of the breakdown point. Thus, this study considered two cases (best and worst) to obtain the upper- and lower-bound breakdown points.
**(1) Best case**
Suppose that the set of \(z_{1},z_{2},...,z_{m}\) is the order statistics of \(\omega^{u}{}_{(1)},\omega^{u}{}_{(2)},...,\omega^{u}{}_{(m)}\) such that \(\omega^{u}{}_{(1)}\leq\omega^{u}{}_{(2)}\leq\cdots\leq\omega^{u}{}_{(m)}\) and \(\sum_{i=1}^{m}\omega^{u}{}_{(i)}=1\). In the best case, the set of contaminated pairwise averages has smaller weights. Thus, the upper-bound breakdown point is given by
\[BP^{u}=\frac{\max\left\{k\leq m-1\colon\sum_{i=1}^{k}\omega^{u}{}_{(i)}<\frac{1}{2} \right\}}{m} \tag{31}\]
Note that with the definition \(f(0)=0\) and \(f(x)=\sum_{i=1}^{x}\omega^{u}{}_{(i)}\) for \(x=1,2,...,m,f(x)\) is a discrete convex function. For more details, please refer to **Lemma 1** of Gao et al. (2022).
**(2) Worst case**

We then consider the worst case, in which the lower bound of the breakdown point for \(WHL2\) can be obtained. Suppose that the set of \(z_{1},z_{2},...,z_{m}\) is the order statistics of \(\omega^{l}{}_{(1)},\omega^{l}{}_{(2)},...,\omega^{l}{}_{(m)}\) such that \(\omega^{l}{}_{(1)}\geq\omega^{l}{}_{(2)}\geq\cdots\geq\omega^{l}{}_{(m)}\) and \(\sum_{i=1}^{m}\omega^{l}{}_{(i)}=1\). When the set of contaminated pairwise averages has larger weights, the lower-bound breakdown point can be obtained as follows:
\[BP^{l}=\frac{\max\left\{k\leq m-1\colon\sum_{i=1}^{k}\omega^{l}{}_{(i)}<\frac{ 1}{2}\right\}}{m} \tag{32}\]
Note that with definition \(f(0)=0\) and \(f(x)=\sum_{i=1}^{x}\omega^{l}{}_{(i)}\) for \(x=1,2,...,m,f(x)\) is a discrete concave function. For more details, please refer to **Lemma 2** of Gao et al. (2022).
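The two bounds can be illustrated with a short Python sketch that sorts the renormalized pairwise weights in ascending (best-case) or descending (worst-case) order and counts how many pairwise averages can be contaminated before their cumulative weight reaches 1/2; this is a sketch for illustration only, and the paper's own computations are in R.

```python
# Sketch of the finite-sample breakdown-point bounds for WHL2 (Eqs. 31-32).
import numpy as np

def whl2_breakdown_bounds(pair_weights):
    w = np.asarray(pair_weights, dtype=float)
    w = w / w.sum()                        # renormalize pairwise weights to sum to 1
    m = len(w)

    def bp(sorted_w):
        cum = np.cumsum(sorted_w[:-1])     # k can be at most m - 1
        k = int(np.sum(cum < 0.5))         # largest k with cumulative weight below 1/2
        return k / m

    upper = bp(np.sort(w))                 # best case: contaminated pairs carry the smallest weights
    lower = bp(np.sort(w)[::-1])           # worst case: contaminated pairs carry the largest weights
    return lower, upper
```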
Thus, the breakdown point of the \(WHL2\) estimator can be roughly estimated. Particularly, to estimate the lower-bound breakdown point, the worst case is considered with the assumption \(\omega^{l}{}_{(i+1)}-\omega^{l}{}_{(i)}=\omega^{l}{}_{(i)}-\omega^{l}{}_{(i-1 )}>0\). To estimate the upper-bound breakdown point, the best case is considered under the assumption \(\omega^{l}{}_{(i-1)}-\omega^{l}{}_{(i)}=\omega^{l}{}_{(i)}-\omega^{l}{}_{(i+1 )}>0\). To illustrate the robustness of the newly proposed \(WHL2\) estimator, different methods are compared, and Table 1 provides the breakdown points when the sample size increases from 1 to 20. As shown in Table 1, the breakdown point of the proposed \(WHL1\) estimator is the same as that of the traditional \(HL\) estimator. However, the breakdown point of the \(WHL2\) estimator is different from that of the \(WHL1\) estimator owing to the intrinsic weights.
Table 1: Properties (i.e., the number of pairwise averages and breakdown points) of the newly proposed WHL estimators, reporting, for sample sizes from 1 to 20, the breakdown points of the median and weighted median together with the number of pairwise averages and breakdown points of \(WHL1\) and \(WHL2\).
In addition, a visual illustration is also provided in Fig. 2, in which the breakdown points of the three different cases of \(WHL1\) are separately compared with the traditional methods. It is easy to observe that given a specific sample size, the median, \(HL\), and \(WHL1\) have a fixed breakdown point. However, the breakdown points of the weighted median and \(WHL2\) belong to a certain range owing to their intrinsic weights [see Fig. 2 (a)-(c)]. Interestingly, \(WHL2\) has a stable range of breakdown points when the sample size increases from 1 to 20, compared with the weighted median. The lower-bound breakdown point of \(WHL2\) is relatively higher than that of the other methods, except for the median.
## 4 Simulation study
### Evaluation criterion
To evaluate the performances of the proposed WHL estimator, the bias and relative efficiency are evaluated and compared with the conventional location estimators. In this study, the bias is the difference between the expected value and the true value of the parameter being estimated, which is defined as
\[\boxed{\text{Bias}\left(\hat{\theta},\theta\right)=\left|\hat{\theta}-\theta \right|} \tag{33}\]
where \(\theta\) is the weighted mean and \(\hat{\theta}\) is the alternative location estimator in this study.
Efficiency (Serfling 2011) is another method to measure the quality of an estimator in the experimental design, and the relative efficiency of the two procedures is the ratio of their efficiencies. Here, the relative efficiency is used as a metric for comparing the effectiveness of the two estimators, which has been widely used in previous studies (Lehmann 2004; Park and Leeds 2016; Gao and Cui 2021; Park, Kim, and Wang 2022; Park, Gao, and Wang 2023). The relative efficiency of \(\theta\) and \(\hat{\theta}\) is defined as
\[\boxed{\text{Relative efficiency}\left(\hat{\theta},\theta\right)=\frac{ \text{Var}(\theta)}{\text{Var}(\hat{\theta})}\times 100\%} \tag{34}\]
where \(\theta\) is often a reference or baseline estimator. In this study, \(\text{Var}(\theta)\) is the weighted variance of the sample, which is given by
Figure 2: Comparison of breakdown points under different robust estimators
\[\mathrm{Var}(\theta)=\frac{\sum_{i}(w_{i}\sigma_{i})^{2}}{(\sum_{i}w_{i})^{2}} \tag{35}\]
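A minimal Python sketch of this evaluation procedure is given below; the estimator argument and distribution parameters are illustrative, and the paper's own simulation is implemented in R.

```python
# Sketch of the evaluation in Eqs. (33)-(35): replicate a weighted normal sample many
# times, apply a location estimator, and report its bias against the weighted mean and
# its relative efficiency against the weighted variance.
import numpy as np

def simulate_bias_and_re(estimator, mu, sigma, w, n_rep=10_000, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma, w = map(np.asarray, (mu, sigma, w))
    theta = np.sum(w * mu)                                    # weighted mean, the reference value
    var_theta = np.sum((w * sigma) ** 2) / np.sum(w) ** 2     # weighted variance, Eq. (35)
    estimates = np.array([estimator(rng.normal(mu, sigma), w) for _ in range(n_rep)])
    bias = abs(estimates.mean() - theta)                      # Eq. (33), averaged over replications
    relative_efficiency = var_theta / estimates.var() * 100   # Eq. (34), in percent
    return bias, relative_efficiency
```

For instance, the `whl1` or `whl2` functions sketched earlier can be passed as `estimator` to compare the methods under the stated sampling assumptions.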
### Illustrative examples
In this study, six samples with different sample sizes and weight-value configurations are first constructed. The simulations and associated computations are performed in R. For a sample of size \(n\), observation \(i\) is randomly generated from the normal distribution \(N(\mu_{i},\sigma_{i})\) with a given weight \(w_{i}\). To estimate the bias and relative efficiency, we replicate the sample 10,000 times, which results in 10,000 estimated locations for the proposed WHL estimator.
In Sample 1, the weights are the same as the normalized sample mean values. Here, the sample size is \(n=4\), and there are four mean values (i.e., 4, 3, 2, and 1) with the corresponding standard deviations (i.e., 10, 5, 10, and 5) and the corresponding weights (i.e., 4/10, 3/10, 2/10, and 1/10). The dataset has a weighted mean of \(\sum_{i}w_{i}\mu_{i}=2\) and a weighted variance of \(\sum_{i}(w_{i}\sigma_{i})^{2}=15\). The results (see Table 2) of biases and relative efficiencies are calculated and compared by using weighted mean, weighted median, and the proposed WHL estimator. We use the traditional method of the weighted mean as the baseline method by calculating the relative efficiencies of other methods. Therefore, the relative efficiency of the weighted mean is always around 100%. The weighted mean and the proposed WHL estimator have a smaller bias than the weighted median except for \(WHL2(i<j)\). In terms of relative efficiency, the weighted mean and the proposed WHL estimator also outperform the weighted median.
In addition to Sample 1, more examples (i.e., Samples 2-6) are provided and illustrated in Table 3, where the observation \(i\) is randomly generated from the normal distribution \(N(\mu_{i},\sigma_{i})\) with a given weight \(w_{i}\). With Samples 2-6, the biases and relative efficiencies are tested and compared when the sample size \(n\) goes from 3 to 15. The simulation results of biases for samples 2-6 are presented in Tables 4-8, respectively. In addition, the results of relative efficiencies for Samples 2-6 are presented in Tables 9-13, respectively.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Measures} & \multirow{2}{*}{WM} & \multirow{2}{*}{WMD} & \multicolumn{3}{c}{\(WHL1\)} & \multicolumn{3}{c}{\(WHL2\)} \\ \cline{3-10} & & & \(i<j\) & \(i\leq j\) & \(\forall(i,j)\) & \(i<j\) & \(i\leq j\) & \(\forall(i,j)\) \\ \hline Variance & 15.4898 & 16.9559 & 14.8596 & 14.0680 & 14.0680 & 16.5168 & 14.3338 & 15.1130 \\ Bias & 0.004.1 & 0.3877 & 0.0276 & 0.1996 & 0.1996 & 0.4193 & 0.1936 & 0.2389 \\ Relative efficiency & 96.8473 & 87.6959 & 100.9500 & 106.3350 & 89.8690 & 89.8690 & 104.3850 & 98.8887 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of bias and relative efficiency in different methods for Sample 1
Table 3: Description of Samples 2-6
As shown in Tables 4-8, the biases are obtained for the weighted mean, the weighted median, and the proposed WHL estimators, and the methods differ in their bias performance. Specifically, in Table 4, the biases obtained with the different methods are quite similar. In Tables 5-8, the proposed WHL estimators (i.e., \(WHL1\) and \(WHL2\)) have smaller biases than the weighted median. It is also evident that the \(WHL2\) estimators have smaller biases than the \(WHL1\) estimators and the weighted median in Table 6, whereas the \(WHL1\) estimators have smaller biases than the \(WHL2\) estimators only in Table 7. Generally, the \(WHL2\) estimators should be preferred for estimating the location parameter.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Sample & \multirow{2}{*}{WM} & \multirow{2}{*}{WMD} & \multicolumn{3}{c}{\(WHL1\)} & \multicolumn{3}{c}{\(WHL2\)} \\ \cline{5-8} size & & & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 0.005746 & 0.001185 & 0.008027 & 0.004606 & 0.008027 & 0.008027 & 0.004606 & 0.008027 \\
4 & 0.001980 & 0.002878 & 0.001980 & 0.001881 & 0.001881 & 0.001980 & 0.001881 & 0.001881 \\
5 & 0.004424 & 0.000997 & 0.004668 & 0.004130 & 0.004130 & 0.004668 & 0.004130 & 0.004130 \\
6 & 0.001634 & 0.001739 & 0.000963 & 0.000716 & 0.000840 & 0.000963 & 0.000716 & 0.000840 \\
7 & 0.001537 & 0.001743 & 0.001424 & 0.002450 & 0.002449 & 0.001424 & 0.002450 & 0.002449 \\
8 & 0.001554 & 0.000187 & 0.001961 & 0.001416 & 0.001835 & 0.001961 & 0.001416 & 0.001835 \\
9 & 0.002879 & 0.001483 & 0.002535 & 0.002483 & 0.002674 & 0.002535 & 0.002483 & 0.002674 \\
10 & 0.002253 & 0.000377 & 0.002292 & 0.001530 & 0.001977 & 0.002292 & 0.001530 & 0.001977 \\
11 & 0.001037 & 0.002471 & 0.001857 & 0.001638 & 0.001650 & 0.001857 & 0.001638 & 0.001650 \\
12 & 0.002499 & 0.003920 & 0.002611 & 0.002519 & 0.002499 & 0.002611 & 0.002519 & 0.002499 \\
13 & 0.002763 & 0.002294 & 0.002455 & 0.002300 & 0.002497 & 0.002455 & 0.002300 & 0.002497 \\
14 & 0.000832 & 0.002610 & 0.002234 & 0.002321 & 0.002255 & 0.002234 & 0.002321 & 0.002255 \\
15 & 0.002910 & 0.003007 & 0.002887 & 0.002785 & 0.002862 & 0.002887 & 0.002785 & 0.002862 \\ \hline \multicolumn{8}{l}{WM: weighted mean; WMD: weighted median.} \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of biases given different sample sizes for sample 2
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Sample & \multirow{2}{*}{WM} & \multirow{2}{*}{WMD} & \multicolumn{3}{c}{\(WHL1\)} & \multicolumn{3}{c}{\(WHL2\)} \\ \cline{5-8} size & & & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 0.002546 & 0.219605 & 0.113621 & 0.052992 & 0.113621 & 0.113621 & 0.052992 & 0.113621 \\
4 & 0.009528 & 0.269012 & 0.009528 & 0.172761 & 0.172761 & 0.009528 & 0.172761 & 0.172761 \\
5 & 0.020621 & 0.499740 & 0.141553 & 0.185723 & 0.185723 & 0.141553 & 0.185723 & 0.185723 \\
6 & 0.005601 & 0.594033 & 0.144118 & 0.247098 & 0.195608 & 0.144118 & 0.247098 & 0.195608 \\ \hline \end{tabular}
\end{table}
Table 5: Results of biases given different sample sizes for sample 3
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Sample & \multirow{2}{*}{WM} & \multirow{2}{*}{WMD} & \multicolumn{3}{c}{\(WHL1\)} & \multicolumn{3}{c}{\(WHL2\)} \\ \cline{4-9} size \(n\) & & & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 0.003341 & 0.571137 & 0.054455 & 0.190326 & 0.054455 & 0.160538 & 0.270841 & 0.243188 \\
4 & 0.006683 & 0.735182 & 0.205916 & 0.406355 & 0.406355 & 0.279373 & 0.354942 & 0.300079 \\
5 & 0.011738 & 0.860739 & 0.445836 & 0.468608 & 0.468608 & 0.374829 & 0.433715 & 0.381896 \\
6 & 0.005184 & 0.975721 & 0.527651 & 0.631118 & 0.579384 & 0.451484 & 0.504122 & 0.463677 \\
7 & 0.004441 & 1.065208 & 0.668787 & 0.906668 & 0.820475 & 0.519239 & 0.568104 & 0.535909 \\
8 & 0.005840 & 1.139922 & 1.004772 & 1.098953 & 1.060279 & 0.581483 & 0.622855 & 0.598111 \\
9 & 0.013435 & 1.214767 & 1.178325 & 1.286067 & 1.232528 & 0.644959 & 0.680271 & 0.653535 \\
10 & 0.005792 & 1.291698 & 1.346788 & 1.497167 & 1.411557 & 0.721313 & 0.751430 & 0.725734 \\
11 & 0.004787 & 1.339866 & 1.589778 & 1.714265 & 1.654797 & 0.762745 & 0.791099 & 0.768321 \\
12 & 0.012992 & 1.415307 & 1.790764 & 1.908060 & 1.852914 & 0.829621 & 0.855549 & 0.837180 \\
13 & 0.006141 & 1.460031 & 2.029887 & 2.151688 & 2.091706 & 0.863025 & 0.886453 & 0.870907 \\
14 & 0.003357 & 1.498904 & 2.253414 & 2.390950 & 2.332289 & 0.913681 & 0.933902 & 0.918962 \\
15 & 0.007536 & 1.534874 & 2.483312 & 2.607654 & 2.548772 & 0.947184 & 0.965220 & 0.949900 \\ \hline \hline \end{tabular} WM: weighted mean; WMD: weighted median.
\end{table}
Table 6: Results of biases given different sample sizes for sample 4
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Sample & \multirow{2}{*}{WM} & \multirow{2}{*}{WMD} & \multicolumn{3}{c}{\(WHL1\)} & \multicolumn{3}{c}{\(WHL2\)} \\ \cline{4-9} size \(n\) & & & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 0.002277 & 0.013838 & 0.016873 & 0.062361 & 0.016873 & 0.077455 & 0.200847 & 0.138941 \\
4 & 0.008143 & 0.145553 & 0.060557 & 0.121200 & 0.121200 & 0.215706 & 0.318334 & 0.282437 \\
5 & 0.001509 & 0.333143 & 0.112806 & 0.116623 & 0.116623 & 0.325929 & 0.414495 & 0.404251 \\
6 & 0.015216 & 0.516374 & 0.117792 & 0.168326 & 0.143059 & 0.411831 & 0.514329 & 0.503707 \\
7 & 0.001870 & 0.676427 & 0.154714 & 0.241869 & 0.203409 & 0.543847 & 0.635635 & 0.603226 \\
8 & 0.002718 & 0.843777 & 0.260223 & 0.302141 & 0.285979 & 0.655762 & 0.754862 & 0.723264 \\
9 & 0.002947 & 0.990666 & 0.321657 & 0.369027 & 0.343580 & 0.766347 & 0.856898 & 0.825960 \\
10 & 0.014154 & 1.155812 & 0.39575 & 0.454285 & 0.423593 & 0.882856 & 0.971155 & 0.939988 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Results of biases given different sample sizes for sample 5
After that, the relative efficiencies are also calculated and provided in Tables 9-13 for the weighted mean, the weighted median, and the proposed WHL (i.e., \(WHL1\) and \(WHL2\)) estimators. As shown in Tables 9-13, the proposed \(WHL2\) estimators always maintain a high relative efficiency. Specifically, the proposed \(WHL1\) and \(WHL2\) estimators commonly outperform the weighted median in Tables 9 and 10, where they have similar relative efficiencies. In Tables 11-13, the relative efficiencies of the \(WHL2\) estimators decrease as the sample size grows, but they are still commonly greater than those of the weighted median and the \(WHL1\) estimators. It can be concluded that the \(WHL2\) estimators should be preferred for estimating the location parameter.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \multirow{2}{*}{
\begin{tabular}{c} Sample \\ size \(n\) \\ \end{tabular} } & \multirow{2}{*}{WM} & \multirow{2}{*}{WMD} & \multicolumn{4}{c}{\(WHL1\)} & \multicolumn{4}{c}{\(WHL2\)} \\ \cline{3-10} & & & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 0.002136 & 0.406575 & 0.001121 & 0.003447 & 0.001121 & 0.039374 & 0.057476 & 0.055461 \\
4 & 0.010714 & 0.475992 & 0.012424 & 0.017067 & 0.017067 & 0.056247 & 0.046069 & 0.022768 \\
5 & 0.011286 & 0.505772 & 0.003995 & 0.006285 & 0.006285 & 0.048698 & 0.040305 & 0.026253 \\
6 & 0.004993 & 0.517687 & 0.003487 & 0.004230 & 0.003859 & 0.038552 & 0.033232 & 0.026058 \\
7 & 0.001503 & 0.501205 & 0.006287 & 0.005895 & 0.005688 & 0.024021 & 0.020558 & 0.012142 \\
8 & 0.005429 & 0.488752 & 0.004337 & 0.003847 & 0.004508 & 0.011991 & 0.008261 & 0.000305 \\
9 & 0.005146 & 0.484854 & 0.001464 & 0.002475 & 0.003025 & 0.013099 & 0.009322 & 0.002399 \\
10 & 0.002231 & 0.472427 & 0.011861 & 0.013229 & 0.013195 & 0.002280 & 0.001231 & 0.006511 \\
11 & 0.003350 & 0.483520 & 0.011782 & 0.013313 & 0.014422 & 0.014805 & 0.012872 & 0.007678 \\
12 & 0.002844 & 0.470293 & 0.006605 & 0.006250 & 0.006258 & 0.009589 & 0.007891 & 0.003086 \\
13 & 0.008720 & 0.435471 & 0.006952 & 0.008108 & 0.007078 & 0.003900 & 0.005804 & 0.009835 \\
14 & 0.004399 & 0.454714 & 0.001435 & 0.003075 & 0.002824 & 0.015838 & 0.014597 & 0.010588 \\
15 & 0.013154 & 0.424622 & 0.008014 & 0.005850 & 0.006492 & 0.003436 & 0.004679 & 0.008141 \\ \hline \multicolumn{10}{l}{WM: weighted mean; WMD: weighted median.} \\ \end{tabular}
\end{table}
Table 8: Results of biases given different sample sizes for sample 6
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \multirow{2}{*}{
\begin{tabular}{c} Sample \\ size \(n\) \\ \end{tabular} } & \multirow{2}{*}{WM} & \multirow{2}{*}{WMD} & \multicolumn{4}{c}{\(WHL1\)} & \multicolumn{4}{c}{\(WHL2\)} \\ \cline{3-10} & & & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 100.22463 & 73.81635 & 92.50902 & 97.88818 & 92.50902 & 92.50902 & 97.88818 & 92.50902 \\
4 & 98.08601 & 82.38248 & 98.08601 & 89.55031 & 89.55031 & 98.08601 & 89.55031 & 89.55031 \\ \hline \end{tabular}
\end{table}
Table 9: Results of relative efficiencies given different sample sizes for sample 2
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Sample size \(n\) & WM & WMD & \multicolumn{3}{c}{\(WHL1\)} & \multicolumn{3}{c}{\(WHL2\)} \\ \cline{3-10} & WM & WMD & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 100.14774 & 87.67921 & 79.21749 & 105.2279 & 79.21749 & 79.21749 & 105.2279 & 79.21749 \\
4 & 98.68856 & 100.4415 & 98.68856 & 101.5152 & 101.5152 & 98.68856 & 101.5152 & 101.5152 \\
5 & 99.96807 & 82.20132 & 100.5852 & 100.6144 & 100.6144 & 100.5852 & 100.6144 & 100.6144 \\
6 & 99.64699 & 90.6188 & 95.15476 & 98.38767 & 97.77224 & 95.15476 & 98.38767 & 97.77224 \\
7 & 97.47182 & 74.12777 & 92.41489 & 96.87102 & 95.11913 & 92.41489 & 96.87102 & 95.11913 \\
8 & 99.58906 & 80.85296 & 101.3963 & 101.2256 & 101.4464 & 101.3963 & 101.2256 & 101.4464 \\
9 & 99.85738 & 69.24401 & 98.95121 & 98.0606 & 98.13669 & 98.95121 & 98.0606 & 98.13669 \\
10 & 99.82555 & 72.98319 & 97.83832 & 97.81601 & 98.0697 & 97.83832 & 97.81601 & 98.0697 \\
11 & 99.4304 & 62.9157 & 96.40107 & 96.02543 & 96.37101 & 96.40107 & 96.02543 & 96.37101 \\
12 & 96.92674 & 64.64522 & 94.12186 & 92.8688 & 93.64005 & 94.12186 & 92.8688 & 93.64005 \\
13 & 99.36175 & 58.83056 & 96.41901 & 94.60463 & 95.60709 & 96.41901 & 94.60463 & 95.60709 \\
14 & 98.18103 & 59.70523 & 93.80456 & 92.08762 & 93.13938 & 93.80456 & 92.08762 & 93.13938 \\
15 & 98.05116 & 53.74487 & 92.85806 & 90.87135 & 91.89797 & 92.85806 & 90.87135 & 91.89797 \\ \hline \multicolumn{10}{c}{WM: weighted mean; WMD: weighted median.} & & & & & \\ \hline \end{tabular}
\end{table}
Table 11: Results of relative efficiencies given different sample sizes for sample 4
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Sample size \(n\) & WM & WMD & \multicolumn{3}{c}{\(WHL1\)} & \multicolumn{3}{c}{\(WHL2\)} \\ \cline{3-10} & WM & WMD & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 100.0416 & 61.16901 & 94.86973 & 77.07104 & 94.86973 & 92.30774 & 81.28131 & 84.90442 \\
4 & 99.51343 & 48.7192 & 79.88857 & 49.01845 & 49.01845 & 81.00594 & 72.25766 & 76.28216 \\
5 & 99.17076 & 40.9165 & 47.79451 & 45.29223 & 45.29223 & 71.50564 & 66.02359 & 70.64913 \\
6 & 103.82044 & 35.70755 & 42.93895 & 37.98992 & 40.80616 & 66.53338 & 61.65766 & 65.34185 \\
7 & 98.89074 & 31.33203 & 35.92066 & 26.39867 & 28.88051 & 60.03986 & 56.14569 & 58.41064 \\
8 & 101.87601 & 28.23734 & 24.5483 & 22.0484 & 22.98559 & 55.15699 & 52.11132 & 53.58221 \\ \hline \multicolumn{10}{c}{WM: weighted mean; WMD: weighted median.} & & & & & \\ \hline \end{tabular}
\end{table}
Table 10: Results of relative efficiencies given different sample sizes for sample 3
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Sample & \multirow{3}{*}{WM} & \multirow{3}{*}{WMD} & \multicolumn{3}{c}{\(WhL1\)} & \multicolumn{3}{c}{\(WhL2\)} \\ \cline{3-10} size \(n\) & & & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 100.09982 & 71.21666 & 96.33645 & 84.14587 & 96.33645 & 95.81054 & 87.44182 & 90.38575 \\
4 & 99.73781 & 60.84846 & 88.81656 & 68.40871 & 68.40871 & 91.07134 & 83.38012 & 85.41539 \\
5 & 101.67489 & 55.06522 & 72.22204 & 69.69053 & 69.69053 & 86.26954 & 79.99928 & 83.33422 \\
6 & 100.5992 & 51.6187 & 71.78459 & 65.96182 & 69.56436 & 82.49755 & 77.73682 & 80.06176 \\
7 & 99.72804 & 50.6267 & 67.45218 & 60.10107 & 62.64779 & 79.01362 & 74.97096 & 76.55964 \\
8 & 98.49996 & 51.02162 & 61.85429 & 58.63361 & 59.69813 & 78.1008 & 74.94932 & 76.16586 \\
9 & 103.1819 & 50.49562 & 62.31883 & 58.55635 & 60.57065 & 78.08748 & 75.22478 & 76.46025 \\
10 & 100.28744 & 51.31107 & 59.69346 & 56.262 & 58.32388 & 75.71946 & 73.32832 & 74.40592 \\
11 & 99.65035 & 48.88387 & 55.11792 & 52.38316 & 53.80658 & 72.75247 & 70.72123 & 71.6444 \\
12 & 100.04079 & 48.86973 & 53.74931 & 51.00423 & 52.25708 & 71.05954 & 69.36237 & 70.05226 \\ \hline \hline \end{tabular} WM: weighted mean; WMD: weighted median.
\end{table}
Table 13: Results of relative efficiencies given different sample sizes for sample 6
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Sample & \multirow{3}{*}{WM} & \multirow{3}{*}{WMD} & \multicolumn{3}{c}{\(WhL1\)} & \multicolumn{3}{c}{\(WhL2\)} \\ \cline{3-10} size \(n\) & & & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) & \(i<j\) & \(i\leq j\) \\ \hline
3 & 101.71092 & 99.15974 & 94.58614 & 86.62312 & 94.58614 & 105.6430 & 112.0046 & 111.1938 \\
4 & 100.97126 & 93.66929 & 92.58549 & 71.34274 & 71.34274 & 114.0131 & 108.1108 & 108.7297 \\
5 & 101.50531 & 86.56135 & 75.54125 & 73.44196 & 73.44196 & 108.5713 & 104.2459 & 104.1419 \\
6 & 100.80025 & 74.40267 & 74.59482 & 67.62586 & 71.92957 & 105.1041 & 93.74823 & 97.60468 \\
7 & 99.99789 & 67.32531 & 69.18725 & 59.91788 & 63.37215 & 93.09562 & 84.76996 & 86.82905 \\
8 & 99.65877 & 59.29355 & 61.07967 & 57.19438 & 58.32646 & 84.56199 & 76.53326 & 79.08046 \\
9 & 101.04617 & 53.27858 & 57.86424 & 53.30943 & 55.5614 & 78.21559 & 70.93321 & 73.22003 \\
10 & 99.11911 & 46.06184 & 54.13815 & 49.48179 & 52.21207 & 70.05159 & 63.92879 & 66.00857 \\
11 & 102.50873 & 40.53399 & 51.37758 & 48.02073 & 49.55277 & 62.9932 & 57.56659 & 59.43875 \\
12 & 100.78422 & 35.5245 & 47.92536 & 44.91139 & 46.33103 & 56.00456 & 51.36637 & 52.97324 \\
13 & 101.55815 & 31.62712 & 45.28078 & 42.1864 & 43.62824 & 51.39535 & 47.32896 & 48.77728 \\
14 & 100.74512 & 28.30968 & 41.74204 & 39.15376 & 40.41341 & 46.39977 & 42.8751 & 44.15598 \\
15 & 101.77869 & 26.06913 & 38.39224 & 35.99221 & 37.08667 & 43.29751 & 40.081 & 41.3092 \\ \hline \hline \end{tabular} WM: weighted mean; WMD: weighted median.
\end{table}
Table 12: Results of relative efficiencies given different sample sizes for sample 5
### Sensitivity study
In addition to the above 6 small-size samples, this study tests 12 larger datasets (i.e., Cases 1-12, see Table 14) generated from different distributions (i.e., uniform, normal, chi-square, and Poisson) with weights, in order to evaluate the estimators' sensitivity to different outlier proportions. Notably, three different weight constructions are considered: the first is random and unordered, while the second and third are random and ordered, in descending and ascending order, respectively.
Using the above observations and weight constructions, this study first derives the biases of the proposed estimators without considering any outliers. As presented in Figs. 3-5, the biases of the different estimators (weighted mean, \(HL\), \(WHL1\), and \(WHL2\)) are visually compared under the three weight constructions (W1, W2, and W3 in Table 14). From Fig. 3, it is observed that all estimators are close to zero when the weights are random and unordered. However, when the weights are ordered, obvious differences appear, as shown in Figs. 4 and 5. In Fig. 4, where the larger the observation value, the greater the weight, the biases of the proposed \(WHL1\) and \(WHL2\) estimators are smaller than that of the existing estimator (\(HL\)); a comparison of their biases further shows that the \(WHL2\) estimators outperform the \(WHL1\) estimators. A similar tendency can be identified in Fig. 5, wherein the weights are inversely related to the observation values. This is strong evidence that the proposed robust estimators (\(WHL1\) and \(WHL2\)) remain much closer to the weighted mean and are more reliable substitutes for it when no outliers are involved.
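To make the simulation procedure concrete, the following minimal Python sketch reproduces the structure of such a bias study. It assumes that \(WHL1\) is the median and \(WHL2\) the weighted median of all pairwise weighted averages \((w_{i}x_{i}+w_{j}x_{j})/(w_{i}+w_{j})\), with the weighted median taken as the usual 50% weighted quantile; the data-generating settings (standard-normal observations, uniformly drawn weights, a fixed shift for outliers) are illustrative choices rather than the exact configurations of Table 14.

```python
import numpy as np

def weighted_median(x, w):
    """Smallest x whose cumulative weight reaches half of the total weight."""
    order = np.argsort(x)
    x, w = np.asarray(x, float)[order], np.asarray(w, float)[order]
    cum = np.cumsum(w)
    return x[np.searchsorted(cum, 0.5 * cum[-1])]

def whl(x, w, kind="WHL1", include_self=False):
    """Pairwise weighted averages (w_i x_i + w_j x_j)/(w_i + w_j) over i<j
    (or i<=j when include_self=True); WHL1 takes their plain median,
    WHL2 their weighted median with pair weights w_i + w_j."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    pairs, pair_w = [], []
    for i in range(len(x)):
        for j in range(i if include_self else i + 1, len(x)):
            pairs.append((w[i] * x[i] + w[j] * x[j]) / (w[i] + w[j]))
            pair_w.append(w[i] + w[j])
    pairs, pair_w = np.array(pairs), np.array(pair_w)
    return np.median(pairs) if kind == "WHL1" else weighted_median(pairs, pair_w)

def bias_simulation(n=10, reps=2000, outlier_frac=0.0, seed=0):
    """Average bias of each estimator for data with true location 0."""
    rng = np.random.default_rng(seed)
    est = {"WM": [], "WMD": [], "WHL1": [], "WHL2": []}
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n)
        k = int(round(outlier_frac * n))
        if k:                       # contaminate k observations with a shift
            x[:k] += 10.0
        w = rng.uniform(0.5, 1.5, n)   # random, unordered weights (W1-style)
        est["WM"].append(np.average(x, weights=w))
        est["WMD"].append(weighted_median(x, w))
        est["WHL1"].append(whl(x, w, "WHL1"))
        est["WHL2"].append(whl(x, w, "WHL2"))
    return {k: float(np.mean(v)) for k, v in est.items()}   # bias relative to 0

print(bias_simulation(outlier_frac=0.2))
```

Ordered weight constructions (W2 and W3) can be obtained by sorting the drawn weights in the same or the opposite order as the observations before computing the estimators.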
Figure 3: Comparison of Bias using different estimators under random and unordered weights
Figure 4: Comparison of Bias using different estimators under random and ascending-order weights
Next, this study investigates the robustness of the proposed estimators given different outlier proportions. As illustrated in Figs. 6-9, the outlier-resistant behaviors of the weighted mean, \(HL\), \(WHL1\), and \(WHL2\) are visually compared when outliers are involved. Clearly, the average bias of the weighted mean is zero when no outliers are considered; however, once outliers are introduced, the weighted mean shifts away from zero. The average biases of the robust estimators remain smaller than that of the weighted mean as the outlier proportion increases from 0 to 25%, indicating that the robust estimators applied in this study are much more reliable than the weighted mean when outliers are involved. In particular, when the weights are random and unordered, it is difficult to determine which model has better robust performance because the weights tend to be more symmetric [see Figs. 6-9 (a)]. However, in practice, the weights are more likely to be nonsymmetric, and the newly proposed robust estimators (\(WHL1\) and \(WHL2\)) outperform \(HL\) when the weights are ordered [see Figs. 6-9 (b) and (c)]. Moreover, in practice the proportion of outliers is typically very small. The above discussion provides strong evidence to verify the theoretical contributions of the robust optimization models proposed in this study.
Figure 8: Comparison of average bias using different estimators with Chi-square distribution
Figure 6: Comparison of average bias using different estimators with uniform distribution
Figure 7: Comparison of average bias using different estimators with normal distribution
## 5 Conclusions
In this paper, we proposed two new categories of WHL estimators for robust location estimation when the sample data contain weights, where the first category of WHL estimators (i.e., \(WHL1\)) is defined as the median of all pairwise weighted averages and the second category (i.e., \(WHL2\)) is defined as the weighted median of all pairwise weighted averages. This study then investigated their robustness properties and obtained the exact finite-sample breakdown points of the \(WHL1\) estimator and closed-form finite-sample breakdown points of the \(WHL2\) estimator. After that, the newly proposed WHL estimators were compared with the traditional ones in terms of bias and relative efficiency through extensive Monte Carlo simulations under different sample sizes, weight configurations, and data distributions. The simulation results reveal that the newly proposed \(WHL1\) and \(WHL2\) estimators attain markedly lower bias than the conventional location estimators, and that the relative efficiency of the \(WHL2\) estimators is higher than that of the weighted median and the \(WHL1\) estimators in most cases. Through the sensitivity analysis, it is found that the newly proposed \(WHL1\) and \(WHL2\) estimators are much closer to the weighted mean and are more reliable substitutes for it when no outliers are involved. The newly proposed \(WHL1\) and \(WHL2\) estimators also remain stable in robustness compared with the \(HL\) estimator in the presence of contaminated data.
In addition to the aforementioned contributions, there are two potential directions worth investigating in the future. It would be interesting to combine the proposed robust estimators and conventional ones to develop some more reliable location estimators. Another potential direction is to investigate the exact breakdown points of the newly proposed \(WHL2\) estimators.
Figure 9: Comparison of average bias using different estimators with Poisson distribution
## Acknowledgments
This work was supported by National Natural Science Foundation of China (No.72104020) and National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1A2C1091319 and RS-2023-00242528).
|
2309.14254 | End-to-end deep learning inference with CMSSW via ONNX using docker | Deep learning techniques have been proven to provide excellent performance
for a variety of high-energy physics applications, such as particle
identification, event reconstruction and trigger operations. Recently, we
developed an end-to-end deep learning approach to identify various particles
using low-level detector information from high-energy collisions. These models
will be incorporated in the CMS software framework (CMSSW) to enable their use
for particle reconstruction or for trigger operation in real-time.
Incorporating these computational tools in the experimental framework presents
new challenges. This paper reports an implementation of the end-to-end deep
learning inference with the CMS software framework. The inference has been
implemented on GPU for faster computation using ONNX. We have benchmarked the
ONNX inference with GPU and CPU using NERSC's Perlmutter cluster by building a
docker image of the CMS software framework. | Purva Chaudhari, Shravan Chaudhari, Ruchi Chudasama, Sergei Gleyzer | 2023-09-25T16:13:35Z | http://arxiv.org/abs/2309.14254v1 | # End-to-end deep learning inference with CMSSW via ONNX using docker
###### Abstract
Deep learning techniques have been proven to provide excellent performance for a variety of high-energy physics applications, such as particle identification, event reconstruction and trigger operations. Recently, we developed an end-to-end deep learning approach to identify various particles using low-level detector information from high-energy collisions. These models will be incorporated in the CMS software framework (CMSSW) to enable their use for particle reconstruction or for trigger operation in real time. Incorporating these computational tools in the experimental framework presents new challenges. This paper reports an implementation of the end-to-end deep learning inference with the CMS software framework. The inference has been implemented on GPU for faster computation using ONNX. We have benchmarked the ONNX inference with GPU and CPU using NERSC's Perlmutter cluster by building a docker image of the CMS software framework.
## 1 Introduction
The CMS [1] and ATLAS [2] experiments at the Large Hadron Collider (LHC) have been designed to explore physics at the TeV energy scale during an operation span of about 30 years. Both experiments made the landmark discovery of the Higgs boson [3; 4] from the data collected at 8 TeV in 2012, corresponding to an integrated luminosity of 23 fb\({}^{-1}\). In addition to detailed studies of the Higgs boson properties, searching for new physics with direct and indirect measurements, including anomalies with respect to precision standard model (SM) predictions, is one of the important goals of the CMS experiment. New particles predicted by beyond-the-standard-model (BSM) theories are expected to have extremely small production cross sections, necessitating the collection of more collision data. The High-Luminosity LHC (HL-LHC) [5] is planned to level the instantaneous luminosity at \(5\times 10^{34}\) cm\({}^{-2}\)s\({}^{-1}\), with an integrated luminosity of about 3000 fb\({}^{-1}\), ten times more than the LHC. The corresponding mean number of collisions (pileup) per bunch crossing will be 140, posing tremendous challenges for filtering, collecting, processing, reconstructing, and analyzing data due to the huge event size, data volume, and complexity.
Advanced machine learning approaches will be employed to overcome the significant hurdles posed by rising levels of pileup and the scarcity of sought-after signals in order to achieve the key physics goals at the HL-LHC. To address this issue, CMS researchers are employing cutting-edge machine learning techniques for data processing and detector reconstruction, with the goal of optimizing and speeding up these models during training and inference. Most of the particle identification algorithms at the CMS and ATLAS experiments rely on inputs provided by the particle-flow (PF) [6] algorithm, which converts detector-level information into physics objects, owing to its capability to significantly reduce the size and complexity of particle physics data while offering a physically intuitive and simple-to-use representation for physics analyses. Despite the very high reconstruction efficiency of PF algorithms, some physics objects may fail to be reconstructed, may be reconstructed imperfectly, or may exist as fakes, which limits searches for BSM scenarios. Therefore, it is advantageous to consider reconstruction approaches that allow a direct application of machine learning algorithms to low-level detector data.
The end-to-end deep learning approach combines a low-level detector representation with deep learning algorithms. This approach has achieved state-of-the-art performance in identifying electrons, photons, jets, and boosted objects [7; 8; 9; 10] using various deep-learning architectures, such as convolutional neural networks and graph neural networks. One of the main objectives of the CMS experiment's research and development towards the HL-LHC is to incorporate such cutting-edge machine learning algorithms for particle identification into the CMS software framework (CMSSW) [11] data processing pipeline. Training deep neural networks and subsequently running inference with the trained models on massive amounts of data, such as those produced at the LHC, is exceedingly time-consuming and demands significant computational resources. Graphics processing units (GPUs) have proven capable of providing fast, parallelized, and energy-efficient processing of data even with complex deep-learning architectures, making them ideal for a wide range of real-time applications as well as for user analysis-specific tasks.
This paper presents the integration of the end-to-end deep learning framework into CMSSW to discriminate electrons from photons (E/Gamma tagger), quark jets from gluon jets (Quark/Gluon tagger), top quark jets from QCD jets (Top tagger), and hadronic taus from QCD jets (Tau tagger). Their inference times are benchmarked on CPUs and GPUs.
## 2 Simulated dataset for end-to-end deep learning
The end-to-end deep learning technique is based on high-fidelity Monte Carlo simulated event samples. The samples are produced for the 2018 proton-proton collision data-taking period at a center-of-mass energy of 13 TeV, without considering any additional particle collisions in the same bunch crossing. The events for the E/Gamma tagger studies are generated with a photon particle-gun sample at a transverse momentum of \(p_{\mathrm{T}}=50\) GeV. The multijet production of light-flavor and gluon jets via the strong interaction, referred to as quantum chromodynamics (QCD), with hard-scattered transverse momentum \(\hat{p_{\mathrm{T}}}\) between 300 and 470 GeV, is generated with PYTHIA 8 [12] for the Quark/Gluon tagger studies. Events from top quark-antiquark pair production, in which the W bosons from the top quark decays are required to decay to quarks, are used for the Top tagger. These Monte Carlo samples were generated with POWHEG v2.0 [13] at next-to-leading order in perturbative QCD and use PYTHIA 8 for parton showering. The generation of 125 GeV Higgs boson (H) events via gluon fusion at NLO, with H decays to tau leptons (H \(\rightarrow\tau\tau\)), is also performed with the POWHEG 2.0 generator for the Tau tagger studies.
The LHC provides countercirculating beams of high-energy protons, allowing bunches of protons in these beams to interact with each other in the CMS detector [1] every 25 ns. When protons from opposing beams collide, a wide range of physical processes can occur, leading to the formation of either fundamental or composite particles. These particles, or their decay products, can then enter the CMS detector, which is designed to determine the particle type, energy, and momentum. Each bunch crossing where proton collisions occur is referred to as an "event," which can project hundreds of particles into the CMS detector. The CMS detector
is designed as a series of concentric cylindrical sections, with barrel and endcap regions, that enclose a primary interaction point where the LHC proton beams collide with each other. The central feature of the CMS detector is a superconducting solenoid that provides a magnetic field of 3.8 T, designed to bend the trajectories of charged particles and thereby aid in the measurement of their transverse momentum \(p_{\mathrm{T}}\). The silicon tracker is composed of two parts, namely the silicon pixel detector and the silicon strip detector. The silicon pixel detector is the innermost part and is composed of four layers in the barrel region (BPIX) and three disks in the endcap region (FPIX). The pixel detector provides crucial information for vertexing and track seeding. The outer part of the tracking system is composed of silicon strip detectors. This is followed by the electromagnetic calorimeter (ECAL), made of lead-tungstate crystals, to measure the energy of electromagnetically interacting particles, and then the hadronic calorimeter (HCAL), made of brass towers, to measure the energy of hadrons. These are surrounded by the solenoid magnet, which is finally encased by the muon chambers that detect the passage of muons.
The detector response for all samples was simulated for CMSSW release 12_0_2 using the GEANT4 package, which delivers the state of the art in first-principles detector simulation, along with the most detailed geometry models of the CMS detector. For this study, we additionally use a custom CMS data format which includes the low-level tracker detector information, specifically the reconstructed clusters from the pixel and silicon strip detectors.
## 3 Integration of end-to-end deep learning with CMS software framework
CMSSW is a collection of software, written in C++, that is built around the Event Data Model (EDM). In the EDM format, each entry (edm::Event) represents a single collision event capable of holding multiple data products, on top of which various modules can be run to perform simulation, digitization, and reconstruction. These modules are divided according to their functionality into CMSSW base classes for analyzing data collections (edm::EDAnalyzer) or producing new ones (edm::EDProducer), among many other module types. The end-to-end framework (E2EFW) is designed to be highly modular and adaptable in order to accommodate customized workflows for end-to-end ML training and inference. The E2EFW can optionally be run to produce EDM-format files for further downstream processing by other CMSSW modules in a production workflow, or to produce ROOT ntuples for rapid prototyping.
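As a rough illustration of how such modules are wired together, the sketch below shows a cmsRun Python configuration in the style used by CMSSW. The plugin parameters and most module labels here are placeholders for illustration only and do not reproduce the actual E2EFW interfaces.

```python
# Illustrative CMSSW configuration fragment; plugin names and parameters are placeholders.
import FWCore.ParameterSet.Config as cms

process = cms.Process("E2EInference")
process.load("FWCore.MessageService.MessageLogger_cfi")
process.maxEvents = cms.untracked.PSet(input=cms.untracked.int32(1000))

process.source = cms.Source(
    "PoolSource",
    fileNames=cms.untracked.vstring("file:input_events.root"),
)

# An EDProducer adds new collections to each edm::Event ...
process.detFrames = cms.EDProducer(
    "DetFrameProducer",                     # hypothetical plugin name
    granularity=cms.uint32(128),            # hypothetical parameter
)
process.jetFrames = cms.EDProducer(
    "JetFrameProducer",                     # plugin name taken from the text; parameters are illustrative
    detFrames=cms.InputTag("detFrames"),
)

# ... while an EDAnalyzer only reads them, e.g. to fill ntuples for prototyping.
process.tagAnalyzer = cms.EDAnalyzer(
    "TaggerNtupleMaker",                    # hypothetical plugin name
    jetFrames=cms.InputTag("jetFrames"),
)

process.out = cms.OutputModule(
    "PoolOutputModule",
    fileName=cms.untracked.string("e2e_output.root"),
)

process.p = cms.Path(process.detFrames + process.jetFrames + process.tagAnalyzer)
process.e = cms.EndPath(process.out)
```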
### End-to-end framework pipeline
E2EFW consists of three main package categories: DataFormats, FrameProducers, and Taggers. The DataFormats package consists of all the objects and classes required to execute the E2EFW modules and save the output back to EDM-format files. It contains convenience classes for dealing with inputs and defining associations with other relevant collections. The association maps for linking object-level detector inputs to their related reconstructed physics objects are specifically defined here.
The FrameProducer package is primarily responsible for extracting detector data, either as whole-event data or as object-level data, and provides auxiliary functions for this purpose. The _data producers_ are provided with hit information from various CMS subdetectors in order to cast the whole-event data into the desired images or graphs. The _object-level data producers_ enable the generation of multi-subdetector data, for example producing images or graphs for reconstructed electrons, photons, and jets, as used in this paper. There are also a variety of modules that integrate with the output of _detector data producers_ to create localized
windows or crops around the coordinates of a desired reconstructed physics object. The E2EFW has user-configurable options for controlling which subdetectors to include and which jet clustering technique to employ to determine their centers. A separate module is provided for electrons and photons. As a result, depending on the user's needs, different combinations of the producers can be utilized for different tasks. For instance, EGFrameProducer is used to produce electron/photon showers for the EGTagger, while JetFrameProducer is used for the Quark/Gluon, Top, and Tau taggers.
More detailed analyses of the reconstructed objects will be required for the typical physics application. To support this purpose, a third package category Taggers is provided. These can be used to interface with the output of any preceding FrameProducer package, allowing for modular, highly customized workflows. While most production-level analyses will have their own dedicated analysis workflow, the Taggers provide a quick and convenient avenue for rapid prototyping and analysis, which is desirable during the ML algorithm development but can also be used for running inference in a production-like workflow. The E2EFW presented in this paper supports a number of template modules.
A typical end-to-end framework pipeline is illustrated in Figure 1. First, the detector-level data producer is run to extract whole-detector images or graphs corresponding to the ECAL, HCAL, and Track layers and store these back into an EDM-format file in a vector shape suitable for object-level cropping. Then, object-level data producers are run to crop and process these whole-detector vectors into either photon-level or jet-level data around the coordinates of the reconstructed photons or jets, and again push these vectors back into the EDM-format file. Photon-level objects are used only for the E/Gamma tagger, while jet-level objects are used for all other taggers described in this paper. The Tagger package is configured to use various deep-learning architectures, such as convolutional neural networks (CNNs), used in this study, and vision transformers for image inputs. We considered a simpleNet CNN architecture to benchmark the inference for all four taggers. The information from one sub-detector is treated as one channel of the CNN model. The input tensor size and the number of channels can be configured according to the end user's needs. The E2EFW provides an option to consider 13 channels: Track \(p_{\mathrm{T}}\), \(\mathrm{d}_{0}\), \(\mathrm{d}_{z}\), four BPIX layers, ECAL and HCAL, and four strip layers. Table 1 shows the various sub-detector channels and input tensors considered for the various taggers. The E2EFW can also be easily configured to use other algorithms, such as Graph Neural Networks (GNNs) with graph inputs, which is beyond the scope of this paper.
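Since the exact layer structure of the simpleNet architecture is not spelled out here, the following PyTorch sketch only illustrates how a small CNN with a configurable number of input channels (matching Table 1) might be defined; the layer sizes are illustrative assumptions, not the architecture actually used in the E2EFW.

```python
# Illustrative "simpleNet"-style CNN with a configurable number of input channels.
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self, in_channels: int, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse spatial dimensions to 1x1
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Input shapes follow Table 1: 1x32x32 for E/Gamma, 8x128x128 for Top/Tau.
top_tagger = SimpleNet(in_channels=8)
scores = top_tagger(torch.randn(4, 8, 128, 128))   # batch of 4 jet images
print(scores.shape)                                 # torch.Size([4, 2])
```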
Figure 1: The end-to-end framework (E2EFW) pipeline used for the E/Gamma, Quark/Gluon, Top, and Tau taggers [14].
Lastly, one of the four taggers is run to select either photons or reconstructed jets. The data associated with the selected photon-level or jet-level objects are passed to the ONNX [15] Runtime for inference. The resulting predictions on these selected photons or jets are then pushed back into the EDM-format file for further downstream analysis. ONNX (Open Neural Network Exchange) is an open-source framework designed to facilitate interoperability and portability between different deep learning frameworks and tools. Most machine learning frameworks, such as TensorFlow, PyTorch, and XGBoost, support converting their models into the ONNX format or loading models from it. ONNX Runtime is used for inference on ONNX models, as it is supported by CMSSW and can run the inference on GPUs. While we have described the photon-level and jet-level workflows, the E2EFW can be appropriately adapted to process other objects and full events by a suitable definition of the process workflow. For instance, if a user wishes to perform event-level analysis and inference on whole-detector inputs, an appropriate Tagger package that applies event-level selection can be defined that interfaces directly with the output of the detector data producer and bypasses the object-level data producers. The E2EFW thus allows a high degree of flexibility for pursuing an assortment of end-to-end ML studies dictated by the user.
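As an illustration of the ONNX workflow, the sketch below exports a trained PyTorch tagger (reusing the SimpleNet sketch above) to ONNX and runs it with the onnxruntime Python package, requesting the CUDA execution provider with a CPU fallback. Inside CMSSW the inference is performed through the ONNX Runtime C++ interface, so this standalone Python version is only meant to exercise the same model; the file and tensor names are illustrative.

```python
# Export a trained PyTorch tagger to ONNX and run it standalone with ONNX Runtime.
import numpy as np
import torch
import onnxruntime as ort

# SimpleNet is the illustrative model defined in the previous sketch.
model = SimpleNet(in_channels=8).eval()
dummy = torch.randn(1, 8, 128, 128)
torch.onnx.export(
    model, dummy, "top_tagger.onnx",
    input_names=["jet_image"], output_names=["score"],
    dynamic_axes={"jet_image": {0: "batch"}, "score": {0: "batch"}},
)

# ONNX Runtime falls back to the CPU provider if no GPU is available.
session = ort.InferenceSession(
    "top_tagger.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
jet_images = np.random.rand(16, 8, 128, 128).astype(np.float32)
(scores,) = session.run(["score"], {"jet_image": jet_images})
print(scores.shape)   # (16, 2)
```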
### Containerising end-to-end framework at NERSC using docker
Despite the dedicated computing environment provided by the CMS experiment, installing and using the CMS software framework on a third-party computing facility is still tedious, error-prone, and time-consuming due to versioning issues and the lack of administrative privileges. To address this issue, our group has demonstrated the efficient use of container images and their application while performing analysis on the Perlmutter supercomputing facility at the National Energy Research Scientific Computing Center (NERSC) [16]. Docker is one such well-known and versatile software tool for effectively managing dependencies, runtimes, system tools, system libraries, and settings. Virtualization and isolation of software packages provide a great deal of flexibility during the development stage and allow pipelines to be deployed reliably and managed efficiently. We dockerized CMSSW version 12_0_6 and the required dependencies in a compact docker image that was reliably transferred to the Perlmutter cluster. The Perlmutter cluster provides the CERN Virtual Machine File System (CVMFS) required to execute the CMS software framework. The Shifter software package at NERSC allows running the docker image of CMSSW or any other user-created image.
## 4 Computing resources
The end-to-end framework inference has been benchmarked at two different computing sites. The Fermilab LHC Physics Center (LPC) provides an NVIDIA Tesla P100 GPU that utilizes the Pascal architecture. The Tesla P100 [17] GPU, with 12 GB of high-bandwidth memory (HBM), was accessed at Fermilab via a dedicated GPU worker node through a 12 GB/s PCIe connection using CUDA Toolkit driver version 12.1. Data was stored and read from an HGST
\begin{table}
\begin{tabular}{l|l|l|l} \hline Tagger & Number & Input tensor & Channel names \\ & of channels & array size & \\ \hline E/Gamma & 1 & \(1\times 32\times 32\) & ECAL \\ Quark/Gluon & 5 & \(5\times 128\times 128\) & Track \(p_{\mathrm{T}}\), d\({}_{0}\), d\({}_{z}\), ECAL \& HCAL \\ Top & 8 & \(8\times 128\times 128\) & Track \(p_{\mathrm{T}}\), d\({}_{0}\), d\({}_{z}\), BPIX layers, ECAL \& HCAL \\ Tau & 8 & \(8\times 128\times 128\) & Track \(p_{\mathrm{T}}\), d\({}_{0}\), d\({}_{z}\), BPIX layers, ECAL \& HCAL \\ \hline \end{tabular}
\end{table}
Table 1: Specifications of the convolutional neural network inputs.
1W10002 hard drive located on the GPU machine during inference. Images produced from the end-to-end framework were provided to the GPU using a single Intel(R) Xeon(R) Silver 4110 8-core CPU. LPC also provides CPU-only resources; there, the E2EFW inference was benchmarked on a 1-core AMD EPYC processor. We also used the GPU resources located at the Perlmutter cluster at the National Energy Research Scientific Computing Center (NERSC). The Perlmutter cluster provides an NVIDIA A100 [18] GPU that utilizes the Ampere architecture with 40 GB of high-bandwidth memory (HBM); it was accessed through a dedicated GPU worker node via a 25 GB/s PCIe connection using CUDA Toolkit driver version 11.7. Data was stored and read from an Intel dual-port NVMe solid-state drive.
## 5 Performance
The latency and throughput of the E2EFW inference were obtained by running 1000 events generated from the Monte Carlo simulations described in Section 2. The inference was run as a single-threaded job, and the first 300 events were dropped from the calculations to stabilize the results. The measurement was repeated ten times and the average throughput value was estimated. Figure 2 shows the average end-to-end inference framework event throughput per second for the E/Gamma, Quark/Gluon, Top, and Tau taggers, compared for the Fermilab LPC GPU and CPU. The ML inference was obtained on a single GPU and a single CPU with a single thread for input/output. A speedup of 11-18% was achieved for E2EFW inference on the Fermilab GPU node compared to the CPU node, owing to the 8-core CPU and NVIDIA Tesla P100 GPU available on the LPC GPU node versus the 1-core CPU on the LPC CPU node. Figure 3 compares the average throughput per second for the NVIDIA Tesla P100 GPU at Fermilab LPC and the NVIDIA A100 GPU at NERSC for the E/Gamma, Quark/Gluon, Top, and Tau taggers. The figure presently does not include the results for the E/Gamma tagger on the A100 GPU at NERSC Perlmutter. Figure 3 demonstrates a small increase for the Top and Tau taggers and a 20% improvement for the Quark/Gluon tagger when utilizing the A100 GPU against the P100. This is because the inference at Perlmutter is obtained by first executing the CMSSW docker image, which takes around 5% of the entire inference time, and then opening an input file, which takes more than 50%. Aside from that, the Tagger package's ONNX Runtime inference is the only component currently implemented on the GPU. The future plan is to port other modules of the framework to the GPU as well. These throughput measurements have 0.5-3% uncertainties.
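The benchmarking procedure described above can be summarized by the following sketch, which drops the first 300 of 1000 per-event latencies as warm-up, converts the remainder to an event throughput, and averages over ten repetitions; the stand-in job and its latencies are simulated here rather than taken from an actual cmsRun execution.

```python
# Rough illustration of the throughput benchmarking procedure described above.
import numpy as np

def run_throughput(per_event_seconds, warmup=300):
    """Throughput (events/s) from one run's per-event latencies, ignoring warm-up events."""
    steady = np.asarray(per_event_seconds)[warmup:]
    return len(steady) / steady.sum()

def benchmark(run_job, n_runs=10):
    """run_job() must return the list of per-event latencies for 1000 events."""
    throughputs = [run_throughput(run_job()) for _ in range(n_runs)]
    return float(np.mean(throughputs)), float(np.std(throughputs))

# Stand-in for an actual job: simulated latencies around 0.1 s/event.
fake_job = lambda: np.random.normal(0.10, 0.01, 1000).clip(min=1e-3)
mean_tp, std_tp = benchmark(fake_job)
print(f"throughput = {mean_tp:.2f} +/- {std_tp:.2f} events/s")
```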
Figure 4 shows the breakdown of time spent per event (latency) by the end-to-end inference framework modules, namely event setup (gray), DetFrames (pink), EGFrames/JetFrames (teal), Tagger time (orange), and input/output (blue), on the LPC CPU in the top bar chart and on the LPC GPU in the bottom bar chart. The timings are compared for the E/Gamma, Quark/Gluon, Top, and Tau taggers, respectively. Figure 4 (top left) shows that around 40% of the time per event was spent in event setup for the E/Gamma tagger, while 60% of the time was spent in the input/output module for the Quark/Gluon, Top, and Tau taggers. The I/O time can be improved further by writing only the required information to the EDM-format ROOT files instead of the full input information plus the data-producer and object-producer collections produced in the end-to-end framework.
## 6 Conclusion
This paper presents the integration of the end-to-end deep learning framework within the CMS software framework. The framework provides support to discriminate electrons from photon showers, quark jets from gluon jets, top quark jets from QCD jets, and tau particles from QCD jets. Along with the particle identification tasks at the photon level or jet level, the framework can be easily adapted to event-level tasks. We utilized the central processing units and graphics processing units at the Fermilab LPC and at NERSC Perlmutter to benchmark the inference for the E/Gamma, Quark/Gluon, Top, and Tau taggers. The inference obtained on an NVIDIA Tesla P100 GPU with a single Intel(R) Xeon(R) Silver 4110 8-core CPU shows an 11-16% increase in throughput compared to a 1-core AMD EPYC processor. The integration of the end-to-end deep learning framework within the CMS software framework was also accomplished at the NERSC supercomputing facility by preparing a CMSSW docker image. The inference was then benchmarked on an NVIDIA A100 GPU, demonstrating a small increase in throughput for the Top and Tau taggers and a 20% improvement for the Quark/Gluon tagger against the NVIDIA Tesla P100 GPU.
Figure 3: End-to-end inference framework event throughput per second for E/Gamma, Quark/Gluon, Top, and Tau taggers compared for NVIDIA Tesla P100 GPU at Fermilab LPC (blue) and NVIDIA A100 GPU at NERSC Perlmutter (orange) [14].
Figure 2: End-to-end inference framework event throughput per second for E/Gamma, Quark/Gluon, Top, and Tau taggers compared for Fermilab LPC GPU (blue) and CPU (orange) [14].
## Acknowledgments
We would like to thank the CMS Collaboration and the CERN. This work is supported in part by the U.S. Department of Energy (DOE) under award no. DE-SC0012447. This work is partially supported by the Fermilab US-CMS HL-LHC Software and Computing R&D efforts. This research used resources from the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory. PC and SC were participants in the Google Summer of Code program in 2021-2022 and 2020-2021, respectively.
|
2303.17935 | GelSight EndoFlex: A Soft Endoskeleton Hand with Continuous
High-Resolution Tactile Sensing | We describe a novel three-finger robot hand that has high resolution tactile
sensing along the entire length of each finger. The fingers are compliant,
constructed with a soft shell supported with a flexible endoskeleton. Each
finger contains two cameras, allowing tactile data to be gathered along the
front and side surfaces of the fingers. The gripper can perform an enveloping
grasp of an object and extract a large amount of rich tactile data in a single
grasp. By capturing data from many parts of the grasped object at once, we can
do object recognition with a single grasp rather than requiring multiple
touches. We describe our novel design and construction techniques which allow
us to simultaneously satisfy the requirements of compliance and strength, and
high resolution tactile sensing over large areas. The supplementary video can
be found here: https://youtu.be/H1OYADtgj9k | Sandra Q. Liu, Leonardo Zamora Yañez, Edward H. Adelson | 2023-03-31T10:00:40Z | http://arxiv.org/abs/2303.17935v1 | # GelSight EndoFlex: A Soft Endoskeleton Hand with Continuous High-Resolution Tactile Sensing
###### Abstract
We describe a novel three-finger robot hand that has high resolution tactile sensing along the entire length of each finger. The fingers are compliant, constructed with a soft shell supported with a flexible endoskeleton. Each finger contains two cameras, allowing tactile data to be gathered along the front and side surfaces of the fingers. The gripper can perform an enveloping grasp of an object and extract a large amount of rich tactile data in a single grasp. By capturing data from many parts of the grasped object at once, we can do object recognition with a single grasp rather than requiring multiple touches. We describe our novel design and construction techniques which allow us to simultaneously satisfy the requirements of compliance and strength, and high resolution tactile sensing over large areas.
## I Introduction
The human hand has provided inspiration for many robot hands. Human fingers contain an interior articulated skeleton, which is covered with soft skin, providing the fingers with a combination of strength and compliance. The fingers are rounded, with tactile sensing present throughout the skin, and with the best tactile acuity on the front surfaces. When a person holds an object with an enveloping grasp, the object touches the hand at a great many points, allowing the person to recognize the object by its shape, size, and other properties. Our goal is to create a robotic hand that emulates many of these properties.
The ability to identify an object using a single grasp is important and requires "complete" sensing along the grasping surfaces of a finger. Even though many current finger-inspired sensors can perform object recognition well with high-resolution finger tip sensors or with low-resolution larger tactile sensors, they either require that the object is in full contact with the finger tips or multiple regrasps to classify the object within the hand [1, 2]. Furthermore, they do not have the compliance afforded by soft robotics, which can greatly improve secure grasping abilities of the gripper or make them safer for interaction with the world around them.
In other words, soft robotic manipulators could greatly benefit from combining structural compliance and rigidity with high-resolution tactile sensing. To this end, we present the following contributions:
* A novel design of a continuous high-resolution tactile sensor along a curved surface;
* An endoskeleton finger design for a human-inspired gripper that incorporates tactile sensing (Fig. 1);
* A neural net that can utilize only the tactile images from a single grasp to classify objects.
## II Related Work
### _Hand Grippers_
Human-hand-inspired grippers have been previously designed with varying degrees of sensing, rigidity, and anthropomorphism [3, 4, 5]. Although robotic systems were historically composed of rigid materials, interest in soft systems has risen quickly [6]. Rigid hands have traditionally focused on control systems and force transmission while neglecting the contact-rich sensing and compliant gripping that more closely characterize human hands [7]. Soft robotics offers the advantages of compliance and robustness, and is compatible with high-resolution geometry sensing using camera-based sensors [8].
Fig. 1: **Top** A CAD model of our GelSight EndoFlex gripper with some of the parts labeled. **Bottom** The GelSight EndoFlex is securely grasping a Rubik’s cube, and the corresponding processed difference images of four of the six sensing regions are displayed. Of note is that the bottom two sensor images show continuous sensing along the side and corner of the cube, while the top two sensor images show one image each from the other two fingers.
Rigid robots have often enjoyed well-defined kinematic models and high strength, making them ideal manipulators for repeatable and complex motions [9]. However, gripping often introduces a degree of uncertainty that may require a softer touch to avoid high-energy collisions [10]. Soft robotic grippers benefit from their natural robustness and compliance, which have proven to be critical when grasping [11]. Due to their compliant nature, soft robots are considered to have infinitely many degrees of freedom, which leads to challenges when developing a robust control system. However, recent advances in simulation and robotics have led to the BCL-26, a soft gripper with 26 controllable degrees of freedom that is capable of dexterous motion with a high degree of anthropomorphism [12]. Other modern designs, such as the RBO Hand 3, show great promise with their dexterous manipulation and, given their larger size, their potential to incorporate sensors [13].
Many attempts have been made at marrying soft and rigid robotics to achieve flexible yet strong robots [14, 15, 16]. One approach to strengthening and increasing the precision of soft grippers has been embedding skeletons within their structure [17]. Although the addition of an endoskeleton brings various benefits, it also comes with drawbacks, including increased manufacturing and modeling complexity. To combat this increase in complexity, simulation has become a popular tool to supplement control design [18]. The properties of soft-rigid robotics appear to be a significant step towards high-fidelity biomimetic hand grippers.
Despite the various advances in robotics towards a soft, human-like hand, there are still critical elements missing from current designs. Most notably, there is an absence of rich, geometry-based sensing in rigid and soft hands alike [9, 12]. Therefore, there is still progress to be made in developing a soft anthropomorphic hand with geometry-sensing capabilities.
### _Sensing and Soft Grippers_
Most previous tactile sensing work in robotic grippers has been force-based, using capacitive or strain sensors [19, 20]. These sensors provide a low-cost option with fast response times, but they are better suited to sensing stiff, flat surfaces [21]. Vision-based sensors can provide richer sensing data and are highly compatible with soft robots.
Existing vision-based systems rely on cameras to capture the deformation of an elastomer and process the footage to obtain tactile data [8]. One such sensor is the TacTip, which uses a camera to measure the deformation of a silicone membrane and superresolution to achieve precise force localization [22]. The soft nature and highly accurate sensing of the TacTip have great potential, but the sensor size and lack of geometry sensing limit its application to anthropomorphic hands. The GelSight sensor family offers an alternative with its high-resolution tactile sensing and applicability to curved surfaces [23]. GelSight sensors operate with a camera that views a painted aluminum-silicone skin, which captures finely detailed tactile imprints on its surface and is illuminated by LEDs from different directions.
The sensing area of previous GelSight sensors has been limited by single-camera designs, wide-angle lenses with some distortion, and large size [24, 25, 26, 27]. GelSight applications have also seen limited integration of the sensing surface with the gripper body [28]. Therefore, there is still space to explore soft, human-like grippers with structurally integrated tactile sensors. One potential design for extending the sensing surface area is to expand on the work of She _et al._[25] by using two or more cameras to create a continuous sensing surface. To our knowledge, no other GelSight sensor has used multiple cameras to create one continuous and compact sensing surface. Our novel design provides wide-range GelSight sensing in a compact, soft, anthropomorphic package.
## III Methods
### _Hardware_
The EndoFlex sensor is composed of an endoskeleton encased in silicone with two embedded cameras for continuous sensing (Fig. 2). Each endoskeleton was designed to be one continuous piece with a pair of rigid segments and flexures to form joints. This design minimizes the number of parts required to fabricate one finger when compared to traditional rigid fingers. The flexure design was chosen for its high compliance and low deformation of individual elements to reduce silicone delamination. We 3D printed the endoskeleton using an Onyx One printer with Markforged Onyx plastic for its combination of high strength and relatively low tensile modulus when compared to other extruded plastics. This combination of properties allowed minimal force loss during actuation.
A camera was mounted into each endoskeleton segment to prevent any shifting during actuation. Three sets of red, green, and blue LEDs were mounted with cyanoacrylate adhesive onto the rigid segment of the endoskeleton. They were spaced 90 degrees apart to create a colored light gradient for the GelSight algorithm. Finally, the endoskeleton was threaded with Piscifun Onyx Braided Fishing Line soaked in Smooth-On Universal Mold Release to reduce friction when cast in silicone. We chose to use cable-driven actuation to reduce potential camera-view obstructions and also so that we could more easily integrate the camera into the finger skeleton.
Fig. 2: A close-up view of an EndoFlex finger with an exploded view. Each finger operates independently with one degree of freedom and can be quickly replaced if damaged.
A rigid three-finger palm was designed with temporary fasteners to allow for fast replacement of damaged fingers or for future iterations. Fingers were positioned in a 'Y' pattern with two fingers and an opposing thumb. The pair of fingers was spaced thirty degrees apart to distribute grasping force without creating collisions. The palm was designed with a rounded feature and a polyurethane foam layer to add grasping ability. A separate rigid plate was designed to be fastened onto the Panda robotic arm. Three Dynamixel AX-12A servos were mounted between the plate and the palm and served as the actuation method for the fingers through double-axle spools. The double-axle design allowed for actuated contraction and extension of each finger. The palm, plate, and spools were all printed with Markforged Onyx plastic using a Markforged Onyx printer.
As part of our finger manufacturing process, which is shown in full in Fig. 3, a two-part mold was designed for casting the silicone that forms the optically clear medium of the GelSight sensor. The mold was designed to hold the endoskeleton during the casting process, which removed the need for fasteners or adhesives to hold the silicone layer. The mold had high curvature to create a rounded finger, much like a human finger. One major benefit of the curved surface was the high reflection of the lights within the silicone, which aided sensing by removing shadows of pressed objects. The mold design removed any air gap between the camera lens and the cast silicone to minimize the refraction of light. The mold was produced using a Formlabs 2 SLA printer for its high resolution. To achieve the optical clarity required for GelSight sensing, the mold was incrementally sanded with sandpaper up to 2000 grit.
To allow the silicone to compress when the tendons pulled the endoskeleton finger into a closed grasp position, we chose to synthesize a softer silicone for the finger. Specifically, we used a 1:15:5 ratio of XP-565 part A, XP-565 part B, and plasticizer (Phenyl Trimethicone, Lotioncrafter). Decreasing the ratio of part A to part B of the XP-565 is equivalent to adding less catalyst, which increases the softness of the silicone, while the addition of the plasticizer also gives the cured silicone a softer texture.
Before pouring the silicone mixture into the mold, we used a paint brush to paint a thin layer of Inhibit-X (Smooth-On Inc). After waiting a few minutes for it to dry, we sprayed a layer of Ease Release 200 (Smooth-On Inc) on the mold. To create the sensing surface, we combined 2.5 parts 4 \(\mu\)m Aluminum cornflakes (Schlenck) with a mixture of 11 parts silicone ink catalyst and gray silicone ink base (Raw Materials Inc.) and 30 parts NOVOCS Gloss (Smooth-On Inc), and mixed it for a minute using an ultrasonicator. This mixture was then sprayed into the inside of the top mold with an airbrush and left to dry for at least 10 minutes before we fit the threaded endoskeleton inside of the mold and screwed the mold halves together. Remaining holes and the lips of the mold were covered in a thin layer of brushed-on silicone adhesive (Devcon), which created a seal for the mold and prevented any silicone leakage outside of the mold that could be caused by mold warping or other printing imperfections.
Once the main body silicone mixture had been degassed, we slowly poured the mixture into the prepared mold. The entire mold assembly was placed on top of a vibrating plate for 10 minutes to get rid of any bubbles in the camera-viewable areas. These bubbles may have been induced by the silicone pouring over the flexures, electronics, and other 3D printed parts inside of the mold. Some of the bubbles were retained along the side of the sensor surface, which is not viewable by the camera and did not negatively affect the sensor integrity.
Finally, the mold was placed inside an oven at 125\({}^{\circ}\)F (52\({}^{\circ}\)C) for 12 to 15 hours. This temperature was chosen to prevent any of the electronics or inner structures from reaching their glass transition temperatures and causing delamination of the parts from the silicone. Once the finger was removed from the mold, the gray sensing membrane surface was no longer smooth and instead had a reticulated, wrinkled texture (Fig. 4). This phenomenon occurred only when we sprayed the paint onto the mold first; it did not occur if we cured the finger without paint in the mold and sprayed the paint onto the finger surface afterwards.
The modular fingers were then placed on our palm plate to create our completed gripper. We also note that this configuration can be changed to enable different types of grasps, although we chose an enveloping grasp to maximize the amount of sensing the gripper could obtain from grasping an object in its palm.
Fig. 4: A close-up image of the reticulated wrinkled surface of the GelSight EndoFlex sensor. The wrinkles are approximately 0.4 mm wide and were only created when we first sprayed the paint on the mold surface before casting silicone inside.
Fig. 3: The manufacturing process for the EndoFlex sensor, including assembly of the electronics and casting of the silicone.
### _Software_
Each finger was equipped with two Raspberry Pi Zero spy cameras with a 160\({}^{\circ}\) field of view, for a total of six cameras. All of the cameras viewed a curved segment of the finger, which was illuminated by tri-directional LEDs. The finger-segment images were individually streamed using the mjpg-streamer package and processed using OpenCV and a fast Poisson solver [29, 30] to obtain difference images and uncalibrated reconstruction images, as shown in Fig. 5.
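As an illustrative sketch only (not the code used in this work), the snippet below shows how such a pipeline could look in Python with OpenCV: frames are pulled from the mjpg-streamer endpoints, differenced against a no-contact reference, and integrated into an uncalibrated height map. The stream addresses are hypothetical placeholders, the FFT-based Poisson integration is a generic stand-in for the fast Poisson solver cited above [29, 30], and the per-channel gradient proxy replaces a proper photometric calibration, so the recovered depth is only relative.

```python
# Sketch: per-segment tactile processing (difference image + uncalibrated depth).
import cv2
import numpy as np

# Hypothetical mjpg-streamer endpoints, one per camera (six in total).
STREAM_URLS = [f"http://finger-pi-{i}.local:8080/?action=stream" for i in range(6)]

def grab_frame(url):
    """Read a single BGR frame from an MJPEG stream."""
    cap = cv2.VideoCapture(url)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def difference_image(frame, reference):
    # Signed per-channel difference: under the tri-directional R/G/B lighting,
    # intensity changes encode the local surface tilt of the deformed membrane.
    return frame.astype(np.int16) - reference.astype(np.int16)

def poisson_integrate(gx, gy):
    # Integrate a gradient field into a height map with an FFT Poisson solve
    # (periodic boundary assumption); the result is defined up to a constant.
    h, w = gx.shape
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
    kx = np.fft.fftfreq(w).reshape(1, -1)
    ky = np.fft.fftfreq(h).reshape(-1, 1)
    denom = (2 * np.cos(2 * np.pi * kx) - 2) + (2 * np.cos(2 * np.pi * ky) - 2)
    denom[0, 0] = 1.0  # avoid division by zero; the mean height is arbitrary
    height = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    return height - height.min()

def uncalibrated_depth(diff):
    # Crude stand-in for a calibrated RGB-to-gradient lookup table: treat the
    # red and green difference channels (BGR order) as surrogate x/y gradients.
    gx = diff[:, :, 2].astype(np.float64) / 255.0
    gy = diff[:, :, 1].astype(np.float64) / 255.0
    return poisson_integrate(gx, gy)

if __name__ == "__main__":
    reference = grab_frame(STREAM_URLS[0])  # captured with no contact
    frame = grab_frame(STREAM_URLS[0])      # captured during a grasp
    if frame is not None and reference is not None:
        diff = difference_image(frame, reference)
        depth = uncalibrated_depth(diff)
```

In practice the RGB-to-gradient mapping would be calibrated per finger, for example by pressing a ball-bearing array of known radius against the gel as in Fig. 5.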
## IV Experiment
To show the usefulness of having continuous sensing, we collected single grasps of various objects and performed a classification task based on the entire finger sensing region. Previous works show that object classification using fingertip sensing or low-resolution palm sensing is accurate, but only when the objects are in contact with the fingertips or multiple touches are performed [1, 2].
Our grasping object set included three distinct objects from the YCB dataset: the Rubik's cube, one of the toy stacking cups, and a plastic orange [31]. These three objects are shown in Fig. 6. For each object, we collected approximately 500 different grasps using all six of the cameras inside the fingers to obtain a holistic, "full-hand" tactile view of the entire object. To capture many different grasps, we had assistants manually reorient each object randomly such that it could still be feasibly grasped with the gripper, which allowed different parts of the sensor images to capture different features of the grasped object. We also attempted grasps using only two of the fingers instead of all three in cases where the third finger did not make solid contact with the object.
For each set of six images we captured, we stitched the images together into a 2-by-3 array and used the result as input to a ResNet-50 architecture with three outputs, one per object in our grasping data set [32]. We chose stochastic gradient descent as our optimizer, with a learning rate of 1e-3 and a learning rate scheduler with a step size of 7 and a gamma of 0.1. We also applied data augmentation to the entire set of images to deal with potentially inconsistent lighting and random image noise, and to account for eventual wear and tear of the silicone over time. We split our data into training and validation sets in an 80% to 20% ratio. The complete neural net architecture is shown in Fig. 6.
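The following PyTorch sketch illustrates this setup; it is not the authors' released code. The 2x3 stitching, the three-output ResNet-50 head, and the SGD/StepLR hyperparameters follow the description above, while the data loader, momentum value, input resolution, specific augmentations, and the use of ImageNet-pretrained weights are assumptions made for illustration.

```python
# Sketch: single-grasp classifier (six stitched sensor images -> 3 classes).
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms

def stitch_grasp(images):
    """Stitch six HxWx3 images (same size) into a single 2x3 grid."""
    top = np.concatenate(images[:3], axis=1)
    bottom = np.concatenate(images[3:], axis=1)
    return np.concatenate([top, bottom], axis=0)

# Augmentations chosen to tolerate lighting drift, sensor noise, and gel wear
# (the exact transforms are an assumption; the paper only states that
# augmentation was used).
augment = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

model = models.resnet50(weights="IMAGENET1K_V1")  # pretraining assumed
model.fc = nn.Linear(model.fc.in_features, 3)     # cup / orange / Rubik's cube

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

def train_one_epoch(loader, device="cuda"):
    """`loader` yields (stitched_grid, label) batches from the 80/20 split."""
    model.to(device).train()
    for grids, labels in loader:
        grids, labels = grids.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(grids), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

At test time, the same stitched grid from a single new grasp is passed through the trained network, so the object identity is predicted without any regrasping.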
### _Results_
**Grasping** The GelSight EndoFlex was able to easily and very securely grasp all of the objects in our object set. In particular, the polyethylene foam layer on the palm provided a compliant, deformable surface that the grasped objects could be pressed against. The hand was also able to grasp empty water bottles without crushing them, as well as heavier objects, like a drill with a battery, without dropping them. As expected, the compliance of the soft gel allowed us to grasp more fragile objects, while the rigid endoskeleton allowed the fingers to withstand the force and weight of a heavier object.
Each finger was also able to bend to around 60\({}^{\circ}\) at each flexure point using the Dynamixel motors. Because the silicone was quite soft and because we added human-finger-inspired grooves along the flexures, the silicone could compress around the sides more easily when the fingers bent. However, the silicone still obstructed some of the bending angle, and as a result, the endoskeleton finger was unable to reach the full 90\({}^{\circ}\) range it would otherwise have had. Furthermore, deepening the grooves to facilitate bending would have limited the sensing area and ultimately interfered with the continuous sensing. Nonetheless, this limitation in motion did not severely limit the hand's ability to grasp objects, because the deformable silicone surface over the endoskeleton finger helped to accommodate any loss of motion with its compliance and softness.
Casting the mold while the finger was in a slightly bent position helped to prevent creases in the surface of the silicone when the finger bent. Doing so also prevented silicone creasing when the finger was straightened out since the sensing surface was pulled in tension. Unfortunately, over time, pulling the silicone finger in tension caused parts of the silicone in the base of the finger to slightly tear. We believe that this problem could potentially be mitigated by using a softer silicone with higher elongation.
Fig. 5: From left to right, we have raw sensor images of a 3.75 mm ball bearing array and a M2 screw, followed by their difference images from a reference image (no tactile contact), and the corresponding uncalibrated depth image.
Fig. 6: Neural net architecture for our single-grasp classification. Once the object has been grasped, the six images are stitched together into a 2x3 array, passed through our ResNet architecture, and classified as a toy cup, an orange, or a Rubik’s cube.
Finally, given a different arrangement of fingers or with a finger that could behave more thumb-like with an added degree of freedom, we believe that these fingers have the potential to grasp an even larger variety of objects.
**Tactile Sensing** As designed, the finger was able to sense continuously along its entire length when it was in a "closed" position. The fingers were also able to sense along the sides, although some sensing was slightly lost at the very tips of the fingers.
Overall, the finger provided extremely high-resolution sensing, and the raw sensor images captured the same level of detail as previous GelSight sensors, but with additional sensing coverage due to the finger's rounded shape and the wider camera field of view. However, the wider field of view and the curved shape caused some distortion in the sensing image, which is most apparent at the sides of the image frame.
Additionally, some of the sensing surfaces appeared to have distinct rings of light around the different color channels instead of the blending we would have expected from using a Lambertian paint on the surface of the silicone gel. We believe that this phenomenon could have been caused by slight delamination of the silicone from the LEDs. The resulting air interface would cause the light to refract as it passes from the air into the silicone, potentially forming these rings of light and preventing even blending of the light within the silicone. In particular, we noticed that when objects were pressed against these sensing surfaces, the light circles began to dissipate. Nonetheless, this did not affect the sensor resolution, and the distinct features of the objects were still distinguishable because the tactile sensor had extremely high resolution.
Finally, we noticed that the wrinkles, which were manufactured on some of the finger sensing surfaces, were helpful in preventing tears in the silicone membrane. Unlike the smoother sensing surfaces, the wrinkled surfaces seemed to mitigate the high-stress points caused by sharp corners poking into the sensor surface. The surfaces with wrinkles also felt like they had less friction than the smoother surfaces. Although the wrinkles made surface reconstruction more difficult because their texture appeared in the difference images, they did not seem to negatively affect our object classification. Their effect was also mitigated by the fact that, under enough pressure, the wrinkles would smooth out slightly and no longer influence the classification results.
**Object Classification** Our object classification model obtained 94.1% accuracy on our validation set. In live testing, which consisted of our robotic hand grasping the three objects ten times each, we correctly classified 80% of the grasps. The orange was recognized 9 out of 10 times, while the classifier struggled slightly to distinguish between the Rubik's cube and the toy cup (80% and 70% accuracy, respectively). We believe that the discrepancy between the validation results and the live-testing results could be due to slight tears that developed over the course of data collection and testing. Regardless, the hand was able to recognize the identity of an object using only a single grasp.
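For reference, these per-object rates are consistent with the overall live-testing figure: with ten grasps per object, \((9 + 8 + 7)/30 = 24/30 = 80\%\).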
As we expected, the orange, which had the most distinguishable tactile features, was the easiest for our model to recognize. Not only was the orange covered in a unique bumpy skin texture, it also had a distinctive stem portion. On the other hand, unless the fingers pressed directly against a corner of the Rubik's cube or along multiple smaller cubes, it was hard to visually distinguish some of its edges from the edges at the bottom and top of the toy cup.
We believe that this confusion between the Rubik's cube and the toy cup could be mitigated by adding a sensorized palm. The added sensing from a larger area on the palm could help capture tactile details that may have been missed by the fingers. Regardless, the object classification using continuous sensing along the multi-fingered hand was fairly robust and performed well on our object set. Specifically, it could be useful for grabbing objects in the dark or in an occluded environment where external vision would not be useful or could not be used.
## V Conclusion and Discussion
In this paper, we present the novel design of a continuous, high-resolution tactile sensor incorporated into a finger, which was then integrated into a human-like hand. The hand used this large sensing area to classify objects reasonably accurately from a single grasp, which, to the authors’ knowledge, has not been done before. The ability to identify an object with a single grasp is akin to the way we as humans can grab an object, with some priors and without external vision, and determine almost immediately what we are holding.
Although recent research has focused largely on large-area, low-resolution tactile sensors or high-resolution fingertip sensors for dexterous manipulation, not much research has been done on high-resolution sensing across the majority of a finger's surface. Having this added sensing allows us to perform many useful classification tasks, and doing so in a soft, compliant gripper also allows us to interact safely and securely with objects and the surrounding environment. Sensors similar to the GelSight EndoFlex could be used for home-care robots or for human-robot interaction, where compliance and sensing are key to success.
Future work on this gripper involves adding a thumb-like joint, as well as full fingertip sensing, which can greatly improve the usability of the gripper for sensing and dexterous manipulation tasks. We can also continue to draw inspiration from GelSight sensors and add markers which could help track slip and shear or torsional forces along the surfaces of the finger. Overall, our novel endoskeleton finger design begins to solve the problem of designing human-inspired soft-rigid robotic hands with high-resolution sensing that are capable of performing more and more complicated tasks.
## VI Acknowledgements
This work was supported by funds from the Toyota Research Institute, the Office of Naval Research, and the SINTEF BIFROST (RCN313870) project. The authors would also like to thank James M. Bern and Megha H. Tippur for their helpful advice and design tips.
|